AI Safety & What to Watch For
AI is powerful, but it has limits. Learn about hallucinations, bias, privacy risks, and how to use AI responsibly.
AI Gets Things Wrong
This is the most important thing to know: AI makes mistakes. Confidently.
When ChatGPT tells you something that sounds completely plausible but is completely wrong, that’s called a hallucination. It happens because AI is predicting likely words, not looking up facts.
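If you’re curious what “predicting words” means, here is a tiny, deliberately oversimplified Python sketch. It strings words together purely based on which word followed which in its training text; at no point does it consult any source of facts. (The toy corpus, the `generate` function, and the bigram approach here are illustrative inventions only; real AI models are vastly more sophisticated, but the core point stands: output is driven by statistical patterns, not fact lookup.)

```python
import random
from collections import defaultdict

# A toy "training text". A real model trains on billions of words;
# this is only an illustration of the idea.
corpus = ("the study found that the results were significant "
          "the study showed that the results were promising").split()

# Record which word followed which (a "bigram" model, far simpler than ChatGPT).
nexts = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    nexts[a].append(b)

def generate(start, n=6, seed=0):
    """Pick each next word based on what commonly followed the previous one.
    Notice: no fact database is consulted at any point."""
    random.seed(seed)
    words = [start]
    for _ in range(n):
        options = nexts.get(words[-1])
        if not options:
            break  # no known continuation, stop
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output will always read fluently, because every word pair was seen in training, but nothing checks whether the resulting sentence is true. That gap between fluency and truth is exactly where hallucinations live.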
Common hallucination examples:
- Making up fake statistics with specific numbers
- Citing books or research papers that don’t exist
- Giving outdated information confidently
- Inventing historical events or quotes
Rule of thumb: The more specific and factual a claim is, the more you should verify it.
Protect Your Privacy
Every time you type something into an AI chatbot, that text might be:
- Stored on the company’s servers
- Reviewed by employees for quality
- Used to improve future AI models
What NOT to share:
- Passwords or login details
- Credit card or bank information
- Medical records or diagnoses
- Confidential work documents
- Private personal details you wouldn’t post publicly
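For the technically inclined, the habit “screen before you paste” can be sketched in a few lines of Python: scan text for obvious sensitive patterns before sending it anywhere. The `flag_sensitive` function and the patterns below are hypothetical examples, and a handful of regexes can never catch everything, so treat this as an illustration of the idea rather than a real privacy filter.

```python
import re

# Illustrative patterns only -- a real privacy filter needs far broader coverage.
SENSITIVE_PATTERNS = {
    "credit card number": r"\b(?:\d[ -]?){13,16}\b",
    "email address": r"\b[\w.+-]+@[\w-]+\.[a-zA-Z]{2,}\b",
    "US SSN": r"\b\d{3}-\d{2}-\d{4}\b",
}

def flag_sensitive(text):
    """Return the kinds of sensitive data spotted in text (it may miss things)."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if re.search(pattern, text)]

print(flag_sensitive("My card is 4111 1111 1111 1111, reach me at jo@example.com"))
```

Even without writing code, the same habit applies: reread what you typed and ask whether you would be comfortable seeing it stored on someone else’s server.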
What’s generally fine:
- General questions and learning topics
- Creative writing and brainstorming
- Public information you’d search for anyway
Watch for Bias
AI was trained on human-created content — and humans have biases. This means AI can:
- Reinforce stereotypes
- Give different quality answers about different cultures
- Default to certain perspectives and overlook others
When AI gives you information about people, cultures, or social topics, think critically. Ask yourself: “Is this showing a balanced view?”
The Golden Rules
- Verify important facts — Don’t blindly trust AI for medical, legal, or financial advice
- Protect your privacy — Don’t share sensitive personal data
- Think critically — AI sounds confident even when it’s wrong
- You’re responsible — Anything you publish from AI is your responsibility
- It’s a tool, not an authority — Use it to help think, not to think for you
Remember: AI is like a very knowledgeable but sometimes unreliable assistant. Use it wisely, verify what matters, and never outsource your critical thinking.
Quick Quiz
Test what you just learned. Pick the best answer for each question.
Q1 What is an AI “hallucination”?
Q2 Should you share sensitive personal information with AI chatbots?
Q3 What is AI bias?
Q4 If AI writes something for you, who is responsible for it?