Over the past few years, AI technology has become more and more prominent. Most of us probably interact with AI-based content daily, even if we don’t realize it. There are AI videos going viral on TikTok and Instagram, AI-written posts on websites, AI assistants responding to emails and phone calls, and even AI drive-thru systems at fast food chains.
Whenever new technology hits the market, there’s a lot of excitement and skepticism about it — and that’s never felt more true than with the introduction of AI into our households. People have reasonable fears about AI, like job displacement, privacy invasion, and malicious misuse, but there are also a lot of great use cases for it. AI chatbots like ChatGPT, Google Gemini, and Claude can simplify daily life in a lot of ways, from creating schedules and adjusting recipes to planning trips and organizing lists. This type of technology can also help level the playing field for students, analyze information, and support educational goals.
Although AI can be incredibly useful, we shouldn’t rely on it for everything. As people have become more comfortable with AI recently, we’ve started to trust it more, but it’s actually good to keep a healthy amount of skepticism. Here are six times you can’t trust AI:
1. Medical Advice
When it comes to medical advice or health concerns, you should always seek help from a qualified medical professional. It might feel like no big deal to type your symptoms into ChatGPT and see what it tells you, but AI can’t accurately diagnose you, nor can it treat you. If you have questions or concerns about your health (or the health of a loved one), it’s best to contact your medical providers.
2. Privacy Concerns
If you’re wondering whether you can trust AI with your personal information, the answer is no. A recent study found that “six developers appear to employ their users’ chat data to train and improve their models by default, and that some retain this data indefinitely. Developers may collect and train on personal information disclosed in chats, including sensitive information such as biometric and health data, as well as files uploaded by users. Four of the six companies we examined appear to include children’s chat data for model training, as well as customer data from other products. On the whole, developers’ privacy policies often lack essential information about their practices, highlighting the need for greater transparency and accountability.” The author of the study advised that AI users opt out of having their data used for training and carefully consider what information they share in AI chat conversations.

3. When Complex Questions Get Simple Answers
One of the benefits of AI is how quickly it can process information — but that can also be its downfall. Often, AI oversimplifies problems in order to give concise answers, but that can omit important context and nuance. To get more accurate responses, break down your question into smaller, simpler queries.
4. When There’s Obvious Bias
AI models pull from all over the web, so it’s not uncommon for them to draw on biased (or just incorrect) data sets. Time and again, AI has provided offensive and stereotypical responses to questions, raising concerns about racism, sexism, homophobia, and more. If an AI answer strikes you as biased, do some additional research.
5. When The ‘Source’ Is Untrustworthy
Today, when you type a query into a search engine (like Google), the first answers you’ll typically see are in the “AI Overview” section. These answers are not fact-checked, nor are they necessarily from trustworthy sources — you’ll commonly see answers pulled from YouTube, Reddit, and blogs. Before blindly believing the answer you see, trace it back to the original source and find out whether that source is reliable.
6. Accurate Information
Although AI can provide accurate information, accuracy isn’t guaranteed. AI pulls information from all over the internet, meaning it can also pull misinformation and provide incorrect answers. According to the University of Maryland, “Its goal when it receives a prompt is to generate what it thinks is the most likely string of words to answer that prompt. Sometimes this results in a correct answer, but sometimes it doesn’t – and the AI cannot interpret or distinguish between the two. It’s up to you to make the distinction.”
AI has the power to broaden our worldview, but it also has the ability to reduce our media literacy. We need to consciously question AI and think critically about the answers it gives us. In the next few years, AI will only become a more normal part of our daily lives, so we need to make sure we’re using it intentionally.