A Shocking Study Made Me Rethink How I Use AI — and You Should Too

I have always considered myself a cautious AI user. I do not let ChatGPT write my emails or shape my stories. Instead, I use AI primarily to look up quick facts or to recall something that is on the tip of my tongue. To me, this felt like the responsible approach, especially as a journalist aware of AI's hallucination problem and the constant burden of verifying what these tools produce. However, a recent AI dependency study has made me question even this limited use of tools like Google Gemini for everyday tasks.

The Findings Are Harder to Dismiss Than You Think

The research, conducted through three separate randomized experiments involving math and reading comprehension tasks, revealed a startling pattern. After just ten minutes of AI-assisted problem-solving, participants who then lost access to the AI performed worse and gave up more frequently than those who never used it at all. This was not after months of dependency — only ten minutes.

What makes this AI dependency study particularly compelling is that the effects appeared across both math and reading tasks. These are fundamentally different cognitive skills, suggesting the issue is not a quirk of one type of task but a general consequence of how we use these tools. Building on this, the study found that the cause was not the AI itself — it was how people used it.

Now, on an ordinary day, I might have dismissed such research as another swing in the ongoing debate about AI’s benefits and pitfalls. But this study comes from a joint effort by Carnegie Mellon University, the University of Oxford, the Massachusetts Institute of Technology, and the University of California, Los Angeles.

How You Use AI Matters More Than How Much You Use It

The majority of participants used AI to get answers directly. These individuals showed the largest declines in performance and persistence — not only compared to the control group but also compared to those who used AI for hints and clarifications. Participants who used AI for hints showed no significant impairments relative to the control group.

In other words, people who asked AI to solve the problem outright became worse at solving problems themselves. Meanwhile, those who used it for a nudge in the right direction or for clarity remained fine — statistically indistinguishable from people who had not used AI at all. This is a meaningful distinction that reframes the conversation around AI making people less intelligent. It shifts the question from “should I use AI?” to “what am I actually doing when I use it?” That question matters whether you use AI occasionally or rely on it daily for work or school.

The Cognitive Outsourcing Trap

If you have been using AI for cognitive outsourcing — essentially handing off your problem until you get an answer back — this research suggests the habit may be quietly training you to expect rescue at moments of difficulty rather than learning to push through them. The researchers warn that if these effects accumulate with sustained AI use, current AI systems risk eroding the very human capabilities they are meant to support. You will not notice it right away, but it will become apparent the next time you are on your own.

It Might Be Time to Change Your Habits

I do not think this means you should stop using AI tools altogether. But starting today, I am going to be more deliberate about what I am actually asking for when I open a chat window. Am I looking for a fact? A direction? A sanity check? Or am I just tired of thinking and hoping the chatbot will do it for me? The first few are probably fine. The last one, not so much.

