

Even Brief AI Use Could Hurt Your Ability to Think, a New Study Finds


Could a short session with an AI chatbot actually dull your mind? A new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA suggests that using an AI assistant for as little as 10 minutes might impair your ability to think critically and solve problems. The findings raise serious questions about the relationship between AI use and thinking skills in our daily lives.

As reported by Wired, the study asked participants to tackle problems like simple fractions and reading comprehension tasks. Some were given access to an AI assistant that could solve the problems for them. When the AI was suddenly removed, those participants were far more likely to give up or get the answer wrong. In other words, the moment the AI crutch was gone, people struggled.

How AI Use Affects Critical Thinking and Persistence

This research highlights a troubling trend: relying on AI may weaken our critical thinking abilities. When the AI was taken away, participants who had used it showed less persistence—the willingness to keep trying when things get hard. Persistence is a key part of how humans learn and develop new skills over time. AI, it seems, is quietly chipping away at that.

Building on this, the study suggests that even brief exposure to AI problem-solving can create a dependency. Participants became accustomed to quick answers, reducing their own cognitive effort. This could have long-term implications for education and workplace training, where problem-solving with AI chatbots is increasingly common.

What the Researchers Say About AI and Learning

Michiel Bakker, an assistant professor at MIT who worked on the study, is careful not to sound like a doomsayer. “The takeaway is not that we should ban AI in education or workplaces,” he says. “AI can clearly help people perform better in the moment, and that can be valuable. But we should be more careful about what kind of help AI provides, and when.”

In addition, Bakker believes AI tools need to be redesigned to work like a good teacher. Instead of simply handing over the answer, they should coach users through the problem. “Systems that give direct answers may have very different long-term effects from systems that scaffold, coach, or challenge the user,” he adds. This is a crucial insight for anyone concerned about the cognitive effects of AI in learning environments.

Practical Steps to Protect Your Thinking Skills

So, what can you do to avoid the pitfalls of AI dependency? First, use AI as a tool for guidance, not a shortcut. For example, instead of asking a chatbot to solve a math problem, ask it to explain the steps; a brief sketch of this prompting approach appears after these tips. This way, you engage your own problem-solving abilities.

Second, set limits on AI use for complex tasks. The study shows that even 10 minutes of AI assistance can reduce your own cognitive effort. Try solving problems on your own first, then use AI to check your work. This approach lets you keep the educational benefits of AI while maintaining your skills.

Third, consider using AI tools that are designed to scaffold learning. Some platforms now offer interactive coaching, which can help you learn rather than just get answers. For more on this, check out our guide on how to use AI for learning without losing your edge.
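To make the first tip concrete, here is a minimal sketch of a “coach, don’t solve” chatbot wrapper, in the spirit of the scaffolded help Bakker describes. It assumes the OpenAI Python client; the model name, the system prompt wording, and the ask_for_a_hint helper are illustrative choices, not anything the study tested.

```python
# Minimal sketch: ask a chatbot for coaching instead of answers.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name and prompt
# wording are illustrative, not taken from the study.
from openai import OpenAI

client = OpenAI()

# The system prompt does the real work: it forbids final answers
# and asks the model to scaffold the user's own reasoning.
COACH_PROMPT = (
    "You are a tutor. Never state the final answer. "
    "Explain the next step, ask one guiding question, and let "
    "the user do the calculation themselves."
)

def ask_for_a_hint(problem: str) -> str:
    """Request a hint for `problem` rather than a finished solution."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": COACH_PROMPT},
            {"role": "user", "content": problem},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_for_a_hint("What is 3/4 + 1/6?"))
```

The design point is that the prompt, not the model, enforces the coaching behavior: the same API call with no system prompt would hand back the finished answer, which is exactly the usage pattern the study flags.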

The Bigger Picture: AI in Education and Workplaces

It’s a tricky balance, and AI companies are already grappling with related issues. For now, it might be worth asking yourself: is your AI assistant helping you grow, or just doing your thinking for you? The dynamic between AI assistance and learning is complex, but awareness is the first step.

Ultimately, this study serves as a wake-up call. While AI offers incredible benefits, we must be intentional about how we use it. By prioritizing active learning and critical thinking, we can harness AI’s power without sacrificing our own cognitive abilities. For further reading, see our article on balancing AI and human intelligence in the modern world.



Snapchat and Perplexity AI Part Ways: Inside the $400 Million Deal That Fell Apart


The Snapchat Perplexity AI deal is officially dead. Snap confirmed in its Q1 2026 investor letter that both companies “amicably ended the relationship in Q1,” terminating a $400 million cash-and-equity agreement announced in November. The partnership would have brought Perplexity’s AI answering engine directly into Snapchat’s Chat interface, allowing users to ask questions and receive conversational, source-backed answers without leaving the app. However, the integration never materialized, leaving many wondering why.

Why Did Snapchat and Perplexity End Their Partnership?

Initial signs of trouble emerged in February, when Snap announced that it and Perplexity had not yet agreed on a broader rollout plan. The deal also raised concerns about how AI search would function inside private messaging, particularly for younger users and sensitive topics. With hundreds of millions of daily users on Snapchat, integrating a chatbot seemed promising on paper, but practical hurdles proved insurmountable.

Furthermore, Snap’s latest sales guidance now assumes no contribution from Perplexity, a stark contrast to earlier projections that the partnership would begin generating revenue in 2026. The abrupt cancellation highlights the challenges tech companies face when embedding AI assistants into existing social platforms. Learn more about the rise of AI in messaging apps.

What Does This Mean for Snapchat Users?

The end of the Snapchat Perplexity AI deal does not mean Snapchat is abandoning AI altogether. The company recently introduced AI Sponsored Snaps, an ad format that lets brands place interactive AI chatbots inside Chat. So chatbot-style conversations may still appear in Snapchat, but through advertisements rather than a dedicated search engine.

Snap is also expanding features around Snap Map. Its new Place Loyalty feature ranks users based on how often they visit certain locations over the past year, awarding Gold, Silver, and Bronze status levels. Snap emphasizes that rankings remain private to the user, and location sharing is off by default. These moves suggest Snap is focusing on organic engagement rather than third-party AI integrations.

How Does Snapchat’s AI Strategy Compare to Competitors?

Tech giants are racing to embed AI assistants across their ecosystems. Meta has added Meta AI to WhatsApp, Instagram, Facebook, and Messenger. Google has integrated Gemini into Search, Android, Gmail, and other products. Snapchat seemed to be following a similar path with Perplexity, but the cancellation signals a different approach.

Instead of a universal AI chatbot, Snap is opting for controlled, ad-driven AI interactions. This strategy may appeal to advertisers but limits the scope of AI functionality for users. As a result, Snapchat’s AI capabilities remain narrower than those of competitors like Meta and Google. Compare Snapchat’s AI features with Meta’s offerings.

Is Snapchat Still Growing Without the Perplexity Deal?

Absolutely. In Q1, Snap’s global daily active users rose 5% year-over-year to 483 million, while monthly active users increased 5% to 965 million. The company credited growth to features across Snap Map, Lenses, and other parts of the app. This indicates that Snapchat can thrive without the Perplexity integration, relying on its core strengths in visual communication and augmented reality.

Building on this momentum, Snap is likely to continue investing in native AI tools rather than external partnerships. For now, the Snapchat Perplexity AI deal serves as a cautionary tale about the complexities of integrating AI into social platforms. Explore what’s next for Snapchat and AI.



Google Responds to Chrome’s Silent Gemini Nano Install, Sidesteps Consent Issue


Google has finally broken its silence on the growing controversy surrounding Chrome’s automatic download of a 4GB AI model. However, the company’s response leaves a critical question hanging: why did it install the Gemini Nano model without asking users first?

Parisa Tabriz, Google’s Vice President and General Manager for Chrome, took to social media to address the backlash. She framed the Gemini Nano auto-install as a core part of Chrome’s security and developer roadmap. Yet, privacy advocates argue that the lack of explicit consent violates both user trust and European law.

How the Gemini Nano Auto-Install Sparked Outrage

The controversy erupted after privacy researcher Alexander Hanff documented Chrome’s behavior in detail. He discovered that the browser silently downloads the Gemini Nano model—roughly 4GB in size—onto compatible devices without any prompt. Even more troubling, manually deleting the file triggers an automatic re-download upon the next browser restart.

This means that users are stuck with a large file consuming storage and bandwidth, whether they want it or not. The situation worsened when critics noticed a glaring inconsistency: Chrome’s new “AI Mode” in the address bar does not even use the local model. Instead, it sends queries to Google’s cloud servers. As a result, users absorb the cost of a 4GB download that has no connection to the browser’s most visible AI feature.

Privacy advocates have also flagged potential violations of the EU’s ePrivacy Directive, which requires user consent before storing data on a device. The Chrome AI model download appears to bypass this requirement entirely.

Google’s Defense: Security and Developer APIs

In a series of posts on X, Tabriz acknowledged the concerns but stopped short of apologizing or offering a clear opt-in mechanism. She explained that Google has been offering Gemini Nano in Chrome since 2024 as a lightweight, on-device model. According to her, it is central to Chrome’s developer APIs and security features, including scam detection.

“On-device AI is core to our developer and security strategy,” Tabriz wrote. She emphasized that the model processes data locally rather than sending it to Google’s servers, which theoretically enhances privacy. She also noted that the model automatically uninstalls when a device is low on storage.

However, Tabriz did not address the consent question directly. She also failed to explain why the model reinstalls itself after a user deletes it. Google has separately stated that users can disable and remove the model through Chrome’s settings, and that once disabled, it will not re-download. But this requires users to know about the setting in the first place.

Privacy Implications of Silent AI Downloads

The Google Gemini Nano privacy debate highlights a broader tension between convenience and consent. On-device AI models can improve user experience by enabling faster, offline features. However, forcing a 4GB download without notice raises serious questions about user autonomy.

For European users, the issue is particularly acute. The ePrivacy Directive mandates that any storage of information on a user’s device requires prior consent. By automatically downloading the Gemini Nano model, Chrome may be in violation of this law. Privacy advocates argue that Google’s response fails to address this legal risk.

Compounding the problem, the re-download behavior is especially concerning. If a user actively removes the file, they are signaling a clear preference. An automatic re-download undermines that preference and could be seen as a form of digital coercion.

How to Disable the Gemini Nano Model

For users who want to take control, Google has provided a way to disable the feature. Navigate to Chrome’s settings, then to the “AI and privacy” section. From there, you can toggle off the Gemini Nano model. Once disabled, it should not re-download. For a step-by-step guide, check out our article on how to disable Gemini Nano in Chrome.

Alternatively, you can manage your browser’s storage settings to prevent automatic downloads. For more tips on protecting your privacy, read our guide on Chrome privacy settings every user should know.

What This Means for the Future of On-Device AI

The Chrome silent install backlash may force Google to rethink its approach. As AI becomes more integrated into browsers, the line between helpful features and intrusive practices will blur. Companies like Google must balance innovation with transparency.

Tabriz’s response suggests that Google views on-device AI as non-negotiable for Chrome’s future. However, the company’s reluctance to address consent directly could erode user trust. Moving forward, clearer communication and opt-in mechanisms will be essential.

In conclusion, while Google has explained the rationale behind the Gemini Nano auto-install, it has not fully resolved the privacy concerns. Users who value control over their devices should take proactive steps to manage these settings. The debate is far from over, and regulators may yet have the final word.



A Shocking Study Made Me Rethink How I Use AI — and You Should Too


I have always considered myself a cautious AI user. I do not let ChatGPT write my emails or shape my stories. Instead, I use AI primarily to look up quick facts or recall something on the tip of my tongue. To me, this felt like the responsible approach — especially as a journalist aware of AI’s hallucination issues and the constant burden of truth verification. However, a recent AI dependency study has made me question even this limited use of tools like Google Gemini for everyday tasks.

The Findings Are Harder to Dismiss Than You Think

The research, conducted through three separate randomized experiments involving math and reading comprehension tasks, revealed a startling pattern. After just ten minutes of AI-assisted problem-solving, participants who then lost access to the AI performed worse and gave up more frequently than those who never used it at all. This was not after months of dependency — only ten minutes.

What makes this AI dependency study particularly compelling is that the effects appeared across both math and reading tasks. These are fundamentally different cognitive skills, suggesting the issue is not a quirk of one type of task but a general consequence of how we use these tools. Building on this, the study found that the cause was not the AI itself — it was how people used it.

Now, on an ordinary day, I might have dismissed such research as another swing in the ongoing debate about AI’s benefits and pitfalls. But this study comes from a joint effort by Carnegie Mellon University, the University of Oxford, the Massachusetts Institute of Technology, and the University of California, Los Angeles.

How You Use AI Matters More Than How Much You Use It

The majority of participants used AI to get answers directly. These individuals showed the largest declines in performance and persistence — not only compared to the control group but also compared to those who used AI for hints and clarifications. Participants who used AI for hints showed no significant impairments relative to the control group.

In other words, people who asked AI to solve the problem outright became worse at solving problems themselves. Meanwhile, those who used it for a nudge in the right direction or for clarity remained fine — statistically indistinguishable from people who had not used AI at all. This is a meaningful distinction that reframes the conversation about whether AI makes people less intelligent. It shifts the question from “should I use AI?” to “what am I actually doing when I use it?” That question matters whether you use AI occasionally or rely on it daily for work or school.

The Cognitive Outsourcing Trap

If you have been using AI for cognitive outsourcing — essentially handing off your problem until you get an answer back — this research suggests the habit may be quietly training you to expect rescue at moments of difficulty rather than learning to push through them. The researchers warn that if these effects accumulate with sustained AI use, current AI systems risk eroding the very human capabilities they are meant to support. You will not notice it right away, but it will become apparent the next time you are on your own.

It Might Be Time to Change Your Habits

I do not think this means you should stop using AI tools altogether. But starting today, I am going to be more deliberate about what I am actually asking for when I open a chat window. Am I looking for a fact? A direction? A sanity check? Or am I just tired of thinking and hoping the chatbot will do it for me? The first few are probably fine. The last one, not so much.

For more on balancing AI use and critical thinking, check out our guide on using AI wisely. Additionally, explore how AI tools can boost productivity without harming cognitive skills.
