Artificial Intelligence

ChatGPT Ads Face Early Skepticism as Brands Question Effectiveness

The Unproven Frontier of AI Advertising

OpenAI has opened the advertising floodgates within ChatGPT, but the initial splash hasn’t convinced everyone. Brands testing these novel conversational ads are finding themselves in unfamiliar territory. Traditional metrics like click-through rates and conversions don’t translate neatly when ads appear alongside AI-generated responses.

Imagine asking ChatGPT for recipe suggestions and seeing a sponsored message for kitchenware. That’s the new reality for free-tier users. One industry observer noted seeing ads on “literally every single prompt” in their free account. The rollout is accelerating, yet advertisers remain cautious.

Why the hesitation? OpenAI currently charges based on ad views rather than clicks. Without clear engagement data, brands can’t easily calculate their return on investment. They’re spending money without knowing if these ads actually influence user behavior.
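To make the return-on-investment gap concrete, here is a minimal sketch with entirely made-up numbers (the view count and CPM rate are illustrative assumptions, not OpenAI's actual pricing): under view-based billing with no click or conversion data, return on ad spend simply cannot be computed.

```python
import math

# Hypothetical campaign under view-based (CPM) billing.
views = 2_000_000
cpm = 12.0                          # assumed $ per 1,000 ad views
spend = views / 1000 * cpm          # total spend is known precisely

# Without clicks or conversions, attributed revenue is unobservable,
# so return on ad spend (ROAS) is undefined.
attributed_revenue = None
roas = attributed_revenue / spend if attributed_revenue else float("nan")

print(f"spend: ${spend:,.0f}, ROAS: {roas}")
```

The brand knows exactly what it paid, but the numerator of every performance metric is missing, which is precisely the hesitation described above.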

Why OpenAI Needs Ads to Succeed

This advertising push isn’t just an experiment—it’s a financial necessity. Running advanced AI models at ChatGPT’s scale requires enormous infrastructure costs. Servers, computing power, and research don’t come cheap.

OpenAI is expanding ads to broader audiences, including free and “Go” plan users in the United States. The company is building relationships with advertising partners like Criteo, encouraging brands to allocate significant budgets. But there’s a catch: if advertisers can’t prove these ads work, that revenue stream could dry up quickly.

The company faces a delicate balancing act. Generate too little advertising revenue, and the business model struggles. Push ads too aggressively, and users might abandon the platform. ChatGPT’s appeal has always been its utility-driven, neutral assistance. Introducing commercial messages changes that dynamic fundamentally.

The Measurement Problem No One Has Solved

Here’s the core challenge: how do you measure success in a conversation? Traditional digital advertising offers clear signals—clicks, impressions, conversions. ChatGPT ads exist in a dialogue where users might read, consider, and act later without any trackable interaction.

Brands are essentially flying blind. They know their ads are being shown, but they don’t know if those views translate to brand awareness, consideration, or sales. This uncertainty makes advertisers hesitant to commit larger budgets.

OpenAI promises that ads remain separate from core responses and that user data won’t be sold. Still, questions linger about integration. Can ads be woven into conversations without compromising trust? Will users perceive ChatGPT differently once commercial messages become commonplace?

What Comes Next for AI Advertising

The current phase is just the beginning. OpenAI will likely refine its advertising approach based on early feedback. Future iterations might include more interactive formats where users can engage directly with sponsored content within conversations.

The company appears to be working toward a scalable, self-service advertising platform that could expand globally. Success depends on solving that fundamental measurement problem. Clearer metrics, better targeting, and performance data that advertisers trust will be essential.

For now, ChatGPT’s advertising experiment highlights both potential and uncertainty. Conversational AI represents uncharted territory for marketers. The rules are still being written, and everyone—OpenAI, advertisers, and users—is figuring them out together. The platform that cracks the code for effective, measurable AI advertising could redefine digital marketing entirely.


Google Chrome Is Silently Installing a 4 GB AI Model on Your Device. Here’s How to Stop It


Google Chrome remains the world’s most popular browser, but it is facing increasing competition from a new generation of AI-powered browsers like Perplexity Comet and Dia. In an effort to stay ahead, Google has been integrating artificial intelligence into Chrome. However, a recent discovery has raised serious concerns about privacy and storage. Chrome is now quietly downloading a massive 4 GB AI model onto users’ devices without asking for permission. This Google Chrome AI model, known as Gemini Nano, is automatically installed on compatible hardware, and many users have no idea it is there.

What Is the Google Chrome AI Model and How Does It Install?

If you open your file manager and look for a folder named “OptGuideOnDeviceModel”, you may find a file called “weights.bin”. This file is roughly 4 GB in size and contains Gemini Nano, Google’s on-device AI model. Privacy expert Alexander Hanff discovered this behavior using macOS filesystem event logs, which track every file created or modified at the operating system level.
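If you would rather search programmatically than browse a file manager, the sketch below walks a Chrome data directory looking for the folder named above and reports its size. The candidate profile paths are assumptions based on common Chrome install locations; the exact path varies by OS and Chrome version.

```python
import os
from pathlib import Path


def find_model_dirs(root: Path):
    """Yield (path, total_bytes) for every OptGuideOnDeviceModel folder under root."""
    for dirpath, dirnames, _ in os.walk(root):
        if "OptGuideOnDeviceModel" in dirnames:
            model_dir = Path(dirpath) / "OptGuideOnDeviceModel"
            size = sum(f.stat().st_size for f in model_dir.rglob("*") if f.is_file())
            yield model_dir, size


if __name__ == "__main__":
    home = Path.home()
    candidates = [
        home / "Library/Application Support/Google/Chrome",  # macOS (assumed)
        home / ".config/google-chrome",                       # Linux (assumed)
        home / "AppData/Local/Google/Chrome/User Data",       # Windows (assumed)
    ]
    for root in candidates:
        if root.exists():
            for path, size in find_model_dirs(root):
                print(f"{path}: {size / 1e9:.2f} GB")
```

If the folder exists on your machine, the reported size should be close to the 4 GB figure discussed here.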

According to Hanff’s findings, on a freshly created Chrome profile that received zero human input, the entire 4 GB model downloaded in under 15 minutes while a tab was simply left open. Chrome does not ask for permission before installing the Google Chrome AI model. It automatically initiates the download once it determines that your hardware meets the requirements, even if you have never used any AI feature.

Why Is This a Problem for Users and the Environment?

This silent installation consumes significant storage space without user consent. Even worse, if you delete the file, Chrome re-downloads it the next time it runs. Hanff noted that “the user’s deletion is treated as a transient state to be corrected, not as a directive to be respected.”

Interestingly, the most visible AI feature in Chrome—the “AI Mode” pill in the address bar—does not use the local model at all. Instead, it sends your queries to Google Gemini servers. The on-device model powers less visible features like “Help me write” in text boxes and on-device scam detection. This raises the question: why download a 4 GB model for features most users never touch?

Beyond storage concerns, the environmental impact is staggering. Hanff estimates that if 500 million devices download this model, the bandwidth alone translates to roughly 30,000 tonnes of CO2 emissions. That is equivalent to around 6,500 cars running for an entire year—and that is just for the delivery, not actual usage.
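Hanff's estimate can be sanity-checked with back-of-the-envelope arithmetic. The per-gigabyte carbon intensity and per-car annual emissions below are assumed values chosen to reproduce the article's figures, not authoritative constants.

```python
# Bandwidth-only CO2 estimate for distributing the model at scale.
devices = 500_000_000            # assumed number of downloads
model_gb = 4                     # size of weights.bin
total_gb = devices * model_gb    # 2 billion GB transferred

co2_g_per_gb = 15                # assumed grams of CO2 per GB of network transfer
co2_tonnes = total_gb * co2_g_per_gb / 1_000_000   # grams -> tonnes

car_tonnes_per_year = 4.6        # assumed annual CO2 of one passenger car
car_years = co2_tonnes / car_tonnes_per_year

print(f"{co2_tonnes:,.0f} tonnes of CO2")   # 30,000 tonnes
print(f"~ {car_years:,.0f} car-years")      # ~ 6,522 car-years
```

Under these assumptions the arithmetic lands on roughly 30,000 tonnes and about 6,500 car-years, matching the figures Hanff cites.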

How to Disable the Google Chrome AI Model Download

Google should make this download require user confirmation. Until then, you can stop it manually. Follow these steps to disable Google Chrome AI features:

  1. Open Chrome and type chrome://flags in the address bar.
  2. Search for “Enables optimization guide on device”.
  3. Change the setting from “Default” to “Disabled”.
  4. Restart Chrome for the change to take effect.

This method takes more steps than it should, but it effectively prevents Chrome from downloading the Gemini Nano model. For more tips on managing browser storage, check out our guide on clearing Chrome cache.

What Does This Mean for Chrome Users?

This incident highlights a growing trend of browsers adding features without user consent. As AI becomes more integrated into everyday software, users must remain vigilant. The Google Chrome AI model is just one example of how convenience can come at the cost of privacy and control.

If you value your storage space and want to avoid unnecessary data usage, disabling this feature is a smart move. For those concerned about privacy, consider exploring alternative browsers that prioritize transparency. Learn more about privacy-focused browsers that put you in control.

In the meantime, keep an eye on your system for unexpected files. Your device’s storage is yours, not Google’s server room. Take back control today.


Google’s Remy AI Agent: A 24/7 Personal Assistant to Rival OpenClaw


Google is quietly building a new artificial intelligence tool that could change how people manage their daily tasks. According to an internal document reviewed by Business Insider, the tech giant is developing an autonomous AI agent codenamed Remy. This Google Remy AI agent is currently being tested by employees within a staff-only version of the Gemini app. While Google has declined to comment on the project, the document describes Remy as a “24/7 personal agent for work, school, and daily life.”

What Makes Google’s Remy AI Agent Different?

Unlike traditional chatbots that simply respond to commands, Remy is designed to take proactive actions on your behalf. It can monitor important events, handle complex tasks without constant input, and learn your preferences over time. This means the agent could automatically manage your calendar, sort emails, or even conduct research—all without waiting for a direct request.

Building on this, Google’s approach appears to focus on seamless integration. Since Remy is being tested inside the Gemini app, it will likely leverage Google’s existing ecosystem of services like Gmail, Google Calendar, and Google Drive. This could give it a significant edge over standalone AI agents that require complex setup.

The AI Agent Race Heats Up

The emergence of Remy AI assistant comes at a time when the market for autonomous agents is exploding. Earlier this year, an open-source project called OpenClaw took the tech world by storm, amassing over 100,000 GitHub stars in less than a week. It can respond to messages, manage files, and automate tasks across a computer without any human input.

OpenClaw’s popularity was so immense that Nvidia CEO Jensen Huang called it “definitely the next ChatGPT.” The demand even pushed secondhand MacBook prices up by 15% in China. OpenAI ultimately hired OpenClaw’s creator, signaling the strategic importance of this technology.

However, security researchers have raised concerns about OpenClaw, warning of exposed admin panels, prompt injection risks, and credentials stored in plain text. This is where Google’s polished, privacy-conscious approach could make a difference. A trusted platform like Google might be exactly what wins the AI agent market.

Competitors Are Also Moving Fast

Every major player is now in the AI agent race. Anthropic launched Claude Cowork, which can handle PC tasks without the complex setup that OpenClaw requires. Meta acquired Manus AI and launched My Computer, a desktop agent that sorts files, runs apps, and sends emails on your behalf. Meanwhile, Nvidia is building NemoClaw, an open-source platform that lets businesses deploy autonomous AI agents regardless of hardware.

This means that Google’s Remy is entering a crowded field. Yet the company’s vast user base and deep integration with everyday tools could give it a unique advantage. As a result, the battle for the best autonomous AI agent is far from over.

When Will Google Remy Launch?

Currently, Google Remy AI agent is in a dogfooding phase—a standard practice at tech companies where employees test products before public release. This allows Google to iron out bugs and refine the user experience. The company will hold its Google I/O event later this month (May 19-20), where it is widely expected to showcase its next wave of AI products.

Agents are likely to be a centerpiece at this event, and Remy may well make its first public appearance there if Google is ready to show its hand. However, no official launch timeline has been confirmed. For now, the tech world is watching closely to see how Google’s answer to OpenClaw will shape the future of personal AI assistants.

For more insights on AI trends, check out our article on the rise of AI assistants in 2025. You can also explore how Google Gemini is evolving to meet user needs. Finally, learn about security risks in open-source AI agents to stay informed.


Your ChatGPT history is a personality test you didn’t know you were taking


Every time you ask ChatGPT to draft an email, vent about a relationship problem, or look up symptoms, you might be handing over more than just a query. Researchers at ETH Zurich have trained an AI model to predict personality traits directly from real ChatGPT conversation logs, and it proved unsettlingly accurate. This breakthrough raises serious questions about privacy, data ethics, and how companies might use your digital footprint.

As reported by TechXplore, the study collected 62,090 real conversations from 668 ChatGPT users. Participants also completed a standard personality test, giving the researchers a baseline to measure against. The AI was then trained to classify each user as low, medium, or high across the five traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. The fine-tuned model beat random chance across all five traits, with extraversion being the easiest to predict, achieving up to 44% higher accuracy than guessing.
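The "44% higher accuracy than guessing" figure is easier to grasp with a quick calculation. The sketch below assumes balanced low/medium/high classes and interprets the figure as a relative uplift over chance; that interpretation is an assumption for illustration, not a detail stated in the study.

```python
# What a 44% relative uplift over chance means for a three-way
# low/medium/high classification, assuming balanced classes.
chance = 1 / 3                           # random guessing over three classes
relative_uplift = 0.44                   # best case reported (extraversion)
model_accuracy = chance * (1 + relative_uplift)

print(f"chance: {chance:.1%}, model: {model_accuracy:.1%}")
```

Under these assumptions the model would label roughly 48% of users correctly versus 33% by guessing, which is well short of perfect but far from noise.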

Does it matter what you talk about with ChatGPT?

Absolutely. The study found that chats involving mental health topics made extraversion particularly easy to infer. Discussions about religion made conscientiousness easier to infer, and conversations about mental state and mood made openness more predictable. Even seemingly casual conversations contained enough signal to be useful. The researchers also found that the more you use ChatGPT, the easier you become to profile.

This means that your everyday questions—from asking for cooking recipes to complaining about work—are not as neutral as they seem. Each exchange adds a small piece to a larger puzzle that AI can assemble into a detailed personality portrait. Given how much we share with ChatGPT, the ease with which those traits can be inferred should give anyone pause.

Why does ChatGPT personality prediction matter beyond the research lab?

The researchers are clear about the implications. Service providers already have access to all of this data, and with over 800 million monthly ChatGPT users as of January 2026, the scale of potential profiling is enormous. A personality profile built from your chat history could be used for targeted advertising, personalized persuasion, or in worst-case scenarios, large-scale influence campaigns.

Recently, ChatGPT has started integrating ads. Combined with the data it already holds on each user, it is easy to imagine how those ads could be tailored to manipulate our thinking. For instance, if the AI knows you are high in neuroticism, it might show you ads for anxiety relief products or financial planning services that prey on your fears. Similarly, an extraverted user might see ads for social events or networking tools.

How accurate is the personality profiling?

The study’s accuracy is impressive but not flawless. Extraversion was the easiest trait to predict, achieving up to 44% higher accuracy than random guessing. However, other traits like openness and agreeableness were more challenging. Still, as AI models improve, so will their ability to read us. The researchers note that even a modest improvement in prediction accuracy can have significant real-world consequences when applied to hundreds of millions of users.

What can you do to protect your privacy?

For now, it is worth remembering that your AI chatbot is not a diary. At least not a private one. You can also take a proactive approach and delete your ChatGPT history regularly to remove your personal chats from its memory. This simple habit can significantly reduce the amount of data available for profiling.

Additionally, consider reviewing your privacy settings on all AI platforms. Many services allow you to opt out of data collection for training purposes. You might also want to avoid sharing highly sensitive personal information—like mental health struggles or financial details—in conversations with chatbots. If you need advice on such topics, consult a human professional instead.

Building on this, it’s wise to treat every chatbot interaction as potentially public. Think before you type: would you be comfortable if this conversation appeared on a billboard? If not, it’s probably best not to share it with an AI.

The bigger picture: AI and the future of targeted advertising

This research highlights a growing trend: the convergence of AI and psychology for commercial gain. Companies like Google, Meta, and OpenAI are sitting on vast troves of conversational data. With the right algorithms, they can turn that data into detailed psychological profiles for hyper-targeted advertising.

However, there are also ethical implications. If AI can predict your personality from chat logs, it could be used to manipulate your decisions—from what you buy to how you vote. Regulators are beginning to take notice, but the technology is moving faster than the law. As a result, individual vigilance is currently the best defense.

In conclusion, your ChatGPT history is more than a log of queries—it’s a window into your personality. While the technology is fascinating, it also demands caution. Stay informed, protect your data, and remember: in the digital age, your words have power beyond what you imagine.
