AI Creativity Crisis: Why Gemini and ChatGPT Think Too Much Alike
Imagine asking ten different artists to paint a sunset. You’d expect ten unique interpretations—some fiery reds, some muted purples, maybe one with silhouetted birds. Now imagine they all hand you nearly identical paintings. That’s essentially what’s happening with our most popular AI assistants.
A revealing study in Engineering Applications of Artificial Intelligence has uncovered an uncomfortable truth. When tasked with creative work, leading models including Google’s Gemini, OpenAI’s GPT, and Meta’s Llama don’t just perform similarly—they converge. Their outputs occupy a surprisingly narrow slice of the conceptual universe.
The Echo Chamber of Machine Imagination
Researchers didn’t test just one or two systems. They put more than 20 different AI models through their paces, comparing them against over 100 human participants. The tasks were classic creativity tests: brainstorming alternative uses for a brick, listing unrelated words, generating original ideas.
Individually, any single AI response might seem clever or novel. The problem emerges when you look at the collective output. When researchers mapped the responses by semantic similarity, a stark pattern appeared. Chatbot answers huddled together in tight clusters. Human responses, by contrast, sprawled across the map.
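The paper's exact pipeline isn't reproduced here, but the general technique is straightforward to sketch: embed each response as a vector, then project the vectors down to two dimensions and see how they scatter. In the illustrative Python sketch below, the embedding model, the sample responses, and the use of PCA are all assumptions for demonstration, not details taken from the study.

```python
# Sketch: see how a set of text responses cluster in embedding space.
# Assumes sentence-transformers and scikit-learn are installed; the model
# name and the sample responses are illustrative, not from the study.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

responses = [
    "Use the brick as a doorstop.",           # typical, safe answers
    "Use the brick as a paperweight.",
    "Use the brick to prop a door open.",
    "Grind the brick into pigment for cave-style paintings.",  # outliers
    "Hollow it out as a hiding spot for spare keys.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder
embeddings = model.encode(responses)             # one vector per response

# Project the high-dimensional vectors onto 2D so clustering becomes visible.
points = PCA(n_components=2).fit_transform(embeddings)
for text, (x, y) in zip(responses, points):
    print(f"({x:+.2f}, {y:+.2f})  {text}")
```

Run over the study's actual data, a plot of points like these is where the tight AI clusters and the sprawling human cloud would show up.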
Different companies, different architectures, same conceptual neighborhood. Whether the prompt was for ideas or unrelated concepts, the models consistently leaned on familiar linguistic structures and repeated phrasing patterns. They were playing different instruments, but all reading from the same sheet of music.
Why AI’s Creative Range Is Fundamentally Limited
Why does this convergence happen? The limitations are baked into how these systems work. Think about what an AI lacks that every human possesses: a lifetime of messy, personal experience. The taste of rain on a childhood tongue. The specific ache of a lost opportunity. The irrational love for a worn-out sweater.
AI models process patterns from vast datasets, but they don’t live. They have no intent, no personal context, no subjective consciousness pushing against conventional thought. This absence of lived reality creates a ceiling for how far their ideas can truly diverge. You can prompt them to “be more creative” until you’re blue in the face, but you’re asking a system without a self to express one.
The research team tried to force more variety by raising the "temperature", the setting that controls how much randomness goes into word selection. A slightly more imaginative nudge was possible, but pushed far enough to matter, the outputs quickly became incoherent, and the overall range never meaningfully expanded. The models were dancing at the edges of their conceptual cages.
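For intuition on what that knob actually does: temperature rescales the model's raw scores before they become probabilities. The toy numbers below are invented for illustration; they show how low temperature concentrates probability on the safest word, while high temperature flattens the distribution toward noise.

```python
# Toy illustration of sampling temperature (scores are made up).
# Higher temperature flattens the distribution: rare words become
# likelier, but so does outright nonsense.
import numpy as np

words = ["doorstop", "paperweight", "hammer", "pigment", "spaceship"]
logits = np.array([4.0, 3.8, 2.5, 1.0, -1.0])  # hypothetical model scores

def softmax_with_temperature(logits, temperature):
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

for t in (0.5, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    dist = ", ".join(f"{w}={p:.2f}" for w, p in zip(words, probs))
    print(f"T={t}: {dist}")
```

At T=0.5 nearly all the probability mass sits on "doorstop" and "paperweight"; at T=2.0 even "spaceship" gets a real chance, and coherence goes with it.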
Your Ideas Are Being Quietly Homogenized
Here’s where it gets personal. On its own, using ChatGPT to brainstorm blog topics or Gemini to suggest marketing angles feels productive. The output often matches or even exceeds average human originality for that single instance. The danger is cumulative and largely invisible.
When millions of writers, marketers, students, and entrepreneurs use the same handful of tools for ideation, they’re all tapping into the same underlying probability distributions. They’re drawing water from the same well. Over time, this doesn’t just influence individual projects—it compresses the cultural range of ideas across entire industries.
There’s a behavioral trap here too. The study suggests people often accept AI suggestions as finished thoughts rather than using them as springboards. We stop extending the chain of thinking ourselves. Why wrestle with a difficult concept when the chatbot offers a coherent paragraph? This intellectual shortcutting further erodes diversity of thought.
This Isn’t a Bug—It’s a Structural Feature
Don’t mistake this for a problem Google or OpenAI can simply patch next Tuesday. The convergence appeared across models built by fiercely competitive companies with different technical approaches. This points to a deeper, structural constraint in how large language models generate language and ideas.
They are, at their core, prediction engines. Given a sequence of words, they predict the most statistically likely continuation based on their training data. Creativity, in the human sense, often involves defying statistical likelihood—making unexpected leaps that feel right but aren’t “most probable.”
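The mechanics scale far beyond this, but the core loop can be shown with a toy bigram model; the miniature corpus below is a stand-in, not anything a real model trains on. Always picking the most probable continuation produces the same sentence every time, which is the convergence problem in miniature.

```python
# Toy bigram "prediction engine" (a stand-in for a real LLM's next-token step).
# Given a word, it returns the statistically most likely follower seen in its
# tiny training corpus, so its output can never surprise.
from collections import Counter, defaultdict

corpus = ("the brick is a doorstop the brick is a paperweight "
          "the brick is a doorstop").split()

follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def most_likely_continuation(word):
    return follower_counts[word].most_common(1)[0][0]

# Greedy generation: from "the", always take the most probable next word.
word, output = "the", ["the"]
for _ in range(4):
    word = most_likely_continuation(word)
    output.append(word)
print(" ".join(output))  # -> "the brick is a doorstop", every single time
```

Real models sample from far richer distributions, but the gravitational pull toward the statistically safe middle is the same.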
How to Use AI Without Losing Your Creative Edge
This research isn’t a call to abandon AI tools. It’s a crucial guide for using them wisely. The most effective approach is to treat AI not as an oracle, but as a provocateur.
Use that first AI-generated list of ideas as a starting point, then deliberately rebel against it. If the chatbot suggests three safe marketing angles, force yourself to brainstorm three radically different ones it would never propose. Ask it for the conventional wisdom on a topic, then intentionally argue with every point.
Preserve your own messy, human ideation process. Keep a notebook for half-baked thoughts. Embrace the frustrating silence of a blank page. That friction is where unique ideas are born. AI can handle the predictable parts—the structure, the grammar, the initial research. Reserve the creative leaps, the personal connections, and the weird intuitions for yourself.
Otherwise, we risk building a future where everyone is having the same conversation, just with slightly different wording. And that’s not creativity—it’s just mass-produced thought.