The Fluency Trap
You step sideways into the AI. It feels like efficiency, but is it an abdication? Fluency seduces where error alerts; it trades contact for coherence. Are you using the tool to explore the world, or to hide from the friction of it? Stop letting the lullaby of a smooth summary replace your sight.
You can predict the exact moment it happens.
You are standing at the edge of the real work. Not the “work-looking” work: the slide deck, the status update, the polished framing. The real thing. The raw, jagged material of reality.
The messy customer transcript. The email thread where no one agrees. The metric that contradicts your strategy. The real work is the uncomfortable detail that refuses to compress.
Then, with a motion so small it feels like efficiency, you step sideways. You open the AI and translate the raw friction of reality into a clean prompt.
I know this moment because I am in it right now. Even as I write these words, I am using a model to help me structure them. I can feel the pull of the “smooth version”.
Fluency seduces where error alerts
The trade is invisible. You are not trading truth for lies. You are trading contact for coherence. Fluency turns loaded assumptions into something that feels like clean analysis.
We spend a lot of time talking about hallucinations. We should. But there is a subtler risk that grows as models get more capable: even when an answer is directionally correct, it can arrive with a kind of polish that makes you stop asking the next question.
An error is a flare; it wakes you up. Fluency is a lullaby. Psychologists have a name for one part of this. The Illusory Truth Effect describes how information that is easier to process, or feels familiar, tends to be judged as more true. That does not mean the model is trying to trick you. It means your nervous system has a shortcut: smooth feels safe.
And safety feels like relief. Relief lowers the felt need to verify. This is the hinge. It is how you go from "the summary helps me" to "the summary becomes my first contact with reality."
The jagged frontier
There is a reason this is hard to notice. AI is not uniformly good or bad; it is uneven.
A Harvard Business School field experiment described a "Jagged Technological Frontier." On tasks inside the frontier, AI made knowledge workers faster and improved quality. On tasks outside it, performance dropped.
The tool can be a power-up and a trap, sometimes in the same afternoon. The danger is what happens when you treat the tool as if it is evenly reliable. Or worse, when you let it become the default interface to the world.
The feedback loop you cannot see
Most people imagine feedback loops as a technical problem: biased data in, biased answers out. But there is a more immediate loop running through you.
Recent research from UCL suggests that users "absorb" a model's biased judgments after working with the system. You prompt dozens of times a day. You accept the smooth phrasing. You paste it into the PRD. You repeat it in the meeting.
That language becomes the shared explanation. Then it becomes the data the next system trains on. The loop does not only reinforce bias; it reinforces whatever frame is most compressible.
The counter-weight (AI as a lens)
To be fair, the story is not "AI makes everyone narrower." Research from Stanford GSB suggests people can be more willing to consider opposing views when information comes from AI rather than other humans.
AI changes the shape of influence. Sometimes it opens you. Sometimes it closes you. Often it does both, depending on how you use it and what you were hoping to get.
Pragmatic attention sovereignty
This is not a call to stop using AI. That would be a failure of pragmatism. The goal is to develop attention sovereignty. Not as a moral stance. As a survival skill.
It is the discipline of choosing reality-contact even when the summary is seductive. You can still use the tool. You can still ship features. You can still benefit from speed. But you do not let fluency become your substitute for seeing.
A practical way to hold the line:
- Touch the original artifact first when the decision actually matters.
- Ask for the contradictions, not just the conclusion.
- Notice the relief, and treat it as a signal of cognitive ease, not as proof of truth.
The meta-lesson
The reason I am publishing this while using AI to help refine the draft is that the struggle itself is the point. The model will happily produce a clean story. It is trained to help. But what it cannot do is decide what you are willing to skip.
Before you ask the AI to explain your world to you, ask yourself what truth you are hoping the summary will hide. If you cannot name it, the AI has already started to feel like the world. It is not. It is a story about it. And stories, no matter how fluent, are never the same as seeing.
Further reading
- Why hallucinations persist: https://arxiv.org/html/2509.04664v1
- HBS follow-on work on how professionals get “persuasion bombed” when validating LLMs: https://www.hbs.edu/faculty/Pages/item.aspx?num=68073