The AI Writing Tell That's Become Almost Universal

Artificial intelligence has a new linguistic fingerprint — and once you see it, you can't unsee it. The phrase construction 'it's not just this — it's that' has become so prevalent in AI-generated content that experts say it's now almost a guaranteed marker of synthetic writing.


The Sentence That Gives AI Away

Artificial intelligence is getting better at mimicking human writing every year — but it keeps leaving the same fingerprints behind. One pattern has become so common that it's practically a signature: the construction "It's not just [this] — it's [that]."

According to a recent piece flagged by TechCrunch, this rhetorical structure has saturated AI-generated writing to the point where encountering it is nearly a guarantee you're reading something written by a machine, not a person.

Why Does AI Love This Phrase?

The pattern isn't random. Large language models are trained to sound insightful and authoritative, and this particular sentence construction mimics the cadence of a well-reasoned argument. It signals contrast, depth, and nuance — qualities that AI models have learned humans associate with quality writing.

The problem is that these models converge on the same rhetorical moves again and again. When millions of AI-generated texts use the same structural crutch, what was once a clever rhetorical device becomes a dead giveaway.

Linguists and content analysts have started cataloguing these patterns as AI "tells": verbal tics that betray a synthetic origin even when the surrounding text is fluent and coherent. Other entries on the list include overuse of the word "delve," suspiciously balanced pros-and-cons lists, and an almost compulsive need to summarize every point twice.
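Some of these tells are concrete enough to check mechanically. As a purely illustrative sketch (the pattern list and the `scan_for_tells` function are our own, not any published detector), one could flag a couple of them with regular expressions:

```python
import re

# Illustrative patterns only, drawn from the tells named above.
# Real detectors are far more sophisticated; regexes like these
# produce plenty of false positives on human writing.
TELL_PATTERNS = {
    # "it's not just X — it's Y" (also catches hyphen/en-dash variants)
    "not-just-but": re.compile(
        r"\bit'?s not (just|only) .{1,60}?\s*[—–-]+\s*it'?s\b",
        re.IGNORECASE,
    ),
    # any form of the famously overused verb "delve"
    "delve": re.compile(r"\bdelv(e|es|ed|ing)\b", re.IGNORECASE),
}

def scan_for_tells(text: str) -> dict[str, int]:
    """Count how many times each tell pattern appears in the text."""
    return {name: len(pat.findall(text)) for name, pat in TELL_PATTERNS.items()}
```

Counting matches is trivial; the hard part, as the article notes, is that fluent human writers use these constructions too, so raw pattern counts can only ever be a weak signal.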

The Broader Problem for Publishers

For media organizations, marketers, and academic institutions, the proliferation of these patterns raises serious questions about authenticity and trust. If readers — even casually — start to associate certain sentence structures with AI, it erodes confidence in any writing that uses them, synthetic or not.

Some publishers have begun deploying AI detection tools, though these remain imperfect. Others are doubling down on voice, idiosyncrasy, and personal experience as the markers of genuine human writing — qualities that, for now, remain difficult for AI to replicate convincingly at scale.

The irony is sharp: the more AI tries to sound polished and authoritative, the more uniform and detectable its output becomes.

What This Means for the Future of Writing

As AI-generated content floods the internet, readers are developing an intuitive radar for synthetic prose — even if they can't always articulate why something feels off. Patterns like "it's not just X — it's Y" are training that radar.

For writers and journalists, the takeaway is clear: the most human thing you can do is be specific, be weird, and resist the urge to sound universally polished. Idiosyncrasy is the new credibility.

For AI developers, the challenge is harder. Fixing one tell just means a new one emerges — because as long as models are optimizing for what sounds authoritative, they'll keep converging on the same moves.

The sentence that started this whole conversation? "It's not just a clue — it's almost a guarantee." Which, yes, was written by a human. Probably.


Source: TechCrunch
