AI Blindness Is the New Ad Blindness

If you’ve Googled how to make ChatGPT sound more human in the last six months, the SERP told you to memorize a magic prompt. Ban this word. Imitate that style. Add a typo. None of it works for long, and the reason is that the fix isn’t a prompt at all.

I’m seeing AI content everywhere. It lives in my dreams. Emails, blog posts, comments on Reddit, text messages, Facebook posts from friends and family. I’m not opposed to using AI to write. I’m just really, really tired of people using AI to write without actually trying to make it their own.

It’s wearing everyone out. We’ve all developed a new kind of blindness, and your readers can spot AI from across the room before they’ve read a single word, the same way we all learned to skip banner ads twenty years ago.

So can you actually make ChatGPT sound more human? Not with a prompt. You can mask surface tells like em-dashes and overused AI vocabulary, but no prompt can manufacture perspective, and perspective is what readers actually catch. The fix has to sit upstream of the draft, in a voice profile and an interview pass before any sentence gets written.

Don’t get me wrong, I love AI. I’m a huge fan. I’ve built several MCP servers that I use personally, and I’ve even launched my own MCP server for Siren Affiliates. I have at least two max-tier AI subscriptions running constantly, and it’s no surprise that I use it to help me write a lot of content.

But y’all have decided that AI can replace you instead of multiply you, and at least right now that is leading to a gigantic swell of soulless content on social media, in emails, in blog posts, and just about anywhere people are asked to write something.

We’ve all developed AI blindness, the same way we developed ad blindness in the 2000s

Remember banner ads in the early 2000s? Those flashing rectangles screaming about a free iPod or punching a monkey to win a prize? We all learned to skip them. Not consciously. Our eyes just stopped landing on the right side of the page. The shape of an ad got filtered out before comprehension, and that’s why display advertising had to keep reinventing itself for the next two decades.

The same thing is happening right now with AI content, and it’s happening fast.

We’re long past the point where the em-dash is the only tell. People who work with AI and read its writing all day, every day, recognize it even without one. Things like the tidy three-point structure, the “Here’s the thing” opener, the “It’s not just X, it’s Y” rhetorical move that every LinkedIn post seems to make now, and, ironically enough, the endless enumerating (GIVE ME A BREAK, WILL YA?).

Your reader pattern-matches the shape of the sentence before they read the words. The skip happens before any conscious decision. They didn’t decide your post wasn’t worth reading. Their eyes just slid off it.

I do this. You do this. Everyone you’re trying to reach does this. And if they’re not doing it yet, they’ll be doing it eventually.

If you’re publishing anything right now, this is happening to your stuff whether you know it or not. People will rarely call you out; they just skip it and move on. Or they read it, they remember nothing, and the next time your name shows up they keep scrolling. The cost is invisible until it isn’t, and by the time it isn’t, you’ve trained your audience to ignore you.

Why “how to make ChatGPT sound more human” is the wrong question

If you’ve Googled “how to make ChatGPT sound more human” in the last six months, I get it. The whole internet is asking. The first page of results is a stack of magic-prompt blog posts and Reddit threads where someone claims they’ve found the secret system prompt that strips the AI tells out.

The truth is, a prompt can mask the surface tells. You can tell ChatGPT to stop using em-dashes. You can ban its favorite words and correct its speech patterns. You can ask it to write in punchy short sentences, or toss in an occasional curse word or whatever. But it never quite pulls it off.

And you know why? Because AI is probabilistic. It’s never in its entire existence thought a truly original thought, and at best it can fake a perspective. And the absence of perspective is what your reader is actually catching, even if they can’t put words to it. The em-dash isn’t really the problem. The em-dash is just a symptom that nobody behind the keyboard had an opinion strong enough to choose a different punctuation mark.

Chris Lema put it well in his piece on why the slop isn’t coming from the robot. The slop is coming from the people who didn’t do the work before they sat down at the keyboard. The robot is doing exactly what it was asked to do. It’s the asking that was lazy.

Which means the answer isn’t a better prompt. The answer is a process that puts something real into the prompt before you press enter.

What your readers actually notice

The em-dash is the obvious one. Whenever I see an em-dash in a Facebook post now, I immediately write it off as low effort. I assume the person didn’t actually try to write the content. I might be wrong sometimes, and I don’t care. The pattern is reliable enough, and so is your reader’s.

The deeper tells aren’t punctuation. They’re structural, and most of the prompt-engineering blogs miss them entirely because surface fixes are easier to sell.

The one I notice the most is the bold-label paragraph, where every chunk starts with a boldfaced phrase and then the sentence after it just explains the boldfaced phrase. It’s the “make it scannable” instinct gone radioactive.

Then there’s the “It’s not just X, it’s Y” hook that every other AI-generated LinkedIn post opens with now, which used to be a perfectly fine rhetorical move and is now spent. The tidy three-point closure is another one, the kind of ending where a piece wraps with three parallel sentences in a rhythm so clean it sounds composed by a machine, because it was.

And the worst, the one the bullet test catches every time, is prose where you could put a bullet point in front of every sentence and nothing about the meaning would change. That’s not writing. That’s a list pretending to be a paragraph.

But worst of all is this general…eye-glazingly bad writing. There are so many times where I’ll see entire replies by an AI agent and think “what the hell is this thing on about? This makes absolutely no sense at all.” It’s not even necessarily the formatting; it’s just how it puts things together, loading them up with jargon I’ve never heard before and sounding generally, well…robotic.

None of these are individually proof of AI. Real humans use all of them. The tell isn’t any single one. The tell is the cluster. Your reader’s instinct catches the shape of three or four of these stacked together long before they consciously notice any one of them, and at that point you’ve lost. They’ve already moved on. This is what I mean by “AI blindness.” It happens in milliseconds.

This happened to me on a proposal that cost me trust with a high-paying client. I’d done the work. I just let AI write the words. They smelled it. That story’s worth reading on its own if you sell anything that requires people to trust your thinking, because the cost was real and the recovery wasn’t fast.

The cost of skipping the system

Reader trust collapses faster than you’d expect.

The thing that makes this so brutal is that your reader usually can’t articulate what went wrong. They don’t think “this is AI-generated content and I’m now downgrading my trust in this person by twelve percent.” They just feel, somewhere below the level of language, that you didn’t care. That you mailed it in. That whatever you sent them wasn’t worth the time you should have spent on it.

In a B2B context, where someone is paying you for thinking, “didn’t care” is fatal. They’re not paying for the words. They’re paying for the thinking the words represent. When the words look like they came out of a slot machine, the thinking is suspect by association, even when the thinking was real. I learned this on the proposal I mentioned above. I’d done all the work. I just let AI hand it off, and the handoff was the only part the client saw.

But the worst price of all is the cost of your own independent thinking. If you’re reading this, you’re probably pretty fuckin’ smart, and letting a robot wholesale replace you is an insult to your intelligence. It implies that you actually, genuinely believe you can be replaced by an agent, which means you either don’t realize that AI can’t produce an original thought, or worse, you don’t believe that you can.

So what do you do?

I get it, you’re probably super busy and writing content takes up too much time, so maybe you don’t believe you can do this, simply because you don’t have time to think about it. The thing is, you absolutely CAN use AI to help you write. Just don’t sell yourself short by excluding your own input and effort from the content published under your name.

Here’s the actual workflow I use to multiply myself without shipping slop. Instead of trying to make AI do everything, you use a specific process that gets about 80-90% of the writing done, and then you polish it and get it the rest of the way there yourself. Like…you literally just edit the post. Writing stuff using your own brain.

If you write anything that someone else needs to read and trust, you can’t ship one-shot AI output. The prompt is not the lever. The system around the prompt is. If you do it right, you can produce far more genuinely great content than you’ve ever produced, and still keep the quality high enough that you aren’t secretly ashamed when you claim “I wrote that.”