A few months ago, I sent a consulting proposal that nearly cost me a client.
I’d actually done the work, and spent a lot of time on the research. I went through their site, talked through what they actually needed, mapped out the engagement, made sure the information was all there. By the time I sat down to write the proposal itself, the synthesis was done. I had everything I needed in front of me.
And then I let AI basically one-shot the writing.
I overestimated AI’s ability to actually create a persuasive sales pitch. I figured I’d already done the thinking, and the agent had all the context it could possibly need. The hard part was over, and surely the model would take this last mile and ship it. I didn’t read it as carefully as I should have. I skimmed it, told myself it was fine, sent it off, moved on to the next thing.
The client came back frustrated. Not about the engagement, not about the price, not about the scope. About the proposal. They’d read it as low effort. They felt like I hadn’t taken their problem seriously. From their seat, it looked like I’d phoned it in.
The reality was the opposite. I’d done a ton of work on that proposal, but that didn’t matter, because the customer never saw it. All they saw was the piss-poor proposal I handed them.
What makes me cringe the most is that this is a high-paying consulting client. I’m not a cheap consultant, and from where they were sitting, they were watching me talk to ChatGPT and bill them for it. That’s a really bad look.
Thankfully, I was able to smooth it over. I apologized for my bad first attempt and was given another chance to fix it. I did, and ended up selling that project after all.
But wow, it sure was a wake-up call. She told me she sees this kind of low-effort AI-written content all the time, and she didn’t even really read it because she knew it was AI.
In other words, a sales pitch that might have been seen as good and credible a couple of years ago now instantly draws scrutiny and signals that the client isn’t important enough for my effort. That’s ALLLLL bad y’all.
Why your writing gets flagged as AI (and why the detector isn’t who you should worry about)
Detectors will flag your writing because of statistical patterns like “low perplexity” and “uniform sentence rhythm”, but honestly detector flags are a symptom, not the real risk. Your readers catch you long before any tool does, on a much lower bar. They notice that you didn’t care enough to write the thing yourself, and that’s the thing that costs you.
If you’ve been Googling “why does my writing get flagged as AI,” I’m guessing you ran something through GPTZero or Originality.ai, got a percentage you didn’t like, and now you’re hunting for a workaround, but I think you’d be much better off if you redirected that energy.
The detector tools flag based on perplexity and burstiness. Statistical patterns in word choice. They are not what catches you in real life. The thing that caught me on that proposal wasn’t a tool. It was a person who’s been emailing with me for years, who knows what I sound like, who sat down to read what I sent and felt something off in the first paragraph.
Reader instinct catches things detectors don’t. It catches sentences that flow with no friction, when real human writing has bumps and asides and second thoughts. It catches tidy structures that don’t match the way you actually talk. It catches a particular brand of confident neutrality that real opinions never have. It catches em-dashes, colons, tidy enumerated lists, and all of those other tells you recognize right away too, of course.
A detector tries to determine if your text looks statistically like AI, which is a question about whether you got caught. A reader, however, takes that a step further after noticing. They’ll question if your content is even worth reading. After all, if you don’t care enough to write it, why should they care enough to read it?
It’s even worse when it’s on your actual social account. Your face is on the profile, and when people see that face and then see the content, many of them will feel like they’re being tricked, or lied to.
You don’t want people to think you’re trying to lie to them, do you? Is that what you want people on social media to be thinking when they read your content? I seriously doubt it.
This is part of what I call AI blindness. Your audience is pattern-matching the shape of AI writing now, the way we all learned to skip banner ads twenty years ago. They might not know they’re doing it. They’re doing it.
AI doesn’t have your perspective
People say AI doesn’t have taste. I think another way to think about it is that AI doesn’t have perspective.
It can guess a perspective. It can fake a perspective. It can do a credible impression of one if you give it enough scaffolding. But it will never truly have one, and it certainly doesn’t have your perspective. It hasn’t lost a client. It hasn’t been embarrassed. It has nothing to defend.
A perspective is a position you would actually argue for. It’s a counter-take to the obvious answer. It’s a specific lived moment that made you believe the thing you believe. It’s a sentence you would cut from your own draft because it isn’t yours, even if it’s technically good. AI cannot do that last move, because it has no concept of what is and isn’t yours. Everything is equally not-its. There’s just no amount of context that exists for this sort of thing because so much of it is subconscious.
When AI writes for you without that perspective baked in, it averages. It produces the most-likely sentence given the prompt. The most-likely sentence becomes the “correct answer”, because that’s literally how AI works. Your reader, even if they can’t articulate any of this in those words, feels the absence. They get to the end of the paragraph and they don’t quite know what happened, just that nothing landed.
It’s a medium place, like Cincinnati
At Siren, I have a perspective that the best affiliate program plugin makes it easy to see and understand why your program gave a commission to an affiliate. Competing products spread the affiliate rate across every object they can. Want a product to have a different affiliate rate? Set it on the product. Want that affiliate to get a higher rate than others? Set it on that affiliate. The problem is that six months later, this makes it really hard to figure out why on earth a commission was calculated the way it was. To fix this in Siren, we decided from day one that when you want to use a different rate, you create a different program and assign the collaborators to that program instead. This centralizes the logic and makes it a lot easier to maintain.
That? That’s perspective that came from me. That came from years of working with, troubleshooting, and thinking about affiliate program management. That’s perspective that makes the content on Siren’s site decidedly less medium, because I’m saying something worth saying. AI would be highly unlikely to ever come to that conclusion on its own, because that perspective was created from a life experience it has never lived.
That? That’s what beige, AI-written content is lacking.
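To make that design difference concrete, here’s a minimal, hypothetical sketch. This is not Siren’s actual code, and the function and class names are mine, but it shows the contrast between scattering rate overrides across objects and centralizing them in a program:

```python
# Hypothetical sketch (not Siren's actual code) of the two designs.

# Scattered-rate model: the effective rate depends on several objects, so
# reconstructing *why* a commission happened means re-checking each override.
def scattered_rate(product_rate, affiliate_rate, default_rate):
    # Precedence chains like this are easy to write and hard to audit later.
    if affiliate_rate is not None:
        return affiliate_rate
    if product_rate is not None:
        return product_rate
    return default_rate

# Program-centric model: every commission points at exactly one program,
# and the program owns the rate. Auditing is a single lookup.
class Program:
    def __init__(self, name, rate):
        self.name = name
        self.rate = rate

def program_rate(program):
    return program.rate
```

In the scattered model, answering “why 30%?” means walking the whole precedence chain months later. In the program model, the answer is simply “because this sale belongs to that program.”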
What “stop trying to bypass detectors” actually looks like
If you came here looking for a humanizer plugin or a magic prompt to get under GPTZero’s threshold, I’m sorry, this isn’t going to be that post.
The honest answer is shorter and less satisfying. Stop it. As my dad would say, “You’re just slappin’ lipstick on a pig.”
Instead, you need a workflow that flavors AI’s content with your perspective, writes the way you’d actually write, and is then finished off by you manually. This is, in essence, the workflow I currently use for all of my content. It’s absolutely a manual process, and it does require effort, but I’m still easily producing 10 times the content I was publishing in a pre-AI world. So even though it’s not an infinite content glitch or whatever, it’s still pretty damn good, and I would have been salivating for this workflow in 2014.
How to create thought leadership content with AI
AI cannot create thought leadership content for you. It can multiply leadership thinking you already do, by capturing your perspective, structuring it cleanly, and producing drafts at the speed of your typing. If you don’t have a perspective on the topic, no AI workflow will generate one.
Most “thought leadership with AI” advice tells you to ask ChatGPT for ten contrarian opinions about your industry and post the best one. That’s not thought leadership. That’s regurgitating the median internet take in a slightly different shape, then putting your name on it. It’s exactly what your reader’s instinct is calibrated to skip past, the same AI blindness I described earlier.
Real thought leadership with AI uses AI as a multiplier on a perspective you already have.
This is what the prompt-engineering blogs miss. They treat thought leadership as a content format. It isn’t. It’s the byproduct of a person with stakes thinking out loud, captured cleanly. Your job is to be that person. AI’s job is to make you faster at it.
One last thing about Siren and why we don’t ship AI-written content
I run Siren, which is a WordPress plugin for affiliate and incentive programs. The team is small. We write about complicated stuff: agentic commerce, attribution in a world where AI agents do the clicking, what happens to commission tracking when nobody touches a referral link anymore. We could absolutely one-shot a blog post on any of that. The models know the words.
We don’t, because we know our audience would smell it within a paragraph.
The Siren team’s piece on affiliate attribution in the age of AI agents is a good example of the kind of writing that doesn’t survive averaging. It takes a position. It argues for a specific primitive (bound artifacts) over the obvious one (more clicks, better tracking). That argument doesn’t exist in the average of the internet. It exists because somebody on the team had a perspective and wrote it down. We use the same writing process I use for my own blog. The shortcut isn’t on the table because we already know what happens when you take it.
I learned that lesson on a proposal.
That client and I worked through it. I apologized for the writing, explained what had happened, and on the next proposal I sent them, I wrote every word myself. Same hours billed for the thinking. No AI in the final draft. They didn’t say a word about effort. They just said yes.
I got lucky. Not every client gives you the second chance. Sometimes the trust just ends and you don’t even know which deal you lost or when. The point of the workflow isn’t to recover gracefully after this happens. The point is to make sure it doesn’t happen.
The detector tools are a distraction. Your readers are the real test, and they don’t grade on perplexity. They grade on whether you cared enough to put your perspective in the sentence before you hit send. Do that, and the AI question stops being interesting. Skip it, and no humanizer in the world is going to save you.