Key Takeaways
- AI detection scores from different tools do not directly reflect writing quality or Google ranking potential.
- Formal, professional writing often receives lower “originality” scores even when human-written.
- Different AI detectors interpret content differently and yield different results.
- Patterns and inconsistencies affect whether detectors identify text as human or AI.
- Writing for human readers, not for high AI detection scores or SEO scores, is what achieves good results.
Understanding AI Detection and Its Scores
Common Misunderstandings About AI Detectors
There are some common misunderstandings about AI detection scores, such as those from Originality AI, GPTZero, Turnitin, and Winston AI:
- The higher the originality score, the better the quality.
- The higher the originality score, the more likely to rank on Google.
Both are faulty generalizations.
Now, it may be true that a higher score means the piece is more likely to have been written by a human. But the score says nothing about whether the content is enjoyable to read or useful.
Originality Scores and Google Rankings
It is fairly common that the #1 ranking article for a keyword (Google search) has a lower originality score than the #2 or #3 ranking article.
As for quality, it is well established that the more formal a piece of human writing, the lower its originality score tends to be compared to more casual writing. Many formal articles (such as science or technology articles) and journal pieces score between 30 and 80, depending on the AI detection tool. Since 60 is typically the minimum score to be considered “likely human-written”, some of these top-shelf, professionally written pieces are being scored as “possibly AI-written” or even “likely AI-written”.
How AI Detectors Operate
Differences Among AI Detection Tools
First, we need to be aware that each tool judges content a little differently. And, more importantly, the meaning of the scores is different.
That second point is important! The fact that the same score means different things on different AI detection tools shows that these scores carry little inherent meaning. (Except for one specific kind of text, which I explain below.)
There is no one-size-fits-all AI detection app. One might be well suited to recognizing authentic academic writing as human-written but evaluate other kinds of text poorly. Another might be highly tuned to recognize human-written informal blog posts but less able to recognize human-written journal articles.
Some apps are also better geared toward particular LLMs. For example, I know of one AI detector that is better at detecting content from Claude than content from other LLMs. That same detector often rates AI content from ChatGPT as 90-100% human.
Patterns, Formality, and Human-Like Scores
The reason formal writing scores lower is that professional writing is well-structured. Clear, precise, readable text follows well-established patterns. People like to read content that follows those patterns; human brains love patterns.
AI detectors, however, look for pattern breaks and even errors. They look for inconsistencies and unexpected words, phrases, and structures. They treat “broken” writing as more likely human.
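The pattern-break idea above can be sketched as a toy heuristic. The function below measures “burstiness” (variation in sentence length), one signal often associated with human writing. This is an illustration only, under my own simplifying assumptions; no real AI detector is implemented this way, and the function name and thresholds are hypothetical.

```python
import statistics

def burstiness(text: str) -> float:
    """Toy 'burstiness' metric: variation in sentence length.

    Higher variation is often associated with human writing; very
    uniform sentences read as machine-like. Illustration only --
    this is NOT how any real AI detector is implemented.
    """
    # Naive sentence split on terminal punctuation.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: stdev of sentence length relative to mean.
    return statistics.stdev(lengths) / statistics.mean(lengths)

uniform = "The cat sat here. The dog ran there. The bird flew away."
varied = "Wow. The cat sat quietly on the warm windowsill all afternoon. Then it left."
print(burstiness(uniform))  # 0.0 -- perfectly uniform sentence lengths
print(burstiness(varied))   # noticeably higher
```

Real detectors combine many statistical signals from a language model; this single metric is only meant to show why uniform, patterned prose can read as machine-like.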
After examining many kinds of texts and how AI detectors score them, I have reached the following conclusions about pieces written by real humans, not AI:
- High-level professional writing on expert topics, especially academic or research-heavy articles, typically scores between 30 and 85.
- Professional pieces, including those on expert topics, written for broad appeal – with bursts of conversational elements – typically score between 60 and 85.
- Professional pieces broken into varying-length paragraphs (including one-sentence paragraphs), with plenty of conversational links and transitional elements, typically score between 80 and 100.
- Highly conversational pieces – personal blog posts written the way a person chats with friends, with many short paragraphs, including many one-sentence ones – typically score between 90 and 100.
When AI Detection Scores Are Most Accurate
Generally speaking, there is only one type of AI content that AI detection apps reliably flag as not human-written: pieces made up of mostly uniform paragraph blocks, written in predictable, common patterns, with few links and transitions, using formal structures and formal language.
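The “uniform paragraph blocks” idea can also be made concrete with a toy measurement. The sketch below checks what fraction of paragraphs fall close to the mean paragraph length; the function name and the 20% band are my own hypothetical choices, not anything a real detector uses.

```python
def paragraph_uniformity(text: str) -> float:
    """Fraction of paragraphs whose word count falls within 20% of the
    mean paragraph length. A value near 1.0 means very uniform blocks.
    Toy illustration only, not a real AI detector.
    """
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    lengths = [len(p.split()) for p in paragraphs]
    if not lengths:
        return 0.0
    mean = sum(lengths) / len(lengths)
    within_band = [n for n in lengths if abs(n - mean) <= 0.2 * mean]
    return len(within_band) / len(lengths)

blocky = "\n\n".join(["lorem " * 50] * 4)              # four 50-word paragraphs
mixed = "\n\n".join(["lorem " * n for n in (5, 60, 12, 30)])
print(paragraph_uniformity(blocky))  # 1.0 -- every paragraph near the mean
print(paragraph_uniformity(mixed))   # much lower -- varied paragraph lengths
```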
Apart from that, AI detectors are just not very useful.
Writing for Results
Writing for Humans and SEO
Google has been shouting the most important point for a long time, both before and after the era of AI-generated content: “Write for humans!” If humans like your writing and keep reading it, search engines will rank it well.
Of course, it’s not just that people read them, but that people link to them on their websites – which is probably the primary factor that tells Google people like the content. So, again, it is all about humans liking what they read.
SEO Scores and Human Appeal
The same is true for SEO scores, by the way, such as those generated by SurferSEO and AgilityWriter. Articles with SEO scores in the 60s are almost always found in the top 3 ranking spots for keywords that are not highly competitive. Only highly competitive keywords call for SEO scores of 75+. (Plus, very high SEO scores can be dangerous, as Google may judge the content as “not written for humans”.)
Practical Advice for Writers
In the final analysis, people who write their own content should aim for an AI detection score of 60-100. That is, 60% to 100% likely to have been written by a human. If the content is in that range, it will probably be judged “readable” by a human, and not flagged by a search engine as AI-written content. That doesn’t mean the content will be enjoyed by humans, because different humans like different kinds of content, but they will discern that the content is human-written.
My own deep-research articles on expert topics typically don’t get extremely high AI detector scores. They score as “probably/likely human written”, but usually not into the 90+ range.
What Does This Mean For Writers?
If you are trying to pass off AI-written content as your original human writing, then understand that some detection tools might flag your work. If you are writing articles by hand – well, with your fingers on keys – don’t let your mind be troubled with thoughts of AI detectors. They don’t matter for you.
“But Shawn, I don’t want my authentic writing to somehow get a strangely low score by an AI detector.” My answer is: relax. Do you want to change your writing style in order to possibly get a higher score on one or two odd AI detection tools? You don’t, of course.
You are writing for your audience. Or for someone else’s audience. Their opinion of your work is all that matters. If they like it, AI detection scores don’t matter. If they don’t like it, AI detection scores…don’t matter. In the latter case, focus on improving your writing. Simple as that.
Relax and do your best.

