You can spot AI-generated LinkedIn content from across the room. The stacked rhetorical questions. The tidy problem-action-result-lesson arc. The "here's the thing" pivot. The three-part parallel structure that every LLM defaults to when it's being helpful. If you're using AI to write your content and it sounds like everyone else who's using AI to write their content, the tool isn't the problem. The instructions are.
The Voice Gap
Most people approach AI content creation backward. They open Claude or ChatGPT, type "write me a LinkedIn post about [topic]," and get back something grammatically correct, structurally sound, and completely generic. Then they either post it anyway (because it's "good enough") or they spend 30 minutes editing it into something that sounds vaguely like them. Neither is a good use of anyone's time.
The problem isn't that AI can't write well. It can. The problem is that "write well" without further instruction means "write in the statistical average of all the writing I was trained on." And the statistical average of LinkedIn writing is painfully bland. You get posts that sound like they were written by a composite of every thought leader who ever used the word "leverage" unironically.
This is a brand positioning problem disguised as a technology problem. The same way a company that doesn't know what it stands for will produce generic marketing at scale with AI (I wrote about that here), a person who hasn't defined their voice will produce generic personal content at scale with AI.
What Actually Makes Content Sound Like You
After spending months building my own LinkedIn presence with AI as a writing partner (not a writing replacement), I started noticing patterns. The posts that performed well had specific qualities. The ones that fell flat had specific problems. So I did what any self-respecting brand strategist would do: I built a rubric.
The rubric scores LinkedIn content across five dimensions, each rated 1-5. It's calibrated for an audience of AI-literate professionals, which means the bar is higher than normal. These are people who use AI themselves. They can smell AI-generated content instantly. Generic doesn't just underperform with this audience; it actively damages credibility.
Here are the five dimensions and what they actually mean.
1. Hook Strength
Does the first line stop the scroll? LinkedIn truncates after roughly two lines on mobile. The hook has to create enough curiosity or tension that someone who doesn't know you would click "see more."
A strong hook is a concrete, specific, slightly unexpected statement that creates a knowledge gap. "I deleted our entire content calendar last Tuesday" lands. "In today's rapidly evolving landscape" does not. The test is simple: could a stranger see this line and need to know what comes next?
The most common failure mode is what I call the "bridge line" problem. Someone writes a decent opener, then immediately follows it with "Here's what happened" or "Let me explain." That kills momentum. If the first line hooks, go straight to the substance.
2. Insight Density
Is there a genuine idea here that the reader didn't already have? Not "AI is changing everything" (we know) or "brand matters" (obviously), but a specific mental model, framework, or observation the reader can steal and use in their own work.
The biggest trap is narration disguised as content. "I tried X, then Y happened, then Z" is a feature walkthrough, not an insight. The question is always: so what does this mean for the person reading it? If you can't articulate the transferable principle in one sentence, you don't have an insight yet; you have an anecdote.
Aim for at least a 50/50 ratio between what happened and what it means. Most AI-generated content is 80% narration and 20% takeaway. Flip that, or at minimum, lead with the idea and use the story as evidence.
3. Voice Authenticity
Could any AI-literate professional have written this, or does it sound like a specific human with a specific point of view? This is the hardest dimension to score and the most important one to get right.
Voice authenticity comes from details that could only come from your experience. Specific product names, situations, opinions that not everyone shares. "I was in my kitchen at 6am trying to get a prompt to work before my kid woke up" is authentic. "I found myself exploring new possibilities" is a press release.
Watch for posts that start specific and end generic. The first half has real personal texture, and then the closing paragraph drifts into what I call LinkedIn Thought Leader Voice: "This is what leadership really looks like in 2026." Stay specific all the way through. Your last paragraph should be as distinctly yours as your first one.
4. AI-Generated Tell Avoidance
This is where it gets tactical. AI writing has structural fingerprints that an AI-literate audience will catch. No single tell is damning, but they stack. Here are the patterns I look for:
- Three-part parallel structures ("Not X. Not Y. Z."): both Claude and ChatGPT default to these.
- Excessive em dashes: one per post is fine, three is a red flag.
- Transition phrases like "here's the thing" and "let me be clear."
- The clean problem-action-result-lesson arc, which is AI's default LinkedIn template.
- Stacked rhetorical questions, especially ones the author immediately answers.
- The formula "The future of X isn't Y. It's Z."
- Overly balanced sentence lengths and paragraph sizes.
- Generic intensifiers: "genuinely," "fundamentally," "the real [whatever]."
Any single one of these is minor. It's the accumulation that tips people off. If your post has balanced paragraphs, an em-dash every other sentence, and ends with "this is what X really looks like," you've tripped enough patterns that the skeptical reader has already mentally filed it as AI-generated.
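The accumulation logic can be sketched as a simple pattern counter. This is a minimal illustration, not a serious detector: the regexes, pattern names, and the threshold of three are my own assumptions, covering only a few of the tells listed above.

```python
import re

# Illustrative regexes for a handful of the tells above.
# The pattern list and the threshold below are assumptions,
# not an exhaustive or calibrated detector.
TELL_PATTERNS = {
    "em dash": r"\u2014|--",
    "bridge phrase": r"(?i)\bhere's the thing\b|\blet me be clear\b",
    "isn't/it's formula": r"(?i)isn't [^.]+\. It's ",
    "generic intensifier": r"(?i)\b(genuinely|fundamentally)\b",
}

def count_tells(post: str) -> dict:
    """Count occurrences of each tell pattern in a draft."""
    return {name: len(re.findall(rx, post)) for name, rx in TELL_PATTERNS.items()}

def trips_tell_threshold(post: str, threshold: int = 3) -> bool:
    """True once accumulated tells cross the (assumed) threshold."""
    return sum(count_tells(post).values()) >= threshold
```

The point the counter encodes is the same one above: no single match matters, only the running total.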
5. CTA / Engagement Driver
Does the post give the reader a reason to do something other than scroll past? Not a desperate "agree?" or "thoughts?" tacked onto the end, but something that naturally invites conversation or signals ongoing value.
The best engagement drivers create an open loop: an unresolved question, a tease of what's coming next, an invitation that gives the reader something specific to contribute. "I'm testing three different approaches to this — I'll share the results next week" is an open loop. "What do you think?" is a closed one dressed up as engagement.
If your post is part of a broader series or theme, say so. Serialization creates a follow incentive. Hashtag walls (#AI #FutureOfWork #Leadership) do not. They signal broadcasting, not conversation.
Scoring and What It Means
Each dimension gets a 1-5, for a total score out of 25. In practice, here's how I calibrate it: 20 or above means the post is strong and ready to publish. 15-19 means good bones but it needs targeted work on the weakest dimension. Below 15 means structural issues that require a rethink, not just polish.
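Those cutoffs translate directly into a small scoring helper. The dimension names and bands come from the rubric above; the function and key names are mine, for illustration:

```python
DIMENSIONS = [
    "hook_strength", "insight_density", "voice_authenticity",
    "tell_avoidance", "engagement_driver",
]

def verdict(scores: dict) -> tuple:
    """Total the five 1-5 dimension scores and map to a calibration band."""
    assert set(scores) == set(DIMENSIONS), "score every dimension exactly once"
    assert all(1 <= v <= 5 for v in scores.values()), "each dimension is rated 1-5"
    total = sum(scores.values())  # out of 25
    if total >= 20:
        band = "ready to publish"
    elif total >= 15:
        band = "good bones: fix the weakest dimension"
    else:
        band = "structural rethink needed"
    weakest = min(scores, key=scores.get)  # where to focus first
    return total, band, weakest
```

Returning the weakest dimension alongside the band mirrors the point below: the number matters less than knowing where the problem is.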
The most important output isn't the number. It's identifying the pattern. Maybe you consistently write great hooks but your posts die in the middle because you're narrating instead of extracting insight. Maybe your voice is sharp in the opening but drifts generic by the close. The rubric is diagnostic. It tells you where the problem is so you can fix the specific thing instead of vaguely trying to "make it better."
How to Install This as a Claude Skill
Here's where this gets practical. I use this rubric inside Claude as a custom skill, which means every time I draft a LinkedIn post, I can run it through the evaluation before I publish. Claude scores each dimension, cites specific text from my draft, and tells me exactly what to fix. It takes about 30 seconds and it's caught problems I would have missed every single time.
If you're using Claude (the desktop app with Cowork, or Claude Code), you can set this up yourself. Here's how.
Inside your project, create a folder for the skill. The path should look like this:
```
.claude/skills/linkedin-eval/SKILL.md
```

This is the instruction set that tells Claude how to evaluate your posts. The file needs a YAML frontmatter block at the top (the part between the `---` marks) followed by the rubric itself.
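From a shell, that folder and file can be created in one step (assuming a Unix-like environment):

```shell
# Create the skill folder and an empty SKILL.md to fill in
mkdir -p .claude/skills/linkedin-eval
touch .claude/skills/linkedin-eval/SKILL.md
```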
The frontmatter tells Claude when to trigger the skill. Mine looks like this:
```yaml
---
name: linkedin-eval
description: "Evaluate LinkedIn posts against a structured
  quality rubric, scoring across five dimensions: Hook Strength,
  Insight Density, Voice Authenticity, AI-Generated Tell
  Avoidance, and CTA/Engagement Driver."
---
```

Note that the continuation lines of the quoted description must be indented, or the YAML won't parse. Below the frontmatter, you write the full rubric. Define each dimension with what a 5 looks like, what a 1-2 looks like, and the specific patterns to watch for. The more specific your instructions, the better the evaluation. "Check if the hook is good" gives you generic feedback. "Flag bridge lines that deflate momentum after a strong opener" gives you something actionable.
This is the part most people skip, and it's the whole point. Add a calibration note at the end that tells Claude who your audience is and what that means for scoring. My calibration note says the audience is AI-literate professionals, which means AI tells are more likely to be noticed and generic thought-leader voice lands flat. Your audience might be different. The rubric should reflect that.
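To make the level of specificity concrete, here's what one dimension plus a calibration note might look like in SKILL.md. The wording is illustrative, not my actual rubric; substitute your own failure modes and audience:

```markdown
## 3. Voice Authenticity (1-5)

- 5: Specific details only this author could supply (named tools,
  real situations, contestable opinions) from first line to last.
- 3: Opens specific, drifts into generic thought-leader voice by the close.
- 1-2: Could have been written by anyone; no first-person texture.

Watch for: closings that zoom out to "what this means for leadership."

## Calibration

Audience: AI-literate professionals. Score AI tells harshly; this
audience will notice them. Generic thought-leader voice lands flat.
```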
Draft your post (with or without AI assistance), then ask Claude to evaluate it using the skill. You'll get a score for each dimension, specific text references, and a prioritized list of what to fix. The "One Fix" output is the most useful part: if you could only change one thing before publishing, what should it be?
Why This Is a Brand Positioning Problem
The rubric works because it's specific. It's not "write better content." It's "your hook needs to work on strangers, your insight density is below 50%, and you've stacked three AI tells in the closing paragraph." That specificity comes from knowing exactly what voice you're trying to hit and exactly who you're writing for.
That's brand positioning applied to personal content. The same principle applies whether you're a company trying to make AI produce on-brand marketing or a person trying to make AI produce content that sounds like you. Without clarity about what "sounding like you" actually means in operational terms, you're asking AI to hit a target you haven't defined. It'll try. It'll produce something competent. And it'll sound like everyone else.
The rubric is the positioning document for your personal brand's LinkedIn presence. Define the dimensions, define what good looks like, build it into your workflow as a skill. The AI gets better because you told it exactly what to measure. Your content gets better because you're fixing the specific thing instead of endlessly tweaking.
Clarity creates momentum. Even on LinkedIn.