Writing Specs With AI: A Practical Guide
AI didn’t kill product specs.
Bad specs were already dying.
AI just exposed how many teams never really knew what a good spec was in the first place.
I see it everywhere now: “Just ask ChatGPT to write the PRD.” “Let AI generate the user stories.” “We’ll clean it up later.”
Later rarely comes.
The uncomfortable truth
AI is exceptional at writing words.
It is terrible at deciding what matters.
If you give it fuzzy thinking, it will give you polished nonsense. If you give it clarity, it will give you leverage.
Let me show you what I mean.
What polished nonsense actually looks like
A founder I worked with last year wanted to add a “teams” feature to their project management tool. He prompted Claude with: “Write a PRD for adding team collaboration features.”
The output was beautiful. Professional formatting, clear sections, acceptance criteria, even a rollout plan. It included features like real-time presence indicators, team chat, shared dashboards, role-based permissions, activity feeds, and @-mention notifications.
It was also completely wrong for their situation.
Their users were freelancers managing client projects solo. The “team” they needed was one freelancer inviting one client to view progress — read-only, minimal permissions, no chat. The AI had generated a Slack-shaped feature for a product that needed a “share link” button.
He spent two weeks of engineering time building the wrong version before a beta user asked: “Why is this so complicated? I just want to show my client the timeline.”
The spec was polished. The thinking was fuzzy. The AI just made the fuzziness look professional.
What AI is actually good at in specs
Used well, AI is a force multiplier, not a replacement.
Here’s where it shines:
Turning rough thoughts into structured sections
Expanding acceptance criteria you already understand
Stress-testing edge cases you might miss
Translating product intent into engineer-friendly language
Creating multiple spec versions for different audiences
Notice the pattern? AI accelerates expression, not judgment.
The practical way to write specs with AI
Here’s the workflow I’ve refined over the past 18 months. For a recent feature at a developer tools company, it cut spec-to-alignment time from 11 days to 3.
Step 1: Do the hard thinking first (without AI)
Before opening any AI tool, I write what I call the “messy brief” — plain language, no formatting, just forcing myself to articulate the core decisions.
Here’s an actual example from a notifications feature I specced recently:
This is for power users who manage 50+ daily tasks. They’re missing deadlines because notifications get lost in email. The problem isn’t “no notifications” — it’s notification fatigue plus no prioritization.
We’re NOT building a notification center. Not building email digests. Not building Slack integration (yet).
Success is: users who enable this miss fewer deadlines. Failure is: we add noise and they disable within a week.
Open question: Do we filter by deadline proximity or by user-defined priority? Need to decide before eng.
Messy. Incomplete. But it contains actual decisions.
If you can’t write something like this, AI won’t save you.
Step 2: Use AI to structure, not invent
Now I bring AI in — but with constraints. The prompt that works:
Below is my rough thinking for a feature. Turn this into a structured product spec with the following sections: Problem Statement, Target User, Scope (what’s included), Non-Goals (what’s explicitly excluded), Success Metrics, and Open Questions.
Rules: Do not invent features I haven’t mentioned. Do not add scope. If something is unclear, flag it as an open question rather than assuming an answer.
[paste messy brief]
The “do not invent features” constraint is critical. Without it, AI defaults to expanding scope.
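Because the constraints matter more than the wording, I keep this prompt as a template rather than retyping it. Here's a minimal sketch in Python — the section list, rule text, and function name are mine, not any tool's API:

```python
# Sketch: wrap a messy brief in the Step 2 structuring prompt.
# SECTIONS, RULES, and the function name are illustrative choices.

SECTIONS = [
    "Problem Statement",
    "Target User",
    "Scope (what's included)",
    "Non-Goals (what's explicitly excluded)",
    "Success Metrics",
    "Open Questions",
]

RULES = (
    "Do not invent features I haven't mentioned. "
    "Do not add scope. "
    "If something is unclear, flag it as an open question "
    "rather than assuming an answer."
)

def structuring_prompt(messy_brief: str) -> str:
    """Build the constrained prompt that turns a rough brief into a spec."""
    return (
        "Below is my rough thinking for a feature. Turn this into a "
        "structured product spec with the following sections: "
        + ", ".join(SECTIONS) + ".\n\n"
        "Rules: " + RULES + "\n\n"
        + messy_brief
    )
```

Send the result to whichever model you use; because the rules travel with every brief, the scope-inflation guard can't be forgotten on a busy day.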
Step 3: Ask AI to challenge you
This is the most underused step. After the structured draft, I run this prompt:
Review this spec as a skeptical staff engineer. Identify: (1) Assumptions that aren’t validated, (2) Edge cases not addressed, (3) Scope that seems larger than the problem requires, (4) Dependencies or risks not mentioned. Be specific and critical.
For the notifications spec, this surfaced: “You haven’t specified behavior when a user has 20+ tasks due within the same hour. Does priority ranking still work, or does this become noise again?”
That question reshaped the entire filtering logic before engineering began.
Step 4: Iterate in layers
One pass for clarity. One pass for edge cases. One pass for execution readiness.
Specs are built — not generated.
The real numbers
I’ve tracked this across eight specs over six months:
Average time from initial thinking to eng-ready spec: dropped from 9 days to 4
Revision cycles after eng review: dropped from 3.2 to 1.4
Scope changes after development started: dropped from 2.1 per feature to 0.6
The speed gain isn’t from writing faster. It’s from pressure-testing earlier — catching the “wait, what about...” questions before they become code.
The real advantage of AI-written specs
Speed isn’t the real win. Feedback is.
AI lets you explore options faster, see gaps sooner, align teams earlier, and reduce rewrite cycles.
The teams winning with AI aren’t writing more specs. They’re writing fewer, sharper ones.
The takeaway
AI won’t make you a better product thinker.
But it will expose whether you are one.
If your specs improve with AI, you had clarity. If they get worse, you were outsourcing thinking you shouldn’t have.
And that’s the real lesson.