
PRD Assistant

An AI assistant that helps engineers write better requirements — by giving them the context to understand them first.

What I did

Adoption · AI · Interaction design · Research · Usability testing
Role
Product Designer + UX Engineer
Team
1 PO · 1 Backend · 1 Frontend
Timeframe
June 2025 - February 2026

THE PROBLEM

Engineers didn't trust the AI

The platform used to define its products holds 2M+ Product Requirements Documents (PRDs), the specs that go into production products. The engineers who write them can't afford an AI that's confidently wrong: a confident wrong answer in a spec becomes a confident wrong system.

When Gen AI tools started appearing internally, the response was skeptical: "Is this just Gen AI doing random things?" The barrier wasn't technical. It was trust. And trust couldn't be promised — the AI had to earn it, in front of the user, every time.

THE APPROACH

Make the thinking visible

The default for AI products is to hide the work. The user types something, waits, gets an answer. Whatever happened in between is opaque — and for a spec engineer, opaque means untrustworthy.

So the assistant was designed around a different default: show the work, all of it, all the time. When the AI is gathering context, the user sees what it's pulling from. When it's reasoning, the user sees the steps. When it produces an output, the user sees what it grounded the answer in.

THE DESIGN

Three message types

Showing the work meant the message stream itself had to do more than display answers. I designed three message types, each with a distinct visual treatment:

Conversational carries the back-and-forth — questions, clarifications, the user prompting and the assistant responding.

Reasoning shows what the AI is doing in real time: which sources it's pulling from, what steps it's executing.

Output is the answer itself — always grounded, always citing the source it came from. Distinct enough that the user knows when the AI is thinking versus when it's answering.
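The three message types can be sketched as a discriminated union, so each message carries its own visual treatment. A minimal TypeScript sketch; all type and field names here are illustrative, not the actual codebase:

```typescript
// Illustrative model of the three message types in the stream.
type Message =
  | { kind: "conversational"; text: string }               // back-and-forth
  | { kind: "reasoning"; step: string; sources: string[] } // work shown live
  | { kind: "output"; text: string; citations: string[] }; // grounded answer

// Each kind maps to a distinct visual treatment, so the user always knows
// whether the AI is talking, thinking, or answering.
function treatmentFor(msg: Message): string {
  switch (msg.kind) {
    case "conversational":
      return "bubble";
    case "reasoning":
      return "inline-trace";
    case "output":
      return "card-with-citations";
  }
}
```

The point of the union is that the renderer can never show an output without its citations field, or a reasoning step without its sources: the type forces the "show the work" default.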

A DESIGN DECISION

When chat won over forms

The first version had structured input. When the assistant offered next steps — "split this into atomic requirements?" / "generate test cases?" — it presented them as a small form: radio buttons, a Done button, the conventional B2B pattern.

A few weeks in, the team flagged feedback they'd been hearing from users: the form felt like a blocker. They'd be mid-thought, the assistant would suggest something useful, and the form would pull them out of the flow. They were losing momentum.

The fix was small in code: surface the same options as bullets in the assistant's message, no form, no Done button. Click one to continue, ignore them to keep going. The structure was still there — the interface stopped enforcing it.
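The shape of that fix can be sketched in a few lines, assuming a simple message model (names are illustrative, not the actual implementation): suggestions travel inside the assistant's message as plain bullets, and clicking one just becomes the next prompt.

```typescript
// Illustrative v2 pattern: suggestions live inside the message, not a form.
interface AssistantMessage {
  text: string;
  suggestions: string[]; // optional clickable bullets; ignoring them costs nothing
}

// Render suggestions as bullets appended to the message body.
function render(msg: AssistantMessage): string {
  const bullets = msg.suggestions.map((s) => `• ${s}`);
  return [msg.text, ...bullets].join("\n");
}

// Clicking a bullet simply becomes the user's next prompt; no modal, no Done step.
function promptFromClick(msg: AssistantMessage, index: number): string {
  return msg.suggestions[index];
}
```

The structure (a finite list of next steps) is unchanged from the form version; only the enforcement is gone.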

v1

Form pattern

[UI mock: the assistant says "I've improved the requirement. Anything else I can help with?", then presents a form with radio options ("split this into atomic requirements?", "generate test cases?"), a Done button, and the prompt field below]

Structured, but blocked the flow

VALIDATION

What changed when they used it

The adoption pattern told us as much as the numbers. The tool wasn't pushed top-down — it spread through teams that found out about it from colleagues. That kind of growth is the cleanest signal that something is working: people don't share friction.

The shape of the asks also told us something. Users weren't asking the assistant to write requirements for them — they were asking it to clarify, validate, and contextualize what they already had. The assistant became a way to understand the system better, not just produce text faster.

Adoption

1,756

interactions over 6 months

118

active days in the last 30

15

unique AI routes used

What users ask for

Requirement breakdown · 246
Splitting & merging · 24
Analysis · 18
State diagram · 15
Language quality · 14
Continue iterating · 10

Usage over time

[Chart: weekly interaction counts, Nov–Apr, with the agentic v2 release marked]

Weekly samples · peaks and troughs reflect daily variance

One requirement, multiple prompts

07:43 · improve_description
08:08 · set_new_requirement
08:18 · feedback
08:21 · improve_description
08:27 · set_new_requirement

44 minutes, 5 prompts — the assistant became part of the iteration loop.

What I learned

Designing for AI is designing context. The assistant's job wasn't to generate good text. It was to give engineers the context to make better decisions about text they were already writing. That distinction reshaped how I think about AI products.

Designing for AI = designing the ground truth. The interface was the visible part, but most of the work was deciding what the AI could pull from, in what order, with what weighting. A good answer comes from a well-curated context window, not a clever prompt. Designing the context is designing the product.
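What "designing the ground truth" can mean in code: a minimal sketch assuming a simple weight-plus-token-budget model for assembling the context window. All names and the selection rule are illustrative, not the actual system.

```typescript
// Illustrative context curation: which sources the AI may pull from,
// in what order, with what weighting, under a token budget.
interface Source {
  name: string;
  weight: number; // higher = more important to this answer
  tokens: number; // cost of including it
  content: string;
}

function buildContext(sources: Source[], budget: number): Source[] {
  const picked: Source[] = [];
  let used = 0;
  // Highest-weight sources first; stop adding once the budget is spent.
  for (const s of [...sources].sort((a, b) => b.weight - a.weight)) {
    if (used + s.tokens <= budget) {
      picked.push(s);
      used += s.tokens;
    }
  }
  return picked;
}
```

The interesting design work lives in the inputs to a function like this (the weights and the source list), not in the prompt that follows it.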
