This report synthesises publicly available, primary documentation from Linear, Notion, Figma, Stripe, and GitLab, plus primary vendor documentation for widely used agentic SDLC tooling and standards (e.g., MCP, coding agents, acceptance‑criteria frameworks). Where vendor pages do not expose a “canonical PRD template”, the report treats the company’s published templates, issue/spec templates, and workflow guidance as the closest proxy for “best‑in‑class PRD structure”.
Evidence quality varies sharply by subtopic: template sectioning is well‑documented for Notion, Figma, and GitLab; weaker for Linear (tooling supports PRDs but doesn’t publish a single official section list); and weakest for Stripe (public guidance exists for design docs and agent integrations, but not a published PRD template). These gaps are explicitly flagged where they affect conclusions.
Notion (PRD example + guidance)
Notion’s “How to write a PRD” post gives a concrete PRD template example organised as: Context, Goal and KPIs, Constraints and assumptions, Dependencies, and Tasks (often implemented as a Kanban board embedded in the PRD).
Separately, Notion’s Help Centre guide frames “typical PRD attributes” as Context, Goals/Requirements, Constraints, Assumptions, and Dependencies, and explicitly positions PRDs as lightweight pages that link out to artefacts (design docs, interviews, images, etc.).
Figma (PRD template + PRD guidance)
Figma’s PRD guidance enumerates “core components” of a PRD as a broad, cross‑functional set, including Product overview, Purpose/use cases/value propositions, Features & functionality, User personas + user stories, User flows + UX notes, Release criteria + timeline, Risks, Non‑functional requirements, Assumptions/dependencies/constraints, and an evaluation plan + success metrics.
Figma’s FigJam PRD template page is less explicit about headings, but it emphasises aligning on purpose, problem, and product functions/goals/user experience, consistent with a “collaborative canvas” framing rather than a signed‑off specification.
GitLab (feature proposal template as PRD‑adjacent requirements artefact)
GitLab’s built‑in Feature proposal – detailed issue template is a structured requirements document embedded directly in the tracker. Its headings (in order) are: Release notes, Problem to solve, Intended users, User experience goal, Proposal, Further details, Permissions and Security, Documentation, Availability & Testing, Available Tier, Feature Usage Metrics, What does success look like, and how can we measure that?, What is the type of buyer?, Is this a cross‑stage feature?, What is the competitive advantage or differentiation for this feature?, Links / references—and it includes label quick‑actions to classify the issue.
GitLab also documents the underlying mechanism: description templates (Markdown files in .gitlab/issue_templates) standardise issue layouts across projects and can be applied via UI when creating issues.
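The mechanism above is plain Markdown. A sketch of what such a template file might look like, using a subset of the documented headings (the filename and the label quick‑action value are illustrative):

```markdown
<!-- .gitlab/issue_templates/Feature Proposal.md (illustrative subset) -->
## Problem to solve

## Intended users

## Proposal

## What does success look like, and how can we measure that?

/label ~"type::feature"
```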
Linear (project documents + templates; “PRD” as a doc type, not a canonical template)
Linear’s docs explicitly support “project documents” for specs and PRDs, and allow teams to create document templates to “guide creators to share information effectively”. However, Linear’s documentation does not publish an official “PRD template section list” in the same way Notion/Figma do.
Instead, Linear publishes process guidance that heavily influences “requirements shape”: it argues against user stories and pushes for short, plain‑language issues with clear outcomes, and for user‑experience discussion at the project/feature level before work is broken into tasks.
Stripe (evidence thin for PRD templates; stronger for adjacent artefacts + agent integration)
No Stripe‑authored PRD template or section list surfaced in current public docs during this research. Evidence is stronger for (a) how Stripe thinks about design docs (especially API design docs) and (b) agent/tooling standards (notably MCP). A Stripe Sessions developer keynote explicitly references using an LLM to handle “chaos” in a human‑written API design doc in Google Docs and provide suggestions during API design.
Separately, Stripe publishes a Model Context Protocol server to let AI agents interact with Stripe’s API and knowledge base (docs + support articles).
Extrapolation (flagged): Given Stripe’s public emphasis on disciplined API design and tooling‑integrated documentation, it is plausible their internal “requirements” artefact often manifests as design‑doc‑centric rather than PRD‑centric; but this cannot be confirmed from first‑party PRD templates.
Modern, widely‑shared templates converge on several shifts:
From exhaustive up‑front specification to “living” alignment artefacts. Notion describes PRDs as typically “one page” that links out to deeper artefacts and can adapt per project, rather than a monolithic document.
Figma explicitly frames PRDs as focusing on what is being built and why, while remaining adaptable in agile environments (including PRDs that resemble boards combining epics/stories/tasks with context).
Reforge contrasts 1990s PRDs (20–30 pages, definitive records) with modern agile contexts where static documents fit poorly; it argues for a “dynamic and evolving” PRD that is “enough to get started” and updated as learning occurs.
From “requirements as control” to “requirements as shared understanding”. Reforge identifies two modern failure modes—over‑documenting to compensate for trust/ownership issues, or under‑documenting and forcing downstream ambiguity—and positions effective PRDs as enabling alignment and creativity rather than dictating a prescriptive plan.
Linear’s “Write issues not user stories” is aligned: it argues user stories can obscure the work, be expensive to maintain, and silo engineers into a mechanical role; it advocates discussing UX at the product level and writing clear tasks rather than ritualised story formats.
Heavier inclusion of measurement, rollout, risk, and non‑functional constraints. Figma’s enumerated core components include risks, non‑functional requirements, and an evaluation plan with success metrics.
GitLab’s feature proposal template explicitly includes security/permissions, documentation, availability/testing, usage metrics, and a success‑measurement section—closer to a “full lifecycle” spec than a narrow requirements list.
A practical, evidence‑aligned way to define “minimum viable” is: the smallest artefact that still supports (1) stakeholder alignment, (2) unambiguous scoping, and (3) a testable release gate.
Minimum viable PRD (MV‑PRD):
A consistently supported minimal set across Notion and Figma guidance is: problem/purpose, goals and success measures, features/scope, and release criteria.
Notion’s own framing that “streamlined PRDs” cover purpose, goals, features, and release criteria matches this MV‑PRD definition.
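As a sketch, an MV‑PRD outline consistent with that framing might look like the following; the headings are ours, not a vendor template:

```markdown
# <Feature name>: MV-PRD

## Problem / purpose
Who this is for and why it matters.

## Goals & success measures
The KPIs or outcomes that define success.

## Features / scope
What is in scope and, explicitly, what is out.

## Release criteria
The testable gate that must pass before shipping.
```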
Comprehensive PRD (or PRD‑adjacent “full stack” spec):
GitLab’s feature proposal template is a canonical example of “comprehensive”: beyond problem/users/proposal, it embeds security, documentation requirements, availability/testing strategy, tiering/commercial framing, and explicit measurement.
Figma’s “core components” list similarly expands into non‑functional requirements, risk, and an evaluation plan; it also acknowledges that some teams fold functional/technical specs into (or alongside) the PRD.
Across the major platforms reviewed, teams are converging on “AI as a requirements accelerator” in four recurring patterns: capture → structure → enrich → hand off.
Capture: meetings, threads, and scattered context become structured inputs.
Notion’s AI Meeting Notes explicitly targets automated capture of meeting content into summaries and action items and keeps the resulting artefacts searchable in the workspace.
Linear generates AI discussion summaries when issues reach a comment threshold, and includes citations linking back to source comments to preserve traceability.
These features are functionally “PRD feeders”: they convert conversational artefacts into structured decisions, blockers, and next steps.
Structure: AI drafts or rewrites requirements into a consistent template.
GitLab Duo can generate a “detailed description for an issue based on a short summary” directly in the issue creation flow.
GitLab’s internal engineering handbook publishes a concrete practice: use a standard prompt with Duo Agent to transform vague follow‑up issues into well‑structured work items with background, current state, requested changes, proposed implementation, acceptance criteria, and technical context.
This is a direct example of AI being used to take “vague idea / thin ticket → scoped, actionable requirements”.
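A sketch of the structure that practice enforces; the section names follow the handbook’s list, while the placeholder text is ours:

```markdown
## Background
Where this follow-up came from (link the originating MR discussion).

## Current state
What the code does today.

## Requested changes
What should be different, stated in plain language.

## Proposed implementation
A suggested approach (not binding on the implementer).

## Acceptance criteria
- [ ] Observable, testable conditions that define "done"

## Technical context
Relevant files, constraints, and prior art.
```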
Enrich: AI infers metadata, dependencies, duplicates, ownership, and relationships.
Linear’s Triage Intelligence uses LLMs to suggest issue properties (teams, projects, assignees, labels) and detect duplicates/relationships by comparing new triage items against historical workspace data; it supports accept/decline and exposes reasoning.
This is effectively “requirements routing + de‑duplication” at scale: it reduces the frequency of PRDs failing because ownership, scope, or prior art is missing.
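Linear does not publish its implementation, but the comparison step can be sketched in miniature. The sketch below uses bag‑of‑words cosine similarity as a stand‑in for whatever learned representations a production triage system would use; the function names and threshold are ours:

```python
from collections import Counter
from math import sqrt

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts -- a toy stand-in for the
    learned representations a production triage system would use."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[w] * wb[w] for w in wa)
    norm = sqrt(sum(v * v for v in wa.values())) * sqrt(sum(v * v for v in wb.values()))
    return dot / norm if norm else 0.0

def suggest_duplicates(new_issue: str, history: list[str], threshold: float = 0.6) -> list[str]:
    """Flag historical issues similar enough to be likely duplicates."""
    return [old for old in history if similarity(new_issue, old) >= threshold]
```

In practice the representation, threshold, and accept/decline UI are where the product value lies; the loop structure is the same.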
Hand‑off: PRD → tickets → code changes, increasingly via agentic flows.
Notion’s “Linear PRD Implementer” agent template claims an end‑to‑end hand‑off: @mention the agent in a Notion PRD/strategy doc/meeting notes, and it generates a structured Linear project with milestones and actionable issues (including descriptions, labels, deadlines, and phased rollout like Alpha/Beta/GA).
GitLab Duo Agent Platform’s “Issue to MR” flow is an explicit PRD‑to‑implementation bridge: it analyses an issue’s requirements, opens a draft merge request linked to the issue, creates a development plan, and proposes an implementation in the GitLab UI.
A key “current state” shift is that requirements artefacts increasingly live inside systems that are directly operable by agents (issue trackers, docs systems, and design tools), rather than in disconnected documents.
Agent‑operable requirements systems (examples)
Linear positions agents as “app users” who can be mentioned, delegated issues, comment, and collaborate on projects/documents; importantly, Linear states that delegating to an agent does not transfer responsibility—the human remains responsible for completion.
Notion’s Agent runs inside the workspace, can create/edit pages and databases using workspace + connected apps context, and can be personalised with instructions/skills/resources; it also clarifies the agent’s authority boundary (same permissions as the user) and that changes can be undone.
GitLab bakes AI into the issue and merge request lifecycle (issue description generation; merge request summaries; AI‑assisted code review flows).
MCP as the emerging “glue” layer for PRD/SDLC agents
A notable 2025–2026 pattern is rapid adoption of the Model Context Protocol (MCP) by major platforms: Figma, Linear, and Stripe all publish first‑party MCP servers that let agents securely access product context and tooling.
Taken together, this suggests a near‑term architecture for AI‑augmented PRDs: PRD content stays in Notion/Linear/GitLab/Figma, and agents interact through standardised connectors rather than bespoke integrations.
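Illustratively, a client‑side MCP configuration in the common `mcpServers` JSON shape might look like the following; the server URLs are placeholders, not verified endpoints:

```json
{
  "mcpServers": {
    "linear": { "url": "https://example.invalid/linear-mcp" },
    "figma":  { "url": "https://example.invalid/figma-mcp" },
    "stripe": { "url": "https://example.invalid/stripe-mcp" }
  }
}
```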
Primary sources consistently codify a “human‑in‑the‑loop” control plane: delegation to an agent does not transfer responsibility (Linear), agent authority is bounded by the invoking user’s permissions (Notion), and human confirmation of tool calls is recommended (Stripe).
This implies a practical division of labour: AI accelerates drafting, structuring, and enrichment; humans remain accountable for product intent, scope trade‑offs, and acceptance decisions.
Acceptance criteria as “conditions that must be satisfied to be complete” are widely framed as clear, concise, and testable statements focused on outcomes rather than the implementation path. Atlassian explicitly distinguishes acceptance criteria from “how to reach a solution” and emphasises outcome focus.
Definition of Done (DoD) is structured as a quality gate for increments: the Scrum Guide defines it as the formal description of the state of the increment when it meets required quality measures.
Given/When/Then (BDD / Gherkin)
Cucumber guidance recommends making scenarios more “declarative” to describe behaviour rather than implementation, improving maintainability and making scenarios read as living documentation.
CucumberStudio guidance notes the purpose of Given/When/Then is logical structure and readability for people (automation tools may not care, but humans do).
Strength: high precision for key user interactions, straightforward to convert into automated tests.
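As a sketch of that translation, a declarative Given/When/Then criterion maps naturally onto an automated test; the scenario, the toy system under test, and all names below are illustrative:

```python
# Scenario (declarative): Given a registered user with a saved card,
# when they check out, then the order is confirmed.

def checkout(user: dict) -> str:
    """Toy system under test: confirm the order if a card is on file."""
    return "confirmed" if user.get("card_on_file") else "payment_required"

def test_registered_user_with_saved_card_checks_out():
    # Given
    user = {"registered": True, "card_on_file": True}
    # When
    result = checkout(user)
    # Then
    assert result == "confirmed"
```

Tooling such as Cucumber automates this mapping from Gherkin text to step definitions; the point here is only that declarative criteria stay testable.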
Checklist‑style criteria
Checklists are commonly used for verification items and are often paired with DoD‑style gates (documentation, rollout checks, security review). GitLab’s issue enhancement template explicitly uses checklist formatting for acceptance criteria in its “Example Structure”.
Strength: efficient for non‑functional and process requirements that don’t fit a scenario model (e.g., docs updated, metrics added).
Outcome‑based criteria (metrics + evaluation plans)
Figma includes “evaluation plan and related success metrics” as a core PRD component, indicating that acceptance often includes measurable post‑release outcomes, not only functional checks.
Notion similarly pairs goals with KPIs in its PRD example.
Strength: protects against shipping “busy work” that meets functional checks but fails user/business value.
Current agent tooling strongly favours acceptance criteria that can be translated into tests, while retaining a lightweight checklist for cross‑cutting quality.
Two concrete signals: GitLab’s Issue→MR flow calls for clear requirements and acceptance criteria to get better output, and Cursor’s guidance recommends using tests as an objective loop that drives agent iteration.
Implication: for agent‑implemented tasks, the most robust pattern is a hybrid: Given/When/Then criteria for behaviour that can be automated as tests, plus a short checklist for cross‑cutting quality gates (documentation, metrics, security review).
Where teams expect agents to act autonomously across tools, it is also becoming important to specify tool permissions and escalation triggers as part of “done”, consistent with OpenAI’s “tool safeguards” framing.
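A sketch of that hybrid shape inside a single work item; the content is illustrative:

```markdown
## Acceptance criteria

### Behaviour (translatable to automated tests)
- Given a signed-in user with items in their cart,
  when they check out,
  then an order confirmation is shown and a receipt email is queued.

### Quality gate (checklist)
- [ ] User-facing docs updated
- [ ] Success metric instrumented
- [ ] Security review complete
```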
Too vague / context‑poor
GitLab’s handbook documents a concrete, high‑frequency failure: follow‑up issues created from merge request discussions often have generic titles, minimal context, no implementation guidance, and vague acceptance criteria; this creates technical debt and reduces pick‑up ability by anyone except the original author.
Too prescriptive / solution‑locked
Linear argues user stories can drive “code to the requirements” behaviour and push engineers into a mechanical role, implying a failure mode where requirements over‑specify the solution rather than framing the problem and desired outcome.
Too heavy / used to compensate for trust and ownership problems
Reforge highlights a failure mode where PRDs become dissertation‑like documents to solve misalignment and trust issues; conversely, under‑writing creates downstream unresolved questions.
Missing stakeholder alignment and review participation
RFC‑style processes fail when review is absent: Increment notes that writing a long RFC is not useful if no one reviews or discusses it, and that RFCs impose a time cost that some resist.
Mitigation: automated summarisation + traceability reduces context loss.
Linear issue discussion summaries include citations to source comments, providing an audit trail for decisions and reducing “lost context” failure.
Notion AI Meeting Notes turns meetings into structured summaries/actions and keeps them searchable in the workspace, which reduces “decisions trapped in calls” failure.
Mitigation: templated “issue enhancement” and generation raises baseline quality.
GitLab’s Duo Agent prompt formalises a structure that forces background, current state, requested changes, proposed implementation, acceptance criteria, and technical context—explicitly targeting the “vague follow‑up issue” failure mode.
GitLab Duo’s issue description generation similarly raises baseline detail from a short summary.
Mitigation: inference of ownership/duplication reduces routing errors and scope drift from duplicate work.
Linear’s Triage Intelligence suggests ownership and relationships (duplicates/related issues) and can auto‑apply properties—addressing common PRD failure modes where the work lands with the wrong team or duplicates existing efforts.
Risk: AI “writes the PRD”, but the team doesn’t internalise it.
Maarten Dalmijn argues the bottleneck in requirements isn’t writing speed; it is shared understanding, and AI‑generated requirements can worsen understanding by short‑circuiting collaboration.
Risk: plausible detail creates false confidence.
Evidence here is indirect but consistent with vendor guardrail emphasis: OpenAI’s agent guidance treats guardrails and tool safeguards as critical because model outputs can be off‑topic, unsafe, or unreliable, especially when connected to tools and data.
Stripe’s MCP docs explicitly warn about prompt injection and recommend human confirmation of tools, reinforcing that “agentic automation” increases the blast radius of incorrect or manipulated instructions.
Risk: PRDs become over‑long because AI makes verbosity cheap.
This is a reasoned extrapolation grounded in Reforge’s “dissertation PRD” failure mode: if teams already over‑document to solve people problems, LLMs can accelerate producing large documents that still fail to create alignment.
Extrapolation (flagged): the sources do not directly quantify “AI‑caused PRD bloat”, but the mechanism is consistent with observed failure modes + AI writing affordances.
Requirements → implementation via GitLab Duo Agent Platform (Issue → MR)
GitLab documents an “Issue to MR” agent flow: given a well‑scoped issue with clear requirements and acceptance criteria, the flow analyses the issue, opens a draft merge request linked to the issue, creates a development plan, and proposes an implementation; the developer can monitor agent sessions, review file‑level changes, optionally validate locally, and merge via the standard workflow.
This is one of the clearest first‑party examples of agents participating across requirements → plan → code → CI pipeline → review.
Ticket → plan → test → PR via Devin
Cognition’s Devin product site explicitly presents a staged workflow: Ticket, Plan, Test, PR, including “Devin tests changes by itself” and a “review changes natively” step.
Devin’s docs describe it as an autonomous AI software engineer that can write, run, and test code, and can tackle Linear/Jira tickets and implement new features (with an explicit limitation framing that it cannot handle extremely difficult tasks).
Cognition also published (Feb 2026) guidance on using Devin internally (“How Cognition uses Devin to build Devin”), reinforcing the “treat it like a teammate, give context, teach conventions” operational model.
Spec → plan → edits across files via Copilot Workspace
GitHub Next’s Copilot Workspace page describes a workflow where—after you edit/approve a “spec”—the tool generates a concrete plan enumerating files to create/modify/delete and bullet‑point actions per file, with the plan remaining user‑editable.
While it is presented as an experimental “project”, it is a direct expression of “requirements to plan” inside the developer workflow.
Idea → app + design iteration via Replit Agent
Replit positions its Agent as building an app/website from a chat prompt end‑to‑end (“tell it your idea, it will build it”).
Replit’s Agent 4 announcement (March 2026) emphasises collapsing design iteration and code into one environment: explore ideas and refine details in a “design board”, then bring outputs into the app and integrate into production code.
Design tooling explicitly connecting to coding agents via MCP
Figma’s MCP catalog positions its MCP server as adding Figma context to agentic coding tools (Cursor, Codex, Claude Code, etc.) and explicitly aims at “design‑informed code generation.”
Linear’s MCP server similarly enables compatible agents to access Linear data securely and take actions (find/create/update issues/projects/comments).
Stripe’s MCP server enables agents to call Stripe tools and search documentation/support articles with OAuth‑scoped access.
Collectively, these confirm an emerging “AI‑native SDLC substrate”: agents operate along the entire chain only when they have standardised, authenticated access to the same artefacts humans use (design files, issues, repos, APIs).
Well‑scoped, acceptance‑criteria‑rich work items as the “API” for agents.
GitLab’s Issue→MR flow explicitly requires clear requirements and acceptance criteria for better output quality.
This aligns with Cursor’s guidance that tests/criteria can drive agent execution and iteration, using tests as an objective loop.
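The “tests as an objective loop” idea can be sketched generically; nothing below is tied to Cursor or any specific agent, and both callables are supplied by the harness:

```python
def run_until_green(run_tests, propose_edit, max_rounds: int = 5) -> bool:
    """Generic test-as-objective loop: ask the agent for an edit,
    re-run the suite, and stop when it passes or the budget runs out.

    run_tests:    callable returning True when the suite passes
    propose_edit: callable that requests one patch from the coding agent
    """
    for _ in range(max_rounds):
        if run_tests():
            return True
        propose_edit()  # e.g. send failing output back to the agent
    return run_tests()
```

The acceptance criteria become the loop's termination condition, which is why testable criteria matter so much for agent-implemented work.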
A plan‑first step that enumerates file changes and actions.
Copilot Workspace centres a “spec → concrete plan” transition listing touched files and per‑file actions.
GitLab’s Issue→MR flow similarly creates a development plan before proposing implementation.
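A minimal sketch of such a plan as structured data; the field names are ours, not Copilot Workspace’s or GitLab’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class FileAction:
    path: str           # file to create/modify/delete
    operation: str      # "create" | "modify" | "delete"
    steps: list[str] = field(default_factory=list)  # bullet-point actions for this file

@dataclass
class Plan:
    spec: str                                       # the approved spec this plan implements
    actions: list[FileAction] = field(default_factory=list)

    def touched_files(self) -> list[str]:
        """Enumerate every file the plan will touch, for human review."""
        return [a.path for a in self.actions]
```

Keeping the plan user‑editable before execution is the common thread across both documented flows.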
Instruction layers / “house rules” that encode team conventions.
Linear provides “agent guidance” (Markdown with history) to specify repository conventions, commit/PR references, and review process, explicitly to align agents with existing workflows.
Notion lets users personalise agents with instructions, skills, and preferred resources (pages, Slack channels, files).
OpenAI’s agent design framing treats “instructions” as one of three core components (model/tools/instructions) and emphasises layered guardrails and evals.
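An illustrative guidance file in that spirit; the conventions shown are examples, not Linear’s defaults:

```markdown
# Agent guidance (house rules)

- Branch names: `agent/<issue-id>-<short-slug>`
- Every commit message references the issue ID (e.g. `ENG-123: ...`)
- Open merge/pull requests as drafts; request review from the issue assignee
- Never merge without a passing pipeline and at least one human approval
```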
Tasks beyond an agent’s “reliable complexity envelope”.
Devin’s docs draw a boundary (“excluding extremely difficult tasks”) and offer a pragmatic heuristic about what it can likely handle.
This implies that AI‑native SDLCs still require deliberate task sizing and decomposition to achieve predictable outcomes.
Security, tool misuse, and prompt injection remain limiting factors.
OpenAI’s guidance treats guardrails and tool safeguards as core design requirements, not optional add‑ons.
Stripe’s MCP docs explicitly call out prompt injection and recommend human confirmation of tool calls, reinforcing that “agentic SDLC” expands the attack surface when agents can take actions.
GitLab’s Issue→MR flow also uses explicit enablement toggles for agentic features, signalling that organisations are gating autonomous actions at the project level.
Evidence gap (flagged): there is limited first‑party documentation showing agents reliably doing requirements → design exploration → implementation → testing with high autonomy without substantial human review and operational guardrails. The best documented flows (GitLab, Devin, Copilot Workspace) all retain strong human gating at “spec/plan approval” and “PR review/merge”.
A consistent line across sources is:
PRD = the “what/why”: user problem, intended users, scope, success, and release criteria.
Notion explicitly says the PRD’s goal is to tell the story of how users will use the product “without getting too far into the technical details”.
Figma similarly says PRDs typically focus on what you’re building rather than how you’ll build it, while still including user flows, risks, and non‑functional constraints as decision support.
Design doc / functional spec / tech spec = the “how”: architecture, implementation details, and engineering trade‑offs.
Figma explicitly differentiates PRDs from functional/tech specs, describing tech specs as explaining exactly how engineers will implement PRD requirements (and noting some teams include functional specifications directly within the PRD).
In practice, GitLab’s feature proposal template straddles the boundary: it includes “Proposal” plus sections like security, availability/testing, documentation, and metrics—indicating that many organisations blend PRD and design‑doc concerns when the work item is the central unit of execution (issue‑centric development).
The most robust AI‑native pattern emerging from the documented flows is to treat the PRD as an input specification for downstream agents and to make the handoff explicit:
Evidence‑aligned synthesis: In AI‑native SDLCs, the “what vs how” boundary still exists, but the enforcement mechanism is changing: it is less about document types and more about which artefact is allowed to drive automated action. PRDs/issues/specs drive planning and task decomposition; design docs + design artefacts drive implementation choices; acceptance criteria + tests constrain agent execution; and PR review/merge remains the principal human gate.