Context-Driven Development

Measure twice,
code once.

Draft is a methodology that brings structure to AI-assisted development through documented context, specifications, and phased implementation.

Getting Started

Install Draft as a Claude Code plugin and initialize your project in under a minute. Also available for Cursor, GitHub Copilot, and Gemini.

installation
# Install the plugin
/plugin marketplace add mayurpise/draft
/plugin install draft

# Initialize your project (once)
/draft:init

# Create a feature track
/draft:new-track "Add user authentication"

# Start implementing
/draft:implement

# Verify test coverage (95%+ target)
/draft:coverage

# Validate quality (architecture, security, performance)
/draft:validate

# Check progress
/draft:status

Prerequisites: Claude Code CLI, Git, and Node.js 18+.

Other Editors

Cursor — One command, no clone required:

cursor setup
curl -o .cursorrules https://raw.githubusercontent.com/mayurpise/draft/main/integrations/cursor/.cursorrules

GitHub Copilot — One command, no clone required:

copilot setup
mkdir -p .github && curl -o .github/copilot-instructions.md https://raw.githubusercontent.com/mayurpise/draft/main/integrations/copilot/.github/copilot-instructions.md

Gemini — One command, no clone required:

gemini setup
curl -o GEMINI.md https://raw.githubusercontent.com/mayurpise/draft/main/integrations/gemini/GEMINI.md

Command Reference

Draft provides 12 slash commands for the full development lifecycle.

📋 /draft - Overview and intent mapping
🚀 /draft:init - Initialize project context (product, tech-stack, workflow, architecture)
🔄 /draft:init refresh - Update context files (re-scan tech stack, architecture, workflow)
📝 /draft:new-track - Create spec + plan for a feature or fix
⚡ /draft:implement - Execute tasks with TDD workflow
📊 /draft:status - Display progress overview
⏪ /draft:revert - Git-aware rollback at any level
🏗️ /draft:decompose - Module decomposition + dependency mapping
📈 /draft:coverage - Code coverage report (target 95%+)
✅ /draft:validate - Systematic quality validation (architecture, security, performance)
👁️ /draft:jira-preview - Generate Jira export for review
🎫 /draft:jira-create - Push issues to Jira via MCP

The Core Problem

AI coding assistants are powerful but undirected. Without structure, they make assumptions, choose arbitrary approaches, and produce code that doesn't fit your codebase.

🎯 Assumption-Driven - Guesses at requirements instead of asking clarifying questions
🔀 Arbitrary Choices - Picks arbitrary technical approaches without considering your stack
🧩 Poor Fit - Produces code that doesn't match existing patterns or conventions
⏭️ No Checkpoints - Skips verification and claims completion without proof

Problems with Chat-Driven Development

Traditional AI chat interfaces have fundamental limitations that get worse over time.

🪟 Context Window Fills Up - Long chats exhaust token limits; the AI loses early context and forgets decisions
🌀 Hallucination Increases - More tokens in context means more confusion and worse decisions
💨 No Persistent Memory - Close the chat, lose the context; every session starts from zero
🔍 Unsearchable History - "Where did I work on feature X?" — good luck finding it in chat logs
👻 No Team Visibility - Your chat history is invisible to colleagues; no review, no audit trail
🔄 Repeated Context Loading - Every new session requires re-explaining the project from scratch

How Draft Solves This

Chat-Driven → Draft Approach
Context in ephemeral chat → File-based persistent memory
No version history → Git-tracked specs with diffs and blame
Loads entire project every session → Scoped context per track
Unsearchable conversations → Grep-able specs and plans
Invisible to teammates → PR-reviewable planning artifacts

The Draft Workflow

Draft solves this through Context-Driven Development: structured documents that constrain and guide AI behavior. By treating context as a managed artifact alongside code, your repository becomes the single source of truth.

📋 Context (define the landscape) → 📝 Spec & Plan (document the approach) → 🏗️ Decompose (module architecture) → ⚡ Implement (execute with confidence)

Step 1: Context — Why it exists

Without context, AI reinvents your project from scratch every session. It guesses your tech stack, ignores your conventions, and builds features that don't fit. /draft:init fixes this by creating persistent files that tell the AI who your users are (product.md), what technologies to use (tech-stack.md), and how your team works (workflow.md). For existing codebases, it also generates architecture.md — a deep system map with mermaid diagrams covering directory structure, entry points, data flows, design patterns, existing module dependencies, and external integrations. This means every future track gets immediate, rich context without re-analyzing the codebase. These files live in git, survive across sessions, and load automatically — so the AI always starts with your ground truth instead of assumptions.
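
As an illustration, a generated tech-stack.md might contain a fragment like this (headings and entries are hypothetical, reusing the example stack from the constraint hierarchy below; Draft's actual output may differ):

example tech-stack.md
## Languages & Frameworks
- TypeScript 5.x, React 18, Tailwind CSS
- Node.js 18+ with Express for the API layer

## Conventions
- All data access goes through the Prisma service layer
- ESLint + Prettier enforced in CI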

Step 2: Spec & Plan — Why it exists

When you ask AI to "add authentication," it immediately writes code. If the approach is wrong, you discover it during code review — after hours of work. /draft:new-track forces the AI to write a specification and phased plan before touching code. You review the approach in a quick spec PR, not a massive code PR. Disagreements get resolved in a 5-minute document edit instead of a multi-hour rewrite. The plan breaks work into phases with verification steps, so the AI tackles one thing at a time instead of attempting everything at once.
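
A track's spec.md might open like this sketch (the section names are illustrative, not Draft's exact template):

example spec.md
# Track: Add user authentication

## Requirements
- Users can register and sign in with email and password

## Acceptance Criteria
- [ ] Passwords are hashed before storage
- [ ] Invalid credentials return a generic error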

Step 3: Decompose — Why it exists (optional)

For multi-module features, jumping straight to implementation creates tangled code with unclear boundaries. /draft:decompose maps your feature into discrete modules with defined responsibilities, API surfaces, and a dependency graph. This gives you an implementation order (build the database layer before the API layer) and prevents circular dependencies. Each module is small enough to reason about, test independently, and review in isolation.

Step 4: Implement — Why it exists

Even with a plan, AI can drift: skip tests, claim completion without proof, or make sweeping changes across unrelated files. /draft:implement executes one task at a time from the plan, follows the TDD cycle (write test first, then code, then refactor), runs a verification gate before marking anything complete, and triggers a two-stage review at phase boundaries. The AI can only work on what's specified, in the order specified, with proof required at every step.

Step 5: Verify Quality — Why it exists

Passing tests doesn't guarantee good code. AI can violate architectural patterns, introduce security vulnerabilities, or create performance bottlenecks — all while every test passes. /draft:coverage measures test completeness (95%+ target) and classifies uncovered lines. /draft:validate runs context-aware quality checks: architecture conformance (patterns from architecture.md), security scans (hardcoded secrets, SQL injection, XSS), performance anti-patterns (N+1 queries, blocking I/O), and regression risk (blast radius of changes). Both generate reports with file:line references and actionable remediation. Auto-runs at track completion when enabled in workflow.md — non-blocking by default to maintain velocity while surfacing issues.

The Constraint Hierarchy

Each document layer narrows the solution space. By the time AI writes code, most decisions are already made.

1. product.md - "Build a task manager for developers"
2. tech-stack.md - "Use React, TypeScript, Tailwind"
3. architecture.md - "Express API → Service layer → Prisma ORM → PostgreSQL"
4. spec.md - "Add drag-and-drop reordering"
5. plan.md - "Phase 1: sortable list, Phase 2: persistence"

💡 Key Insight

The AI becomes an executor of pre-approved work, not an autonomous decision-maker. Explicit specs, phased plans, verification steps, and status markers keep implementation focused and accountable.

Review Before Code

This is Draft's most important feature. In traditional AI coding, you discover the AI's design decisions during code review — after it's already built the wrong thing. With Draft, the AI writes a spec first. You review the approach in a document, not a diff. Disagreements are resolved by editing a paragraph, not rewriting a module.

Traditional AI Coding → Draft Approach
AI writes code immediately → AI writes spec first; you approve the approach
Review happens on the code PR (too late) → Review happens on the spec PR (cheap to change)
Disagreements require rewriting code → Disagreements resolved by editing a document
AI decisions are implicit and buried in code → AI decisions are documented and git-tracked

Practical impact: Faster reviews (approve approach, not implementation details). Fewer rewrites (catch design issues before code exists). Knowledge transfer (specs document why, not just what). Onboarding (new team members read specs to understand features).

Team Workflow: Alignment Before Code

Draft's most powerful application is team-wide: every markdown file goes through commit → review → update → merge before a single line of code is written. By the time implementation starts, the entire team has already agreed on what to build, how to build it, and in what order.

The PR cycle on documents, not code

1. product.md + tech-stack.md + architecture.md + workflow.md - Tech lead runs /draft:init. For brownfield projects, Draft performs deep architecture discovery — generating architecture.md with mermaid diagrams of system structure, data flows, and patterns. The team reviews project vision, technical choices, system architecture, and workflow preferences via PR. Product managers review product.md without reading code. Engineers review architecture.md and tech-stack.md without context-switching into implementation.

2. spec.md + plan.md - Lead runs /draft:new-track. The team reviews requirements, acceptance criteria, phased task breakdown, and dependencies via PR. Disagreements surface as markdown comments — resolved by editing a paragraph, not rewriting a module.

3. architecture.md - Lead runs /draft:decompose. The team reviews module boundaries, API surfaces, dependency graph, and implementation order via PR. Senior engineers validate the architecture without touching the codebase.

4. jira-export.md → Jira stories - Lead runs /draft:jira-preview and /draft:jira-create. Epics, stories, and sub-tasks are created from the approved plan. Individual team members pick up Jira stories and implement — with or without Draft's /draft:implement.

5. Implementation + Verification - Only after all documents are merged does coding start. Every developer has full context: what to build (spec.md), in what order (plan.md), with what boundaries (architecture.md). After implementation, /draft:coverage verifies tests (95%+ target) and /draft:validate checks quality (architecture conformance, security, performance). Both generate reports in the track directory for PR review.

Why this changes how teams work

Traditional AI Development → Draft Team Workflow
Developer gets a Jira ticket and asks AI to build it → Developer gets a Jira ticket with linked spec, plan, and architecture already reviewed
Each developer makes independent design decisions → Design decisions are made once in documents, reviewed by the team
Integration problems surface during code review → Integration problems surface during architecture review — before any code exists
New team members read code to understand features → New team members read spec.md and plan.md to understand features

💡 Key Insight

The CLI is single-user, but the artifacts it produces are the collaboration layer. Draft handles planning and decomposition. Git handles review. Jira handles distribution. Changing a sentence in spec.md takes seconds. Changing an architectural decision after 2,000 lines of code takes days.

Project Structure

Draft uses a set of markdown files to capture project context, specifications, and implementation plans. Each file has a specific purpose.

🎯 product.md - Product vision, target users, and success criteria
🎨 product-guidelines.md - Style, branding, and UX standards
⚙️ tech-stack.md - Languages, frameworks, and patterns
🗺️ architecture.md - System map, data flows, patterns, mermaid diagrams (brownfield)
🔄 workflow.md - TDD preferences and commit strategy
🎫 jira.md - Jira project config for sync (optional)
📋 tracks.md - Master list of all work tracks
📝 spec.md - Requirements for a specific track
🏗️ architecture.md (per track) - Module decomposition and dependencies (optional)

project structure
draft/
├── product.md          # Product vision and goals
├── tech-stack.md       # Technical choices
├── architecture.md     # System map + mermaid diagrams (brownfield)
├── workflow.md         # TDD, commit, validation, architecture mode
├── validation-report.md # Project-level quality checks (generated)
├── jira.md             # Jira project config (optional)
├── tracks.md           # Master track list
└── tracks/
    └── <track-id>/
        ├── spec.md      # Requirements
        ├── plan.md      # Phased task breakdown
        ├── architecture.md # Track modules (optional)
        ├── metadata.json
        ├── validation-report.md # Quality checks (generated)
        └── jira-export.md # Jira stories (optional)
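
For illustration, workflow.md might record preferences like these (field names are hypothetical; the file Draft generates may differ):

example workflow.md
## TDD
enabled: true            # Red → Green → Refactor → Commit per task

## Commits
convention: "feat(<track-id>): <description>"

## Validation
auto_run: true           # run /draft:validate at track completion
blocking: false          # surface issues without halting progress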

Status Markers

Simple markers track progress throughout specs and plans. Progress is explicit, not assumed.

[ ] Pending
[~] In Progress
[x] Completed
[!] Blocked

⚠️ Iron Law

Evidence before claims, always. Never mark [x] without running verification, confirming output shows success, and showing evidence in the response.
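
A hypothetical plan.md excerpt showing the markers in use (task names are illustrative):

example plan.md
## Phase 1: Sortable list
- [x] Task 1.1: Render tasks as a draggable list
- [~] Task 1.2: Persist order on drop
- [ ] Task 1.3: Keyboard reordering
- [!] Task 1.4: Touch support (blocked: upstream library bug)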

Jira Integration

Sync tracks to Jira with a two-step workflow. Preview before pushing to catch issues early.

1. Preview - Generate jira-export.md with epic and stories
2. Review - Adjust story points, descriptions, acceptance criteria
3. Create - Push to Jira via MCP server

Auto Story Points

Story points are calculated from task count:

1-2 tasks → 1 pt
3-4 tasks → 2 pts
5-6 tasks → 3 pts
7+ tasks → 5 pts
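
Applied to a hypothetical track, the mapping plays out like this (the jira-export.md layout is illustrative, not Draft's exact format):

example jira-export.md
## Epic: Add user authentication
- Story: Login form and validation (4 tasks → 2 pts)
- Story: Session management (6 tasks → 3 pts)
- Story: Password reset flow (7 tasks → 5 pts)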

TDD Workflow

AI-generated code without tests is a liability — it works until it doesn't, and you have no safety net. When TDD is enabled in workflow.md, Draft forces the AI to prove its code works at every step.

🔴 Red → 🟢 Green → 🔵 Refactor → 📦 Commit

Red — Write failing test: Define what "correct" means before any code exists. The test must fail with an assertion error (not a syntax error), proving it actually tests the requirement. This prevents the AI from writing tests that pass vacuously.

Green — Minimum code to pass: Write the simplest implementation that makes the test pass. No extras, no "improvements," no abstractions for hypothetical futures. This prevents the AI from over-engineering — every line of code is justified by a failing test.

Refactor — Clean with tests green: Improve code structure while tests stay green. Remove duplication, clarify naming, extract functions. The test suite acts as a safety net — if refactoring breaks anything, you know immediately.

Commit — Following conventions: One task = one commit. The commit message follows project conventions (feat(track-id): description). Small, focused commits make reverts surgical and git blame useful.
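
A minimal sketch of one cycle, assuming a TypeScript project with Jest; the module, test, and track id are invented for illustration:

tdd example (TypeScript)
// auth.test.ts (Red): write the failing test first.
import { validateLogin } from "./auth";

test("rejects empty passwords", () => {
  expect(validateLogin("user@example.com", "")).toBe(false);
});

// auth.ts (Green): the minimum implementation that makes it pass.
export function validateLogin(email: string, password: string): boolean {
  return email.includes("@") && password.length > 0;
}

// Refactor with tests green, then commit the task:
//   git commit -m "feat(user-auth): validate login credentials"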

Why this matters for engineers: Every completed task has a test proving it works. You can refactor with confidence, onboard new team members who read tests as documentation, and revert individual tasks without collateral damage. The AI can't claim "it should work" — it has to show evidence.

Architecture Discovery

For brownfield projects, /draft:init doesn't just detect your tech stack — it performs a deep three-phase codebase analysis that generates architecture.md. This document becomes the persistent context every future track references, so the AI never has to re-analyze your codebase from scratch.

Why this exists

Without architecture discovery, every new track starts cold. The AI explores your codebase, builds a mental model, and starts working — then loses that understanding when the session ends. The next track starts the same exploration over again. Architecture discovery pays the analysis cost once during init, then every track, every question, and every implementation gets immediate, rich context about how your system actually works.

Phase 1: Orientation (The System Map)

Scans the codebase to produce a high-level system map with mermaid diagrams.

System Architecture (graph TD) - Layered architecture diagram showing actual components — presentation, business logic, and data layers with real names from the codebase.
Directory Hierarchy (graph TD) - Maps every top-level directory to its single responsibility and key files. Generates a tree diagram of the project structure.
Entry Points (table) - Identifies all entry points: API routes, main loops, event listeners, CLI commands, serverless handlers. Critical paths through the system.
Request/Response Flow (sequenceDiagram) - Traces one representative request through the full stack with actual file and class names — not generic placeholders.
Tech Stack Inventory (table) - Cross-references detected dependencies with config files. Records language versions, framework versions, and where each is configured.

Phase 2: Logic (The "How" & "Why")

Examines specific files and functions to understand business logic, data flows, and patterns.

Data Lifecycle (flowchart LR) - Maps how 3-5 primary domain objects enter the system, where they're modified, and where they're persisted. Mermaid flowchart of the data pipeline.
Design Patterns (table) - Identifies dominant patterns: Repository, Factory, Middleware, Observer, etc. Documents where each is used and why.
Complexity Hotspots (table) - Flags god objects, circular dependencies, and high-complexity areas. Unclear business logic is marked "Unknown/Legacy Context Required" — never guessed.
Conventions (table) - Extracts existing conventions: error handling, logging, naming, validation patterns. New code must respect these guardrails.
External Dependencies (graph LR) - Maps external service integrations: auth providers, email services, storage, queues, third-party APIs. Mermaid diagram of all connections.

Phase 3: Module Discovery (Existing Modules)

Reverse-engineers the existing module structure from import graphs and directory boundaries. This is discovering what already exists — not planning new modules (that's /draft:decompose).

Module Dependencies (graph LR) - Analyzes imports to build a dependency graph of existing modules. Detects circular dependencies. Mermaid diagram with actual module names.
Module Inventory (table) - Each module documented with: responsibility, source files, exported API surface, dependencies, complexity rating, and a story summarizing what it does.
Dependency Order (topological sort) - Topological ordering from leaf modules (foundational, no dependencies) to the most dependent. Shows which parts of the system are the foundation.
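
The generated dependency section might contain a Mermaid sketch along these lines (module names are hypothetical):

example dependency graph
graph LR
  config --> db[(database)]
  db --> tasks[task-service]
  tasks --> api[api-routes]
  auth --> api

Read topologically, config and auth are the leaf modules here; api-routes is implemented last.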

βš™οΈ Init vs Decompose

/draft:init discovers and documents existing modules (marked [x] Existing). /draft:decompose plans new modules for features or refactors. Both write to the same section of architecture.md β€” init sets the baseline, decompose extends it. Existing modules are never removed by decompose.

Refresh Mode

/draft:init refresh re-scans the codebase and diffs against the existing architecture.md. It detects new directories, removed components, changed integration points, new domain objects, and new or merged modules — then updates all mermaid diagrams and module documentation to reflect the current state. Changes are presented for review before writing.

💡 Key Insight

Pay the analysis cost once, benefit on every track. Architecture discovery turns your codebase into a documented system that any AI assistant can understand instantly. Every /draft:new-track starts with full context: where things are, how they connect, what patterns to follow, and what to avoid.

Architecture Mode

Standard Draft gives you specs and plans. Architecture Mode goes deeper — it forces the AI to design before it codes. Every module gets a dependency analysis. Every algorithm gets documented in plain language. Every function signature gets approved before implementation begins. This is how you build complex features without the AI creating a tangled mess.

Why Architecture Mode exists

When AI builds a multi-module feature, it makes ad-hoc decisions about module boundaries, function signatures, and data flow. These decisions compound — a bad interface choice in module A cascades through modules B, C, and D. By the time you notice, the entire implementation needs restructuring. Architecture Mode front-loads these decisions into reviewable checkpoints where changes cost minutes instead of hours.

/draft:decompose - Module Decomposition. Problem: AI creates tangled code with unclear boundaries. Solution: Break the feature into 1-3 file modules with defined APIs, dependencies, and implementation order. You review the architecture before any code is written.

Step 2.5 - Algorithm Stories. Problem: AI jumps to code without understanding the algorithm. Solution: Write a natural-language Input → Process → Output description at the top of each file. You approve the algorithm before the AI writes a single line of implementation.

Step 3.0a - Execution State. Problem: AI invents random variable names and data structures on the fly. Solution: Define all intermediate state variables (input, processing, output, error) before coding. You control the data model.

Step 3.0b - Function Skeletons. Problem: AI creates functions with unclear signatures and unexpected interfaces. Solution: Generate stubs with complete types, parameters, and docstrings — no bodies. You approve function names and contracts before TDD begins.

~200 lines - Chunk Reviews. Problem: AI dumps 500+ lines in one shot, making review impossible. Solution: Implementation is capped at ~200-line chunks with mandatory developer review after each. Every chunk is a reviewable, committable unit.

/draft:coverage - Code Coverage. Problem: AI writes tests that look good but miss critical paths. Solution: Auto-detect your coverage tool, run it, classify every uncovered line (testable / defensive / infrastructure), and suggest specific tests for gaps.

How each checkpoint helps engineers

1. Story checkpoint - The AI writes a plain-language algorithm description: what goes in, what processing happens, what comes out. You review the logic before any code exists. If the approach is wrong, you fix it in English — not in a 200-line diff.

2. Execution State checkpoint - The AI defines every intermediate variable: input state, processing state, output state, error state. You see the exact data model before implementation. Bad variable names, missing error states, or wrong data structures get caught here — not during debugging.

3. Skeleton checkpoint - The AI generates function stubs with full type signatures, parameters, and docstrings — but no implementation bodies (see the sketch after this list). You approve the public API, function names, and contracts. When TDD starts, the structure is locked; only the bodies get filled in.

4. TDD + Chunk review - Implementation happens inside the pre-approved skeletons using TDD. If the diff exceeds ~200 lines, it stops for review. You never see a 500-line code dump — every chunk is small enough to read, understand, and approve.

5. Coverage checkpoint - After implementation, /draft:coverage measures test quality. Every uncovered line is classified: testable (needs a test), defensive (acceptable), or infrastructure (acceptable). You decide what matters, not the AI.

6. Validation checkpoint - After coverage verification, /draft:validate runs systematic quality checks using Draft context. Detects architecture violations (patterns from architecture.md), security issues (hardcoded secrets, SQL injection, XSS), performance anti-patterns (N+1 queries, blocking I/O), and regression risks (blast radius analysis). Generates validation-report.md with actionable fixes. Non-blocking by default — surfaces issues without halting progress.
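
For instance, an approved skeleton in a TypeScript project might look like this (names and types are hypothetical):

skeleton example (TypeScript)
interface Task {
  id: string;
  title: string;
}

/**
 * Reorders tasks after a drag-and-drop move.
 * @param tasks   current ordered task list
 * @param fromIdx index the task was dragged from
 * @param toIdx   index the task was dropped at
 * @returns a new array with the task moved; the input is not mutated
 */
export function reorderTasks(tasks: Task[], fromIdx: number, toIdx: number): Task[] {
  // Body intentionally missing at this checkpoint; it is filled in during TDD.
  throw new Error("not implemented");
}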

Implementation Flow

Story → checkpoint → Execution State → checkpoint
→ Skeletons → checkpoint → TDD (Red → Green → Refactor)
→ /draft:coverage (95%+ target) → /draft:validate (quality checks)

The decomposition process

Before Architecture Mode's checkpoints activate, /draft:decompose maps your feature into modules. Here's what happens concretely:

1. Scan - Analyze your codebase: directory structure, entry points, existing module boundaries, import patterns. For new projects, work from the spec.
2. Propose modules - Each module gets: name, single responsibility, 1-3 files, public API surface with typed signatures, dependencies on other modules, and complexity rating. You review and modify.
3. Map dependencies - Generate a dependency graph with ASCII diagram. Detect circular dependencies and propose fixes (extract shared interface, invert dependency, or merge modules). You review the graph.
4. Order implementation - Topological sort determines build order: implement leaf modules first (no dependencies), then dependents. Identifies which modules can be built in parallel.
5. Generate architecture.md - All of the above is written to a reviewable markdown file with module definitions, dependency diagram, implementation order, and story placeholders for each module.
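
One module entry in the generated file might read like this (fields and names are illustrative):

example module entry
### Module: task-ordering
- Responsibility: compute and persist task order
- Files: src/ordering/reorder.ts, src/ordering/store.ts
- Public API: reorderTasks(tasks, fromIdx, toIdx): Task[]
- Depends on: task-store
- Complexity: Low
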
βš™οΈ Opt-In

Architecture Mode is optional. Simple features, bug fixes, and config changes use the standard Draft workflow. Enable it when module boundaries, algorithm design, and coverage measurement add value. Good fit: multi-module features, new projects, complex algorithms, teams wanting maximum review granularity. Overkill: simple features touching 1-2 files, bug fixes with clear scope, configuration changes.

Revert Workflow

AI makes mistakes. When it does, you need to undo cleanly — not just git reset --hard and lose everything. Draft's revert understands the logical structure of your work. It knows which commits belong to which task, which tasks belong to which phase, and it updates both git history and Draft's tracking state together. You pick the granularity, review what will change, and confirm before anything happens.

Pick the granularity:
1. Task - single task's commits
2. Phase - all commits in a phase
3. Track - entire track's commits

Then the revert runs in three steps:
1. Preview - show commits, affected files, and plan.md changes before executing
2. Confirm - explicit user confirmation required — no silent reverts
3. Execute - git revert (newest first) + single revert commit + Draft state update
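
Conceptually, reverting a task behaves like this git sequence (a sketch of the idea, not Draft's exact internals; the commit range and track id are hypothetical):

revert sketch
# Revert the task's last three commits, newest first, as one unit
git revert --no-commit HEAD~3..HEAD
git commit -m "revert(user-auth): undo task 2.3"
# Draft then flips the task back to [ ] in plan.md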

Quality Disciplines

AI's default failure mode is to guess at fixes, skip verification, and claim success. Draft embeds three quality agents directly into the workflow — they activate automatically at the right moments and enforce disciplined engineering practices.

Systematic Debugging

The problem: When AI hits an error, it tries random changes hoping something sticks. Each "fix" often introduces new bugs. The solution: When a task is blocked ([!]), the Debugger Agent enforces a four-phase process that mirrors how senior engineers actually debug — understand first, then fix.

1. Investigate (read errors, reproduce) → 2. Analyze (find root cause) → 3. Hypothesize (smallest possible test) → 4. Implement (regression test + fix)

Why this matters: After 3 failed hypothesis cycles, the agent escalates to you with everything it has learned and eliminated — no more silent spirals of random attempts. Root cause is documented in plan.md, so the team learns from every bug.

Two-Stage Review

The problem: AI happily moves on to the next phase even if it missed requirements or wrote poor code. The solution: At every phase boundary, the Reviewer Agent runs two sequential checks. Stage 1 catches what's missing. Stage 2 catches what's messy. Only when both pass does the work proceed.

Stage 1: Spec Compliance
- All functional requirements implemented?
- Acceptance criteria met?
- No scope creep or missing features?
- Edge cases and error scenarios handled?

Stage 2: Code Quality
- Follows project patterns from tech-stack.md?
- Appropriate error handling?
- Tests cover real logic, not implementation details?
- Issues classified: Critical > Important > Minor

Why this matters: Critical issues (broken functionality, security vulnerabilities) must be fixed before proceeding — no exceptions. Important issues should be fixed. Minor issues are noted but don't block. This prevents both perfectionism paralysis and quality erosion.

Code Coverage

The problem: TDD ensures you write tests, but doesn't guarantee those tests are comprehensive. You can have 100% of your tests passing while 40% of your code is untested. The solution: /draft:coverage runs your project's coverage tool and classifies every uncovered line, so you know exactly what's tested, what's not, and whether that matters.

✅ Testable - Should be covered. Suggests specific tests to write.
🛡️ Defensive - Error handlers for impossible states. Acceptable to leave.
⚙️ Infrastructure - Framework boilerplate and entry points. Acceptable.

Systematic Validation

The problem: AI can pass all tests while violating architectural patterns, introducing security vulnerabilities, or creating performance bottlenecks. Tests measure "does it work" but not "is it good." The solution: /draft:validate uses Draft context (architecture.md, tech-stack.md) to catch quality issues tests miss — architecture violations, hardcoded secrets, N+1 queries, missing input validation, and more.

πŸ—οΈ
Architecture

Pattern violations, layer boundaries, dependency rules

πŸ”’
Security

Hardcoded secrets, SQL injection, XSS, weak hashing

⚑
Performance

N+1 queries, blocking I/O, missing pagination

πŸ“‹
Regression

Blast radius analysis, critical path detection

Why this matters: Validation runs automatically at track completion (configurable in workflow.md). Non-blocking by default — issues documented in validation-report.md with file:line references and remediation guidance. Complements coverage (quantitative test metrics) with qualitative codebase health checks.

The Economics

Writing specs feels slower. It isn't. The overhead is constant (~20% for simple tasks), but savings scale with complexity, team size, and criticality.

Scenario → Without Spec → With Spec
Simple feature → 1 hour → 1.2 hours
Feature with ambiguity → 3 hours + rework → 2 hours
Feature requiring team input → 5 hours + meetings → 2.5 hours
Wrong feature entirely → Days wasted → Caught in review

For critical product development, Draft isn't overhead — it's risk mitigation.

Core Principles

1. Plan before you build - Create specs and plans that guide development before writing code
2. Maintain context - Ensure agents follow style guides and product goals consistently
3. Iterate safely - Review plans before code is written, catch issues early
4. Work as a team - Share project context across team members through git-tracked specs
5. Verify before claiming - Evidence before assertions, always — run tests, show proof

When to Use Draft

Good fit

πŸ—οΈ

Design Decisions

Features requiring architecture choices, API design, or data model decisions

πŸ‘₯

Team Review

Work that will be reviewed by others β€” specs are faster to review than code

πŸ“¦

Multi-Step Work

Complex implementations spanning multiple files, modules, or phases

πŸ”

Repeated Failures

Anything where "just do it" has resulted in rework or misalignment

Overkill

One-line bug fixes, typo corrections, exploratory prototypes you'll throw away, simple config changes.

Constraint Mechanisms

How Draft keeps AI focused and accountable:

Mechanism → Effect → Prevents
Explicit spec → AI only implements what's documented → Scope creep
Phased plans → AI works on one phase at a time → Over-engineering
Verification steps → Each phase requires proof of completion → False claims
Status markers → Progress is tracked, not assumed → Lost context
Two-stage review → Spec compliance before code quality → Quality gaps
Validation checks → Context-aware quality validation (architecture, security, performance) → Pattern violations, vulnerabilities, tech debt

📌 Remember

Draft adds structure. Use it when structure has value. The AI becomes an executor of pre-approved work, not an autonomous decision-maker.