AI coding tool configs that reduce wasted tokens and prevent scope creep — for Claude Code, Cursor, and GitHub Copilot.
```bash
git clone https://github.com/mayurpise/dotai.git
cd dotai
./install.sh --config
```
See the repo for all options.
## Why CLAUDE.md and scrub.md

Two levers shape LLM behavior: CLAUDE.md controls how the model responds in every session; scrub.md controls what it does in one specific high-risk workflow. Together they reduce wasted tokens, prevent scope creep, and make outputs reliably actionable.
### CLAUDE.md: Token Efficiency
| Mechanism | How it saves tokens |
|---|---|
| Absolute Mode | Eliminates filler, hedging, emotional softening, continuation bias. A typical Claude response without guidance adds 20-40% padding — this strips it. |
| Output routing | Reports >20 lines go to a file; console returns 3-5 bullets. Prevents the context window from being consumed by long inline prose. |
| Gated clarifying questions | “Ask only when blocked” stops the back-and-forth loop that doubles session length. LLMs default to asking; this rule reverses that default. |
| Thinking frameworks as conditionals | Frameworks (First Principles, Expert Panel, etc.) only fire when the trigger condition matches. Without this, models apply heavy reasoning scaffolding to trivial prompts. |
| No comments by default | Cuts generated code size. Comments on obvious code are pure token waste in both generation and future context loads. |
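How these rules might read in a CLAUDE.md file (a hypothetical sketch, not the repo's actual wording):

```markdown
## Response Style (Absolute Mode)
- No filler, hedging, or emotional softening. Answer, then stop.
- Any report longer than 20 lines goes to a file; return 3-5 summary bullets inline.
- Ask a clarifying question only when genuinely blocked; otherwise state the assumption and proceed.
- Apply a thinking framework (First Principles, Expert Panel, ...) only when its trigger condition matches; answer trivial prompts directly.
- Generate code without comments unless the logic is non-obvious.
```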
### CLAUDE.md: Steering Better Decisions
Simplicity First + Surgical Changes work as a pair against the LLM’s strongest failure mode: pattern-matching to “best practices” and gold-plating. Models trained on public code associate quality with abstractions, interfaces, and configurability. These rules explicitly override that bias.
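A hypothetical phrasing of that override (illustrative, not the repo's actual text):

```markdown
## Coding Guidelines
- Simplicity First: write the least code that solves the stated problem. No speculative abstractions, interfaces, or configuration options.
- Surgical Changes: touch only the lines the task requires; never reformat, rename, or "improve" adjacent code.
```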
Goal-Driven Execution converts vague tasks into verifiable checkpoints before the model touches code. This matters because LLMs default to optimistic execution — they start writing and discover ambiguity mid-flight, which leads to restarts and rework. Stating assumptions upfront surfaces disagreements cheaply.
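A sketch of what such a checkpoint preamble could look like (step wording is illustrative):

```markdown
## Goal-Driven Execution
Before editing any file:
1. Restate the goal in one sentence.
2. List assumptions and open questions; proceed on stated assumptions unless blocked.
3. Define verifiable checkpoints: a command to run and its expected result.
```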
Precedence ordering (User Prefs → Absolute Mode → Coding Guidelines → Workflow) gives the model an explicit conflict-resolution rule. Without it, models blend instructions inconsistently when they conflict.
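Spelled out as a rule, the ordering might look like this (wording is illustrative; the order itself is from the config):

```markdown
## Precedence
When instructions conflict, apply in this order:
1. User Prefs
2. Absolute Mode
3. Coding Guidelines
4. Workflow Rules
```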
Workflow Rules (test before commit, clean status before push, conventional commits) encode guardrails that prevent the most common agentic failure: a model that makes changes, declares success, and leaves the repo in a broken state.
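A minimal sketch of those guardrails as rules (the commit-prefix list is an assumption based on the standard Conventional Commits convention):

```markdown
## Workflow Rules
- Run the test suite before every commit; never commit on a failing suite.
- `git status` must be clean before any push.
- Use conventional commit messages: `feat:`, `fix:`, `refactor:`, `chore:`.
```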
### scrub.md: Token Efficiency
| Mechanism | How it saves tokens |
|---|---|
| Phase 1: scope upfront | Globs and reads every source file once before agents launch. The three agents share that single read pass instead of each re-globbing and re-reading independently — a 3x cut in file-IO tokens. Also forces the user to confirm the target directory when missing, so agents never wander off-scope. |
| Phase 2: three parallel agents | Wall-clock time cut by ~3x vs sequential. Each agent receives the same file list but focuses on one dimension (reuse / quality / efficiency), preventing cross-contamination that bloats findings. |
| Structured report template | Exact schema (Applied / Skipped / Pending / Budget) means the LLM doesn’t improvise format. Format improvisation inflates output tokens and makes results hard to parse programmatically. |
| Budget caps (30 findings / 500 lines) | Hard stops a runaway session before it consumes unbounded context. Also forces prioritization — the model must rank, not just list. |
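A sketch of the fixed report schema (section names are from the template above; fields and separators are illustrative):

```markdown
# Scrub Report: <target-dir>

## Applied
- <file>:<line> <finding> (commit <sha>)

## Skipped
- <file>:<line> <finding> (reason: output change | test broke | project rule)

## Pending
- <file>:<line> <finding> (needs human review)

## Budget
Findings: <n>/30 max. Report length: <n>/500 lines max.
```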
### scrub.md: Steering Better Decisions
Tiered classification (T1/T2/T3) is the central safety mechanism: each tier maps a class of change to the level of autonomy the model gets over it. Without that mapping, models either over-apply (making risky changes autonomously) or under-apply (flagging everything as “needs review”). The tiers give precise autonomy boundaries.
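One plausible shape for those boundaries, inferred from the gates described below (illustrative; scrub.md defines the authoritative mapping):

```markdown
- T1: mechanical, behavior-preserving fixes (e.g., unused imports). Apply autonomously.
- T2: behavior-adjacent changes (e.g., logic simplification). Apply one file at a time, gated by tests.
- T3: risky or ambiguous changes (e.g., public interfaces). Never apply; flag for human review.
```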
Skip category constraints are unusually strong: only three valid reasons to skip a finding (output would change, test broke, project rule forbids it). “Low impact,” “cosmetic,” and “not worth it” are explicitly invalid. This prevents the model’s natural conservatism — LLMs frequently self-censor findings they judge as minor, creating silent gaps in the audit. The constraint forces completeness.
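As a rule block, the constraint might read (wording illustrative; the three reasons are those the workflow allows):

```markdown
## Skip Rules
A finding may be skipped for exactly three reasons:
1. Applying it would change program output.
2. Applying it broke a test.
3. A project rule forbids it.
"Low impact", "cosmetic", and "not worth it" are not valid reasons.
```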
“Skipping is the exception, not the default” in the opening line sets the execution posture before the model reads any rules. Framing bias is real: a prompt that opens with “be thorough” produces more findings than one that opens with “be careful.” This phrasing front-loads the aggressive posture.
Per-file commits reduce the blast radius of any single bad application. One file = one revertable unit. Without this instruction, models batch changes across files into one commit, making targeted rollbacks impossible.
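A sketch of that commit discipline (the message format is a hypothetical example):

```markdown
## Commits
- One file per commit; never batch changes across files.
- Message format: `scrub(<tier>): <file> - <finding summary>`
```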
Gates between tiers (typecheck after T1, typecheck + tests after each T2 file) create mandatory feedback loops. Without gates, errors in early files compound silently into later files, and the final state is broken in ways the model can’t attribute to a specific change.
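And the gates, written as rules (the revert step is an assumption that follows from per-file commits):

```markdown
## Gates
- After all T1 changes: run the typechecker; stop on failure.
- After each T2 file: run the typechecker and the full test suite; on failure, revert that file's commit before continuing.
```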