refactor: Compress discuss prompt for conciseness (~30% word reduction)
Cut redundant rules already demonstrated by the good/bad examples, removed default-Claude-behavior instructions, and collapsed verbose sections into single directives.
@@ -22,61 +22,43 @@ ${ID_GENERATION}
 ## Goal-Backward Analysis
 
-Before asking questions, work backward from the goal:
+Work backward from the goal before asking anything:
 
 1. **Observable outcome**: What will the user see/do when this is done?
 2. **Artifacts needed**: What code, config, or infra produces that outcome?
 3. **Wiring**: How do the artifacts connect (data flow, API contracts, events)?
-4. **Failure points**: What can go wrong? What are the edge cases?
+4. **Failure points**: What can go wrong? Edge cases?
 
-Only ask questions that this analysis cannot answer from the codebase alone.
+Only ask questions this analysis cannot answer from the codebase alone.
 
 ## Question Quality
 
-**Bad question**: "How should we handle errors?"
+**Bad**: "How should we handle errors?"
 
-**Good question**: "The current API returns HTTP 500 for all errors. Should we: (a) add specific error codes (400, 404, 409) with JSON error bodies, (b) keep 500 but add error details in the response body, or (c) add a custom error middleware that maps domain errors to HTTP codes?"
+**Good**: "The current API returns HTTP 500 for all errors. Should we: (a) add specific error codes (400, 404, 409) with JSON error bodies, (b) keep 500 but add error details in the response body, or (c) add a custom error middleware that maps domain errors to HTTP codes?"
 
-Every question must:
-- Reference something concrete (file, pattern, constraint)
-- Offer specific options when choices are clear
-- Explain what depends on the answer
+Every question must explain what depends on the answer.
 
 ## Decision Quality
 
-**Bad decision**: "We'll use a database for storage"
+**Bad**: "We'll use a database for storage"
 
-**Good decision**: "Use SQLite via better-sqlite3 with drizzle-orm. Schema in src/db/schema.ts, migrations via drizzle-kit. Chosen over PostgreSQL because: single-node deployment, no external deps, existing pattern in the codebase."
+**Good**: "Use SQLite via better-sqlite3 with drizzle-orm. Schema in src/db/schema.ts, migrations via drizzle-kit. Chosen over PostgreSQL because: single-node deployment, no external deps, existing pattern in the codebase."
 
-Every decision must include: what, why, and what alternatives were rejected.
-
-When the decision affects observable behavior, also include: how to verify it works (acceptance criteria, test approach, or measurable outcome).
+Include: what, why, rejected alternatives. For behavioral decisions, add verification criteria.
 
-## Read Before Asking
-
-Before asking ANY question, check if the codebase already answers it:
-- Read existing code patterns, config files, package.json
-- Check if similar problems were already solved elsewhere
-- Don't ask "what framework should we use?" if the project already uses one
+## Codebase First
+
+Don't ask what the codebase already answers. If the project uses a framework, don't ask which framework to use.
 
 ## Question Categories
 
-- **User Journeys**: Main workflows, success/failure paths, edge cases
-- **Technical Constraints**: Patterns to follow, things to avoid, reference code
-- **Data & Validation**: Data structures, validation rules, constraints
+- **User Journeys**: Workflows, success/failure paths, edge cases
+- **Technical Constraints**: Patterns to follow, things to avoid
+- **Data & Validation**: Structures, rules, constraints
 - **Integration Points**: External systems, APIs, error handling
-- **Testability & Verification**: How will we verify each feature works? What are measurable acceptance criteria? What test strategies apply (unit, integration, e2e)?
+- **Testability**: Acceptance criteria, test strategies
 
 ## Rules
 
 - Ask 2-4 questions at a time, not more
 - Provide options when choices are clear
 - Capture every decision with rationale
 - Don't proceed until ambiguities are resolved
 
 ## Definition of Done
 
 Before writing signal.json with status "done", verify:
 
-- [ ] Every question references something concrete (file, pattern, constraint)
-- [ ] Every question offers specific options when choices are clear
-- [ ] Every decision includes what, why, and rejected alternatives
-- [ ] Behavioral decisions include verification criteria
-- [ ] The codebase was checked before asking — no questions the code already answers`;
+- Every decision includes what, why, and rejected alternatives
+- Behavioral decisions include verification criteria
+- No questions the codebase already answers`;
 }