refactor: Compress discuss prompt for conciseness (~30% word reduction)

Cut redundant rules already demonstrated by good/bad examples,
removed default-Claude-behavior instructions, collapsed verbose
sections into single directives.
Lukas May
2026-02-18 17:30:07 +09:00
parent e73e99cb28
commit a4502ebf77


@@ -22,61 +22,43 @@ ${ID_GENERATION}
 ## Goal-Backward Analysis
-Before asking questions, work backward from the goal:
+Work backward from the goal before asking anything:
 1. **Observable outcome**: What will the user see/do when this is done?
 2. **Artifacts needed**: What code, config, or infra produces that outcome?
 3. **Wiring**: How do the artifacts connect (data flow, API contracts, events)?
-4. **Failure points**: What can go wrong? What are the edge cases?
+4. **Failure points**: What can go wrong? Edge cases?
-Only ask questions that this analysis cannot answer from the codebase alone.
+Only ask questions this analysis cannot answer from the codebase alone.
 ## Question Quality
-**Bad question**: "How should we handle errors?"
+**Bad**: "How should we handle errors?"
-**Good question**: "The current API returns HTTP 500 for all errors. Should we: (a) add specific error codes (400, 404, 409) with JSON error bodies, (b) keep 500 but add error details in the response body, or (c) add a custom error middleware that maps domain errors to HTTP codes?"
+**Good**: "The current API returns HTTP 500 for all errors. Should we: (a) add specific error codes (400, 404, 409) with JSON error bodies, (b) keep 500 but add error details in the response body, or (c) add a custom error middleware that maps domain errors to HTTP codes?"
-Every question must:
-- Reference something concrete (file, pattern, constraint)
-- Offer specific options when choices are clear
-- Explain what depends on the answer
+Every question must explain what depends on the answer.
 ## Decision Quality
-**Bad decision**: "We'll use a database for storage"
+**Bad**: "We'll use a database for storage"
-**Good decision**: "Use SQLite via better-sqlite3 with drizzle-orm. Schema in src/db/schema.ts, migrations via drizzle-kit. Chosen over PostgreSQL because: single-node deployment, no external deps, existing pattern in the codebase."
+**Good**: "Use SQLite via better-sqlite3 with drizzle-orm. Schema in src/db/schema.ts, migrations via drizzle-kit. Chosen over PostgreSQL because: single-node deployment, no external deps, existing pattern in the codebase."
-Every decision must include: what, why, and what alternatives were rejected.
-When the decision affects observable behavior, also include: how to verify it works (acceptance criteria, test approach, or measurable outcome).
-## Read Before Asking
-Before asking ANY question, check if the codebase already answers it:
-- Read existing code patterns, config files, package.json
-- Check if similar problems were already solved elsewhere
-- Don't ask "what framework should we use?" if the project already uses one
+Include: what, why, rejected alternatives. For behavioral decisions, add verification criteria.
+## Codebase First
+Don't ask what the codebase already answers. If the project uses a framework, don't ask which framework to use.
 ## Question Categories
-- **User Journeys**: Main workflows, success/failure paths, edge cases
+- **User Journeys**: Workflows, success/failure paths, edge cases
-- **Technical Constraints**: Patterns to follow, things to avoid, reference code
+- **Technical Constraints**: Patterns to follow, things to avoid
-- **Data & Validation**: Data structures, validation rules, constraints
+- **Data & Validation**: Structures, rules, constraints
 - **Integration Points**: External systems, APIs, error handling
-- **Testability & Verification**: How will we verify each feature works? What are measurable acceptance criteria? What test strategies apply (unit, integration, e2e)?
+- **Testability**: Acceptance criteria, test strategies
 ## Rules
 - Ask 2-4 questions at a time, not more
-- Provide options when choices are clear
-- Capture every decision with rationale
-- Don't proceed until ambiguities are resolved
 ## Definition of Done
-Before writing signal.json with status "done", verify:
-- [ ] Every question references something concrete (file, pattern, constraint)
-- [ ] Every question offers specific options when choices are clear
-- [ ] Every decision includes what, why, and rejected alternatives
-- [ ] Behavioral decisions include verification criteria
-- [ ] The codebase was checked before asking — no questions the code already answers`;
+- Every decision includes what, why, and rejected alternatives
+- Behavioral decisions include verification criteria
+- No questions the codebase already answers`;
 }
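
The prompt's "good question" example lists option (c), an error middleware that maps domain errors to HTTP codes. As a rough illustration of what that option could look like (all class and function names here are hypothetical, not taken from the repository):

```typescript
// Hypothetical sketch of option (c): map domain errors to HTTP status
// codes instead of returning a blanket 500 for everything.

class NotFoundError extends Error {}
class ConflictError extends Error {}
class ValidationError extends Error {}

interface HttpError {
  status: number;
  body: { error: string; message: string };
}

// Central mapping from domain errors to HTTP responses; errors the
// mapping does not recognize still fall back to a generic 500.
function toHttpError(err: Error): HttpError {
  if (err instanceof NotFoundError)
    return { status: 404, body: { error: "not_found", message: err.message } };
  if (err instanceof ConflictError)
    return { status: 409, body: { error: "conflict", message: err.message } };
  if (err instanceof ValidationError)
    return { status: 400, body: { error: "bad_request", message: err.message } };
  return { status: 500, body: { error: "internal", message: "unexpected error" } };
}
```

A framework-level middleware would simply catch thrown errors, run them through `toHttpError`, and write the resulting status and body to the response.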