/**
 * Shared prompt instructions reused across agent types.
 * Each constant is wrapped in a descriptive XML tag for unambiguous
 * first-order / second-order delimiter separation per Anthropic best practices.
 */

export const SIGNAL_FORMAT = `
As your final action, write \`.cw/output/signal.json\`:

- Done: \`{ "status": "done" }\`
- Need clarification: \`{ "status": "questions", "questions": [{ "id": "q1", "question": "..." }] }\`
- Unrecoverable error: \`{ "status": "error", "error": "..." }\` — include the actual error output, stack trace, or repro steps, not just a summary
`;

export const INPUT_FILES = `
Read \`.cw/input/manifest.json\` first. It contains two arrays:

- \`files\` — your **assignment**. Read every file in full.
- \`contextFiles\` — **background reference**. Do NOT read these upfront. Only read a context file when you specifically need information from it.

**Assignment Files** (read all of these)

- \`initiative.md\` — frontmatter: id, name, status
- \`phase.md\` — frontmatter: id, name, status; body: description
- \`task.md\` — frontmatter: id, name, category, type, priority, status; body: description
- \`pages/\` — one per page; frontmatter: title, parentPageId, sortOrder; body: markdown

**Context Files** (read-only, read on-demand)

- \`context/index.json\` — **read this first** when you need context. Contains \`tasksByPhase\`: a map of phaseId → array of \`{ file, id, name, status }\`. Use it to find relevant task files without bulk-reading.
- \`context/phases/\` — frontmatter: id, name, status, dependsOn; body: description
- \`context/tasks/\` — frontmatter: id, name, phaseId, parentTaskId, category, type, priority, status, summary; body: description

Completed tasks include a \`summary\` field with what the previous agent accomplished.

Context files provide awareness of the broader initiative. There may be dozens — do NOT bulk-read them all. Use \`context/index.json\` to find which task files belong to a specific phase, then read only those.
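For orientation, \`context/index.json\` might look like this (IDs, names, and status values here are hypothetical, for illustration only):

\`\`\`json
{
  "tasksByPhase": {
    "phase-01": [
      { "file": "context/tasks/task-001.md", "id": "task-001", "name": "Example task", "status": "done" },
      { "file": "context/tasks/task-002.md", "id": "task-002", "name": "Another task", "status": "pending" }
    ]
  }
}
\`\`\`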
Do not duplicate or contradict context file content in your output.
`;

export const ID_GENERATION = `
When creating new entities (phases, tasks, decisions), generate a unique ID by running:

\`\`\`
cw id
\`\`\`

Use the output as the filename (e.g., \`{id}.md\`).
`;

export const DEVIATION_RULES = `
1. **Typo in assigned files** → Fix silently
2. **Bug in files you're modifying** → Fix if < 10 lines, otherwise note and move on
3. **Missing dependency** → Check context files — if another agent is building it, \`cw ask\`; if it's within your scope, create it yourself
4. **Architectural mismatch** → STOP. Signal "questions" with what you found vs. what the task assumes
5. **Ambiguous requirement** → STOP. Signal "questions" with the ambiguity and 2-3 concrete options
6. **Task wrong or impossible** → STOP. Signal "error" explaining why

Never silently reinterpret a task.
`;

export const GIT_WORKFLOW = `
You are in an isolated git worktree. Other agents work in parallel on separate branches.

- Stage specific files with \`git add <file>\`, never \`git add .\` or \`git add -A\` — these risk staging secrets, build artifacts, or generated files
- Never force-push
- Run \`git status\` before committing
- Commit messages use Conventional Commits: \`feat:\`, \`fix:\`, \`refactor:\`, \`docs:\`, \`test:\`, \`chore:\`. Describe the "why", not the "what".
- If pre-commit hooks fail, fix the underlying issue — never bypass with \`--no-verify\`
- Never stage secrets, \`.env\` files, credentials, or API keys. If you encounter them, \`git reset\` them immediately.
`;

export const CODEBASE_EXPLORATION = `
Before beginning your analysis, explore the actual codebase to ground every decision in reality.

**Step 1 — Read project docs**
Check for CLAUDE.md, README.md, and docs/ at the repo root. These contain architecture decisions, conventions, and patterns you MUST follow. If they exist, read them first — they override any assumptions.
**Step 2 — Understand project structure**
Explore the project layout: key directories, entry points, config files (package.json, tsconfig, pyproject.toml, go.mod, etc.). Understand the tech stack, frameworks, and build system before proposing anything.

**Step 3 — Check existing patterns**
Before proposing any approach, search for how similar things are already done in the codebase. If the project has an established pattern for routing, state management, database access, testing, etc. — your decisions must build on those patterns, not invent new ones.

**Step 4 — Use subagents for parallel exploration**
Spawn subagents to explore different aspects of the codebase simultaneously rather than reading files one at a time. For example: one subagent for project structure and tech stack, another for existing patterns related to the initiative, another for test conventions. Parallelize aggressively.

**Grounding rule**: Every decision, question, and plan MUST reference specific files, patterns, or conventions found in the codebase. If your output could apply to any generic project without modification, you have failed — start over with deeper exploration.
`;

export const CONTEXT_MANAGEMENT = `
When reading multiple files or running independent commands, execute them in parallel rather than sequentially. After each commit, update your progress file (see Progress Tracking).
`;

export const TEST_INTEGRITY = `
1. **Never mirror implementation logic in assertions.** Hardcode expected values from requirements, don't recalculate them.
2. **Never modify existing test assertions to make them pass.** If a test expects X and your code produces Y, fix your code. Exception: your task explicitly changes expected behavior.
3. **Never skip or disable tests.** No \`it.skip()\`, \`.todo()\`, or commenting out. If unfixable, signal error.
4. **Each test must be independent.** No shared mutable state, no order dependence.
5. **Run the full relevant test suite**, not just your new tests.
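A minimal sketch of rule 1 (the \`applyDiscount\` helper is hypothetical, not from this codebase):

\`\`\`typescript
// Hypothetical function under test — illustrative only.
function applyDiscount(price: number, rate: number): number {
  return price * (1 - rate);
}

// Bad — the assertion recalculates the expected value by mirroring the
// implementation, so a wrong formula would still pass:
const mirrored = applyDiscount(100, 0.2) === 100 * (1 - 0.2);

// Good — the expected value 80 is hardcoded from the requirement:
const hardcoded = applyDiscount(100, 0.2) === 80;
\`\`\`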
`;

export const SESSION_STARTUP = `
1. \`pwd\` — confirm working directory
2. \`git status\` — check for unexpected state
3. Read \`CLAUDE.md\` at the repo root (if it exists) — it contains project conventions and patterns you must follow.
4. Run the test suite — establish a green baseline. If it is already failing, signal "error". Don't build on a broken foundation.
5. Read \`.cw/input/manifest.json\` and all **assignment** files (the \`files\` array). Do not bulk-read context files.
`;

export const PROGRESS_TRACKING = `
Update \`.cw/output/progress.md\` after each commit:

\`\`\`markdown
## Current Status
[What you just completed]

## Next Steps
[What you're working on next]

## Blockers
[Any issues or questions — empty if none]
\`\`\`

This file survives context compaction — read it first if your context is refreshed.
`;

const PLANNING_MODES = new Set(['plan', 'refine']);

export function buildInterAgentCommunication(agentId: string, mode: string = 'execute'): string {
  if (PLANNING_MODES.has(mode)) {
    return `
Your agent ID: **${agentId}**

You are in a planning mode (\`${mode}\`). You define high-level structure, not implementation details. Real-time coordination is almost never needed.

If you are truly blocked on information only another running agent has:

\`\`\`
cw ask "<question>" --from ${agentId} --agent-id <target-agent-id>
\`\`\`

This blocks until the target answers. Use it as a last resort — not for approach validation.
`;
  }

  return `
Your agent ID: **${agentId}**

## Commands

| Command | Behavior |
|---------|----------|
| \`cw listen --agent-id ${agentId}\` | Blocks via SSE until one question arrives. Prints JSON and exits. |
| \`cw ask "<question>" --from ${agentId} --agent-id <target-agent-id>\` | Creates a conversation and blocks until the target answers. Prints the answer to stdout. |
| \`cw answer "<answer>" --conversation-id <conversation-id>\` | Answers a pending question. Prints confirmation JSON. |

## Listener Lifecycle

Set up a background listener so you can answer questions from other agents while working.

\`\`\`bash
# 1. Start listener, redirect to temp file
CW_LISTEN_FILE=$(mktemp)
cw listen --agent-id ${agentId} > "$CW_LISTEN_FILE" &
CW_LISTEN_PID=$!

# 2. Between work steps, check for incoming questions
if [ -s "$CW_LISTEN_FILE" ]; then
  # 3. Parse the JSON, answer, clear, restart
  CONV_ID=$(jq -r '.conversationId' "$CW_LISTEN_FILE")
  QUESTION=$(jq -r '.question' "$CW_LISTEN_FILE")
  # Read code / think / answer with specifics
  cw answer "<specific answer>" --conversation-id "$CONV_ID" > "$CW_LISTEN_FILE"
  cw listen --agent-id ${agentId} > "$CW_LISTEN_FILE" &
  CW_LISTEN_PID=$!
fi

# 4. Before writing signal.json — kill listener and clean up
kill $CW_LISTEN_PID 2>/dev/null
rm -f "$CW_LISTEN_FILE"
\`\`\`

## Targeting

- \`--agent-id <agent-id>\` — You know exactly which agent to ask (e.g., from manifest or a previous conversation).
- \`--task-id <task-id>\` — Ask whichever agent is currently running that task.
- \`--phase-id <phase-id>\` — Ask whichever agent is working in that phase. Use when you need something from an adjacent phase but don't know the agent ID.

## When to Ask

- You need an **uncommitted interface contract** — an export path, method signature, type definition, or schema that another agent is actively creating and hasn't pushed yet.
- You are about to **modify a shared file** that another agent may also be editing, and you need to coordinate who changes what.

## When NOT to Ask

- The answer is in the **codebase** — search first (\`grep\`, \`find\`, read the code).
- The answer is in your **input files or context files** — read them again before asking.
- You are **not actually blocked** — if you can make a reasonable decision and move on, do that.
- You want to **confirm your approach** — that's not what inter-agent communication is for. Make the call.

**Bad**: "How should I structure the API response for the users endpoint?" — a design decision you should make based on existing codebase patterns.

**Good**: "What will the export path and method signature be for createUser() in packages/shared/src/api/users.ts? I need to import it." — asks for a specific uncommitted artifact another agent is building.

## Answering Questions

When you receive a question, be **specific**. Include the actual code snippet, file path, type signature, or schema. Vague answers force a follow-up round-trip.

Check for incoming questions between commits — not after every line of code.
`;
}
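// Usage sketch — "agent-7" is a hypothetical agent ID:
//
//   buildInterAgentCommunication("agent-7", "plan"); // planning modes get the minimal coordination block
//   buildInterAgentCommunication("agent-7");         // default "execute" gets the full listener lifecycle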