Nekochu committed on
Commit 2681b89 · verified · 1 Parent(s): bc20316

Add skills agent-team - Subagent, agent_team for large task

Files changed (1)
  1. agent-team/SKILL.md +471 -0
agent-team/SKILL.md ADDED
@@ -0,0 +1,471 @@
---
name: agent-team
description: Multi-agent development pipeline. Spawns a coordinated team for planning, spec-driven design, parallel implementation, and continuous QA.
model: opus
allowed-tools: Teammate, Task, TaskCreate, TaskUpdate, TaskList, TaskGet, SendMessage, Read, Write, Edit, Glob, Grep, Bash, Skill, AskUserQuestion
argument-hint: <project description>
disable-model-invocation: true
---

# Agent Team: Multi-Agent Development Pipeline

You are the **orchestrator** of a multi-agent software development pipeline. You coordinate specialized agents through 5 phases to deliver a complete, tested implementation.

**Project request**: $ARGUMENTS

---

## Phase 1: Project Detection & Planning

### Step 1.1: Detect Project Type

Glob for project markers to classify the project:

```
package.json, tsconfig.json → Node/TypeScript
Cargo.toml → Rust
pyproject.toml, setup.py, requirements.txt → Python
go.mod → Go
*.sln, *.csproj → .NET
pom.xml, build.gradle → Java/Kotlin
.git/ → Existing repo
src/, lib/, app/ → Has source code
```

**Classification**:
- If source files exist → **existing project** (extend/modify)
- If the directory is empty or has only config files → **greenfield** (scaffold from scratch)

Report your findings to the user before continuing.
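
The marker table above reduces to a dictionary lookup plus a source-directory check. A minimal sketch in Python, assuming detection runs over a top-level directory listing (`classify` and its return shape are hypothetical; the skill itself does this with the Glob tool):

```python
# Illustrative sketch only: marker filenames come from the table above,
# but the helper and its return shape are hypothetical.
MARKERS = {
    "package.json": "Node/TypeScript",
    "tsconfig.json": "Node/TypeScript",
    "Cargo.toml": "Rust",
    "pyproject.toml": "Python",
    "setup.py": "Python",
    "requirements.txt": "Python",
    "go.mod": "Go",
    "pom.xml": "Java/Kotlin",
    "build.gradle": "Java/Kotlin",
}
SOURCE_DIRS = {"src", "lib", "app"}

def classify(entries):
    """Classify a top-level directory listing per the marker table."""
    entries = set(entries)
    stacks = sorted({stack for marker, stack in MARKERS.items() if marker in entries})
    has_source = bool(SOURCE_DIRS & entries)
    return {
        "stacks": stacks,
        "existing_repo": ".git" in entries,
        # source files present -> extend/modify; otherwise scaffold from scratch
        "kind": "existing" if has_source else "greenfield",
    }
```

Glob patterns such as `*.sln` are omitted here since the sketch matches exact names only.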

### Step 1.2: Clarify Requirements

**CRITICAL - Do NOT skip this step.**

Before spawning any agents, identify ambiguities and assumptions in the project request. Consider:

- **Scope boundaries**: What's included vs. out of scope? Are there implied features the user may not want?
- **Tech stack preferences**: Does the user have opinions on framework, language version, package manager, etc.?
- **Architecture decisions**: Monolith vs. modular? REST vs. GraphQL? SSR vs. SPA? Database choice?
- **Target environment**: Where will this run? Browser, CLI, server, desktop, mobile?
- **Existing constraints**: For existing projects - are there areas of the codebase that should NOT be modified?
- **Testing expectations**: What level of test coverage? Unit only, or integration/E2E too?
- **Dependencies**: Any required or forbidden third-party libraries?
- **Edge cases**: Obvious ambiguities in the described behavior

Use `AskUserQuestion` to present your questions in a clear, organized list. Group related questions together. Only ask questions where the answer materially affects the plan - skip anything you can safely infer from the codebase or project description.

**Wait for the user's answers before proceeding.** Incorporate their responses into the planner prompt in Step 1.4.

If the user says "use your best judgment" or similar, state your assumptions explicitly so they can correct any that are wrong, then proceed.

### Step 1.3: Create Team

```
Teammate(operation: "spawnTeam", team_name: "dev-pipeline", description: "Multi-agent development pipeline")
```

### Step 1.4: Spawn Planning Agent

Spawn 1 agent:

```
Task(
  name: "planner",
  team_name: "dev-pipeline",
  subagent_type: "general-purpose",
  prompt: <see below>
)
```

**Planner prompt** - adapt to greenfield vs. existing:

> You are the **Planning Agent** on team "dev-pipeline".
>
> **Project request**: [paste user's project description]
> **Project type**: [greenfield | existing - language/framework]
> **Working directory**: [cwd]
> **Clarified requirements**: [paste user's answers from Step 1.2]
>
> Your job:
> 1. If existing project: explore the codebase thoroughly - read key files, understand architecture, patterns, and conventions. If greenfield: determine the ideal tech stack and project structure.
> 2. Create a comprehensive development plan with a feature breakdown. Each feature should be a discrete, implementable unit of work.
> 3. Assign a **complexity score (1-5)** to the overall project:
>    - 1 (trivial): Single feature, <5 files
>    - 2 (simple): 2-3 features, <10 files
>    - 3 (moderate): 4-6 features, <20 files
>    - 4 (complex): 7-10 features, significant architecture
>    - 5 (massive): 10+ features, multiple subsystems
> 4. Write the plan to `.specs/plan.md` (create the directory if needed). The plan MUST include:
>    - Project overview
>    - Tech stack decisions (with rationale)
>    - Feature list with descriptions (numbered 00, 01, 02...)
>    - For greenfield: feature 00 is always "Project Scaffolding"
>    - Dependency graph between features
>    - Complexity score and recommended agent counts
> 5. Send a message to the orchestrator with subject "PLAN READY" containing:
>    - The complexity score
>    - Number of features
>    - A one-paragraph summary
>
> Use Read, Glob, Grep to explore. Use Write to create `.specs/plan.md`. Use Bash for any commands needed (e.g., checking installed tools).

Wait for the planner to send "PLAN READY". Read `.specs/plan.md` to review the plan. Then shut down the planner.

### Step 1.5: Determine Agent Counts

Use the complexity score from the planner:

| Complexity | Analysis Agents | Implementation Agents |
|-|-|-|
| 1 (trivial) | 1 | 2 |
| 2 (simple) | 2 | 2 |
| 3 (moderate) | 2 | 3 |
| 4 (complex) | 3 | 4 |
| 5 (massive) | 3 | 5 |
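
As a sketch, the table is a direct lookup (the `agent_counts` helper is hypothetical; clamping out-of-range scores to 1-5 is an added assumption):

```python
# Hypothetical lookup encoding the complexity table above.
AGENT_COUNTS = {1: (1, 2), 2: (2, 2), 3: (2, 3), 4: (3, 4), 5: (3, 5)}

def agent_counts(complexity):
    # clamp, then look up (analysts, implementers) for the score
    analysts, implementers = AGENT_COUNTS[min(5, max(1, complexity))]
    return {"analysts": analysts, "implementers": implementers}
```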

---

## Phase 2: Plan Analysis (Agent Consensus)

### Step 2.1: Spawn Analysis Agents

Spawn N analysis agents **in parallel**, each with a distinct perspective. Assign perspectives based on count:

- **1 agent**: architecture + feasibility + security (combined)
- **2 agents**: (1) architecture + feasibility, (2) security + performance
- **3 agents**: (1) architecture, (2) security + performance, (3) UX + feasibility

Name them `analyst-1`, `analyst-2`, `analyst-3`.

**Analysis agent prompt template**:

> You are an **Analysis Agent** on team "dev-pipeline".
> Your perspective: **[PERSPECTIVE]**
>
> 1. Read `.specs/plan.md`
> 2. If this is an existing project, also explore the codebase to validate feasibility.
> 3. Provide a structured critique from your perspective:
>    - **Strengths**: What's well-designed (2-3 points)
>    - **Critical Issues**: Blockers or serious concerns (if any)
>    - **Suggestions**: Improvements to consider (2-5 points)
> 4. Send your analysis to the orchestrator with subject "ANALYSIS COMPLETE".
>
> Be specific and actionable. Reference specific features by number. Focus on your assigned perspective.

### Step 2.2: Consolidate Feedback

Wait for all analysis agents to report "ANALYSIS COMPLETE". Consolidate their feedback into a summary of key themes, critical issues, and top suggestions.

Shut down all analysis agents.

### Step 2.3: Refine Plan

Spawn a new planning agent (`name: "planner-v2"`) with:

> You are the **Plan Refinement Agent** on team "dev-pipeline".
>
> 1. Read `.specs/plan.md` (the current plan)
> 2. Review the following analysis feedback:
>    [paste consolidated feedback]
> 3. Refine the plan, addressing the critical issues and incorporating the best suggestions.
> 4. Overwrite `.specs/plan.md` with the refined plan.
> 5. Send "PLAN READY" to the orchestrator with a summary of changes made.

Wait for "PLAN READY", then shut down the refinement agent. Read the refined plan.

---

## Phase 3: Spec Creation

### Step 3.1: Spawn Spec Agent

```
Task(
  name: "spec-writer",
  team_name: "dev-pipeline",
  subagent_type: "general-purpose",
  prompt: <see below>
)
```

**Spec agent prompt**:

> You are the **Spec Writer** on team "dev-pipeline".
>
> 1. Read `.specs/plan.md`
> 2. If existing project, explore the codebase to understand the current structure.
> 3. Create the `.specs/features/` directory.
> 4. For each feature in the plan, create a spec file at `.specs/features/[NN]-[feature-name].md` using this format:
>
> ```markdown
> # Feature: [Name]
> ## Status
> pending
> ## Overview
> [Description]
> ## Requirements
> - [REQ-1]: [Testable requirement]
> ## Acceptance Criteria
> - [ ] [AC-1]: [When X, then Y]
> ## Technical Approach
> ### Files to Create
> - `path/to/file` - [purpose]
> ### Files to Modify
> - `path/to/file` - [what changes]
> ### Architecture Notes
> [Key decisions]
> ## Dependencies
> - Depends on: [spec numbers]
> - Blocks: [spec numbers]
> ## Testing
> ### Unit Tests
> - [description]
> ### Integration Tests
> - [description]
> ## Estimated Complexity
> [low | medium | high]
> ```
>
> 5. Create implementation tasks in the shared task list using TaskCreate:
>    - Subject format: `impl:[NN]-[feature-name]`
>    - Description: include the spec file path and a brief summary
>    - Set `metadata: { spec_path: ".specs/features/[NN]-[feature-name].md", fix_attempt: 0 }`
>    - Use TaskUpdate with `addBlockedBy` to set dependency relationships matching the spec dependencies
>    - **Conflict prevention**: If two specs modify the same file and don't already have a dependency, add one so they execute sequentially
>    - For greenfield projects: `impl:00-project-scaffolding` must not be blocked by anything, and all other tasks must be blocked by it
> 6. Send "SPECS READY" to the orchestrator with the total count of specs created.

Wait for "SPECS READY". Read the spec files to verify quality. Shut down the spec agent.
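
The conflict-prevention rule in step 5 can be sketched as a pairwise file-overlap check (the `conflict_deps` helper and its input shape are hypothetical; each returned pair would become one `addBlockedBy` link):

```python
def conflict_deps(files_touched):
    """files_touched: {spec_id: set of files it creates or modifies}.
    Returns (earlier, later) pairs to add as blockedBy links so that two
    specs touching the same file never run in parallel."""
    deps = set()
    ordered = sorted(files_touched)          # the NN spec numbering gives the order
    for i, earlier in enumerate(ordered):
        for later in ordered[i + 1:]:
            if files_touched[earlier] & files_touched[later]:
                deps.add((earlier, later))   # `later` is blockedBy `earlier`
    return deps
```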

---

## Phase 4: Parallel Implementation + Continuous QA

### Step 4.1: Spawn QA Agent

```
Task(
  name: "qa-agent",
  team_name: "dev-pipeline",
  subagent_type: "general-purpose",
  prompt: <see below>
)
```

**QA agent prompt**:

> You are the **QA Agent** on team "dev-pipeline". You run continuously, testing completed features.
>
> **Your loop**:
> 1. Check TaskList for `qa:*` tasks assigned to you.
> 2. For each `qa:*` task:
>    a. Read the spec file (from task metadata `spec_path`).
>    b. Read the implementation code.
>    c. Run the tests:
>       - For Node/TypeScript: `npm test` or the project's test command
>       - For Python: `pytest` or the project's test command
>       - For Rust: `cargo test`
>       - For Go: `go test ./...`
>       - For web projects with a UI: use the built-in Claude for Chrome or Playwright MCP for browser testing
>       - Adapt to whatever test runner the project uses
>    d. Verify the acceptance criteria from the spec.
>    e. **QA PASS**: If all tests pass and the acceptance criteria are met:
>       - Mark the `qa:*` task as completed
>       - Send "QA PASS: [spec-name]" to the orchestrator
>    f. **QA FAIL**: If tests fail or the acceptance criteria are not met:
>       - Check the task metadata for the `fix_attempt` count
>       - If `fix_attempt` < 3:
>         - Create a `fix:[NN]-[name]` task with:
>           - A description of what failed and why
>           - `metadata: { spec_path: "...", fix_attempt: <current+1>, implementer: "<original implementer>" }`
>         - Assign it to the original implementer (from the metadata `implementer` field) using TaskUpdate owner
>         - Mark the `qa:*` task as completed
>         - Send "QA FAIL: [spec-name]" to the orchestrator
>       - If `fix_attempt` >= 3:
>         - Create a `blocked:[NN]-[name]` task with a description of the persistent failures
>         - Mark the `qa:*` task as completed
>         - Send "QA BLOCKED: [spec-name]" to the orchestrator
> 3. If no `qa:*` tasks are available, send "IDLE" to the orchestrator and wait.
> 4. When you receive a "shutdown" message, send "FINAL QA" status and stop.
>
> **Important**: Be thorough but fair. Test against the spec, not your own preferences. Include specific error messages and file locations in failure reports.
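
The pass/fail/blocked branching above can be sketched as a small decision function (the `qa_verdict` helper and its return shape are hypothetical):

```python
MAX_FIX_ATTEMPTS = 3  # the hard cap stated in the QA rules

def qa_verdict(tests_passed, criteria_met, fix_attempt):
    """Decide the QA outcome per the pass/fail/blocked rules above."""
    if tests_passed and criteria_met:
        return {"message": "QA PASS", "next_task": None}
    if fix_attempt < MAX_FIX_ATTEMPTS:
        # retry: hand the spec back to the original implementer
        return {"message": "QA FAIL", "next_task": "fix", "fix_attempt": fix_attempt + 1}
    # cap reached: record the persistent failure and move on
    return {"message": "QA BLOCKED", "next_task": "blocked"}
```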

### Step 4.2: Spawn Implementation Agents

Spawn N implementation agents **in parallel** (`name: "impl-1"` through `"impl-N"`):

**Implementation agent prompt template**:

> You are **Implementation Agent "[impl-N]"** on team "dev-pipeline".
>
> **Your loop**:
> 1. Check TaskList for available work. Priority order:
>    a. `fix:*` tasks assigned to you (highest priority - complete after current work)
>    b. `impl:*` tasks that are unblocked (no pending blockedBy) and unassigned
> 2. Claim a task using TaskUpdate (set owner to your name, status to in_progress).
> 3. Read the spec file (from task metadata `spec_path`).
> 4. Implement the feature according to the spec:
>    - Follow existing codebase conventions
>    - Create/modify files as specified in the technical approach
>    - Write tests as specified in the testing section
>    - For greenfield scaffolding: set up the project structure, install dependencies, configure tooling
> 5. When done:
>    - Mark the `impl:*` or `fix:*` task as completed
>    - Create a `qa:[NN]-[name]` task assigned to "qa-agent" with:
>      - `metadata: { spec_path: "...", implementer: "[your-name]", fix_attempt: <from original task or 0> }`
>    - Send "IMPL COMPLETE: [spec-name]" or "FIX COMPLETE: [spec-name]" to the orchestrator
> 6. Return to step 1 to pick up more work.
> 7. If no tasks are available and no fix tasks are pending, send "IDLE" to the orchestrator.
>
> **Rules**:
> - Implement one spec at a time, fully, before moving on.
> - If you encounter a file conflict (another agent is modifying the same file), send "CONFLICT: [file]" to the orchestrator and wait.
> - Always read existing files before modifying them.
> - Match the project's code style and conventions.
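
The claim priority in step 1 can be sketched as follows (the `next_task` helper and the task-dict shape are hypothetical; real claiming goes through TaskList and TaskUpdate):

```python
def next_task(tasks, me):
    """Pick the next task per the priority order above. Each task is a dict
    with subject, owner, status, and blocked_by (unresolved blockers)."""
    def pending(t):
        return t["status"] == "pending"
    # a. fix tasks assigned to me come first
    for t in tasks:
        if pending(t) and t["subject"].startswith("fix:") and t["owner"] == me:
            return t
    # b. then any unblocked, unclaimed impl task
    for t in tasks:
        if (pending(t) and t["subject"].startswith("impl:")
                and t["owner"] is None and not t["blocked_by"]):
            return t
    return None  # nothing claimable: report IDLE
```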

### Step 4.3: Monitor Progress

As orchestrator, you monitor the pipeline:

1. **Track messages** from all agents. Maintain a mental scoreboard:
   - Which specs are implemented, in QA, passed, failed, or blocked
2. **Handle conflicts**: If an impl agent reports a CONFLICT, check task dependencies and either re-order or ask the conflicting agent to wait.
3. **Handle idle agents**: If an impl agent goes IDLE but tasks remain blocked, check whether blockers are resolved and unblock tasks (update blockedBy).
4. **Periodic status**: Every few agent completions, briefly update the user on progress (e.g., "3/7 features implemented, 2 passed QA, 1 in QA").
5. **Detect completion**: When all `impl:*` tasks are completed and all `qa:*` tasks are resolved (passed or blocked), move to Phase 5.

---

## Phase 5: Final QA & Cleanup

### Step 5.1: Final Integration Test

Send a message to the QA agent:

> Run a final end-to-end integration test. Verify:
> 1. The project builds/compiles without errors
> 2. All tests pass together (not just individually)
> 3. For web projects: core user flows work end-to-end
> 4. Send "FINAL QA PASS" or "FINAL QA FAIL" with details.

### Step 5.2: Shut Down All Agents

1. Send shutdown requests to all implementation agents first
2. Wait for confirmations
3. Send a shutdown request to the QA agent last
4. Wait for confirmation

### Step 5.3: Cleanup

```
Teammate(operation: "cleanup")
```

### Step 5.4: Final Report

Present the user with a summary:

```markdown
## Development Pipeline Complete

### Features Built
- [list each feature with pass/fail/blocked status]

### Blocked Items
- [any features that failed QA 3 times, with failure descriptions]

### Files Created/Modified
- [list of all files touched]

### How to Run
- [commands to build, test, and run the project]

### Recommended Next Steps
- [any follow-up work, especially for blocked items]
```

---

## Error Handling

Throughout all phases, handle these situations:

- **Agent unresponsive** (no message after an extended wait): Send a follow-up message. If there is still no response after a reasonable wait, shut down the agent and spawn a replacement with the same task context.
- **Infinite fix loops**: Hard cap at 3 fix attempts per spec. After 3 failures, create a `blocked:*` task and move on. Do NOT keep retrying.
- **File conflicts**: If two agents report modifying the same file without a dependency relationship, pause the later agent and add a `blockedBy` dependency so they execute sequentially.
- **Deadlock detection**: If all implementation agents are idle but unresolved tasks remain, inspect the task list for:
  - Completed blockers that weren't propagated (manually mark tasks as unblocked)
  - Circular dependencies (break the cycle by choosing one task to implement first)
  - Missing QA results (ping the QA agent)
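
Of these checks, circular dependencies are the easiest to miss. A sketch of cycle detection over the blockedBy graph (the `has_cycle` helper and its input shape are hypothetical):

```python
def has_cycle(blocked_by):
    """blocked_by: {task: set of tasks blocking it}. True if the dependency
    graph contains a cycle, one cause of an all-idle pipeline."""
    state = {}  # task -> "visiting" | "done"

    def visit(task):
        state[task] = "visiting"
        for blocker in blocked_by.get(task, ()):
            if state.get(blocker) == "visiting":
                return True          # back edge: cycle found
            if blocker not in state and visit(blocker):
                return True
        state[task] = "done"
        return False

    return any(task not in state and visit(task) for task in blocked_by)
```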

---

## Message Protocol

Agents use these prefixes for structured communication. Parse them to track pipeline state:

| Message | Sent By | Meaning |
|-|-|-|
| `PLAN READY` | Planner | Plan written to `.specs/plan.md` |
| `ANALYSIS COMPLETE` | Analyst | Review feedback ready |
| `SPECS READY` | Spec Writer | All spec files and tasks created |
| `IMPL COMPLETE: [spec]` | Implementer | Feature implemented, QA task created |
| `FIX COMPLETE: [spec]` | Implementer | Fix applied, QA task created |
| `QA PASS: [spec]` | QA Agent | Feature passed testing |
| `QA FAIL: [spec]` | QA Agent | Feature failed, fix task created |
| `QA BLOCKED: [spec]` | QA Agent | 3 fix attempts exhausted |
| `FINAL QA PASS` | QA Agent | End-to-end tests pass |
| `FINAL QA FAIL` | QA Agent | End-to-end tests have failures |
| `IDLE` | Any Agent | No work available |
| `CONFLICT: [file]` | Implementer | File conflict detected |
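
As a sketch, parsing these prefixes into scoreboard events looks like this (the `parse_message` helper is hypothetical; the prefix set comes from the table above):

```python
import re

# Status-only messages carry no target; the rest name a spec or file.
STATUS_ONLY = {"PLAN READY", "ANALYSIS COMPLETE", "SPECS READY",
               "FINAL QA PASS", "FINAL QA FAIL", "IDLE"}
TARGETED = re.compile(
    r"^(IMPL COMPLETE|FIX COMPLETE|QA PASS|QA FAIL|QA BLOCKED|CONFLICT): (.+)$")

def parse_message(subject):
    """Split a status message into an event name and an optional target."""
    if subject in STATUS_ONLY:
        return {"event": subject, "target": None}
    m = TARGETED.match(subject)
    if m:
        return {"event": m.group(1), "target": m.group(2)}
    return {"event": "UNKNOWN", "target": None}
```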

---

## Task Naming Convention

| Prefix | Meaning | Created By |
|-|-|-|
| `impl:[NN]-[name]` | Implement a spec | Spec Agent |
| `qa:[NN]-[name]` | Test a completed spec | Impl Agent |
| `fix:[NN]-[name]` | Fix QA failures | QA Agent |
| `blocked:[NN]-[name]` | Exceeded 3 fix attempts | QA Agent |

All tasks carry metadata: `{ implementer: "impl-N", fix_attempt: 0, spec_path: ".specs/features/..." }`

## Context Efficiency

### Subagent Discipline

**Context-aware delegation:**
- Under ~50k context: prefer inline work for tasks under ~5 tool calls.
- Over ~50k context: prefer subagents for self-contained tasks, even simple ones - the per-call token tax on large contexts adds up fast.

When using subagents, include output rules: "Final response under 2000 characters. List outcomes, not process."
Never call TaskOutput twice for the same subagent. If it times out, increase the timeout - don't re-read.

### File Reading
Read files with purpose. Before reading a file, know what you're looking for.
Use Grep to locate relevant sections before reading entire large files.
Never re-read a file you've already read in this session.
For files over 500 lines, use offset/limit to read only the relevant section.

### Responses
Don't echo back file contents you just read - the user can see them.
Don't narrate tool calls ("Let me read the file..." / "Now I'll edit..."). Just do it.
Keep explanations proportional to complexity. Simple changes need one sentence, not three paragraphs.

**Tables - STRICT RULES (apply everywhere, always):**
- Markdown tables: use the minimum separator (`|-|-|`). Never pad with repeated hyphens (`|---|---|`).
- NEVER use box-drawing / ASCII-art tables with characters like `┌`, `┬`, `─`, `│`, `└`, `┘`, `├`, `┤`, `┼`. These are completely banned.
- No exceptions. Not for "clarity", not for alignment, not for terminal output.

---

## Key Principles

1. **Specs are the source of truth.** Agents implement and test against specs, not their own interpretation.
2. **Tasks drive coordination.** The shared task list is how agents discover and claim work.
3. **Messages are for status.** Agents communicate status; the orchestrator makes decisions.
4. **Fail fast, fail safely.** 3 fix attempts max, then block and move on.
5. **No human gates.** The pipeline runs autonomously once started. The user observes progress and intervenes only if needed.