# PRD: Agent Skill Framework for Pi Coding Agent

> Designing a next-generation version of the Karpathy-inspired coding guidelines as a structured skill system compatible with the Pi coding agent.

---

## 1. Goal

Transform the four Karpathy-inspired principles from a static `CLAUDE.md` into a **composable, agent-aware skill framework** that:

1. **Operationalizes** behavioral guidelines as checkable pre/post steps rather than aspirational text
2. **Works with Pi's task/plan/work model** — integrating naturally into Pi's lifecycle (plan → task → work → review)
3. **Adapts rigor proportionally** to task complexity
4. **Can be composed, extended, and versioned** independently of any single agent's native instruction format

## 2. Background & Motivation

The existing `andrej-karpathy-skills` repo correctly identifies that LLM coding agents fail in predictable ways (wrong assumptions, overcomplication, orthogonal edits, weak verification). Its solution — a `CLAUDE.md` file with four principles — is a good start but has limitations:

- **It's a suggestion, not a guardrail.** The model is told to "think before coding," but there is no mechanism ensuring it actually does.
- **It's agent-specific.** Separate files for Claude Code, Cursor, etc. No shared skill layer.
- **It's all-or-nothing.** Every task gets the full treatment, wasting tokens on trivial work and under-protecting complex work.
- **It has no feedback loop.** There is no way to measure effectiveness or iterate.

This PRD addresses those gaps by designing a framework specifically compatible with Pi's architecture while remaining conceptually portable to other agents.

## 3. Design Principles

1. **Checkable over inspirational.** Every skill produces a concrete artifact (a checklist, a scope boundary, a verification matrix) rather than just telling the model what to do.
2. **Composable by nature.** Skills are independent modules that can be combined. A trivial task uses 1 skill; a complex refactor uses all 4.
3. **Aligned with Pi's lifecycle.** Skills map to Pi's plan → task → work → review flow, not bolted on outside it.
4. **Agent-agnostic core, agent-specific bindings.** The skill definitions are portable; the integration layer is per-agent.
5. **Self-measuring.** Skills track whether they're being followed and report adherence.

## 4. System Architecture

### 4.1 Skill Modules (Core)

Four composable skill modules, each producing a discrete artifact:

#### Skill A: `think-before-coding`
**Purpose:** Force explicit reasoning before any code changes.
**Artifact produced:** A structured pre-computation block containing:
- Assumptions list with confidence levels
- Possible interpretations of the request
- Simpler approaches considered and rejected
- Scope declaration (what will and won't change)

**Activation:** All tasks above "trivial" complexity.

#### Skill B: `simplicity-first`
**Purpose:** Detect and prevent over-engineering.
**Artifact produced:** A self-review report answering:
- What's the simplest solution that satisfies all goals?
- What abstractions were avoided and why?
- What edge cases are intentionally not handled?
- Line-count budget vs. estimate

**Activation:** All non-trivial coding tasks.

#### Skill C: `surgical-changes`
**Purpose:** Govern the scope of code changes.
**Artifact produced:** A change boundary document:
- Files explicitly touched and rationale per file
- Files explicitly not touched and rationale
- Orthogonal issues noticed (flagged but not touched)
- Orphan tracking (imports/variables made unused by THIS change)

**Activation:** Any task that modifies existing files.

#### Skill D: `goal-driven-execution`
**Purpose:** Define and verify success.
**Artifact produced:** A task verification matrix:
- Each subtask with explicit pass/fail criteria
- Test cases to write before implementation
- Before/after comparison points
- Loop iteration targets (max retries before escalating)

**Activation:** All tasks (even trivial ones get a minimal version).

### 4.2 Complexity Orchestrator

A lightweight decision layer that assesses task complexity and activates the appropriate skill set:

```
Complexity Scoring (human-in-the-loop optional):
  - Scope size:         1 point per file (max 5)
  - Ambiguity:          low(0) / medium(1) / high(2)
  - Risk surface:       internal(0) / public-api(1) / critical-path(2)
  - Knowledge gap:      none(0) / partial(1) / unknown(2)

  Score 0-2:    TRIVIAL    → Goal-Driven only
  Score 3-5:    SIMPLE     → Simplicity + Goal-Driven
  Score 6-8:    MEDIUM     → Simplicity + Surgical + Goal-Driven
  Score 9-11:   COMPLEX    → ALL skills + explicit user confirmation
```

Complexity is assessed during the planning phase. The orchestrator selects which skills to activate and sets their intensity.

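The scoring rules above can be sketched as a small rule-based function. This is a minimal sketch, not Pi code: `TaskSignals`, `score_complexity`, and the skill-name strings are illustrative assumptions.

```python
# Hypothetical sketch of the rule-based complexity scorer described above.
from dataclasses import dataclass

@dataclass
class TaskSignals:
    files_in_scope: int   # number of files the task is expected to touch
    ambiguity: int        # 0 = low, 1 = medium, 2 = high
    risk_surface: int     # 0 = internal, 1 = public-api, 2 = critical-path
    knowledge_gap: int    # 0 = none, 1 = partial, 2 = unknown

# Tiers as (score ceiling, tier name, skills to activate).
TIERS = [
    (2, "TRIVIAL", ["goal-driven-execution"]),
    (5, "SIMPLE", ["simplicity-first", "goal-driven-execution"]),
    (8, "MEDIUM", ["simplicity-first", "surgical-changes", "goal-driven-execution"]),
]

def score_complexity(s: TaskSignals) -> tuple[int, str, list[str]]:
    score = min(s.files_in_scope, 5) + s.ambiguity + s.risk_surface + s.knowledge_gap
    for ceiling, tier, skills in TIERS:
        if score <= ceiling:
            return score, tier, skills
    # Score 9 and above: all skills, plus explicit user confirmation upstream.
    return score, "COMPLEX", ["think-before-coding", "simplicity-first",
                              "surgical-changes", "goal-driven-execution"]

score, tier, skills = score_complexity(TaskSignals(2, 1, 1, 0))
print(score, tier)  # 4 SIMPLE
```

A human can still override the computed tier; the function only supplies the default.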
### 4.3 Post-Hoc Verification Hook

Before any changes are committed or reported as complete:

```
Verification Gate:
  1. Read the pre-computation artifact from Skill A
  2. Read the change boundary from Skill C
  3. Diff the actual output against those boundaries
  4. Report violations
  5. Proceed only if there are no violations; otherwise escalate
```

This is the operational "trust, but verify" step.

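Step 3 of the gate reduces to a set difference between the files the diff actually touched and the files declared in Skill C's boundary artifact. A minimal sketch, with assumed names; in practice the changed set might come from `git diff --name-only`:

```python
def check_change_boundary(changed_files: set[str], declared_files: set[str]) -> list[str]:
    """Return boundary violations: files changed but never declared in Skill C's artifact."""
    return [f"out-of-scope change: {f}" for f in sorted(changed_files - declared_files)]

# Example: utils.py was modified but never declared in the change boundary.
violations = check_change_boundary({"api.py", "utils.py"}, {"api.py"})
print(violations)  # ['out-of-scope change: utils.py']
```

An empty list means the gate passes; a non-empty list is what step 5 escalates on.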
### 4.4 Multi-Agent Review Mode

When used with Pi's multi-agent capability:

```
Agent 1 (Executor): Runs skills A, B, D → produces artifacts
Agent 2 (Reviewer): Runs skill C verification + cross-checks all artifacts
Agent 3 (Escalation): Only engaged if review finds violations > threshold
```

The reviewer's sole job is principle adherence: it checks code but never writes it.

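The three roles can be wired together as a simple pipeline. This is a sketch only: the runner callables stand in for whatever Pi's swarm mode actually provides, and the zero-violation threshold is an assumption.

```python
VIOLATION_THRESHOLD = 0  # assumed: any violation triggers escalation

def review_pipeline(task, run_executor, run_reviewer, run_escalation):
    # Agent 1: runs skills A, B, D plus the real work, yielding artifacts and a diff.
    artifacts, diff = run_executor(task)
    # Agent 2: checks adherence only; writes no code.
    violations = run_reviewer(task, artifacts, diff)
    if len(violations) > VIOLATION_THRESHOLD:
        # Agent 3: engaged only when the review fails.
        return run_escalation(task, violations)
    return diff
```

The escalation agent never runs in the happy path, which keeps the common case at two agents.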
## 5. Pi Integration Design

### 5.1 Mapping to Pi's Lifecycle

| Pi Phase | Skill Integration |
|---|---|
| **Plan** | Complexity orchestrator runs. Skills selected. Pre-computation (Skill A) produced as part of the execution plan. |
| **Task → Work** | Goal-Driven (Skill D) generates the verification matrix as the task definition. Worker loops against this matrix. |
| **Review** | Post-Hoc Verification Gate runs. Reviewer agent (if in swarm mode) validates Surgical Changes (Skill C). |
| **Review → Done** | Adherence report emitted as part of the task completion summary. |

### 5.2 Pi-Specific Skill Format

Skills are delivered to Pi as **task specs** that wrap the actual work:

```
For each Pi task:
  task.spec contains:
    - title: "[SKILL:A] Pre-computation → [original task title]"
    - content: Structured prompt template for the skill
    - review: Criteria for checking the skill was followed

  The worker first runs the skill and produces the artifact.
  It then runs the actual work, guided by the artifact.
```

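As plain data, the wrapping might look like the following sketch. The dict shape mirrors the spec above but is not a real Pi schema, and `wrap_task_with_skill` is a hypothetical helper:

```python
def wrap_task_with_skill(skill_id: str, skill_label: str, original_title: str,
                         template: str, review_criteria: str) -> dict:
    """Build a task spec whose title marks the skill that wraps the real work."""
    return {
        "title": f"[SKILL:{skill_id}] {skill_label} → {original_title}",
        "content": template,        # the skill's structured prompt template
        "review": review_criteria,  # how the review stage checks adherence
    }

spec = wrap_task_with_skill("A", "Pre-computation", "Add retry logic",
                            "# Pre-Computation Checklist ...",
                            "Checklist artifact present and complete")
print(spec["title"])  # [SKILL:A] Pre-computation → Add retry logic
```

Keeping the wrapper as data (rather than agent behavior) is what lets the review stage check artifact presence mechanically.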
### 5.3 Adaptive Skill Injection

Based on complexity scoring, the plan phase injects only the needed skills:

```
Example complexity: 4 (SIMPLE)

Plan output: "Activate: Simplicity First, Goal-Driven Execution"
Tasks generated:
  - [SKILL:B] Simplicity review of the approach
  - [ORIGINAL] Implement the feature
      → with verification criteria from [SKILL:D] embedded as task spec
```

### 5.4 Skill Execution as Standalone Tasks

Skills themselves are first-class Pi tasks:

```
pi_messenger({ action: "work", ... })
  → For each active skill:
    → Generate a task with the skill's prompt template
    → Execute → produce artifact
    → Use artifact to inform the real task
```

This leverages Pi's existing task orchestration rather than fighting it.

## 6. Skill Prompt Templates

### 6.1 Think Before Coding Template

```markdown
# Pre-Computation Checklist

Analyze the following request before producing any code:

## Request Analysis
- What is this asking? Restate in your own words.
- What is NOT being asked? Explicitly list out-of-scope items.

## Assumptions
For each assumption, rate confidence [HIGH|MEDIUM|LOW]:
1. [...]
2. [...]

## Alternatives Considered
- [X] Approach: ... (why chosen)
- [Y] Approach: ... (why rejected)

## Scope Declaration
Files to touch: [...]
Files to NOT touch: [...]
Patterns to NOT change: [...]

## Clarifications Needed
[Any questions for the user. If none, say "No clarifications needed."]

Output format: Return this checklist, then wait for confirmation before coding.
```

### 6.2 Simplicity First Template

```markdown
# Simplicity Review

Before implementing, answer:

1. What is the SIMPLEST possible solution that satisfies all goals?
   (Be specific. No abstractions, no factories, no interfaces unless already in the codebase.)

2. What flexibility beyond the explicit request should I NOT add?
   (List features NOT in scope.)

3. Is there any code I'm about to write that already exists or is trivially built-in?

4. Line count check: My estimate is __ lines. Could it be __ lines?

If any answer triggers "simplify," produce a revised, minimal approach.
```

### 6.3 Surgical Changes Template

```markdown
# Change Boundary Check

BEFORE writing or modifying any file:

1. List every file that WILL be changed and the reason per file.
2. List every file that will NOT be changed and why.
3. Note any orthogonal improvements I see but will NOT do:
   - [...]: "Not touching — not requested and outside scope."
4. Before deleting anything, confirm it was CREATED by previous changes in this session, not pre-existing.

AFTER changes are written, verify:
- Every diff line traces to a stated reason in step 1.
- No comments, formatting, or adjacent code changed without explicit reason.
```

### 6.4 Goal-Driven Execution Template

```markdown
# Verification Matrix

Define success criteria for each subtask. Weak criteria force me to stop
and ask the user; strong criteria let me loop independently.

Task: [what the user asked]

Subtask 1: [description]
  → Verify: [specific, testable check]
  → Test: [specific test to write]

Subtask 2: [description]
  → Verify: [specific, testable check]
  → Test: [specific test to write]

Max self-loops before escalation: [number]
Escalation trigger: [what would cause me to stop and ask the user]
```

## 7. Success Metrics

### 7.1 Adoption
- Skill files exist and are installable into Pi projects
- Skill prompt templates are usable by Pi workers without modification

### 7.2 Effectiveness
- **Diff purity:** % of changed lines directly attributable to stated goals (target: >95%)
- **Over-engineering rate:** % of tasks requiring a "simplify" rework (target: <15%)
- **Clarification rate:** % of tasks that surface clarifications BEFORE implementation (target: >60% for complex tasks)
- **Orthogonal change rate:** % of unintended changes to out-of-scope files (target: <5%)

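The diff-purity target reduces to simple arithmetic once each changed line has been attributed to a stated goal. How attribution is decided is left open here; the function name and the empty-diff convention are assumptions:

```python
def diff_purity(attributable_lines: int, total_changed_lines: int) -> float:
    """Share of changed lines attributable to stated goals; an empty diff counts as pure."""
    if total_changed_lines == 0:
        return 1.0
    return attributable_lines / total_changed_lines

purity = diff_purity(97, 100)
print(purity, purity > 0.95)  # 0.97 True
```

The other three effectiveness metrics are analogous per-task ratios, so one small module could report all of them.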
### 7.3 Pi-Specific
- Skill tasks compose naturally with Pi's task model (no awkward workarounds)
- Complexity scoring can be driven by Pi's plan phase
- Multi-agent review fits within Pi's swarm capabilities

## 8. Implementation Phases

### Phase 1: Core Skills (Weeks 1-2)
- Define the four skill modules as portable prompt templates
- Implement complexity scoring (rule-based)
- Create the post-hoc verification gate
- Test with Pi on 3-5 real tasks of varying complexity

### Phase 2: Pi Integration (Weeks 3-4)
- Map skills to Pi's plan/work/review lifecycle
- Implement adaptive skill injection (orchestrator)
- Create Pi task wrapper scripts for skill execution
- Document the integration guide

### Phase 3: Multi-Agent Review (Weeks 5-6)
- Implement the executor→reviewer two-agent workflow
- Create adherence reporting
- Develop a review scoring dashboard
- Test with concurrent agent sessions

### Phase 4: Feedback Loop & Iteration (Weeks 7-8)
- Build adherence tracking (which skills fire, how often, and which violations occur)
- Create an iterative improvement cycle
- Document lessons learned and refined templates
- Open-source the skill framework

## 9. Risks & Mitigations

| Risk | Impact | Mitigation |
|---|---|---|
| Skills add token overhead and slow down simple tasks | Medium | Adaptive complexity scoring strictly limits skill activation. Trivial tasks bypass most skills. |
| Pi workers may not faithfully produce skill artifacts | High | Skills are encoded as structured task specs with explicit output format requirements. The review stage validates artifact presence. |
| Over-enforcement kills developer flow | Medium | Skills are advisory-first, enforcement-second. Violations are reported, not blocked (unless critical). The user can override. |
| Principles are too abstract to operationalize | Medium | Each skill produces a concrete, structured artifact. Templates are specific, not generic. |
| Model-specific behavior differences | Low | Skills are designed around general LLM behavior patterns, not specific model quirks. Tested across models. |

## 10. Out of Scope (For Now)

- Native Pi agent runtime extension (this is a skill/spec solution, not a Pi code change)
- Automated project structure ingestion (Phase 5+)
- CI/CD pipeline integration (Phase 5+)
- Cross-project skill sharing platform (Phase 5+)
- Human-in-the-loop complexity assessment UI (rule-based scoring first)

## 11. Success Criteria for "Done"

The system is considered viable when:

1. ✅ Four skill modules defined as standalone prompt templates
2. ✅ Complexity orchestrator selects appropriate skill subsets based on task complexity
3. ✅ Post-hoc verification gate detects scope violations at a >80% rate on test tasks
4. ✅ Skills integrate cleanly into Pi's plan/work/review flow without workarounds
5. ✅ Multi-agent review (executor + reviewer) produces measurable improvement in code quality
6. ✅ Adherence metrics are tracked and reported for each task

---

*"The bottleneck is no longer coding — it's specification and review."*

This framework doesn't try to make the model code better. It makes the model **specify and review better**, which is where the actual value lies.