# HENRY AI — AGENT ARCHITECTURE v2
**Research Basis:** Anthropic Engineering, Manus Context Engineering, Phil Schmid Harness 2026, A-MEM, MemoryArena Feb 2026, MALT multi-agent training
**Version:** 2.0 | **Date:** 2026-03-01
**Status:** PRODUCTION STANDARD — All new agent files use this architecture

---

## WHY THIS ARCHITECTURE EXISTS

As of March 2026, the research is conclusive:

> **The model is a commodity. The harness determines whether agents succeed or fail.**
> — Phil Schmid, Anthropic Engineering, Manus post-mortem

Manus rebuilt their agent harness 5 times in 6 months. Same models. 5 architectures. Each rebuild improved reliability. Vercel removed 80% of their agent's tools — accuracy jumped from 80% to 100%, tokens dropped 37%, speed improved 3.5x.

Anthropic's own internal research system — the one powering Claude Research — uses this exact pattern. Their multi-agent system outperformed single-agent Claude Opus by 90.2% on complex research tasks.

We are building HENRY on what works at the frontier, not on what seems smart.

---

## THE COMPUTER METAPHOR (how to think about this)

```
MODEL          = CPU          (raw intelligence, interchangeable)
CONTEXT WINDOW = RAM          (volatile, limited, expensive)
HARNESS        = OPERATING SYSTEM  (boots the agent, manages resources)
AGENT FILE     = APPLICATION  (specific logic running on the OS)
MEMORY FILES   = HARD DRIVE   (persistent, survives context wipes)
```

The HENRY harness is OpenClaw. The agent files are what run on top of it.
Memory files live in GitHub; that is the hard drive.

---

## THE 3-LAYER PROGRESSIVE DISCLOSURE STANDARD

Source: Anthropic Agent Skills architecture (Oct 2025, production-validated)

Agent files are NOT monolithic system prompts dumped into context.
They are 3-layer structures that load only what's needed — the context window is RAM, so treat it like RAM.

```
LAYER 1 — METADATA (always in context, < 200 tokens)
  name:
  description:
  triggers: [keywords that activate this agent]
  version:
  memory_file: [path to persistent memory]

LAYER 2 — SKILL.md CORE (loaded when agent is triggered, < 2000 tokens)
  Identity and role
  Constraints and guardrails
  Output format
  Scaling rules (effort vs. complexity)
  Memory read/write protocol
  Self-improvement triggers

LAYER 3 — LINKED SUB-FILES (loaded on demand, unlimited depth)
  domain_knowledge.md  — specific facts, frameworks, checklists
  playbooks/           — step-by-step procedures for known scenarios
  memory/              — persistent memory files (read at start, write at end)
  scripts/             — executable code the agent can run
```

**Key insight from Anthropic:** Agents with filesystem access don't need everything in context.
Context bundled into skills is effectively unbounded — load it progressively.
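
As a concrete sketch of progressive disclosure, the loader below keeps only Layer 1 metadata resident and reads SKILL.md from disk only when a trigger keyword matches. This is illustrative Python, not part of any shipped harness; `AgentMetadata`, `matches`, and `load_layer2` are hypothetical names.

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class AgentMetadata:
    """Layer 1 — always in context, kept under ~200 tokens."""
    name: str
    description: str
    triggers: list[str]
    version: str
    memory_file: str

def matches(meta: AgentMetadata, task: str) -> bool:
    """Cheap Layer 1 trigger check: no file I/O, no Layer 2 load."""
    task_lower = task.lower()
    return any(keyword.lower() in task_lower for keyword in meta.triggers)

def load_layer2(meta: AgentMetadata, agents_dir: Path) -> str:
    """Layer 2: pull SKILL.md into context only after a trigger fires."""
    return (agents_dir / meta.name / "SKILL.md").read_text()
```

Layer 3 sub-files would be loaded the same way, one path at a time, as the task demands them.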

---

## THE HENRY HARNESS PROTOCOL

Every agent in the HENRY system runs this protocol on every invocation:

```
BOOT SEQUENCE (runs automatically at agent start):
1. READ memory file → load what I know about current state
2. READ task brief → understand what's being asked
3. CLASSIFY complexity → select effort level (see scaling rules below)
4. PLAN → multi-path scoring before any action
5. EXECUTE → run the plan
6. SELF-EVALUATE → score output before returning
7. WRITE memory → log what was done, what was learned, next recommended action

SHUTDOWN SEQUENCE (runs automatically at agent end):
1. Write EXECUTION_LOG entry (timestamp, task, outcome, confidence, lessons)
2. Update agent's own AGENT_MEMORY.md with lessons learned
3. Flag any tool or process improvements discovered
4. Return output to ORCHESTRATOR with confidence score
```
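
The sequence above can be sketched as one driver function. The callables are stand-ins for real implementations, not a fixed API; the point is the ordering, with the memory write guaranteed last.

```python
from typing import Callable

def run_agent(
    read_memory: Callable[[], str],
    classify: Callable[[str], int],
    plan: Callable[[str, str, int], list[str]],
    execute: Callable[[list[str]], str],
    evaluate: Callable[[str], int],
    write_memory: Callable[[str, int], None],
    task_brief: str,
) -> tuple[str, int]:
    """Boot sequence in order; the memory write doubles as the shutdown hook."""
    state = read_memory()                  # 1. READ memory file
    tier = classify(task_brief)            # 2-3. READ brief, CLASSIFY complexity
    steps = plan(state, task_brief, tier)  # 4. PLAN before any action
    output = execute(steps)                # 5. EXECUTE the plan
    confidence = evaluate(output)          # 6. SELF-EVALUATE before returning
    write_memory(output, confidence)       # 7. WRITE memory, then return
    return output, confidence
```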

---

## SCALING RULES — EMBED IN EVERY AGENT

Source: Anthropic Research System engineering post (Jun 2025)

Without embedded scaling rules, agents over-invest in simple tasks. This is the most common failure mode.

101  ```
102  COMPLEXITY TIER 1 — Simple Lookup
103    When: Single fact, single decision, clear answer exists
104    Resources: 1 agent, 3-10 tool calls
105    Token budget: LOW
106    Example: "What is TXS5513's revenue?"
107  
108  COMPLEXITY TIER 2 — Analysis / Comparison
109    When: Multiple options, scoring required, recommendation needed
110    Resources: 1 agent + 1-2 sub-agents if parallel paths exist
111    Token budget: MEDIUM
112    Example: "Compare TXS5513 vs TXS5450 for acquisition priority"
113  
114  COMPLEXITY TIER 3 — Deep Research / Strategy
115    When: Open-ended, path-dependent, multi-domain
116    Resources: Lead agent + 3-10 specialized sub-agents in parallel
117    Token budget: HIGH — justified by task value
118    Example: "Build complete due diligence on TXS5513 including RIA exposure"
119  
120  COMPLEXITY TIER 4 — System Build / Full Execution
121    When: Multi-day workstream, many dependencies, external actions
122    Resources: Full HENRY team, sub-agents, checkpointing
123    Token budget: MAXIMUM — must justify against business value
124    Example: "Execute Dark Factory Week 1 — all tracks"
125  ```
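
One way to make the tiers machine-checkable is a budget table the harness consults before dispatch. A minimal sketch: the token numbers mirror the system-wide targets later in this document, and the agent counts follow the tier definitions above; treat all of them as illustrative defaults, not tuned values.

```python
# Tier → resource budget. TIER 2 allows up to 3 agents (1 + 1-2 subs),
# TIER 3 up to 11 (lead + 3-10 subs). TIER 4 budgets are None on purpose:
# they must be set explicitly against business value, never defaulted.
TIER_BUDGETS = {
    1: {"max_agents": 1, "max_tool_calls": 10, "token_budget": 5_000},
    2: {"max_agents": 3, "max_tool_calls": 30, "token_budget": 25_000},
    3: {"max_agents": 11, "max_tool_calls": 100, "token_budget": 100_000},
    4: {"max_agents": None, "max_tool_calls": None, "token_budget": None},
}

def budget_for(tier: int) -> dict:
    """Look up the budget for a complexity tier; reject unknown tiers."""
    if tier not in TIER_BUDGETS:
        raise ValueError(f"unknown complexity tier: {tier}")
    return TIER_BUDGETS[tier]
```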

---

## PERSISTENT MEMORY STANDARD

Source: A-MEM (Feb 2025), Hindsight Memory (Dec 2025), MemoryArena (Feb 2026), Anthropic Research post-mortem

Every agent has its own memory file. Format:

```markdown
# [AGENT NAME] — PERSISTENT MEMORY
Last Updated: [timestamp]
Session Count: [N]

## CURRENT STATE
[What I am working on right now]

## RECENT ACTIONS
[Last 5 things I did, with outcomes]

## LESSONS LEARNED
[What I discovered that changes how I work]

## TOOL IMPROVEMENTS FLAGGED
[Tools that worked poorly — for OPTIMIZER agent to fix]

## NEXT RECOMMENDED ACTION
[What should happen next time I am invoked]

## WHITT PREFERENCES OBSERVED
[How Whitt likes to work — learned from interaction]
```

Memory file location: `memory/[AGENT_NAME]_MEMORY.md`
Read trigger: First action of every session, before anything else.
Write trigger: End of every session, even short ones.
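
A write-trigger sketch, assuming a full rewrite of the memory file each session (the simplest strategy; an append-based log would also satisfy the standard). Section names follow the template above; `write_memory` and `TEMPLATE` are hypothetical names, and the template is abbreviated.

```python
from datetime import datetime, timezone
from pathlib import Path

# Abbreviated template — real files carry every section from the standard.
TEMPLATE = """# {name} — PERSISTENT MEMORY
Last Updated: {timestamp}
Session Count: {session_count}

## CURRENT STATE
{current_state}

## NEXT RECOMMENDED ACTION
{next_action}
"""

def write_memory(path: Path, name: str, session_count: int,
                 current_state: str, next_action: str) -> None:
    """END_OF_SESSION write trigger: rewrite memory/[AGENT_NAME]_MEMORY.md."""
    path.write_text(TEMPLATE.format(
        name=name,
        timestamp=datetime.now(timezone.utc).isoformat(timespec="seconds"),
        session_count=session_count,
        current_state=current_state,
        next_action=next_action,
    ))
```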

**WHY THIS MATTERS:** Anthropic's own research agent saves its plan to memory before executing because context windows get truncated at 200k tokens. Memory files are the solution. Every HENRY agent writes to memory or it doesn't run.

---

## SELF-IMPROVEMENT TRIGGERS — LIVING FILES

Source: Anthropic Engineering (tool-testing agent, 40% improvement from self-description rewrites)
Source: MARS framework — "Meta-cognitive Reflection for Efficient Self-Improvement" (Jan 2026)
Source: Agent-R — "Iterative Self-Training via Monte Carlo Tree Search" (Jan 2025)

Agent files are NOT static. They evolve.

**Built-in self-improvement triggers in every agent:**

```
TRIGGER: TOOL_FAILURE
  When: A tool call fails or returns poor results
  Action: Note the failure. Log to memory. Recommend description improvement.
  Format: "TOOL_IMPROVEMENT: [tool name] — [what happened] — [suggested fix]"

TRIGGER: LOW_CONFIDENCE_OUTPUT
  When: Confidence score < 14/20
  Action: Self-reflect before returning. "Why did I score low?"
  Log the root cause. Adjust approach for next invocation.

TRIGGER: TASK_FASTER_PATH_DISCOVERED
  When: Agent finds a more efficient route mid-execution
  Action: Complete current task. Then write the shortcut to memory.
  Format: "SHORTCUT: [task type] → [faster approach] (saved ~X tokens)"

TRIGGER: INSTRUCTION_DRIFT
  When: Agent detects it has drifted from original task goal
  Action: STOP. Re-read original objective. Re-anchor. Log drift cause.
  This is the agent harness equivalent of a watchdog timer.

TRIGGER: END_OF_SESSION
  When: Every session ends
  Action: Write full memory update. Score the session. Recommend next action.
  Non-negotiable. Every session. Even 5-minute ones.
```
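
The flag formats quoted in the triggers above are exact strings, which makes them trivial to emit and later grep. A sketch, with helper names that are ours, not the standard's:

```python
def tool_improvement_flag(tool: str, what_happened: str, suggested_fix: str) -> str:
    """TOOL_FAILURE trigger output, in the exact logged format."""
    return f"TOOL_IMPROVEMENT: {tool} — {what_happened} — {suggested_fix}"

def shortcut_flag(task_type: str, faster_approach: str, tokens_saved: int) -> str:
    """TASK_FASTER_PATH_DISCOVERED trigger output."""
    return f"SHORTCUT: {task_type} → {faster_approach} (saved ~{tokens_saved} tokens)"

def needs_reflection(confidence: int) -> bool:
    """LOW_CONFIDENCE_OUTPUT fires when the self-score drops below 14/20."""
    return confidence < 14
```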

---

## THE OPTIMIZER AGENT — SYSTEM SELF-IMPROVEMENT

Source: Anthropic (tool-testing agent), Agent-R (iterative self-training), MARS (meta-cognitive reflection)

The HENRY system includes a dedicated OPTIMIZER agent that runs periodically.
This is the agent that makes all other agents smarter over time.

**OPTIMIZER reads:**
- All agent memory files
- All TOOL_IMPROVEMENT flags
- All SHORTCUT discoveries
- Execution logs for failure patterns

**OPTIMIZER does:**
- Rewrites tool descriptions where failures were logged
- Updates agent SKILL.md files with discovered shortcuts
- Identifies agents that are duplicating work (task overlap patterns)
- Surfaces patterns to Whitt: "These 3 agents are inefficient — here's why"
- Proposes new sub-agent configurations for common task types

**OPTIMIZER runs:**
- Automatically: After 10 sessions logged, or weekly
- Manually: Whitt says `OPTIMIZER: run` or `OPTIMIZE: [specific agent]`

**OPTIMIZER does NOT:**
- Push changes without showing Whitt first
- Change agent identity, values, or business context
- Remove guardrails or constraints
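
The OPTIMIZER's read pass reduces to scanning memory files for the flag formats defined in the trigger standard. A hypothetical sketch; the `*_MEMORY.md` glob follows the file-location convention in this document.

```python
from pathlib import Path

# Exact prefixes from the self-improvement trigger formats.
FLAG_PREFIXES = ("TOOL_IMPROVEMENT:", "SHORTCUT:")

def collect_flags(memory_dir: Path) -> dict[str, list[str]]:
    """Map each agent memory file to the flag lines the OPTIMIZER should triage."""
    flags: dict[str, list[str]] = {}
    for path in sorted(memory_dir.glob("*_MEMORY.md")):
        hits = [line.strip() for line in path.read_text().splitlines()
                if line.strip().startswith(FLAG_PREFIXES)]
        if hits:
            flags[path.name] = hits
    return flags
```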

---

## CONTEXT ENGINEERING — HOW HENRY STAYS SHARP

Source: Manus Context Engineering post, Anthropic harness guide, Lost in the Middle research (Liu et al.)

**The U-shaped attention problem:** LLMs attend strongly to the beginning and end of context. Middle content gets lost. This is why long agent runs degrade — instructions that end up mid-context get buried.

**HENRY's countermeasures:**

```
1. FRONT-LOAD CRITICAL INSTRUCTIONS
   First 200 tokens of every agent prompt = mission-critical info
   Never bury constraints in the middle

2. APPEND-ONLY CONTEXT (Manus pattern)
   Don't modify earlier context. Append corrections at the end.
   The model attends to the end. Use it.

3. SUBAGENT FILESYSTEM OUTPUT (Anthropic pattern)
   Subagents write results to files. Pass file references to lead agent.
   Not: subagent → memory → lead agent (telephone game degrades quality)
   Yes: subagent → filesystem → lead agent reads file (full fidelity)

4. CONTEXT COMPACTION AT THRESHOLD
   When approaching 150k tokens: summarize completed work, archive it
   Start fresh sub-task with clean context + summary reference
   Anthropic: agents spawn fresh subagents with clean contexts + handoff notes

5. STABLE PREFIXES FOR CACHE EFFICIENCY
   System prompt = never changes between runs (cache hit = 10x cheaper)
   Dynamic content = always appended at end, never inserted in middle
   Manus achieved 10x cost reduction purely from cache optimization
```
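
Countermeasures 2, 4, and 5 can be sketched together: a stable prefix, append-only events, and a compaction check. The whitespace token estimate and the threshold constant are stand-ins for a real tokenizer and a tuned limit.

```python
COMPACTION_THRESHOLD = 150_000  # mirrors the ~150k figure above; illustrative

def build_prompt(stable_system_prompt: str, events: list[str]) -> str:
    """Stable prefix first (cache-friendly); dynamic content only appended."""
    return stable_system_prompt + "\n" + "\n".join(events)

def append_correction(events: list[str], correction: str) -> list[str]:
    """Append-only: never edit earlier context, add corrections at the end."""
    return events + [f"CORRECTION: {correction}"]

def should_compact(context: str) -> bool:
    """Crude whitespace token estimate — swap in a real tokenizer in practice."""
    return len(context.split()) >= COMPACTION_THRESHOLD
```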

---

## TOKEN EFFICIENCY STANDARDS

Source: Manus (KV-cache optimization), Vercel (tool reduction), Anthropic (scaling rules)

Every agent must:
- Report token tier used (LOW / MEDIUM / HIGH / MAXIMUM) with every output
- Never use TIER 3 for a TIER 1 task
- Flag when a task takes more tokens than expected (potential optimization opportunity)

System-wide targets:
- Simple queries: < 5,000 tokens
- Analysis tasks: < 25,000 tokens
- Deep research: < 100,000 tokens
- Full strategy sessions: budget explicitly before starting

---

## SUB-AGENT PROTOCOL

### Naming
```
FORMAT:  SUB-[PARENT]-[NN]
Example: SUB-CFO-01, SUB-RESEARCH-02, SUB-CTO-01
```

### Briefing Format (what parent sends to sub-agent)
Source: Anthropic Research post — vague sub-agent briefs cause duplicate work and coverage gaps

```
SUB-AGENT BRIEFING:
  Agent:       SUB-[NAME]-[NN]
  Parent:      [PARENT AGENT]
  Objective:   [ONE sentence — exactly what to find/do]
  Scope:       [What to cover — and what NOT to cover (prevents overlap)]
  Tools:       [Which specific tools to use]
  Output:      [Exact format of what to return]
  File output: [Path to write results — use filesystem, not context]
  Token limit: [TIER 1/2/3]
  Done when:   [Definition of done]
```
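
The briefing can be carried as a typed structure so a malformed brief fails before dispatch. Field names mirror the template above; the dataclass and the name regex are illustrative, not a fixed API.

```python
import re
from dataclasses import dataclass

# SUB-[PARENT]-[NN] naming convention from the protocol above.
SUB_AGENT_NAME = re.compile(r"^SUB-[A-Z]+-\d{2}$")

@dataclass
class SubAgentBriefing:
    agent: str          # SUB-[NAME]-[NN]
    parent: str
    objective: str      # ONE sentence
    scope: str          # includes what NOT to cover
    tools: list[str]
    output_format: str
    file_output: str    # filesystem path, not context
    token_tier: int     # 1, 2, or 3
    done_when: str

    def __post_init__(self) -> None:
        if not SUB_AGENT_NAME.match(self.agent):
            raise ValueError(f"bad sub-agent name: {self.agent}")
        if self.token_tier not in (1, 2, 3):
            raise ValueError(f"bad token tier: {self.token_tier}")
```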

### Return Format (what sub-agent sends back)
```
SUB-AGENT RETURN:
  Agent:      SUB-[NAME]-[NN]
  Task done:  [what was completed]
  Results:    [findings — facts, data, output]
  File saved: [path where full output was written]
  Confidence: [X/20]
  Gaps:       [what couldn't be determined]
  Next:       [recommended follow-up if needed]
```
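
A matching sketch for the return payload, again with illustrative names. `low_confidence` reuses the < 14/20 threshold from the LOW_CONFIDENCE_OUTPUT trigger, so the parent can decide whether to accept the result or re-dispatch.

```python
from dataclasses import dataclass, field

@dataclass
class SubAgentReturn:
    agent: str
    task_done: str
    results: str
    file_saved: str       # path where full output was written
    confidence: int       # X out of 20
    gaps: list[str] = field(default_factory=list)
    next_step: str = ""

    def low_confidence(self) -> bool:
        """Same threshold as the LOW_CONFIDENCE_OUTPUT trigger (< 14/20)."""
        return self.confidence < 14
```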

### Reporting Chain
```
SUB-AGENT → PARENT AGENT → ORCHESTRATOR → WHITT
```
Sub-agents never surface directly to Whitt without passing through their parent.

---

## AGENT FILE TEMPLATE (use this for all new agents)

See: `docs/agent-definitions/AGENT_TEMPLATE.md`

---

## MIGRATION STATUS — EXISTING AGENTS

All 9 existing agent files were built on v1 architecture (monolithic prompts).
They are functional. They should be migrated to v2 progressively.

Priority order for migration:
1. ORCHESTRATOR (it routes everything — most leverage)
2. RESEARCH (heaviest token user — most savings from scaling rules)
3. CFO (financial models — most value from persistent memory)
4. All others as time permits

Migration does NOT require rewriting the business logic.
It requires adding: memory protocol, scaling rules, self-improvement triggers, and progressive disclosure structure.