# scripts/

**Context:** Utility Scripts & Development Tools

This directory contains helper scripts for setup, testing, deployment, and maintenance of the ECHO system.

## Purpose

Scripts provide:
- **Setup & Installation** - Initialize system and dependencies
- **Agent Management** - Start, stop, monitor agents
- **Testing Utilities** - Test individual components
- **Database Operations** - Migrations and maintenance
- **Deployment Helpers** - Build and deploy scripts

## Directory Structure

```
scripts/
├── claude.md                       # This file
├── agents/                         # Agent-specific scripts
│   ├── test_agent_llm.sh           # Test single agent LLM
│   ├── test_all_agents_llm.sh      # Test all agents LLM
│   └── [other agent scripts]
├── ollama/                         # Ollama/LLM setup scripts
│   ├── setup_models.sh             # Download all required models
│   └── verify_models.sh            # Verify models are available
├── llm/                            # LocalCode - local LLM assistant
│   ├── localcode_session.sh        # Session management
│   ├── localcode_query.sh          # Query interface with tools
│   ├── localcode_quick.sh          # Helper functions (lc_*)
│   ├── context_builder.sh          # Tiered context injection
│   ├── query_helpers.sh            # Common query patterns
│   ├── should_query.sh             # Query decision logic
│   ├── LOCALCODE_GUIDE.md          # Complete user guide
│   ├── QUICK_START.md              # Quick start tutorial
│   └── EFFICIENCY_TEST_RESULTS.md  # Performance analysis
└── [root-level utility scripts]
```

## Key Scripts

### setup.sh

**Purpose:** Complete system setup - builds all agents

**Usage:**
```bash
./setup.sh

# What it does:
# 1. Compiles shared library
# 2. Builds all 9 agent executables
# 3. Verifies compilation success
# 4. Displays build summary
```

**Requirements:**
- Elixir 1.18+ installed
- PostgreSQL running
- Redis running
- Dependencies installed (mix deps.get)

### setup_llms.sh

**Purpose:** Install Ollama and download all required AI models

**Usage:**
```bash
./setup_llms.sh

# Downloads (~48GB total):
# - qwen2.5:14b          (CEO)
# - deepseek-coder:33b   (CTO, Architect)
# - llama3.1:8b          (CHRO, PM)
# - mistral:7b           (Operations)
# - llama3.2-vision:11b  (UI/UX)
# - deepseek-coder:6.7b  (Developer)
# - codellama:13b        (Test Lead)
```

**Time:** 30-60 minutes depending on internet speed

### echo.sh

**Purpose:** System health monitoring and status

**Usage:**
```bash
./echo.sh              # Full system status
./echo.sh summary      # Brief summary
./echo.sh agents       # Agent health only
./echo.sh workflows    # Running workflows
./echo.sh messages     # Message queue status
./echo.sh decisions    # Pending decisions
```

**Sample Output:**
```
========================================
 ECHO System Status
========================================

Infrastructure:
  ✓ PostgreSQL (echo_org) - Connected
  ✓ Redis (localhost:6379) - Connected
  ✓ Ollama (localhost:11434) - 9 models loaded

Agents (9 total):
  ✓ CEO              - Running (v1.0.0)
  ✓ CTO              - Running (v1.0.0)
  ✓ CHRO             - Running (v1.0.0)
  ✓ Operations Head  - Running (v1.0.0)
  ✓ Product Manager  - Running (v1.0.0)
  ✓ Senior Architect - Running (v1.0.0)
  ✓ UI/UX Engineer   - Running (v1.0.0)
  ✓ Senior Developer - Running (v1.0.0)
  ✓ Test Lead        - Running (v1.0.0)

Activity (last 24h):
  Decisions: 42 (37 approved, 3 pending, 2 rejected)
  Messages:  187 sent, 184 read (98% read rate)
  Workflows: 5 running, 12 completed

System Health: ✓ OPERATIONAL
```

### test_agents.sh

**Purpose:** Run all agent tests

**Usage:**
```bash
./test_agents.sh

# Options:
# --agent ceo           # Test specific agent
# --verbose             # Detailed output
# --coverage            # Generate coverage report
```

### start_ceo_cto.sh / stop_ceo_cto.sh

**Purpose:** Start/stop specific agents for development

**Usage:**
```bash
# Start agents in autonomous mode
./start_ceo_cto.sh

# Stop agents
./stop_ceo_cto.sh
```

### rebuild_all.sh

**Purpose:** Clean rebuild of entire system

**Usage:**
```bash
./rebuild_all.sh

# What it does:
# 1. Cleans all build artifacts
# 2. Recompiles shared library
# 3. Rebuilds all agents
# 4. Runs tests
# 5. Verifies system health
```

### verify_all_agents.sh

**Purpose:** Comprehensive agent verification

**Usage:**
```bash
./verify_all_agents.sh

# Checks:
# - Agents compile successfully
# - Escript executables exist
# - MCP protocol compliance
# - Tool definitions valid
# - LLM models available
# - Database connectivity
# - Redis connectivity
```

### check_system_status.sh

**Purpose:** Quick system health check

**Usage:**
```bash
./check_system_status.sh

# Returns exit code:
# 0 - All systems operational
# 1 - Infrastructure issues (DB/Redis/Ollama)
# 2 - Agent issues
# 3 - Multiple issues
```
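
These exit codes make the script easy to consume from cron jobs or CI. A minimal wrapper sketch, assuming only the exit-code contract above (the message strings are illustrative, not the script's own output):

```shell
#!/bin/bash
# Translate check_system_status.sh exit codes into log messages.
# Sketch only: the wording of each message is illustrative.

status_message() {
  case "$1" in
    0) echo "All systems operational" ;;
    1) echo "Infrastructure issues (DB/Redis/Ollama)" ;;
    2) echo "Agent issues" ;;
    3) echo "Multiple issues" ;;
    *) echo "Unknown exit code: $1" ;;
  esac
}

# Run the check if it is present, then report its result.
if [ -x ./check_system_status.sh ]; then
  ./check_system_status.sh
  status_message "$?"
fi
```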

### send_message.sh

**Purpose:** Send test message between agents

**Usage:**
```bash
./send_message.sh <from_role> <to_role> <type> <subject> [content]

# Example:
./send_message.sh ceo cto request "Architecture review needed" '{"design_doc":"url"}'
```

### fix_postgres.sh

**Purpose:** Fix common PostgreSQL issues

**Usage:**
```bash
./fix_postgres.sh

# Fixes:
# - Stale connections
# - Migration conflicts
# - Permission issues
# - Database not found errors
```

### docker-setup.sh

**Purpose:** Set up Docker environment for ECHO

**Usage:**
```bash
./docker-setup.sh

# What it does:
# 1. Builds Docker images for all agents
# 2. Creates docker-compose configuration
# 3. Sets up networking
# 4. Initializes volumes
```

## Agent-Specific Scripts

### scripts/agents/test_agent_llm.sh

**Purpose:** Test LLM integration for a specific agent

**Usage:**
```bash
./scripts/agents/test_agent_llm.sh ceo

# Tests:
# - Model availability
# - Connection to Ollama
# - Prompt formatting
# - Response parsing
# - Error handling
```

**Output:**
```
Testing CEO Agent LLM Integration
=================================
Model: qwen2.5:14b
Ollama Endpoint: http://localhost:11434

[1/5] Checking model availability... ✓
[2/5] Testing basic query...         ✓ (2.1s)
[3/5] Testing with context...        ✓ (3.4s)
[4/5] Testing error handling...      ✓
[5/5] Measuring response time...     ✓ (avg: 2.7s)

Result: ALL TESTS PASSED
```

### scripts/agents/test_all_agents_llm.sh

**Purpose:** Test LLM integration for all agents

**Usage:**
```bash
./scripts/agents/test_all_agents_llm.sh

# Generates report:
# - Per-agent LLM status
# - Response time comparison
# - Model availability
# - Error rates
```

## LocalCode - Local LLM Assistant

**Location:** `scripts/llm/`

LocalCode replicates Claude Code's workflow using local LLMs (deepseek-coder:6.7b). It provides project-aware AI assistance at zero API cost, with all data staying on your machine.

### Quick Start

```bash
# Load helper functions (once per terminal session)
source ./scripts/llm/localcode_quick.sh

# Start session - auto-loads CLAUDE.md, git context, system status
lc_start

# Query local LLM
lc_query "What is ECHO?"
lc_query "How do agents communicate?"

# Interactive mode
lc_interactive

# End session (archives conversation)
lc_end
```

### Core Scripts

#### scripts/llm/localcode_session.sh

**Purpose:** Session manager - creates sessions with project context

**Usage:**
```bash
# Start new session
./scripts/llm/localcode_session.sh start [path]

# List all sessions
./scripts/llm/localcode_session.sh list

# Show session details
./scripts/llm/localcode_session.sh show <session_id>

# End session and archive
./scripts/llm/localcode_session.sh end <session_id>
```

**What it does:**
1. Creates session directory: `~/.localcode/sessions/session_ID/`
2. Loads startup context (~1,900 tokens):
   - CLAUDE.md (first 200 lines)
   - Git context (branch, commits, changed files)
   - System status (from `.claude/hooks/session-start.sh`)
   - Directory structure
3. Initializes conversation storage:
   - `session.json` - metadata
   - `startup_context.txt` - project context
   - `conversation.json` - chat history
   - `tool_results.json` - tool execution log

**Session Storage:**
```
~/.localcode/sessions/
└── session_20251111_012114_83759/
    ├── session.json           # Metadata (model, project path, turn count)
    ├── startup_context.txt    # Static project context (~1,900 tokens)
    ├── conversation.json      # Chat history (last 5 turns kept)
    └── tool_results.json      # Tool execution results (last 3 kept)
```

#### scripts/llm/localcode_query.sh

**Purpose:** Query interface with tool simulation and context management

**Usage:**
```bash
# Single query
./scripts/llm/localcode_query.sh <session_id> "your question"

# Interactive mode
./scripts/llm/localcode_query.sh <session_id> --interactive
```

**Features:**
1. **Context Assembly:**
   - Tier 1: Startup context (~1,900 tokens)
   - Tier 2: Conversation history (last 5 turns)
   - Tier 3: Tool results (last 3 executions)
   - Tier 4: Current question

2. **Context Size Warnings:**
   - >3,000 tokens: Moderate warning
   - >4,000 tokens: High warning (approaching 8K limit)
   - >6,000 tokens: Critical (blocks query)

3. **Tool Simulation:**
   - Detects `TOOL_REQUEST: function(args)` patterns
   - Executes tools automatically:
     - `read_file(path)` - Read file contents
     - `grep_code(pattern)` - Search codebase
     - `glob_files(pattern)` - Find files by pattern
     - `run_bash(command)` - Execute bash command
   - Re-queries LLM with tool results

4. **Timeout:** 180 seconds (3 minutes) for local inference
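
The tool-simulation loop above can be sketched in a few lines of bash: scan each response line for the `TOOL_REQUEST: function(args)` shape and dispatch it. The function names follow the list above, but the regex and dispatcher here are illustrative, not the actual implementation in localcode_query.sh:

```shell
#!/bin/bash
# Detect TOOL_REQUEST lines in an LLM response (read from stdin) and
# execute the named tool. Sketch only; argument quoting and error
# handling in the real script may differ.

dispatch_tool() {
  local fn=$1 arg=$2
  case "$fn" in
    read_file)  cat -- "$arg" ;;
    grep_code)  grep -rn -- "$arg" . ;;
    glob_files) find . -name "$arg" ;;
    run_bash)   bash -c "$arg" ;;
    *)          echo "unknown tool: $fn" >&2; return 1 ;;
  esac
}

handle_response() {
  local line
  while IFS= read -r line; do
    if [[ $line =~ ^TOOL_REQUEST:\ ([a-z_]+)\((.*)\)$ ]]; then
      dispatch_tool "${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}"
    fi
  done
}
```

In the real flow, the captured tool output is appended to `tool_results.json` and the LLM is queried again with those results in context.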

**Sample Output:**
```
═══════════════════════════════════════════════════════════
LocalCode Query (Session: session_20251111_012114_83759)
═══════════════════════════════════════════════════════════

ℹ Building query context...
ℹ Context size: 8245 bytes (~2061 tokens)
ℹ Querying deepseek-coder:6.7b (timeout: 180s)...

═══════════════════════════════════════════════════════════
Response from deepseek-coder:6.7b:
═══════════════════════════════════════════════════════════

[LLM response here]
```

#### scripts/llm/localcode_quick.sh

**Purpose:** Simplified wrapper providing helper functions

**Functions:**
- `lc_start [path]` - Start new session
- `lc_query "question"` - Query current session
- `lc_interactive` - Interactive mode
- `lc_end` - End current session
- `lc_list` - List all sessions
- `lc_show` - Show current session details

**Environment Variables:**
- `LLM_TIMEOUT=180` - Query timeout (default: 3 minutes)
- `LLM_MODEL=deepseek-coder:6.7b` - Model to use
- `LOCALCODE_SESSION` - Current session ID (auto-managed)

#### scripts/llm/context_builder.sh

**Purpose:** Tiered context injection engine for specialized queries

**Templates:**
- `code_review` - Code quality analysis
- `feature_design` - Feature planning
- `debugging` - Problem diagnosis
- `architecture` - System design
- `general` - General queries (default)

**Usage:**
```bash
./scripts/llm/context_builder.sh \
  --template code_review \
  --files "apps/ceo/lib/ceo.ex" \
  --goal "Review for security issues"
```

**Architecture:**
```
Tier 1: Static Core (~600 tokens)
  - ECHO project info
  - Tech stack (Elixir, PostgreSQL, Redis, MCP)
  - 9 agent roles
  - Critical rules

Tier 2: Dynamic Context (~1000-2000 tokens)
  - Template-specific (code files, git diff, logs, etc.)
  - Relevant documentation
  - Recent changes

Tier 3: Conversation Context (~500 tokens)
  - User goal
  - Recent decisions
  - Active workflows

Tier 4: Question (~200 tokens)
  - Specific question with instructions
  - Expected output format
```
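
At its core, the assembly step is just ordered concatenation of the four tiers plus a byte-based token estimate (~4 bytes per token). A minimal sketch; the tier strings below are placeholders, not what context_builder.sh actually loads:

```shell
#!/bin/bash
# Concatenate the four tiers in order and report an approximate token
# count. Illustrative placeholder content only.

build_context() {
  local static=$1 dynamic=$2 conversation=$3 question=$4
  printf '%s\n\n%s\n\n%s\n\nQuestion: %s\n' \
    "$static" "$dynamic" "$conversation" "$question"
}

ctx=$(build_context \
  "ECHO: 9-agent Elixir system (PostgreSQL, Redis, MCP)" \
  "File under review: apps/ceo/lib/ceo.ex ..." \
  "Goal: review for security issues" \
  "Any injection risks?")

echo "Context: ${#ctx} bytes (~$(( ${#ctx} / 4 )) tokens)"
```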

#### scripts/llm/query_helpers.sh

**Purpose:** Helper functions for common query patterns

**Functions:**
```bash
# General query (uses context_builder.sh)
llm_query "your question"

# Code review
llm_code_review apps/ceo/lib/ceo.ex

# Feature design
llm_feature_design "Add user authentication"

# Debug help
llm_debug "Redis connection refused"

# Architecture analysis
llm_architecture

# Quick query (bypass context builder)
llm_quick "What's the current git branch?"
```

#### scripts/llm/should_query.sh

**Purpose:** Decision logic for when to query the LLM vs use tools directly

**Usage:**
```bash
if ./scripts/llm/should_query.sh "$user_input"; then
  llm_query "$user_input"
else
  :  # use direct tool execution instead (a branch needs at least one command)
fi
```

**Query Patterns (returns 0 - should query):**
- Architecture questions
- Design decisions
- Code review requests
- Debug help
- Security analysis
- Performance optimization

**Skip Patterns (returns 1 - should skip):**
- File operations (ls, cat, head, tail)
- Git commands (status, log, diff)
- Simple searches (grep, find)
- System status checks
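
The routing can be sketched as a pair of `case` matches: mechanical commands short-circuit to direct execution, reasoning-style questions go to the LLM. The keyword lists below are illustrative; the real script's patterns may differ:

```shell
#!/bin/bash
# Sketch of should_query.sh: return 0 to query the LLM, 1 to run tools
# directly. Keyword patterns are assumptions, not the script's own list.

should_query() {
  local input
  input=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$input" in
    ls*|cat*|head*|tail*|git\ *|grep*|find\ *) return 1 ;;  # direct tools
  esac
  case "$input" in
    *architect*|*design*|*review*|*debug*|*security*|*performance*|*why\ *|*how\ *)
      return 0 ;;
  esac
  return 1  # default: skip the LLM round-trip
}
```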

### Performance & Limitations

**Response Times:**
- Simple queries: 5-10 seconds
- Medium queries: 10-20 seconds
- Complex queries: 20-40 seconds
- Maximum timeout: 180 seconds (3 minutes)

**Context Capacity:**
- Startup context: ~1,900 tokens (fixed)
- Session capacity: 10-12 conversational turns
- Context growth: ~480 tokens/turn average
- Warning thresholds: 3K (moderate), 4K (high), 6K (critical)
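
The accounting behind these numbers is simple: roughly 4 bytes per token, checked against the thresholds above. A sketch (the real scripts may compute this differently):

```shell
#!/bin/bash
# Byte-based token estimate and the warning levels listed above.

estimate_tokens() {  # bytes -> approximate tokens
  echo $(( $1 / 4 ))
}

context_warning() {  # tokens -> warning level
  local tokens=$1
  if   [ "$tokens" -gt 6000 ]; then echo "critical"   # blocks the query
  elif [ "$tokens" -gt 4000 ]; then echo "high"       # approaching the 8K limit
  elif [ "$tokens" -gt 3000 ]; then echo "moderate"
  else echo "ok"
  fi
}

estimate_tokens 8245   # prints 2061, matching the sample output earlier
context_warning 2061   # prints ok
```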

**Limitations:**
- No streaming (waits for full response)
- Context overflow after 10-12 turns (requires session restart)
- Tool results accumulate (mitigated by keeping only the last 3)
- Local inference slower than cloud APIs

**Quality:**
- Overall grade: A- (4.25/5 stars)
- Accurate, project-aware responses
- Minor confusion on complex multi-system topics
- Excellent for quick queries and exploration

### Documentation

- **User Guide:** `scripts/llm/LOCALCODE_GUIDE.md` (406 lines)
- **Quick Start:** `scripts/llm/QUICK_START.md` (272 lines)
- **Performance:** `scripts/llm/EFFICIENCY_TEST_RESULTS.md` (326 lines)
- **Architecture:** See main `CLAUDE.md` Rule 8

### Integration with ECHO Development

**Use LocalCode for:**
1. Quick codebase exploration
2. Understanding agent implementations
3. Debugging hints (not full debugging)
4. Documentation lookup
5. Architecture clarifications

**Use Claude Code for:**
1. Complex refactoring
2. Multi-file code generation
3. Test writing and execution
4. Git operations
5. Long-running tasks

**Use Both (Dual Perspective):**
1. Code reviews (get two opinions)
2. Design decisions (compare approaches)
3. Security audits (thorough analysis)
4. Complex debugging (more insights)

### Configuration

**Environment Variables:**
```bash
# Model selection
export LLM_MODEL="deepseek-coder:6.7b"  # Default
# or
export LLM_MODEL="deepseek-coder:1.3b"  # Faster (5-10s)
export LLM_MODEL="qwen2.5:14b"          # Better quality (10-30s)

# Timeout (for slow queries)
export LLM_TIMEOUT=180   # Default: 3 minutes
export LLM_TIMEOUT=300   # 5 minutes for very complex queries

# Ollama endpoint
export OLLAMA_ENDPOINT="http://localhost:11434"
```

**Session Storage:**
```bash
# Default location
~/.localcode/sessions/

# Change with environment variable
export LOCALCODE_SESSIONS_DIR="/custom/path"
```

### Troubleshooting

**"Failed to get response from Ollama"**
```bash
# Check Ollama is running
curl http://localhost:11434/api/tags

# Increase timeout
export LLM_TIMEOUT=300
lc_query "your question"

# Use smaller/faster model
export LLM_MODEL="deepseek-coder:1.3b"
lc_query "your question"
```

**"Context size: 15000 bytes (~3750 tokens)" warning**
```bash
# Start fresh session
lc_end
lc_start

# Or reduce CLAUDE.md lines loaded
# Edit localcode_session.sh line ~80:
# Change: head -200 CLAUDE.md → head -100 CLAUDE.md
```

**"No active session"**
```bash
# Check current session
echo $LOCALCODE_SESSION

# Start new session
lc_start
```

### Example Workflows

**1. Learning the Codebase**
```bash
lc_start
lc_interactive
> What agents exist in ECHO?
> How does the CEO agent work?
> Explain the decision-making flow
> Show me the MessageBus implementation
> exit
lc_end
```

**2. Code Review**
```bash
lc_start
lc_query "Review apps/echo_shared/lib/echo_shared/message_bus.ex for issues"
# Review local LLM perspective, then ask Claude Code same question
lc_end
```

**3. Debugging**
```bash
lc_start
lc_query "I'm getting 'connection refused' to Redis. What could be wrong?"
# Get debugging hints, then use Claude Code for implementation
lc_end
```

**4. Architecture Analysis**
```bash
lc_start
lc_query "What are the pros and cons of the dual-write pattern in MessageBus?"
# Get local LLM analysis, compare with Claude Code's perspective
lc_end
```
## Common Script Patterns

### Script Template

```bash
#!/bin/bash
set -euo pipefail  # Exit on error, undefined var, pipe failure

# Script metadata
readonly SCRIPT_NAME=$(basename "$0")
readonly SCRIPT_DIR=$(cd "$(dirname "$0")" && pwd)
readonly PROJECT_ROOT=$(cd "$SCRIPT_DIR/.." && pwd)

# Colors for output
readonly RED='\033[0;31m'
readonly GREEN='\033[0;32m'
readonly YELLOW='\033[1;33m'
readonly NC='\033[0m'  # No Color

# Defaults (so set -u is safe if a flag is never passed)
VERBOSE=false

# Logging functions
log_info() {
  echo -e "${GREEN}[INFO]${NC} $1"
}

log_warn() {
  echo -e "${YELLOW}[WARN]${NC} $1"
}

log_error() {
  echo -e "${RED}[ERROR]${NC} $1"
}

# Usage function
usage() {
  cat <<EOF
Usage: $SCRIPT_NAME [OPTIONS]

Description of what this script does

Options:
  -h, --help     Show this help message
  -v, --verbose  Enable verbose output

Example:
  $SCRIPT_NAME --verbose
EOF
}

# Main function
main() {
  # Parse arguments
  while [[ $# -gt 0 ]]; do
    case $1 in
      -h|--help)
        usage
        exit 0
        ;;
      -v|--verbose)
        VERBOSE=true
        shift
        ;;
      *)
        log_error "Unknown option: $1"
        usage
        exit 1
        ;;
    esac
  done

  # Main script logic here
  log_info "Starting $SCRIPT_NAME..."

  # Do work

  log_info "Completed successfully"
}

# Run main function
main "$@"
```

### Error Handling Pattern

```bash
# Function with error handling
function_with_error_handling() {
  local result

  if ! result=$(risky_command 2>&1); then
    log_error "Command failed: $result"
    return 1
  fi

  echo "$result"
  return 0
}

# Cleanup on exit
cleanup() {
  log_info "Cleaning up..."
  # Cleanup logic here
}
trap cleanup EXIT
```
### Parallel Execution Pattern

```bash
# Run tasks in parallel
run_parallel() {
  local pids=()

  for task in "${TASKS[@]}"; do
    run_task "$task" &
    pids+=($!)
  done

  # Wait for all tasks
  local failed=0
  for pid in "${pids[@]}"; do
    if ! wait "$pid"; then
      # Avoid ((failed++)) here: it returns status 1 when failed is 0,
      # which aborts the script under set -e.
      failed=$((failed + 1))
    fi
  done

  return $failed
}
```
## Environment Variables

Scripts respect these environment variables:

```bash
# Paths
export ECHO_ROOT="/path/to/echo"
export SHARED_DIR="$ECHO_ROOT/shared"
export AGENTS_DIR="$ECHO_ROOT/agents"

# Database
export DB_HOST="localhost"
export DB_PORT="5432"
export DB_NAME="echo_org"

# Redis
export REDIS_HOST="localhost"
export REDIS_PORT="6379"

# Ollama
export OLLAMA_ENDPOINT="http://localhost:11434"

# Script behavior
export VERBOSE=false          # Enable verbose output
export DRY_RUN=false          # Show what would be done
export PARALLEL=true          # Run tasks in parallel
export LOG_LEVEL="info"       # debug|info|warn|error
```
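
A script can consume these variables with safe fallbacks: the `${VAR:-default}` expansion keeps `set -u` from aborting when a variable is unset. A minimal sketch using the defaults listed above:

```shell
#!/bin/bash
# Apply fallback defaults for the environment variables above so the
# script works unconfigured but can be overridden per-environment.
set -euo pipefail

DB_HOST="${DB_HOST:-localhost}"
DB_PORT="${DB_PORT:-5432}"
DB_NAME="${DB_NAME:-echo_org}"
VERBOSE="${VERBOSE:-false}"

if [ "$VERBOSE" = "true" ]; then
  echo "Connecting to $DB_HOST:$DB_PORT/$DB_NAME"
fi
```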

## Creating New Scripts

### Guidelines

1. **Use the template** - Start with the script template above
2. **Add help text** - Always provide --help option
3. **Handle errors** - Use `set -euo pipefail` and trap errors
4. **Log clearly** - Use log_info, log_warn, log_error consistently
5. **Make it idempotent** - Safe to run multiple times
6. **Test thoroughly** - Test with various inputs and edge cases
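
Guideline 5 in practice: check state before acting, so rerunning is harmless. A small sketch (the path is illustrative):

```shell
#!/bin/bash
# Idempotent setup step: creating something that may already exist
# reports its state instead of failing on the second run.

ensure_dir() {
  local dir=$1
  if [ -d "$dir" ]; then
    echo "exists: $dir"
  else
    mkdir -p "$dir"
    echo "created: $dir"
  fi
}

ensure_dir /tmp/echo_demo   # "created: ..." the first time
ensure_dir /tmp/echo_demo   # "exists: ..." on every rerun
```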

### Checklist

- [ ] Shebang line: `#!/bin/bash`
- [ ] Set options: `set -euo pipefail`
- [ ] Usage function with examples
- [ ] Argument parsing
- [ ] Error handling
- [ ] Cleanup trap
- [ ] Logging
- [ ] Exit codes (0=success, non-zero=failure)
- [ ] Executable: `chmod +x script.sh`

## Debugging Scripts

### Enable Debug Mode

```bash
# Run with bash debug output
bash -x ./script.sh

# Or set in script
set -x  # Enable debug mode
```

### Check Variables

```bash
# Print all variables
set | grep ECHO_

# Verify paths
echo "Project root: $PROJECT_ROOT"
echo "Script dir: $SCRIPT_DIR"
```

### Test Without Execution

```bash
# Dry run mode
./script.sh --dry-run

# Or in script:
if [[ "$DRY_RUN" == "true" ]]; then
  echo "Would execute: $command"
else
  $command
fi
```

## Related Documentation

- **Parent:** [../CLAUDE.md](../CLAUDE.md) - Project overview
- **Setup:** Main project setup using these scripts
- **Testing:** [../training/claude.md](../training/claude.md) - Test scripts details
- **Deployment:** [../docker/claude.md](../docker/claude.md) - Deployment scripts

---

**Remember:** Scripts should be simple, well-documented, and safe to run repeatedly. When in doubt, add a --dry-run mode.