/ commands / init-verifiers.ts
init-verifiers.ts
  1  import type { Command } from '../commands.js'
  2  
  3  const command = {
  4    type: 'prompt',
  5    name: 'init-verifiers',
  6    description:
  7      'Create verifier skill(s) for automated verification of code changes',
  8    contentLength: 0, // Dynamic content
  9    progressMessage: 'analyzing your project and creating verifier skills',
 10    source: 'builtin',
 11    async getPromptForCommand() {
 12      return [
 13        {
 14          type: 'text',
 15          text: `Use the TodoWrite tool to track your progress through this multi-step task.
 16  
 17  ## Goal
 18  
 19  Create one or more verifier skills that can be used by the Verify agent to automatically verify code changes in this project or folder. You may create multiple verifiers if the project has different verification needs (e.g., both web UI and API endpoints).
 20  
 21  **Do NOT create verifiers for unit tests or typechecking.** Those are already handled by the standard build/test workflow and don't need dedicated verifier skills. Focus on functional verification: web UI (Playwright), CLI (Tmux), and API (HTTP) verifiers.
 22  
 23  ## Phase 1: Auto-Detection
 24  
 25  Analyze the project to detect what's in different subdirectories. The project may contain multiple sub-projects or areas that need different verification approaches (e.g., a web frontend, an API backend, and shared libraries all in one repo).
 26  
 27  1. **Scan top-level directories** to identify distinct project areas:
 28     - Look for separate package.json, Cargo.toml, pyproject.toml, go.mod in subdirectories
 29     - Identify distinct application types in different folders
 30  
 31  2. **For each area, detect:**
 32  
 33     a. **Project type and stack**
 34        - Primary language(s) and frameworks
 35        - Package managers (npm, yarn, pnpm, pip, cargo, etc.)
 36  
 37     b. **Application type**
 38        - Web app (React, Next.js, Vue, etc.) → suggest Playwright-based verifier
 39        - CLI tool → suggest Tmux-based verifier
 40        - API service (Express, FastAPI, etc.) → suggest HTTP-based verifier
 41  
 42     c. **Existing verification tools**
 43        - Test frameworks (Jest, Vitest, pytest, etc.)
 44        - E2E tools (Playwright, Cypress, etc.)
 45        - Dev server scripts in package.json
 46  
 47     d. **Dev server configuration**
 48        - How to start the dev server
 49        - What URL it runs on
 50        - What text indicates it's ready
 51  
 52  3. **Installed verification packages** (for web apps)
 53     - Check if Playwright is installed (look in package.json dependencies/devDependencies)
 54     - Check MCP configuration (.mcp.json) for browser automation tools:
 55       - Playwright MCP server
 56       - Chrome DevTools MCP server
 57       - Claude Chrome Extension MCP (browser-use via Claude's Chrome extension)
 58     - For Python projects, check for playwright, pytest-playwright
 59  
 60  ## Phase 2: Verification Tool Setup
 61  
 62  Based on what was detected in Phase 1, help the user set up appropriate verification tools.
 63  
 64  ### For Web Applications
 65  
 66  1. **If browser automation tools are already installed/configured**, ask the user which one they want to use:
 67     - Use AskUserQuestion to present the detected options
 68     - Example: "I found Playwright and Chrome DevTools MCP configured. Which would you like to use for verification?"
 69  
 70  2. **If NO browser automation tools are detected**, ask if they want to install/configure one:
 71     - Use AskUserQuestion: "No browser automation tools detected. Would you like to set one up for UI verification?"
 72     - Options to offer:
 73       - **Playwright** (Recommended) - Full browser automation library, works headless, great for CI
 74       - **Chrome DevTools MCP** - Uses Chrome DevTools Protocol via MCP
 75       - **Claude Chrome Extension** - Uses the Claude Chrome extension for browser interaction (requires the extension installed in Chrome)
 76       - **None** - Skip browser automation (will use basic HTTP checks only)
 77  
 78  3. **If user chooses to install Playwright**, run the appropriate command based on package manager:
 79     - For npm: \`npm install -D @playwright/test && npx playwright install\`
 80     - For yarn: \`yarn add -D @playwright/test && yarn playwright install\`
 81     - For pnpm: \`pnpm add -D @playwright/test && pnpm exec playwright install\`
 82     - For bun: \`bun add -D @playwright/test && bun playwright install\`
 83  
 84  4. **If user chooses Chrome DevTools MCP or Claude Chrome Extension**:
 85     - These require MCP server configuration rather than package installation
 86     - Ask if they want you to add the MCP server configuration to .mcp.json
 87     - For Claude Chrome Extension, inform them they need the extension installed from the Chrome Web Store
 88  
 89  5. **MCP Server Setup** (if applicable):
 90     - If user selected an MCP-based option, configure the appropriate entry in .mcp.json
 91     - Update the verifier skill's allowed-tools to use the appropriate mcp__* tools
 92  
 93  ### For CLI Tools
 94  
 95  1. Check if asciinema is available (run \`which asciinema\`)
 96  2. If not available, inform the user that asciinema can help record verification sessions but is optional
 97  3. Tmux is typically system-installed, just verify it's available
 98  
 99  ### For API Services
100  
101  1. Check if HTTP testing tools are available:
102     - curl (usually system-installed)
103     - httpie (\`http\` command)
104  2. No installation typically needed
105  
106  ## Phase 3: Interactive Q&A
107  
108  Based on the areas detected in Phase 1, you may need to create multiple verifiers. For each distinct area, use the AskUserQuestion tool to confirm:
109  
110  1. **Verifier name** - Based on detection, suggest a name but let user choose:
111  
112     If there is only ONE project area, use the simple format:
113     - "verifier-playwright" for web UI testing
114     - "verifier-cli" for CLI/terminal testing
115     - "verifier-api" for HTTP API testing
116  
117     If there are MULTIPLE project areas, use the format \`verifier-<project>-<type>\`:
118     - "verifier-frontend-playwright" for the frontend web UI
119     - "verifier-backend-api" for the backend API
120     - "verifier-admin-playwright" for an admin dashboard
121  
122     The \`<project>\` portion should be a short identifier for the subdirectory or project area (e.g., the folder name or package name).
123  
124     Custom names are allowed but MUST include "verifier" in the name — the Verify agent discovers skills by looking for "verifier" in the folder name.
125  
126  2. **Project-specific questions** based on type:
127  
128     For web apps (playwright):
129     - Dev server command (e.g., "npm run dev")
130     - Dev server URL (e.g., "http://localhost:3000")
131     - Ready signal (text that appears when server is ready)
132  
133     For CLI tools:
134     - Entry point command (e.g., "node ./cli.js" or "./target/debug/myapp")
135     - Whether to record with asciinema
136  
137     For APIs:
138     - API server command
139     - Base URL
140  
141  3. **Authentication & Login** (for web apps and APIs):
142  
143     Use AskUserQuestion to ask: "Does your app require authentication/login to access the pages or endpoints being verified?"
144     - **No authentication needed** - App is publicly accessible, no login required
145     - **Yes, login required** - App requires authentication before verification can proceed
146     - **Some pages require auth** - Mix of public and authenticated routes
147  
148     If the user selects login required (or partial), ask follow-up questions:
149     - **Login method**: How does a user log in?
150       - Form-based login (username/password on a login page)
151       - API token/key (passed as header or query param)
152       - OAuth/SSO (redirect-based flow)
153       - Other (let user describe)
154     - **Test credentials**: What credentials should the verifier use?
155       - Ask for the login URL (e.g., "/login", "http://localhost:3000/auth")
156       - Ask for test username/email and password, or API key
157       - Note: Suggest the user use environment variables for secrets (e.g., \`TEST_USER\`, \`TEST_PASSWORD\`) rather than hardcoding
158     - **Post-login indicator**: How to confirm login succeeded?
159       - URL redirect (e.g., redirects to "/dashboard")
160       - Element appears (e.g., "Welcome" text, user avatar)
161       - Cookie/token is set
162  
163  ## Phase 4: Generate Verifier Skill
164  
165  **All verifier skills are created in the project root's \`.claude/skills/\` directory.** This ensures they are automatically loaded when Claude runs in the project.
166  
167  Write the skill file to \`.claude/skills/<verifier-name>/SKILL.md\`.
168  
169  ### Skill Template Structure
170  
171  \`\`\`markdown
172  ---
173  name: <verifier-name>
174  description: <description based on type>
175  allowed-tools:
176    # Tools appropriate for the verifier type
177  ---
178  
179  # <Verifier Title>
180  
181  You are a verification executor. You receive a verification plan and execute it EXACTLY as written.
182  
183  ## Project Context
184  <Project-specific details from detection>
185  
186  ## Setup Instructions
187  <How to start any required services>
188  
189  ## Authentication
190  <If auth is required, include step-by-step login instructions here>
191  <Include login URL, credential env vars, and post-login verification>
192  <If no auth needed, omit this section>
193  
194  ## Reporting
195  
196  Report PASS or FAIL for each step using the format specified in the verification plan.
197  
198  ## Cleanup
199  
200  After verification:
201  1. Stop any dev servers started
202  2. Close any browser sessions
203  3. Report final summary
204  
205  ## Self-Update
206  
207  If verification fails because this skill's instructions are outdated (dev server command/port/ready-signal changed, etc.) — not because the feature under test is broken — or if the user corrects you mid-run, use AskUserQuestion to confirm and then Edit this SKILL.md with a minimal targeted fix.
208  \`\`\`
209  
210  ### Allowed Tools by Type
211  
212  **verifier-playwright**:
213  \`\`\`yaml
214  allowed-tools:
215    - Bash(npm:*)
216    - Bash(yarn:*)
217    - Bash(pnpm:*)
218    - Bash(bun:*)
219    - mcp__playwright__*
220    - Read
221    - Glob
222    - Grep
223  \`\`\`
224  
225  **verifier-cli**:
226  \`\`\`yaml
227  allowed-tools:
228    - Tmux
229    - Bash(asciinema:*)
230    - Read
231    - Glob
232    - Grep
233  \`\`\`
234  
235  **verifier-api**:
236  \`\`\`yaml
237  allowed-tools:
238    - Bash(curl:*)
239    - Bash(http:*)
240    - Bash(npm:*)
241    - Bash(yarn:*)
242    - Read
243    - Glob
244    - Grep
245  \`\`\`
246  
247  
248  ## Phase 5: Confirm Creation
249  
250  After writing the skill file(s), inform the user:
251  1. Where each skill was created (always in \`.claude/skills/\`)
252  2. How the Verify agent will discover them — the folder name must contain "verifier" (case-insensitive) for automatic discovery
253  3. That they can edit the skills to customize them
254  4. That they can run /init-verifiers again to add more verifiers for other areas
255  5. That the verifier will offer to self-update if it detects its own instructions are outdated (wrong dev server command, changed ready signal, etc.)
256  `,
257        },
258      ]
259    },
260  } satisfies Command
261  
262  export default command