---
name: cold-outbound-optimizer
description: Design, analyze, and optimize cold outbound email campaigns for Instantly. Handles end-to-end ICP definition, expert panel scoring (recursive to 90+), sequence copywriting, infrastructure audit, capacity planning, and implementation docs. Use when asked to build cold outbound sequences, optimize cold email, analyze outbound campaigns, build sales sequences, build Instantly sequences, create cold outbound strategies, or design email campaigns. Supports both "start from scratch" and "optimize existing" modes.
---

## Preamble (runs on skill start)

```bash
# Version check (silent if up to date)
python3 telemetry/version_check.py 2>/dev/null || true

# Telemetry opt-in (first run only, then remembers your choice)
python3 telemetry/telemetry_init.py 2>/dev/null || true
```

> **Privacy:** This skill logs usage locally to `~/.ai-marketing-skills/analytics/`. Remote telemetry is opt-in only. No code, file paths, or repo content is ever collected. See `telemetry/README.md`.

---

# Cold Outbound Optimizer

---

## Startup: Determine Mode

Ask the user:
1. Do you have an **existing Instantly account** with campaigns to audit, or are you **starting from scratch**?
2. Do you have an **Instantly API key**? (Required for audit mode.)

If an API key is provided → run `scripts/instantly-audit.py` to pull campaigns, account inventory, and warmup scores before proceeding.
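The audit's readiness rule (Phase 1A: warmup score below 80 or under 14 days of warmup → flag as NOT ready) can be sketched as a small helper. This is an illustrative sketch only, not the actual `scripts/instantly-audit.py` implementation; the `warmup_score` and `warmup_days` field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Account:
    email: str
    warmup_score: int   # 0-100 warmup health score (assumed field name)
    warmup_days: int    # days of warmup completed (assumed field name)

def is_ready(acct: Account) -> bool:
    """Phase 1A rule: ready only if score >= 80 AND >= 14 days of warmup."""
    return acct.warmup_score >= 80 and acct.warmup_days >= 14

def warmup_gaps(accounts: list[Account]) -> list[str]:
    """Return the emails of accounts that must be flagged as NOT ready."""
    return [a.email for a in accounts if not is_ready(a)]
```

In audit mode, the same rule would be applied to the account inventory pulled from Instantly, and only accounts that pass feed the capacity math later in this skill.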
---

## Phase 1: Discovery & Audit

### 1A — Infrastructure Check (if API key available)
Run `python3 scripts/instantly-audit.py --api-key <KEY>` and report:
- Active campaigns (name, status, reply rate, open rate)
- Sending accounts (count, warmup score, daily limit)
- Domain inventory
- Warmup gaps: any account with score <80 or <14 days warmup → flag as NOT ready

### 1B — Performance Data
- Pull campaign analytics from Instantly
- Ask: "Do you have a spreadsheet with historical outbound data?" If yes, request the link.

### 1C — ICP Definition
If no ICP is defined, collect:
- **Titles:** Who are you targeting? (e.g., VP Marketing, Head of Growth)
- **Industries:** Which verticals?
- **Company size:** Employee count or revenue range?
- **Revenue floor:** Minimum ARR/revenue to qualify?
- **Anti-ICP:** Who to explicitly exclude?

Use `references/icp-template.md` as the collection template.

### 1D — Business Context
Collect:
- What do you sell? (One sentence, no jargon)
- What's the primary offer? (Free trial, audit, demo, consultation)
- Real URLs to reference (pricing page, case studies, relevant content)
- Any proof points? (Client results, stats, social proof)

### 1E — Expert Panel Config
Default: 10 experts (see `references/expert-panel.md`).
Ask: "Any industry-specific experts to add, or panelists to swap?" Confirm the roster before scoring.

---

## Phase 2: Expert Panel Recursive Scoring

**Target: 90/100. Non-negotiable. Iterate until reached.**

### Round Structure
Each round produces:
1. **Score table** — all 10 panelists, individual score (0-100), one-line rationale
2. **Aggregate score** — average of all 10
3. **Top weaknesses** — ranked list of what's holding the copy back
4. **Changes made** — specific edits addressing each weakness
5. **Updated copy** — full revised sequence after changes

### Scoring Criteria (per panelist's lens — see `references/expert-panel.md`)
- Subject line curiosity / open rate potential
- First sentence pattern interrupt
- Body clarity and brevity
- CTA softness and specificity
- Sequence flow and follow-up logic
- Deliverability risk signals (spam words, link density)
- Personalization believability

### Rules
- Scores must be brutally honest. No padding to 90 without earning it.
- If the round score < 90: identify the top 3 weaknesses, revise the copy, run the next round.
- If the round score ≥ 90: finalize the copy and proceed to deliverables.
- Show every round in the final doc — the iteration trail is part of the value.

---

## Phase 3: Deliverables

### Strategy Doc
Create a document (Google Doc, Notion, or markdown) with:

1. **Pre-Analysis / Brutal Truth** — what the existing campaigns are doing wrong (or the baseline if starting from scratch)
2. **ICP Summary** — confirmed targeting parameters
3. **Infrastructure Status** — account inventory, warmup readiness, capacity math
4. **Scoring Rounds** — full panel vote tables for every round
5. **Final Email Copy** — all steps for all campaigns, in Instantly-ready format
6. **Implementation Plan** — step-by-step setup instructions
7. **Capacity Math** — accounts × daily send rate = pipeline projections
8. **Weekly Metrics Targets** — open rate, reply rate, positive reply rate, meetings booked
9. **STOP List** — what to kill immediately
10. **START List** — what to launch first

### Format Rules for Final Copy
Follow all rules in `references/instantly-rules.md` and `references/copy-rules.md`.

### Human Review Gate
**Do NOT push anything to Instantly automatically.** The doc is for human review. Get explicit approval before any API writes.
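The recursive round structure above can be sketched as a loop. Here `score_copy` and `revise_copy` are hypothetical placeholders standing in for the panel's judgment and the resulting edits; they are not real functions shipped with this skill.

```python
from statistics import mean

TARGET = 90  # non-negotiable aggregate threshold

def run_rounds(copy, score_copy, revise_copy, max_rounds=10):
    """Iterate panel scoring until the aggregate reaches TARGET.

    score_copy(copy)  -> list of (panelist, score, rationale) tuples  (hypothetical)
    revise_copy(copy, weaknesses) -> revised copy                      (hypothetical)
    """
    history = []
    for round_no in range(1, max_rounds + 1):
        votes = score_copy(copy)
        aggregate = mean(score for _, score, _ in votes)
        history.append({"round": round_no, "votes": votes, "aggregate": aggregate})
        if aggregate >= TARGET:
            break
        # Round score < 90: take the 3 weakest lenses and revise against them.
        weaknesses = sorted(votes, key=lambda v: v[1])[:3]
        copy = revise_copy(copy, weaknesses)
    return copy, history  # every round is kept: the iteration trail goes in the final doc
```

Returning `history` alongside the final copy mirrors the "show every round in the final doc" rule.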
### Iteration
After review, collect feedback and re-run scoring on the revised copy if needed.

---

## Capacity Math Formula

```
Accounts ready (score ≥80, ≥14 days warmup) × 30 emails/day = conservative daily volume
Accounts ready × 50 emails/day = aggressive daily volume
Daily volume × 22 working days = monthly send capacity
Monthly sends × expected reply rate = expected replies
Expected replies × qualification rate = pipeline opportunities
```

---

## Weekly Metrics Targets (Baselines)

| Metric | Good | Great |
|--------|------|-------|
| Open rate | 40%+ | 60%+ |
| Reply rate | 3%+ | 7%+ |
| Positive reply rate | 1%+ | 3%+ |
| Meeting rate | 0.5%+ | 1.5%+ |

Adjust targets based on niche and offer. Cold traffic to a free audit converts differently than a paid trial.

---

## Add-On Recommendations (mention but don't build)

- **LinkedIn automation:** HeyReach or similar for multi-channel sequences. Separate workflow.
- **Lead enrichment:** Clay or Apollo for personalization data before upload.
- **Lead pipeline:** Use `scripts/lead-pipeline.py` for the Apollo → LeadMagic → Instantly automation.
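The capacity math formula translates directly into code. This is a minimal sketch of the arithmetic above; the input numbers in the usage note are illustrative, not benchmarks.

```python
def capacity(ready_accounts: int, reply_rate: float, qual_rate: float,
             per_day: int = 30, working_days: int = 22) -> dict:
    """Capacity math: accounts × daily rate → monthly sends → replies → pipeline.

    per_day=30 is the conservative rate; pass per_day=50 for the aggressive one.
    """
    daily = ready_accounts * per_day          # conservative daily volume
    monthly = daily * working_days            # monthly send capacity
    replies = monthly * reply_rate            # expected replies
    opportunities = replies * qual_rate       # pipeline opportunities
    return {"daily": daily, "monthly": monthly,
            "replies": replies, "opportunities": opportunities}
```

For example, 10 ready accounts at the conservative 30/day give 300 sends daily and 6,600 monthly; at a 3% reply rate and a 30% qualification rate, that is about 198 replies and roughly 59 pipeline opportunities.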
---

## Reference Files

| File | Purpose |
|------|---------|
| `references/instantly-rules.md` | Variable syntax, sequence structure, deliverability rules |
| `references/expert-panel.md` | Default 10-expert roster with scoring lenses |
| `references/copy-rules.md` | Email copy rules (first sentence, CTA, stats framing) |
| `references/icp-template.md` | ICP data collection template |
| `scripts/instantly-audit.py` | Pulls campaigns, accounts, warmup scores via the Instantly v2 API |
| `scripts/lead-pipeline.py` | End-to-end lead sourcing pipeline |
| `scripts/competitive-monitor.py` | Competitor tracking and intelligence |
| `scripts/cross-signal-detector.py` | Multi-source signal detection |
| `scripts/cold-outbound-sender.py` | Send approved outbound emails |