#+title: Pi-Tokenomics — Ubiquitous Language
#+author: Xavier Brinon
#+date: [2026-04-19]
#+startup: indent
#+options: toc:2 num:nil ^:{}

* Scope

Canonical vocabulary for Pi-Tokenomics, extracted from
=token_economics.org=, the two AISLE articles (/AI Cybersecurity After
Mythos — the Jagged Frontier/ and /System Over Model: Zero-Day
Discovery at the Jagged Frontier/), and the cross-stack applicability
evaluation (TypeScript / Scheme / TanStack).

Purpose: fix a shared language before the framework drifts between
AISLE's cybersec idiom and the domain-neutral production-function
framing this project is building.
* Token-economic metrics

| Term                     | Definition                                                                                          | Aliases to avoid                    |
|--------------------------+-----------------------------------------------------------------------------------------------------+-------------------------------------|
| /Intelligence per token/ | Useful task progress per token consumed, conditional on =(model, task)=.                            | "model intelligence" (model-only)   |
| /Tokens per dollar/      | Cost per /successful/ unit of work, after the applied discount stack (caching, batch, tier).        | "price per token" (headline only)   |
| /Tokens per second/      | Generation speed; always reported as one of TTFT, TPOT, or sustained throughput.                    | "speed" (underspecified)            |
| /TTFT/                   | Time to first token — latency to first streamed output. Matters for interactive UX.                 | "response time" (ambiguous)         |
| /TPOT/                   | Time per output token — steady-state generation rate. Matters for long generations.                 | "throughput" (different concept)    |
| /Sustained throughput/   | Tokens/sec held across N parallel requests, reported at p50 and p95.                                | "throughput" (conflates with TPOT)  |
| /Scaffold/               | Non-model machinery (prompts, verifiers, retries, orchestration) that lets weaker+many substitute for stronger+one. | "prompting"               |

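The /tokens per dollar/ row reads cost per /successful/ unit of work,
not headline price. A minimal TypeScript sketch of that calculation;
the discount structure and every number are illustrative assumptions,
not figures from the source:

#+begin_src typescript
// Tokens per dollar, read as the table defines it: cost per
// /successful/ unit of work after the discount stack.
// All names and numbers are illustrative, not canonical.
interface DiscountStack {
  cacheHitRate: number;   // fraction of tokens served from cache
  cacheDiscount: number;  // e.g. 0.9 → cached tokens cost 10% of list
  batchDiscount: number;  // e.g. 0.5 → batch tier halves the price
}

function costPerSuccess(
  tokensPerAttempt: number,
  listPricePerToken: number,
  successRate: number,    // fraction of attempts with usable output
  d: DiscountStack,
): number {
  const effectivePrice =
    listPricePerToken *
    (1 - d.cacheHitRate * d.cacheDiscount) *
    (1 - d.batchDiscount);
  // Dividing by the success rate charges failed attempts to the successes.
  return (tokensPerAttempt * effectivePrice) / successRate;
}
#+end_src

The division by success rate is what separates this metric from "price
per token": failed attempts are paid for by the units that succeed.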
* Architectural patterns

| Term                                  | Definition                                                                                                    | Aliases to avoid                          |
|---------------------------------------+---------------------------------------------------------------------------------------------------------------+-------------------------------------------|
| /Cheap fan-out → expensive fan-in/    | Pipeline where a cheap stage scans everything (recall-biased) and an expensive stage filters (precision-biased). | "thousand detectives" (rhetoric, not term) |
| /Expensive plan → cheap execution/    | Pipeline where an expensive stage produces a plan that cheap stages execute unit-by-unit.                     | "plan-then-execute"                       |
| /Routing/                             | Choosing /which/ model to call for a given stage.                                                             | "orchestration" (distinct job)            |
| /Orchestration/                       | Filtering and composing model outputs so the final result earns downstream trust.                             | "routing" (distinct job)                  |
| /Stage/                               | A discrete unit of an AI workflow, classifiable by the eight stage-classifying properties.                    | "step", "phase" (loose)                   |
| /Model-assignment table/              | A per-stage mapping from task properties to chosen model tier.                                                | "model selection", "routing config"       |

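A /model-assignment table/ can be made concrete as a typed per-stage
mapping. A hedged sketch; the stage names, tiers, and routing rules
below are hypothetical, not the framework's canonical rules:

#+begin_src typescript
// A model-assignment table: per-stage mapping from stage properties
// to a model tier. Names, tiers, and rules are hypothetical.
type Tier = "small" | "mid" | "frontier";

interface StageProps {
  verifiable: boolean;
  reasoningDepth: "low" | "high";
  blastRadius: "contained" | "destructive";
}

// Routing picks /which/ model runs a stage. It does not filter
// output — that is orchestration's job.
function route(p: StageProps): Tier {
  if (p.blastRadius === "destructive") return "frontier";
  if (p.reasoningDepth === "high" || !p.verifiable) return "mid";
  return "small"; // cheap, verifiable, shallow work
}

const assignment: Record<string, Tier> = {
  scan:   route({ verifiable: true, reasoningDepth: "low",  blastRadius: "contained" }),
  triage: route({ verifiable: true, reasoningDepth: "high", blastRadius: "contained" }),
};
#+end_src

The table is derived from stage properties rather than written by
hand, which keeps routing decisions auditable per stage.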
* Stage-classifying properties

The eight axes that determine which token-economic metric dominates
for a given stage.

| Term                 | Definition                                                                                 |
|----------------------+--------------------------------------------------------------------------------------------|
| /Volume/             | Items processed per unit time; after the /cost-ratio threshold/, also a /decision/.        |
| /Verifiability/      | Whether stage output can be cheaply auto-checked (tests pass, exploit reproduces).         |
| /Latency tolerance/  | Acceptable delay — interactive / async / batch.                                            |
| /Reasoning depth/    | Number of deductive steps a correct answer requires.                                       |
| /Context size/       | Input volume per single call.                                                              |
| /Retry cost/         | Cost of a wrong output — silent, recoverable, or compounding.                              |
| /Blast radius/       | External consequence of a wrong answer (destructive vs contained).                         |
| /Position in pipeline/ | Whether the stage is upstream (errors propagate) or terminal.                            |

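One illustrative way to read the axes: a function from a stage's
classification to the metric that dominates it, mirroring the
reasoning in the example dialogue below. The rule set is an assumption
for illustration, not a canonical part of the framework:

#+begin_src typescript
// From a stage's classification to the dominant token-economic
// metric. One illustrative reading of the axes, not canonical.
type Metric =
  | "intelligence-per-token"
  | "tokens-per-dollar"
  | "tokens-per-second";

interface Stage {
  volume: "low" | "high";
  verifiable: boolean;
  latency: "interactive" | "async" | "batch";
  reasoningDepth: "low" | "high";
}

function dominantMetric(s: Stage): Metric {
  if (s.latency === "interactive") return "tokens-per-second";
  if (s.volume === "high" && s.verifiable && s.reasoningDepth === "low")
    return "tokens-per-dollar"; // the fan-out profile
  return "intelligence-per-token";
}

// The dialogue's stage one: high volume, verifiable, batch, shallow.
dominantMetric({ volume: "high", verifiable: true, latency: "batch", reasoningDepth: "low" });
// → "tokens-per-dollar"
#+end_src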
* Pipeline-quality constructs

| Term                  | Definition                                                                                                     | Aliases to avoid      |
|-----------------------+----------------------------------------------------------------------------------------------------------------+-----------------------|
| /Intelligence floor/  | Minimum reasoning capability a stage requires to produce usable output.                                        | "smart enough"        |
| /Recall floor/        | Minimum fraction of real items a stage must surface; /cannot/ be recovered downstream.                         | "sensitivity"         |
| /Precision floor/     | Minimum fraction of surfaced items that must be real; /can/ be raised downstream by orchestration.             | "specificity"         |
| /False negative/      | Real item missed by a stage — lost forever once the stage completes.                                           | "miss"                |
| /False positive/      | Noise surfaced by a stage — filterable by a downstream orchestration pass.                                     | "false alarm"         |
| /Compounding error/   | Error introduced upstream that silently degrades every downstream stage.                                       | "pipeline error"      |
| /Cost-ratio threshold/ | Cheap/expensive model cost ratio at which fan-out becomes viable (~10×) or unlocks new pipeline shapes (~100×). | "price ratio" (loose) |

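The asymmetry between the /recall floor/ (unrecoverable) and the
/precision floor/ (recoverable) can be shown numerically. A sketch
with illustrative numbers:

#+begin_src typescript
// Why the recall floor binds and the precision floor does not:
// end-to-end recall is the product of per-stage recalls (a false
// negative is lost forever), while precision can be rebuilt by a
// downstream orchestration pass. Numbers are illustrative.
function pipelineRecall(stageRecalls: number[]): number {
  return stageRecalls.reduce((acc, r) => acc * r, 1);
}

// Precision after a triage pass that keeps real items with
// probability keepTrue and noise with probability keepFalse.
function precisionAfterTriage(
  precision: number,
  keepTrue: number,
  keepFalse: number,
): number {
  const real = precision * keepTrue;
  const noise = (1 - precision) * keepFalse;
  return real / (real + noise);
}

pipelineRecall([0.95, 0.9]);          // 0.855 — recall only ever drops
precisionAfterTriage(0.1, 0.9, 0.01); // ≈ 0.91 — precision can be rebuilt
#+end_src

A 10%-precision fan-out stage followed by a triage pass that discards
99% of noise ends above 90% precision; no downstream pass can perform
the equivalent rescue for recall.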
* Framework meta-terms

| Term                   | Definition                                                                                            | Aliases to avoid              |
|------------------------+-------------------------------------------------------------------------------------------------------+-------------------------------|
| /Pareto frontier/      | Set of model choices not dominated on all three token-economic metrics simultaneously.                | "best model"                  |
| /Coverage as strategy/ | Treating volume as a /decision/ unlocked by low cost-per-item, not a fixed input property.            | "scan everything"             |
| /Non-verifiable task/  | Task whose output quality cannot be cheaply auto-checked (creative, advisory).                        | "subjective task"             |
| /Domain eval/          | Domain-specific test set used to measure task-conditional intelligence per token.                     | "benchmark" (too generic)     |
| /Maintainer trust/     | Whether downstream humans trust orchestrated small-model output enough to act on it.                  | "credibility"                 |
| /Production function/  | Multi-input model of AI-workflow output. Open question whether Cobb-Douglas (smooth) or Leontief (fixed per stage + across-stage scaffold substitution) fits. | —                            |

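The /Pareto frontier/ definition translates directly into a dominance
check over the three token-economic metrics. The model names and
scores below are invented for illustration:

#+begin_src typescript
// A model stays on the Pareto frontier unless some other model beats
// it on all three metrics at once. Names and scores are invented.
interface ModelScore {
  name: string;
  intelligencePerToken: number; // higher is better
  tokensPerDollar: number;      // higher is better
  tokensPerSecond: number;      // higher is better
}

function dominates(a: ModelScore, b: ModelScore): boolean {
  const geq =
    a.intelligencePerToken >= b.intelligencePerToken &&
    a.tokensPerDollar >= b.tokensPerDollar &&
    a.tokensPerSecond >= b.tokensPerSecond;
  const strict =
    a.intelligencePerToken > b.intelligencePerToken ||
    a.tokensPerDollar > b.tokensPerDollar ||
    a.tokensPerSecond > b.tokensPerSecond;
  return geq && strict;
}

function paretoFrontier(models: ModelScore[]): ModelScore[] {
  return models.filter((m) => !models.some((o) => dominates(o, m)));
}
#+end_src

There is no single "best model": a frontier model and a small model
typically both survive the filter, each undominated on a different
metric.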
* Jagged Frontier case-study vocabulary

AISLE-specific terms. Kept in a separate group so cybersec idiom
cannot leak into cross-domain framework prose.

| Term                 | Definition                                                                                                             | Aliases to avoid                |
|----------------------+------------------------------------------------------------------------------------------------------------------------+---------------------------------|
| /Jagged frontier/    | Small models' paradoxical capability profile: failing at easy structured tasks (valid JSON) while succeeding at deep pattern detection. | "uneven capability"   |
| /System over model/  | AISLE's thesis that orchestrating adequate models at high throughput beats a single frontier model on selected inputs. | "cheap is enough"              |
| /Nano-analyzer/      | AISLE's three-stage (context → scan → skeptical triage) embarrassingly-parallel scanner powered by small models.       | "small-model scanner"          |
| /Mythos/             | AISLE's prior frontier-model flagship; the cost/selectivity baseline against which /nano-analyzer/ is compared.        | "big model"                    |
| /Skeptical triage/   | /Nano-analyzer/'s third stage — multi-round review with grep evidence and an arbiter model.                            | "filtering"                    |
| /Thousand detectives/ | AISLE's rhetorical framing of /Cheap fan-out → expensive fan-in/. Rhetoric, not the canonical term.                  | —                              |

* Relationships

- A /stage/ is classified by some or all of the eight
  /stage-classifying properties/.
- Each /stage/ has a specified /intelligence floor/, /recall floor/,
  and /precision floor/ — the three are independent constraints.
- A /Cheap fan-out → expensive fan-in/ pipeline composes at least two
  /stages/: a high-/recall-floor/ cheap stage and a
  high-/precision-floor/ expensive stage.
- /Orchestration/ is the mechanism that raises precision from a noisy
  upstream /stage/'s output. /Routing/ is a distinct job that selects
  a model for a /stage/; it does not filter output.
- A /scaffold/ is the concrete substitution mechanism that lets many
  cheap /stages/ replace one expensive /stage/ — the fourth input in
  the production-function framing.
- The /cost-ratio threshold/ gates /scaffold/ viability: below ~10×
  the pattern collapses into "same pipeline, cheaper"; above ~100× it
  unlocks /coverage as strategy/.
- /Nano-analyzer/ is one concrete instance of /Cheap fan-out →
  expensive fan-in/, applied to security scanning on C-kernel
  corpora.
- /Maintainer trust/ is an external constraint on any /orchestrated/
  output — not itself a token-economic metric, but gates whether the
  metrics cash out.

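The cost-ratio gating described above can be sketched as a
three-regime function; the ~10× and ~100× cut-offs are the document's
rough figures, not exact constants:

#+begin_src typescript
// Regimes gated by the cheap/expensive model cost ratio. The 10x and
// 100x thresholds are rough figures, not exact constants.
type Regime =
  | "same-pipeline-cheaper"  // below ~10x: no new pipeline shape
  | "fan-out-viable"         // ~10x–100x: cheap fan-out pays for itself
  | "coverage-as-strategy";  // above ~100x: volume becomes a decision

function scaffoldRegime(costRatio: number): Regime {
  if (costRatio < 10) return "same-pipeline-cheaper";
  if (costRatio < 100) return "fan-out-viable";
  return "coverage-as-strategy";
}

scaffoldRegime(75);  // the dialogue's Haiku/Opus gap → "fan-out-viable"
scaffoldRegime(600); // AISLE's nano-analyzer regime → "coverage-as-strategy"
#+end_src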
* Example dialogue

A framework-application conversation between Xavier and a
hypothetical collaborator auditing a TanStack codebase.

#+begin_quote
/Collaborator:/ I want a nano-analyzer-style pass over our TanStack
loaders. How do I design the /stages/?

/Xavier:/ Two /stages/. Stage one is the cheap fan-out — Haiku over
every loader, flagging any path where untrusted input reaches the
response. High /recall floor/, low /precision floor/, because
downstream /orchestration/ recovers precision. Stage two is the
/skeptical triage/: a stronger model plus grep evidence, raising the
/precision floor/ before a human ever sees output.

/Collaborator:/ How do I pick the /intelligence floor/ for stage one?

/Xavier:/ Walk the eight /stage-classifying properties/. High
/volume/, high /verifiability/ (you can grep sink patterns), low
/reasoning depth/ per file, contained /blast radius/. That
combination says optimise /tokens per dollar/, not /intelligence per
token/. The /intelligence floor/ is whatever Haiku clears.

/Collaborator:/ Does the /cost-ratio threshold/ help here?

/Xavier:/ That's the real question. Haiku to Opus is roughly 75× on
input tokens — below the 100× regime. So you probably get "same
pipeline, cheaper" rather than /coverage as strategy/. The
categorical unlock requires a bigger gap; that's why AISLE's
/nano-analyzer/ wins at 600× and a TanStack analogue is a softer
result.

/Collaborator:/ And /maintainer trust/ — does it bind here?

/Xavier:/ Less than in cybersec. A false positive costs developer
time, not the credibility of a CVE disclosure. So the /precision
floor/ on stage two can loosen, because the /blast radius/ of a
wrong flag is an ignored comment, not an unfounded vulnerability
report.
#+end_quote

* Flagged ambiguities

- /"Orchestration" vs "routing"/ :: =token_economics.org= (Deeper
  Question section, lines 282–302) conflates them, then explicitly
  separates them under /Directions for Further Exploration/.
  Canonical split: /routing/ picks a model, /orchestration/ filters
  and composes output. Fix the /Orchestration Layer/ table heading
  and prose.
- /"Intelligence"/ :: Used to mean /capability of the model alone/
  (industry usage) and /task-conditional capability/ (this
  framework). Always task-conditional here; the standalone meaning
  is an alias to avoid.
- /"Volume"/ :: Appears once as an axis (stage property) and once as
  a decision (/coverage as strategy/). Resolution: /volume/ is the
  property; /coverage as strategy/ is the framing move that converts
  it into a decision once the /cost-ratio threshold/ is crossed.
  Keep both terms — they live at different levels.
- /"Production function"/ :: Source doc Open Question #2. Cobb-Douglas
  (smooth substitution of inputs) vs Leontief (fixed proportions per
  stage + across-stage substitution via /scaffold/). Top-level prose
  still reads Cobb-Douglas; /Four Insights/ implicitly argues
  Leontief. Pick one and edit.
- /"Accuracy"/ :: Decomposed into /recall/ and /precision/ in /Four
  Insights/. Do not use /accuracy/ in framework prose — it hides the
  error-type split that makes /Cheap fan-out → expensive fan-in/
  work.
- /"Small model" / "cheap model" / "weak model" / "nano-model"/ ::
  Used interchangeably across the articles and conversation.
  Canonical: /small model/ for capability claims, /cheap model/ for
  cost claims, /nano-analyzer/ only for AISLE's specific scanner.
  Avoid /weak/ — it is judgmental, and task-conditional intelligence
  makes it imprecise.
- /"Frontier model" / "strong model" / "Opus" / "Mythos"/ :: Same
  cluster, opposite pole. Canonical: /frontier model/ for the
  category, named models only when the claim is about a specific
  product.
- /"Scaffold" vs "scaffolding"/ :: Both appear in source. Prefer
  /scaffold/ (noun) for the concept, /scaffolding/ (gerund) for the
  activity of building them. Aligns with the AISLE article's
  /scaffold/ as the fourth input.