IMPLEMENTATION_SPEC.md
# Sovereign OS Implementation Specification

*Complete Technical Specification for All Architectural Components*
*Version 0.1 | 2026-01-13*

---

## Table of Contents

1. [System Overview](#1-system-overview)
2. [God Database](#2-god-database)
3. [Gravity Wells & Free Energy Engine](#3-gravity-wells--free-energy-engine)
4. [Resonance Scoring System](#4-resonance-scoring-system)
5. [Complementary Altitude System](#5-complementary-altitude-system)
6. [Flight Protocol / OODA Integration](#6-flight-protocol--ooda-integration)
7. [Cooperative Eye / Attention Tracking](#7-cooperative-eye--attention-tracking)
8. [Permeability Layer](#8-permeability-layer)
9. [Cultural Evolution Engine](#9-cultural-evolution-engine)
10. [Integration Architecture](#10-integration-architecture)
11. [Data Models](#11-data-models)
12. [API Specifications](#12-api-specifications)

---

## 1. System Overview

### 1.1 Core Principles

The system implements a **nested Markov blanket architecture** where:
- Each level minimizes free energy (prediction error)
- Structure flows up through permeable membranes
- Content remains sovereign within blankets
- Attention is the primitive signal that flows between levels

### 1.2 Component Map

```
┌─────────────────────────────────────────────────────────────────────────┐
│                           SOVEREIGN OS STACK                            │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  ┌───────────────────────────────────────────────────────────────────┐  │
│  │                     CULTURAL EVOLUTION ENGINE                     │  │
│  │ Detects convergent patterns, manages norm proposals, codification │  │
│  └───────────────────────────────────────────────────────────────────┘  │
│                                    ↕                                    │
│  ┌───────────────────────────────────────────────────────────────────┐  │
│  │                        PERMEABILITY LAYER                         │  │
│  │   Concept hashes, fingerprints, beacons, permission enforcement   │  │
│  └───────────────────────────────────────────────────────────────────┘  │
│                                    ↕                                    │
│  ┌───────────────────────────────────────────────────────────────────┐  │
│  │                    COOPERATIVE EYE / ATTENTION                    │  │
│  │        Tracks operator attention, generates beacon signals        │  │
│  └───────────────────────────────────────────────────────────────────┘  │
│                                    ↕                                    │
│  ┌───────────────────────────────────────────────────────────────────┐  │
│  │                   FLIGHT PROTOCOL / OODA ENGINE                   │  │
│  │     Manages cognitive cycles, session state, thread retention     │  │
│  └───────────────────────────────────────────────────────────────────┘  │
│                                    ↕                                    │
│  ┌───────────────────────────────────────────────────────────────────┐  │
│  │                   COMPLEMENTARY ALTITUDE SYSTEM                   │  │
│  │  Detects operator altitude, adjusts system response accordingly   │  │
│  └───────────────────────────────────────────────────────────────────┘  │
│                                    ↕                                    │
│  ┌───────────────────────────────────────────────────────────────────┐  │
│  │                     RESONANCE SCORING ENGINE                      │  │
│  │    Multi-dimensional relevance scoring, gravity well detection    │  │
│  └───────────────────────────────────────────────────────────────────┘  │
│                                    ↕                                    │
│  ┌───────────────────────────────────────────────────────────────────┐  │
│  │                GRAVITY WELLS & FREE ENERGY ENGINE                 │  │
│  │  Edge prediction, attractor detection, phase transition detection │  │
│  └───────────────────────────────────────────────────────────────────┘  │
│                                    ↕                                    │
│  ┌───────────────────────────────────────────────────────────────────┐  │
│  │                           GOD DATABASE                            │  │
│  │  Atomic bullets, UUID tracking, graph storage, temporal indexing  │  │
│  └───────────────────────────────────────────────────────────────────┘  │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
```

---
## 2. God Database

### 2.1 Purpose

The foundational data layer that stores all information as atomic units (bullets) with full graph relationships, temporal indexing, and prediction state.

### 2.2 Data Model

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Any, Dict, List, Optional, Set, Tuple


@dataclass
class Bullet:
    """
    The atomic unit of information - a Markov blanket at the lowest level.
    """
    # Identity
    uuid: str                      # Globally unique, persists across all transformations
    blanket_id: str                # Which Markov blanket owns this bullet

    # Content (NEVER leaves blanket)
    content: str                   # The actual text/data
    content_type: ContentType      # text, code, image_ref, audio_ref, etc.

    # Structure (CAN flow through permeability)
    parent_uuid: Optional[str]     # Parent bullet (for hierarchy)
    child_uuids: List[str]         # Child bullets
    edge_uuids: List[str]          # Edges to other bullets

    # Temporal
    created_at: datetime
    updated_at: datetime
    accessed_at: datetime          # Last attention
    access_count: int              # Total attention events

    # Prediction State (for free energy engine)
    prediction_targets: List[PredictionTarget]  # Edges this bullet is "pulling toward"
    gravity_well_strength: float   # How strongly this attracts related concepts
    free_energy_score: float       # Current prediction error level

    # Tags (two-tier)
    visible_tags: List[str]        # Human-readable: #decision, #principle
    metadata_tags: Dict[str, Any]  # System: altitude, resonance, tone

    # Resonance
    resonance_score: float         # Current relevance (0-1)
    last_resonance_factors: Dict[str, float]  # Breakdown of score components


@dataclass
class Edge:
    """
    Connection between bullets with typed relationship and strength.
    """
    uuid: str
    source_uuid: str
    target_uuid: str

    # Type
    edge_type: EdgeType            # is_a, has_a, causes, relates_to, etc.

    # Strength
    weight: float                  # 0-1, strength of connection
    confidence: float              # How certain is this edge

    # Temporal
    created_at: datetime
    last_traversed: datetime
    traversal_count: int

    # Prediction
    predicted: bool                # Was this edge predicted before formation?
    prediction_strength: float     # How strongly was it predicted?
    formation_surprise: float      # How surprising was its formation?


@dataclass
class PredictionTarget:
    """
    A potential edge that the system predicts should form.
    """
    target_uuid: str               # The bullet this one is pulling toward
    predicted_edge_type: EdgeType
    prediction_strength: float     # How strongly predicted (0-1)
    free_energy_reduction: float   # How much FE would drop if edge formed
    first_predicted: datetime
    prediction_count: int          # How many times re-predicted
```
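To make the identity rule concrete, here is a minimal, self-contained sketch of a bullet keeping its UUID through a content transformation. `MiniBullet` and `new_bullet` are illustrative stand-ins for the full `Bullet` dataclass above, not part of the spec:

```python
import uuid as uuidlib
from dataclasses import dataclass, replace
from datetime import datetime


# Trimmed stand-in for the spec's Bullet (illustrative only).
@dataclass(frozen=True)
class MiniBullet:
    uuid: str
    blanket_id: str
    content: str
    updated_at: datetime


def new_bullet(blanket_id: str, content: str) -> MiniBullet:
    """Mint a bullet with a globally unique identifier."""
    return MiniBullet(uuid=str(uuidlib.uuid4()), blanket_id=blanket_id,
                      content=content, updated_at=datetime.now())


b0 = new_bullet("blanket-a", "free energy is prediction error")
# An update transforms content but must preserve the UUID.
b1 = replace(b0, content="free energy = prediction error + complexity",
             updated_at=datetime.now())

assert b1.uuid == b0.uuid        # identity survives the transformation
assert b1.content != b0.content  # content changed
```

The `frozen=True` + `dataclasses.replace` pattern makes "update = new value, same identity" explicit, which is the invariant `update_bullet` in §2.4 is required to preserve.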
### 2.3 Storage Architecture

```
god_database/
├── bullets/
│   ├── by_uuid/          # Primary index
│   ├── by_blanket/       # Partition by Markov blanket
│   ├── by_created/       # Temporal index
│   └── by_resonance/     # Hot bullets index
├── edges/
│   ├── by_uuid/
│   ├── by_source/
│   ├── by_target/
│   └── by_type/
├── predictions/
│   ├── active/           # Current prediction targets
│   ├── fulfilled/        # Edges that formed as predicted
│   └── expired/          # Predictions that didn't materialize
├── temporal/
│   ├── snapshots/        # Point-in-time graph states
│   └── deltas/           # Changes between snapshots
└── attention/
    ├── sessions/         # Attention logs per session
    └── aggregates/       # Rolled-up attention patterns
```

### 2.4 Core Operations

```python
class GodDatabase:
    """Core database operations"""

    def create_bullet(self, content: str, parent_uuid: Optional[str] = None) -> Bullet:
        """Create a new atomic bullet with UUID"""
        pass

    def get_bullet(self, uuid: str) -> Optional[Bullet]:
        """Retrieve bullet by UUID"""
        pass

    def update_bullet(self, uuid: str, updates: Dict[str, Any]) -> Bullet:
        """Update bullet, preserving UUID through transformation"""
        pass

    def create_edge(self, source: str, target: str, edge_type: EdgeType) -> Edge:
        """Create edge between bullets"""
        pass

    def get_subgraph(self, center_uuid: str, depth: int = 2) -> Graph:
        """Get local graph around a bullet"""
        pass

    def record_attention(self, uuid: str, session_id: str) -> None:
        """Record that attention was paid to a bullet"""
        pass

    def get_predictions(self, uuid: str) -> List[PredictionTarget]:
        """Get current edge predictions for a bullet"""
        pass

    def temporal_query(self, timestamp: datetime) -> Graph:
        """Get graph state at a point in time"""
        pass
```
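A toy in-memory sketch of part of this interface, showing the intended call flow. Dict records stand in for the full dataclasses, and the indexed layout of §2.3, the `Graph` type, and error handling are all elided; nothing here is the real backing store:

```python
import uuid as uuidlib
from collections import defaultdict
from datetime import datetime


class InMemoryGodDatabase:
    """Illustrative in-memory stand-in for the GodDatabase interface."""

    def __init__(self):
        self.bullets = {}                       # uuid -> record
        self.edges_by_source = defaultdict(list)

    def create_bullet(self, content, parent_uuid=None):
        uid = str(uuidlib.uuid4())
        self.bullets[uid] = {
            "uuid": uid, "content": content, "parent_uuid": parent_uuid,
            "created_at": datetime.now(), "access_count": 0,
        }
        return self.bullets[uid]

    def get_bullet(self, uid):
        return self.bullets.get(uid)

    def create_edge(self, source, target, edge_type):
        edge = {"uuid": str(uuidlib.uuid4()), "source_uuid": source,
                "target_uuid": target, "edge_type": edge_type, "weight": 1.0}
        self.edges_by_source[source].append(edge)
        return edge

    def record_attention(self, uid, session_id):
        # Session-level logging (§2.3 attention/sessions/) elided here.
        self.bullets[uid]["access_count"] += 1
        self.bullets[uid]["accessed_at"] = datetime.now()


db = InMemoryGodDatabase()
a = db.create_bullet("gravity wells attract related concepts")
b = db.create_bullet("free energy measures prediction error")
db.create_edge(a["uuid"], b["uuid"], "relates_to")
db.record_attention(a["uuid"], session_id="s1")

assert db.get_bullet(a["uuid"])["access_count"] == 1
assert db.edges_by_source[a["uuid"]][0]["target_uuid"] == b["uuid"]
```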
---

## 3. Gravity Wells & Free Energy Engine

### 3.1 Purpose

Implements Karl Friston's Free Energy Principle for knowledge graphs:
- Predicts which edges should form
- Detects attractor states (gravity wells)
- Identifies phase transitions (aha moments)
- Measures prediction error (nagging)

### 3.2 Core Concepts

```python
@dataclass
class GravityWell:
    """
    An attractor state in the knowledge graph - a concept that pulls
    related ideas toward it.
    """
    center_uuid: str            # The bullet at the center
    strength: float             # How strongly it attracts (0-1)
    radius: int                 # How many hops it influences
    member_uuids: Set[str]      # Bullets currently in this well

    # Dynamics
    growth_rate: float          # Is this well growing or shrinking?
    stability: float            # How stable is this configuration?
    competing_wells: List[str]  # Other wells pulling on same members


@dataclass
class FreeEnergyState:
    """
    The free energy state of a bullet or subgraph.
    """
    entity_uuid: str

    # Energy components
    prediction_error: float   # Surprise from unformed expected edges
    complexity_cost: float    # Cost of current model complexity
    total_free_energy: float  # prediction_error + complexity_cost

    # Gradients
    edge_candidates: List[EdgeCandidate]  # Edges that would reduce FE
    pruning_candidates: List[str]         # Edges that could be removed

    # History
    energy_history: List[Tuple[datetime, float]]  # FE over time


@dataclass
class EdgeCandidate:
    """
    A potential edge that would reduce free energy.
    """
    source_uuid: str
    target_uuid: str
    predicted_type: EdgeType

    # Impact
    free_energy_reduction: float  # How much FE would drop
    cascade_potential: float      # Would this trigger other edges?

    # Confidence
    prediction_confidence: float
    model_agreement: float        # Do multiple models predict this?
```
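A worked numeric instance of the decomposition above, with invented numbers; `compute_free_energy` in §3.3 applies the same arithmetic over real predictions and edges:

```python
# Free energy = prediction error + complexity cost, per FreeEnergyState.

# Unfulfilled predictions contribute strength * (1 - exists):
predictions = [(0.9, False), (0.6, True), (0.5, False)]  # (strength, edge exists?)
prediction_error = sum(s * (1 - int(exists)) for s, exists in predictions)
# 0.9 + 0.0 + 0.5 = 1.4

# Edges that don't explain anything contribute (1 - weight):
non_explanatory_weights = [0.25, 0.75]
complexity_cost = sum(1 - w for w in non_explanatory_weights)
# 0.75 + 0.25 = 1.0

total_free_energy = prediction_error + complexity_cost
print(round(total_free_energy, 2))  # 2.4
```

Note the asymmetry: strong unfulfilled predictions raise the error term, while weak existing edges raise the complexity term, so both missing structure and junk structure push free energy up.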
275 """ 276 entity_uuid: str 277 278 # Energy components 279 prediction_error: float # Surprise from unformed expected edges 280 complexity_cost: float # Cost of current model complexity 281 total_free_energy: float # prediction_error + complexity_cost 282 283 # Gradients 284 edge_candidates: List[EdgeCandidate] # Edges that would reduce FE 285 pruning_candidates: List[str] # Edges that could be removed 286 287 # History 288 energy_history: List[Tuple[datetime, float]] # FE over time 289 290 291 @dataclass 292 class EdgeCandidate: 293 """ 294 A potential edge that would reduce free energy. 295 """ 296 source_uuid: str 297 target_uuid: str 298 predicted_type: EdgeType 299 300 # Impact 301 free_energy_reduction: float # How much FE would drop 302 cascade_potential: float # Would this trigger other edges? 303 304 # Confidence 305 prediction_confidence: float 306 model_agreement: float # Do multiple models predict this? 307 ``` 308 309 ### 3.3 Algorithm: Edge Prediction 310 311 ```python 312 class FreeEnergyEngine: 313 """ 314 Predicts edges and detects gravity wells using free energy minimization. 315 """ 316 317 def __init__(self, god_db: GodDatabase): 318 self.god_db = god_db 319 self.prediction_models = [ 320 StructuralPredictionModel(), # Based on graph structure 321 SemanticPredictionModel(), # Based on content similarity 322 TemporalPredictionModel(), # Based on access patterns 323 AttentionPredictionModel(), # Based on co-attention 324 ] 325 326 def predict_edges(self, bullet_uuid: str, k: int = 10) -> List[EdgeCandidate]: 327 """ 328 Predict the top-k edges most likely to form from this bullet. 329 330 Uses ensemble of models, each looking at different signals. 
331 """ 332 bullet = self.god_db.get_bullet(bullet_uuid) 333 subgraph = self.god_db.get_subgraph(bullet_uuid, depth=3) 334 335 candidates = [] 336 for model in self.prediction_models: 337 model_candidates = model.predict(bullet, subgraph) 338 candidates.extend(model_candidates) 339 340 # Aggregate predictions across models 341 aggregated = self._aggregate_predictions(candidates) 342 343 # Rank by free energy reduction 344 ranked = sorted(aggregated, key=lambda c: c.free_energy_reduction, reverse=True) 345 346 return ranked[:k] 347 348 def compute_free_energy(self, uuid: str) -> FreeEnergyState: 349 """ 350 Compute current free energy state for a bullet. 351 352 Free Energy = Prediction Error + Complexity Cost 353 354 Prediction Error: Sum of unfulfilled edge predictions 355 Complexity Cost: Number of edges relative to explanatory power 356 """ 357 bullet = self.god_db.get_bullet(uuid) 358 predictions = self.god_db.get_predictions(uuid) 359 edges = self.god_db.get_edges_for(uuid) 360 361 # Prediction error: unfulfilled predictions weighted by strength 362 prediction_error = sum( 363 p.prediction_strength * (1 - self._edge_exists(p)) 364 for p in predictions 365 ) 366 367 # Complexity cost: edges that don't reduce prediction error 368 complexity_cost = sum( 369 1 - e.weight for e in edges 370 if not self._edge_reduces_error(e, predictions) 371 ) 372 373 return FreeEnergyState( 374 entity_uuid=uuid, 375 prediction_error=prediction_error, 376 complexity_cost=complexity_cost, 377 total_free_energy=prediction_error + complexity_cost, 378 edge_candidates=self.predict_edges(uuid), 379 pruning_candidates=self._find_prunable_edges(edges, predictions), 380 energy_history=bullet.metadata_tags.get('energy_history', []) 381 ) 382 383 def detect_gravity_wells(self, blanket_id: str) -> List[GravityWell]: 384 """ 385 Detect attractor states in the graph. 386 387 A gravity well is a region where: 388 1. Many bullets have predictions pointing inward 389 2. 
Free energy is locally minimal 390 3. The configuration is stable over time 391 """ 392 bullets = self.god_db.get_bullets_for_blanket(blanket_id) 393 394 # Find bullets with high inward prediction density 395 inward_scores = {} 396 for bullet in bullets: 397 inward = sum( 398 1 for other in bullets 399 if any(p.target_uuid == bullet.uuid for p in other.prediction_targets) 400 ) 401 inward_scores[bullet.uuid] = inward 402 403 # Cluster into wells 404 wells = self._cluster_into_wells(bullets, inward_scores) 405 406 return wells 407 408 def detect_phase_transition(self, uuid: str) -> Optional[PhaseTransition]: 409 """ 410 Detect if an "aha moment" is occurring - a sudden drop in free energy 411 as the graph reorganizes around a new attractor. 412 """ 413 history = self.god_db.get_energy_history(uuid) 414 415 if len(history) < 10: 416 return None 417 418 recent = history[-5:] 419 previous = history[-10:-5] 420 421 recent_avg = sum(e for _, e in recent) / len(recent) 422 previous_avg = sum(e for _, e in previous) / len(previous) 423 424 # Significant drop = phase transition 425 if previous_avg > 0 and (previous_avg - recent_avg) / previous_avg > 0.3: 426 return PhaseTransition( 427 uuid=uuid, 428 timestamp=recent[-1][0], 429 energy_before=previous_avg, 430 energy_after=recent_avg, 431 trigger_edges=self._find_trigger_edges(uuid, recent[0][0]) 432 ) 433 434 return None 435 ``` 436 437 ### 3.4 Algorithm: Nagging Detection 438 439 ```python 440 def detect_nagging(self, blanket_id: str, threshold: float = 0.7) -> List[NaggingItem]: 441 """ 442 Detect items that are "nagging" - unresolved prediction errors 443 that keep resurfacing. 
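The 30% drop rule in `detect_phase_transition` can be exercised in isolation. This sketch uses a synthetic energy history and returns a bare tuple instead of the spec's `PhaseTransition` record:

```python
from datetime import datetime, timedelta


def phase_transition(history, drop_threshold=0.3):
    """Flag an 'aha moment': the mean of the last 5 free-energy samples
    fell more than 30% below the mean of the 5 samples before them."""
    if len(history) < 10:
        return None
    recent_avg = sum(e for _, e in history[-5:]) / 5
    previous_avg = sum(e for _, e in history[-10:-5]) / 5
    if previous_avg > 0 and (previous_avg - recent_avg) / previous_avg > drop_threshold:
        return (previous_avg, recent_avg)
    return None


t0 = datetime(2026, 1, 13)
# Free energy plateaus at 2.0, then collapses to 1.0 as the graph reorganizes.
energies = [2.0] * 5 + [1.0] * 5
history = [(t0 + timedelta(minutes=i), e) for i, e in enumerate(energies)]

print(phase_transition(history))  # (2.0, 1.0) - a 50% drop crosses the threshold
```

A gradual decline spread evenly across both windows would not fire; the rule is deliberately sensitive to *sudden* reorganization, not steady improvement.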
### 3.4 Algorithm: Nagging Detection

```python
# Method of FreeEnergyEngine, shown standalone
def detect_nagging(self, blanket_id: str, threshold: float = 0.7) -> List[NaggingItem]:
    """
    Detect items that are "nagging" - unresolved prediction errors
    that keep resurfacing.

    Nagging = high prediction strength + repeated prediction + no edge formed
    """
    bullets = self.god_db.get_bullets_for_blanket(blanket_id)
    nagging = []

    for bullet in bullets:
        for prediction in bullet.prediction_targets:
            # Check if prediction is persistent and unfulfilled
            if (prediction.prediction_strength > threshold and
                    prediction.prediction_count > 3 and
                    not self._edge_exists(prediction)):

                nagging.append(NaggingItem(
                    source_uuid=bullet.uuid,
                    target_uuid=prediction.target_uuid,
                    strength=prediction.prediction_strength,
                    duration=datetime.now() - prediction.first_predicted,
                    prediction_count=prediction.prediction_count
                ))

    return sorted(nagging, key=lambda n: n.strength * n.prediction_count, reverse=True)
```
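The ranking rule rewards repetition as much as strength. A small demonstration, using a trimmed stand-in for the spec's `NaggingItem`:

```python
from dataclasses import dataclass


# Minimal stand-in for the spec's NaggingItem (illustrative).
@dataclass
class Nag:
    target_uuid: str
    strength: float
    prediction_count: int


nags = [
    Nag("uuid-a", strength=0.9, prediction_count=4),   # 0.9 * 4 = 3.6
    Nag("uuid-b", strength=0.75, prediction_count=6),  # 0.75 * 6 = 4.5
]
# Same ranking rule as detect_nagging: strength x repetition.
ranked = sorted(nags, key=lambda n: n.strength * n.prediction_count, reverse=True)
print([n.target_uuid for n in ranked])  # ['uuid-b', 'uuid-a']
```

A weaker prediction that keeps coming back outranks a stronger one-off, which matches the intuition behind "nagging."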
483 """ 484 # Temporal 485 recency: float # How recently accessed (0-1) 486 temporal_pattern: float # Matches time-of-day/week patterns (0-1) 487 488 # Structural 489 gravity_well_proximity: float # Near a strong attractor (0-1) 490 prediction_involvement: float # Part of active predictions (0-1) 491 edge_density: float # Well-connected (0-1) 492 493 # Contextual 494 altitude_match: float # Matches current operator altitude (0-1) 495 topic_relevance: float # Related to current focus (0-1) 496 497 # Historical 498 access_frequency: float # How often accessed historically (0-1) 499 importance_markers: float # Tagged as decision/principle (0-1) 500 501 # Operator-specific 502 explicit_flags: float # Manually marked important (0-1) 503 nagging_score: float # Unresolved prediction error (0-1) 504 505 506 @dataclass 507 class ResonanceScore: 508 """ 509 Computed resonance for a bullet. 510 """ 511 uuid: str 512 score: float # Overall resonance (0-1) 513 factors: ResonanceFactors # Breakdown 514 computed_at: datetime 515 context: ResonanceContext # What context was used 516 ``` 517 518 ### 4.3 Scoring Algorithm 519 520 ```python 521 class ResonanceScoringEngine: 522 """ 523 Computes resonance scores for bullets based on multiple factors. 
524 """ 525 526 def __init__( 527 self, 528 god_db: GodDatabase, 529 free_energy_engine: FreeEnergyEngine, 530 default_weights: Optional[Dict[str, float]] = None 531 ): 532 self.god_db = god_db 533 self.fe_engine = free_energy_engine 534 535 # Default weights - can be tuned per operator 536 self.weights = default_weights or { 537 'recency': 0.15, 538 'temporal_pattern': 0.05, 539 'gravity_well_proximity': 0.20, 540 'prediction_involvement': 0.15, 541 'edge_density': 0.05, 542 'altitude_match': 0.10, 543 'topic_relevance': 0.10, 544 'access_frequency': 0.05, 545 'importance_markers': 0.05, 546 'explicit_flags': 0.05, 547 'nagging_score': 0.05, 548 } 549 550 def compute_resonance( 551 self, 552 uuid: str, 553 context: ResonanceContext 554 ) -> ResonanceScore: 555 """ 556 Compute resonance score for a bullet in the current context. 557 """ 558 bullet = self.god_db.get_bullet(uuid) 559 560 factors = ResonanceFactors( 561 recency=self._compute_recency(bullet), 562 temporal_pattern=self._compute_temporal_pattern(bullet, context), 563 gravity_well_proximity=self._compute_gravity_proximity(bullet), 564 prediction_involvement=self._compute_prediction_involvement(bullet), 565 edge_density=self._compute_edge_density(bullet), 566 altitude_match=self._compute_altitude_match(bullet, context), 567 topic_relevance=self._compute_topic_relevance(bullet, context), 568 access_frequency=self._compute_access_frequency(bullet), 569 importance_markers=self._compute_importance_markers(bullet), 570 explicit_flags=self._compute_explicit_flags(bullet), 571 nagging_score=self._compute_nagging_score(bullet), 572 ) 573 574 # Weighted sum 575 score = sum( 576 getattr(factors, factor) * weight 577 for factor, weight in self.weights.items() 578 ) 579 580 return ResonanceScore( 581 uuid=uuid, 582 score=min(1.0, score), # Clamp to [0, 1] 583 factors=factors, 584 computed_at=datetime.now(), 585 context=context 586 ) 587 588 def get_top_resonant( 589 self, 590 blanket_id: str, 591 context: 
ResonanceContext, 592 k: int = 20 593 ) -> List[ResonanceScore]: 594 """ 595 Get the top-k most resonant bullets in current context. 596 """ 597 bullets = self.god_db.get_bullets_for_blanket(blanket_id) 598 599 scores = [ 600 self.compute_resonance(b.uuid, context) 601 for b in bullets 602 ] 603 604 return sorted(scores, key=lambda s: s.score, reverse=True)[:k] 605 606 def _compute_gravity_proximity(self, bullet: Bullet) -> float: 607 """ 608 How close is this bullet to a gravity well? 609 """ 610 wells = self.fe_engine.detect_gravity_wells(bullet.blanket_id) 611 612 if not wells: 613 return 0.0 614 615 # Find minimum distance to any well center 616 min_distance = float('inf') 617 max_strength = 0.0 618 619 for well in wells: 620 if bullet.uuid in well.member_uuids: 621 # Inside a well 622 return well.strength 623 624 distance = self._graph_distance(bullet.uuid, well.center_uuid) 625 if distance < min_distance: 626 min_distance = distance 627 max_strength = well.strength 628 629 # Decay with distance 630 if min_distance == float('inf'): 631 return 0.0 632 633 return max_strength * (0.5 ** min_distance) 634 635 def _compute_nagging_score(self, bullet: Bullet) -> float: 636 """ 637 Is this bullet involved in unresolved prediction errors? 638 """ 639 nagging = self.fe_engine.detect_nagging(bullet.blanket_id) 640 641 for item in nagging: 642 if item.source_uuid == bullet.uuid or item.target_uuid == bullet.uuid: 643 return min(1.0, item.strength * 0.5 + item.prediction_count * 0.1) 644 645 return 0.0 646 ``` 647 648 ### 4.4 Weight Tuning 649 650 ```python 651 class ResonanceWeightTuner: 652 """ 653 Learns optimal weights from operator behavior. 
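The exponential decay used by `_compute_gravity_proximity` halves a well's pull with each hop from its center. A quick sketch of the curve (well strength invented):

```python
def gravity_proximity(strength: float, distance: int) -> float:
    """Half-per-hop decay from a well center: strength * 0.5 ** distance."""
    return strength * (0.5 ** distance)


# A well of strength 0.8 loses half its pull per hop from the center:
for hops in range(4):
    print(hops, gravity_proximity(0.8, hops))
# 0 -> 0.8, 1 -> 0.4, 2 -> 0.2, 3 -> 0.1
```

The steep falloff means a bullet two or three hops out is barely influenced, keeping wells local rather than letting one strong attractor dominate the whole blanket's scores.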
654 """ 655 656 def __init__(self, scoring_engine: ResonanceScoringEngine): 657 self.engine = scoring_engine 658 self.observations: List[WeightObservation] = [] 659 660 def record_observation( 661 self, 662 presented_bullets: List[str], 663 selected_bullet: str, 664 context: ResonanceContext 665 ): 666 """ 667 Record when operator selects from presented options. 668 669 The delta between our ranking and their choice is learning signal. 670 """ 671 scores = [ 672 self.engine.compute_resonance(uuid, context) 673 for uuid in presented_bullets 674 ] 675 676 our_ranking = [s.uuid for s in sorted(scores, key=lambda s: s.score, reverse=True)] 677 their_choice_rank = our_ranking.index(selected_bullet) if selected_bullet in our_ranking else -1 678 679 self.observations.append(WeightObservation( 680 presented=presented_bullets, 681 selected=selected_bullet, 682 our_ranking=our_ranking, 683 their_rank=their_choice_rank, 684 context=context, 685 scores=scores, 686 timestamp=datetime.now() 687 )) 688 689 def update_weights(self, learning_rate: float = 0.1): 690 """ 691 Adjust weights based on accumulated observations. 692 693 Increase weight of factors that predicted operator choices. 694 Decrease weight of factors that mispredicted. 695 """ 696 if len(self.observations) < 10: 697 return # Need enough data 698 699 # Compute factor-level accuracy 700 factor_accuracy = defaultdict(list) 701 702 for obs in self.observations[-100:]: # Last 100 observations 703 selected_score = next(s for s in obs.scores if s.uuid == obs.selected) 704 705 for factor_name in self.engine.weights.keys(): 706 factor_value = getattr(selected_score.factors, factor_name) 707 708 # Did high factor value correlate with selection? 
709 factor_rank = sum( 710 1 for s in obs.scores 711 if getattr(s.factors, factor_name) > factor_value 712 ) 713 714 # Lower rank = better prediction 715 accuracy = 1 - (factor_rank / len(obs.scores)) 716 factor_accuracy[factor_name].append(accuracy) 717 718 # Update weights proportional to accuracy 719 for factor_name, accuracies in factor_accuracy.items(): 720 avg_accuracy = sum(accuracies) / len(accuracies) 721 current_weight = self.engine.weights[factor_name] 722 723 # Adjust toward accuracy, constrained 724 target = avg_accuracy 725 new_weight = current_weight + learning_rate * (target - current_weight) 726 727 self.engine.weights[factor_name] = max(0.01, min(0.5, new_weight)) 728 729 # Renormalize weights to sum to 1 730 total = sum(self.engine.weights.values()) 731 self.engine.weights = {k: v / total for k, v in self.engine.weights.items()} 732 ``` 733 734 --- 735 736 ## 5. Complementary Altitude System 737 738 ### 5.1 Purpose 739 740 Detects the operator's current cognitive altitude (abstract ↔ concrete) and adjusts system behavior to complement rather than mirror. 741 742 ### 5.2 Altitude Model 743 744 ```python 745 class Altitude(Enum): 746 """ 747 Cognitive altitude levels. 748 """ 749 PHILOSOPHICAL = "philosophical" # Highest abstraction, principles, values 750 STRATEGIC = "strategic" # Patterns, plans, frameworks 751 TACTICAL = "tactical" # Methods, approaches, decisions 752 OPERATIONAL = "operational" # Tasks, actions, concrete steps 753 754 755 @dataclass 756 class AltitudeState: 757 """ 758 Current altitude assessment for operator. 759 """ 760 current: Altitude 761 confidence: float # How confident in assessment (0-1) 762 trajectory: str # "ascending", "descending", "stable" 763 recent_history: List[Tuple[datetime, Altitude]] 764 765 # Signals used 766 signals: AltitudeSignals 767 768 769 @dataclass 770 class AltitudeSignals: 771 """ 772 Signals used to detect altitude. 
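One tuning step extracted as a pure function, with a two-factor toy (accuracy values invented) to show the move-clamp-renormalize sequence from `update_weights`:

```python
def step_weights(weights, accuracy, learning_rate=0.1):
    """One tuning step as in update_weights: move each weight toward its
    factor's observed accuracy, clamp to [0.01, 0.5], then renormalize."""
    updated = {}
    for name, w in weights.items():
        nw = w + learning_rate * (accuracy[name] - w)
        updated[name] = max(0.01, min(0.5, nw))
    total = sum(updated.values())
    return {k: v / total for k, v in updated.items()}


weights = {'recency': 0.5, 'nagging_score': 0.5}
# Nagging predicted the operator's choices well (0.9); recency did not (0.2).
accuracy = {'recency': 0.2, 'nagging_score': 0.9}

new = step_weights(weights, accuracy)
assert abs(sum(new.values()) - 1.0) < 1e-9    # still a valid weighting
assert new['nagging_score'] > new['recency']  # mass shifted toward nagging
```

The clamp keeps any single factor from taking over (cap 0.5) or dying out entirely (floor 0.01), and the renormalization preserves the weighted-sum interpretation in §4.3.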
773 """ 774 vocabulary_abstraction: float # Abstract vs concrete word usage 775 reference_scope: float # Wide vs narrow references 776 temporal_scope: float # Long-term vs immediate focus 777 question_type: float # Why/what-if vs how/when 778 topic_breadth: float # Cross-domain vs single-domain 779 ``` 780 781 ### 5.3 Detection Algorithm 782 783 ```python 784 class AltitudeDetector: 785 """ 786 Detects operator's current cognitive altitude from interaction signals. 787 """ 788 789 def __init__(self): 790 self.altitude_keywords = { 791 Altitude.PHILOSOPHICAL: [ 792 'principle', 'value', 'meaning', 'purpose', 'truth', 793 'philosophy', 'fundamental', 'essence', 'nature', 'being' 794 ], 795 Altitude.STRATEGIC: [ 796 'pattern', 'strategy', 'framework', 'model', 'architecture', 797 'system', 'approach', 'design', 'structure', 'plan' 798 ], 799 Altitude.TACTICAL: [ 800 'method', 'technique', 'decision', 'option', 'tradeoff', 801 'choose', 'compare', 'evaluate', 'consider', 'analyze' 802 ], 803 Altitude.OPERATIONAL: [ 804 'task', 'step', 'action', 'do', 'implement', 'execute', 805 'fix', 'build', 'create', 'write', 'run' 806 ] 807 } 808 809 self.history: List[Tuple[datetime, Altitude]] = [] 810 811 def detect(self, message: str, context: SessionContext) -> AltitudeState: 812 """ 813 Detect altitude from message content and context. 
814 """ 815 signals = self._extract_signals(message, context) 816 817 # Keyword-based scoring 818 keyword_scores = {} 819 words = message.lower().split() 820 821 for altitude, keywords in self.altitude_keywords.items(): 822 score = sum(1 for w in words if w in keywords) 823 keyword_scores[altitude] = score 824 825 # Normalize 826 total = sum(keyword_scores.values()) or 1 827 keyword_scores = {k: v / total for k, v in keyword_scores.items()} 828 829 # Combine with other signals 830 combined_scores = { 831 Altitude.PHILOSOPHICAL: ( 832 keyword_scores.get(Altitude.PHILOSOPHICAL, 0) * 0.3 + 833 signals.vocabulary_abstraction * 0.2 + 834 signals.temporal_scope * 0.2 + 835 signals.topic_breadth * 0.3 836 ), 837 Altitude.STRATEGIC: ( 838 keyword_scores.get(Altitude.STRATEGIC, 0) * 0.4 + 839 signals.reference_scope * 0.3 + 840 signals.question_type * 0.3 841 ), 842 Altitude.TACTICAL: ( 843 keyword_scores.get(Altitude.TACTICAL, 0) * 0.5 + 844 (1 - signals.temporal_scope) * 0.25 + 845 signals.question_type * 0.25 846 ), 847 Altitude.OPERATIONAL: ( 848 keyword_scores.get(Altitude.OPERATIONAL, 0) * 0.4 + 849 (1 - signals.vocabulary_abstraction) * 0.3 + 850 (1 - signals.topic_breadth) * 0.3 851 ), 852 } 853 854 # Determine current altitude 855 current = max(combined_scores.items(), key=lambda x: x[1]) 856 857 # Determine trajectory 858 trajectory = self._compute_trajectory() 859 860 self.history.append((datetime.now(), current[0])) 861 862 return AltitudeState( 863 current=current[0], 864 confidence=current[1], 865 trajectory=trajectory, 866 recent_history=self.history[-10:], 867 signals=signals 868 ) 869 870 def _extract_signals(self, message: str, context: SessionContext) -> AltitudeSignals: 871 """ 872 Extract altitude signals from message. 
873 """ 874 words = message.split() 875 876 # Vocabulary abstraction (longer words tend to be more abstract) 877 avg_word_length = sum(len(w) for w in words) / len(words) if words else 0 878 vocabulary_abstraction = min(1.0, (avg_word_length - 4) / 6) 879 880 # Reference scope (how many different topics mentioned) 881 # This would use topic detection in production 882 reference_scope = 0.5 # Placeholder 883 884 # Temporal scope (future/long-term vs present/immediate) 885 future_words = ['will', 'future', 'eventually', 'long-term', 'years'] 886 present_words = ['now', 'today', 'immediately', 'current', 'this'] 887 future_count = sum(1 for w in words if w.lower() in future_words) 888 present_count = sum(1 for w in words if w.lower() in present_words) 889 temporal_scope = future_count / (future_count + present_count + 1) 890 891 # Question type 892 why_count = message.lower().count('why') + message.lower().count('what if') 893 how_count = message.lower().count('how') + message.lower().count('when') 894 question_type = why_count / (why_count + how_count + 1) 895 896 # Topic breadth 897 topic_breadth = reference_scope # Simplified 898 899 return AltitudeSignals( 900 vocabulary_abstraction=vocabulary_abstraction, 901 reference_scope=reference_scope, 902 temporal_scope=temporal_scope, 903 question_type=question_type, 904 topic_breadth=topic_breadth 905 ) 906 907 def _compute_trajectory(self) -> str: 908 """ 909 Determine if operator is ascending, descending, or stable in altitude. 
910 """ 911 if len(self.history) < 3: 912 return "stable" 913 914 recent = [alt for _, alt in self.history[-5:]] 915 altitude_values = { 916 Altitude.OPERATIONAL: 1, 917 Altitude.TACTICAL: 2, 918 Altitude.STRATEGIC: 3, 919 Altitude.PHILOSOPHICAL: 4 920 } 921 922 values = [altitude_values[a] for a in recent] 923 924 if values[-1] > values[0] + 0.5: 925 return "ascending" 926 elif values[-1] < values[0] - 0.5: 927 return "descending" 928 else: 929 return "stable" 930 ``` 931 932 ### 5.4 Complementary Response 933 934 ```python 935 class ComplementaryAltitudeSystem: 936 """ 937 Adjusts system behavior to complement operator altitude. 938 """ 939 940 def __init__( 941 self, 942 detector: AltitudeDetector, 943 resonance_engine: ResonanceScoringEngine 944 ): 945 self.detector = detector 946 self.resonance = resonance_engine 947 948 def get_complement_altitude(self, operator_altitude: Altitude) -> Altitude: 949 """ 950 Determine the altitude the system should operate at. 951 952 Key principle: Complement, don't mirror. 953 """ 954 complement_map = { 955 Altitude.PHILOSOPHICAL: Altitude.STRATEGIC, # Ground it 956 Altitude.STRATEGIC: Altitude.STRATEGIC, # Stay level 957 Altitude.TACTICAL: Altitude.STRATEGIC, # Lift it 958 Altitude.OPERATIONAL: Altitude.TACTICAL, # Lift it 959 } 960 return complement_map[operator_altitude] 961 962 def adjust_resonance_weights( 963 self, 964 operator_state: AltitudeState 965 ) -> Dict[str, float]: 966 """ 967 Adjust resonance scoring weights based on altitude. 
968 """ 969 base_weights = self.resonance.weights.copy() 970 971 if operator_state.current == Altitude.PHILOSOPHICAL: 972 # Operator is abstract - emphasize concrete connections 973 base_weights['gravity_well_proximity'] *= 1.5 # Find anchors 974 base_weights['edge_density'] *= 1.3 # Well-connected items 975 base_weights['importance_markers'] *= 0.8 # Less about marked items 976 977 elif operator_state.current == Altitude.OPERATIONAL: 978 # Operator is concrete - surface patterns 979 base_weights['prediction_involvement'] *= 1.5 # Active connections 980 base_weights['topic_relevance'] *= 0.8 # Broaden scope 981 base_weights['nagging_score'] *= 1.3 # Unresolved patterns 982 983 # Normalize 984 total = sum(base_weights.values()) 985 return {k: v / total for k, v in base_weights.items()} 986 987 def generate_complementary_prompt( 988 self, 989 operator_state: AltitudeState, 990 context: SessionContext 991 ) -> str: 992 """ 993 Generate a prompt adjustment for the AI based on altitude. 994 """ 995 complement = self.get_complement_altitude(operator_state.current) 996 997 prompts = { 998 Altitude.PHILOSOPHICAL: """ 999 The operator is thinking abstractly. Help by: 1000 - Providing concrete examples of their principles 1001 - Connecting to specific past decisions 1002 - Grounding in implementation reality 1003 """, 1004 Altitude.STRATEGIC: """ 1005 Operate at the strategic level. Help by: 1006 - Identifying patterns across examples 1007 - Connecting to frameworks and models 1008 - Balancing abstraction and concreteness 1009 """, 1010 Altitude.TACTICAL: """ 1011 Operate at the tactical level. Help by: 1012 - Surfacing decision points 1013 - Comparing options and tradeoffs 1014 - Connecting to broader strategy when relevant 1015 """, 1016 Altitude.OPERATIONAL: """ 1017 The operator needs tactical lift. 
Help by: 1018 - Identifying patterns in their tasks 1019 - Connecting to relevant methods 1020 - Surfacing when operational work implies strategic decisions 1021 """ 1022 } 1023 1024 return prompts[complement].strip() 1025 ``` 1026 1027 --- 1028 1029 ## 6. Flight Protocol / OODA Integration 1030 1031 ### 6.1 Purpose 1032 1033 Manages cognitive cycles through the full flight protocol (fly high → retain → land → birth → safety), integrating with OODA loop seeding and thread retention. 1034 1035 ### 6.2 Session State Model 1036 1037 ```python 1038 class FlightPhase(Enum): 1039 """ 1040 Phases of the cognitive flight protocol. 1041 """ 1042 FLY_HIGH = "fly_high" # Exploring, making leaps, abstract 1043 RETAIN = "retain" # Holding threads, maintaining context 1044 LAND = "land" # Consolidating, simplifying 1045 BIRTH = "birth" # Creating artifacts 1046 SAFETY = "safety" # Rest, archive, prepare for next 1047 1048 1049 @dataclass 1050 class Thread: 1051 """ 1052 A thread of thinking that needs to be retained. 1053 """ 1054 thread_id: str 1055 name: str 1056 bullet_uuids: List[str] # Bullets in this thread 1057 created_at: datetime 1058 last_active: datetime 1059 1060 # State 1061 status: str # active, paused, completed, abandoned 1062 nagging_level: float # How much is this pulling for attention 1063 1064 # Context 1065 altitude_when_started: Altitude 1066 key_insights: List[str] 1067 open_questions: List[str] 1068 1069 1070 @dataclass 1071 class SessionState: 1072 """ 1073 Complete state of a cognitive session. 
1074 """ 1075 session_id: str 1076 blanket_id: str 1077 started_at: datetime 1078 1079 # Flight state 1080 current_phase: FlightPhase 1081 phase_started_at: datetime 1082 1083 # Threads 1084 active_threads: List[Thread] 1085 paused_threads: List[Thread] 1086 1087 # OODA state 1088 current_ooda_loop: int 1089 loops_completed: int 1090 1091 # Resonance 1092 current_resonance_level: float # How "in flow" is the operator 1093 resonance_history: List[Tuple[datetime, float]] 1094 1095 # Altitude 1096 altitude_state: AltitudeState 1097 1098 # Artifacts produced 1099 artifacts: List[Artifact] 1100 1101 1102 @dataclass 1103 class Artifact: 1104 """ 1105 Something produced during the BIRTH phase. 1106 """ 1107 artifact_id: str 1108 artifact_type: str # document, code, decision, roadmap 1109 bullet_uuids: List[str] # Source bullets 1110 created_at: datetime 1111 content_ref: str # Reference to actual artifact 1112 ``` 1113 1114 ### 6.3 Flight Protocol Engine 1115 1116 ```python 1117 class FlightProtocolEngine: 1118 """ 1119 Manages the cognitive flight protocol and OODA loop integration. 1120 """ 1121 1122 def __init__( 1123 self, 1124 god_db: GodDatabase, 1125 altitude_system: ComplementaryAltitudeSystem, 1126 resonance_engine: ResonanceScoringEngine 1127 ): 1128 self.god_db = god_db 1129 self.altitude = altitude_system 1130 self.resonance = resonance_engine 1131 self.sessions: Dict[str, SessionState] = {} 1132 1133 def start_session(self, blanket_id: str) -> SessionState: 1134 """ 1135 Initialize a new cognitive session. 
1136 """ 1137 session = SessionState( 1138 session_id=str(uuid.uuid4()), 1139 blanket_id=blanket_id, 1140 started_at=datetime.now(), 1141 current_phase=FlightPhase.FLY_HIGH, 1142 phase_started_at=datetime.now(), 1143 active_threads=[], 1144 paused_threads=[], 1145 current_ooda_loop=1, 1146 loops_completed=0, 1147 current_resonance_level=0.5, 1148 resonance_history=[], 1149 altitude_state=None, 1150 artifacts=[] 1151 ) 1152 1153 self.sessions[session.session_id] = session 1154 return session 1155 1156 def update_phase(self, session_id: str, message: str, context: dict) -> SessionState: 1157 """ 1158 Update session state based on new message. 1159 1160 Detects phase transitions and manages thread retention. 1161 """ 1162 session = self.sessions[session_id] 1163 1164 # Update altitude 1165 session.altitude_state = self.altitude.detector.detect(message, context) 1166 1167 # Detect phase from signals 1168 new_phase = self._detect_phase(session, message) 1169 1170 if new_phase != session.current_phase: 1171 self._handle_phase_transition(session, new_phase) 1172 1173 # Update resonance 1174 resonance = self._compute_session_resonance(session, message) 1175 session.current_resonance_level = resonance 1176 session.resonance_history.append((datetime.now(), resonance)) 1177 1178 # Update threads 1179 self._update_threads(session, message) 1180 1181 return session 1182 1183 def _detect_phase(self, session: SessionState, message: str) -> FlightPhase: 1184 """ 1185 Detect current flight phase from message content. 
1186 """ 1187 # Keywords and patterns for each phase 1188 phase_signals = { 1189 FlightPhase.FLY_HIGH: [ 1190 'explore', 'what if', 'imagine', 'could we', 'brainstorm', 1191 'let\'s think about', 'I wonder', 'big picture' 1192 ], 1193 FlightPhase.RETAIN: [ 1194 'hold that', 'don\'t forget', 'keep in mind', 'also', 1195 'another thread', 'meanwhile', 'parking' 1196 ], 1197 FlightPhase.LAND: [ 1198 'summarize', 'consolidate', 'so basically', 'in summary', 1199 'the key points', 'let\'s land', 'wrap up' 1200 ], 1201 FlightPhase.BIRTH: [ 1202 'create', 'write', 'implement', 'build', 'draft', 1203 'let\'s make', 'produce', 'output' 1204 ], 1205 FlightPhase.SAFETY: [ 1206 'done', 'finished', 'that\'s it', 'good stopping point', 1207 'let\'s pause', 'save', 'archive' 1208 ] 1209 } 1210 1211 message_lower = message.lower() 1212 scores = {} 1213 1214 for phase, keywords in phase_signals.items(): 1215 score = sum(1 for kw in keywords if kw in message_lower) 1216 scores[phase] = score 1217 1218 if max(scores.values()) > 0: 1219 return max(scores.items(), key=lambda x: x[1])[0] 1220 1221 return session.current_phase 1222 1223 def _handle_phase_transition(self, session: SessionState, new_phase: FlightPhase): 1224 """ 1225 Handle transition between flight phases. 
1226 """ 1227 old_phase = session.current_phase 1228 1229 # Phase exit actions 1230 if old_phase == FlightPhase.FLY_HIGH: 1231 # Capture explored ideas as threads 1232 self._crystallize_exploration(session) 1233 1234 elif old_phase == FlightPhase.LAND: 1235 # Prepare for artifact creation 1236 self._prepare_for_birth(session) 1237 1238 # Phase entry actions 1239 if new_phase == FlightPhase.SAFETY: 1240 # Archive session state 1241 self._archive_session(session) 1242 session.loops_completed += 1 1243 1244 elif new_phase == FlightPhase.FLY_HIGH: 1245 # Starting new loop 1246 session.current_ooda_loop += 1 1247 1248 session.current_phase = new_phase 1249 session.phase_started_at = datetime.now() 1250 1251 def seed_ooda_loop(self, session_id: str) -> OODASeed: 1252 """ 1253 Seed the next OODA loop iteration. 1254 1255 Key insight: Ask "How are you feeling?" not "What's next?" 1256 """ 1257 session = self.sessions[session_id] 1258 1259 # Gather resonance-based context 1260 top_resonant = self.resonance.get_top_resonant( 1261 session.blanket_id, 1262 self._build_context(session), 1263 k=10 1264 ) 1265 1266 # Identify nagging threads 1267 nagging_threads = [ 1268 t for t in session.paused_threads 1269 if t.nagging_level > 0.5 1270 ] 1271 1272 # Compute recommended focus areas 1273 recommendations = self._compute_recommendations(session, top_resonant) 1274 1275 return OODASeed( 1276 session_id=session_id, 1277 loop_number=session.current_ooda_loop, 1278 resonance_level=session.current_resonance_level, 1279 top_resonant_bullets=[r.uuid for r in top_resonant[:5]], 1280 nagging_threads=nagging_threads, 1281 recommendations=recommendations, 1282 altitude_state=session.altitude_state, 1283 seeded_at=datetime.now() 1284 ) 1285 1286 def _update_threads(self, session: SessionState, message: str): 1287 """ 1288 Update thread states based on current activity. 
1289 """ 1290 # Detect new thread creation 1291 if 'new thread' in message.lower() or 'another topic' in message.lower(): 1292 thread = Thread( 1293 thread_id=str(uuid.uuid4()), 1294 name=self._extract_thread_name(message), 1295 bullet_uuids=[], 1296 created_at=datetime.now(), 1297 last_active=datetime.now(), 1298 status='active', 1299 nagging_level=0.0, 1300 altitude_when_started=session.altitude_state.current, 1301 key_insights=[], 1302 open_questions=[] 1303 ) 1304 session.active_threads.append(thread) 1305 1306 # Update activity on active threads 1307 for thread in session.active_threads: 1308 if self._message_relates_to_thread(message, thread): 1309 thread.last_active = datetime.now() 1310 1311 # Move inactive threads to paused 1312 for thread in session.active_threads[:]: 1313 if (datetime.now() - thread.last_active).seconds > 300: # 5 min inactive 1314 thread.status = 'paused' 1315 thread.nagging_level = 0.3 # Start nagging 1316 session.paused_threads.append(thread) 1317 session.active_threads.remove(thread) 1318 1319 # Increase nagging on paused threads 1320 for thread in session.paused_threads: 1321 thread.nagging_level = min(1.0, thread.nagging_level + 0.05) 1322 ``` 1323 1324 --- 1325 1326 ## 7. Cooperative Eye / Attention Tracking 1327 1328 ### 7.1 Purpose 1329 1330 Tracks what the operator is attending to and generates signals for the flock-level beacon system. Implements the "cooperative eye" principle where the system follows and complements operator attention. 1331 1332 ### 7.2 Attention Model 1333 1334 ```python 1335 @dataclass 1336 class AttentionEvent: 1337 """ 1338 A single attention event - operator focused on something. 
1339 """ 1340 event_id: str 1341 timestamp: datetime 1342 1343 # What was attended 1344 bullet_uuid: Optional[str] # If attending to a specific bullet 1345 topic: Optional[str] # If attending to a topic area 1346 external_ref: Optional[str] # If attending to something outside graph 1347 1348 # How it was attended 1349 modality: str # read, write, search, navigate, mention 1350 duration_ms: int # How long attention was held 1351 intensity: float # Deep focus vs. scan (0-1) 1352 1353 # Context 1354 session_id: str 1355 altitude: Altitude 1356 phase: FlightPhase 1357 1358 1359 @dataclass 1360 class AttentionState: 1361 """ 1362 Current attention state for an operator. 1363 """ 1364 # Current focus 1365 primary_focus: Optional[str] # Main thing being attended 1366 secondary_foci: List[str] # Peripheral attention 1367 1368 # Trajectory 1369 attention_trajectory: List[str] # Recent attention path 1370 predicted_next: List[str] # Where attention might go 1371 1372 # Aggregates 1373 session_attention_distribution: Dict[str, float] # Bullet -> attention share 1374 topic_attention_distribution: Dict[str, float] # Topic -> attention share 1375 1376 1377 @dataclass 1378 class BeaconSignal: 1379 """ 1380 A signal for the flock-level beacon system. 1381 1382 Says "I have something relevant" without revealing content. 1383 """ 1384 blanket_id: str # Who is signaling (anonymized) 1385 topic_hash: str # What topic (hashed) 1386 relevance_strength: float # How relevant (0-1) 1387 availability: bool # Open to exchange? 1388 timestamp: datetime 1389 ``` 1390 1391 ### 7.3 Attention Tracker 1392 1393 ```python 1394 class AttentionTracker: 1395 """ 1396 Tracks operator attention and generates patterns. 
1397 """ 1398 1399 def __init__(self, god_db: GodDatabase): 1400 self.god_db = god_db 1401 self.events: List[AttentionEvent] = [] 1402 self.current_state: Optional[AttentionState] = None 1403 1404 def record_attention( 1405 self, 1406 bullet_uuid: Optional[str] = None, 1407 topic: Optional[str] = None, 1408 modality: str = "read", 1409 duration_ms: int = 1000, 1410 context: dict = None 1411 ) -> AttentionEvent: 1412 """ 1413 Record an attention event. 1414 """ 1415 event = AttentionEvent( 1416 event_id=str(uuid.uuid4()), 1417 timestamp=datetime.now(), 1418 bullet_uuid=bullet_uuid, 1419 topic=topic, 1420 external_ref=None, 1421 modality=modality, 1422 duration_ms=duration_ms, 1423 intensity=self._compute_intensity(duration_ms, modality), 1424 session_id=context.get('session_id') if context else None, 1425 altitude=context.get('altitude') if context else Altitude.TACTICAL, 1426 phase=context.get('phase') if context else FlightPhase.FLY_HIGH 1427 ) 1428 1429 self.events.append(event) 1430 self._update_state() 1431 1432 # Update bullet access metadata 1433 if bullet_uuid: 1434 self.god_db.record_attention(bullet_uuid, event.session_id) 1435 1436 return event 1437 1438 def get_current_state(self) -> AttentionState: 1439 """ 1440 Get current attention state. 1441 """ 1442 return self.current_state 1443 1444 def predict_next_attention(self, k: int = 5) -> List[Tuple[str, float]]: 1445 """ 1446 Predict where attention might go next. 

        Uses:
        - Recent attention trajectory
        - Graph structure (connected bullets)
        - Gravity wells (attractors)
        - Nagging items (unresolved predictions)
        """
        if not self.events:
            return []

        recent = self.events[-10:]
        recent_uuids = [e.bullet_uuid for e in recent if e.bullet_uuid]

        candidates = {}

        # Graph neighbors (note: the loop variable must not shadow the `uuid` module)
        for bullet_uuid in recent_uuids:
            neighbors = self.god_db.get_neighbors(bullet_uuid)
            for neighbor in neighbors:
                if neighbor not in recent_uuids:
                    candidates[neighbor] = candidates.get(neighbor, 0) + 0.3

        # Prediction targets
        for bullet_uuid in recent_uuids:
            predictions = self.god_db.get_predictions(bullet_uuid)
            for pred in predictions:
                if pred.target_uuid not in recent_uuids:
                    candidates[pred.target_uuid] = (
                        candidates.get(pred.target_uuid, 0) +
                        pred.prediction_strength * 0.5
                    )

        # Sort and return top k
        sorted_candidates = sorted(
            candidates.items(),
            key=lambda x: x[1],
            reverse=True
        )

        return sorted_candidates[:k]

    def generate_beacon_signal(
        self,
        blanket_id: str,
        topic: str,
        anonymize: bool = True
    ) -> BeaconSignal:
        """
        Generate a beacon signal for the flock.

        This says "I have relevant content on this topic" without
        revealing what that content is.
1499 """ 1500 # Compute relevance from attention data 1501 topic_events = [ 1502 e for e in self.events 1503 if e.topic == topic or self._topic_matches(e, topic) 1504 ] 1505 1506 if not topic_events: 1507 relevance = 0.0 1508 else: 1509 total_attention = sum(e.duration_ms * e.intensity for e in topic_events) 1510 relevance = min(1.0, total_attention / 60000) # Normalize to 1 min 1511 1512 # Hash topic if anonymizing 1513 topic_hash = hashlib.sha256(topic.encode()).hexdigest()[:16] if anonymize else topic 1514 1515 return BeaconSignal( 1516 blanket_id=blanket_id if not anonymize else self._anonymize_blanket(blanket_id), 1517 topic_hash=topic_hash, 1518 relevance_strength=relevance, 1519 availability=True, # Could be configurable 1520 timestamp=datetime.now() 1521 ) 1522 1523 def _update_state(self): 1524 """ 1525 Update current attention state from events. 1526 """ 1527 if not self.events: 1528 self.current_state = None 1529 return 1530 1531 recent = self.events[-20:] 1532 1533 # Primary focus = most recent high-intensity event 1534 high_intensity = [e for e in recent if e.intensity > 0.7] 1535 primary = high_intensity[-1].bullet_uuid if high_intensity else None 1536 1537 # Secondary = other recent foci 1538 secondary = list(set( 1539 e.bullet_uuid for e in recent[-5:] 1540 if e.bullet_uuid and e.bullet_uuid != primary 1541 )) 1542 1543 # Trajectory 1544 trajectory = [e.bullet_uuid for e in recent if e.bullet_uuid] 1545 1546 # Distributions 1547 bullet_attention = defaultdict(float) 1548 topic_attention = defaultdict(float) 1549 1550 for event in recent: 1551 weight = event.duration_ms * event.intensity 1552 if event.bullet_uuid: 1553 bullet_attention[event.bullet_uuid] += weight 1554 if event.topic: 1555 topic_attention[event.topic] += weight 1556 1557 # Normalize 1558 total_bullet = sum(bullet_attention.values()) or 1 1559 total_topic = sum(topic_attention.values()) or 1 1560 1561 self.current_state = AttentionState( 1562 primary_focus=primary, 1563 
            secondary_foci=secondary,
            attention_trajectory=trajectory,
            predicted_next=[p[0] for p in self.predict_next_attention()],
            session_attention_distribution={
                k: v / total_bullet for k, v in bullet_attention.items()
            },
            topic_attention_distribution={
                k: v / total_topic for k, v in topic_attention.items()
            }
        )
```

---

## 8. Permeability Layer

### 8.1 Purpose

Manages what flows between Markov blankets: extracting structure while protecting content, enforcing permissions, and facilitating flock-level pattern sharing.

### 8.2 Components

```python
@dataclass
class PermeabilityConfig:
    """
    Configuration for what flows through the blanket membrane.
    """
    blanket_id: str

    # Automatic (required)
    pattern_fingerprints_enabled: bool = True  # Cannot disable
    concept_hashes_enabled: bool = True        # Cannot disable
    federated_gradients_enabled: bool = True   # Cannot disable

    # Configurable
    resonance_beacons_enabled: bool = True
    beacon_topics_whitelist: Optional[List[str]] = None
    beacon_topics_blacklist: Optional[List[str]] = None

    # Opt-in
    phoenix_sharing_enabled: bool = False
    phoenix_auto_share_threshold: Optional[float] = None  # Auto-share above this resonance

    # Limits
    fingerprint_omission_regions: List[str] = field(default_factory=list)  # Max 20%
    private_zones: List[str] = field(default_factory=list)                 # Max 30%


class PermeabilityLayer:
    """
    Manages the flow of information between Markov blankets.
1615 """ 1616 1617 def __init__( 1618 self, 1619 god_db: GodDatabase, 1620 concept_hasher: ConceptHasher, 1621 fingerprinter: GraphFingerprinter 1622 ): 1623 self.god_db = god_db 1624 self.hasher = concept_hasher 1625 self.fingerprinter = fingerprinter 1626 self.configs: Dict[str, PermeabilityConfig] = {} 1627 1628 def extract_for_flock( 1629 self, 1630 blanket_id: str 1631 ) -> FlockContribution: 1632 """ 1633 Extract what this blanket contributes to the flock. 1634 1635 CRITICAL: Only structure, never content. 1636 """ 1637 config = self.configs.get(blanket_id, PermeabilityConfig(blanket_id=blanket_id)) 1638 1639 # Get bullets, excluding private zones and omission regions 1640 bullets = self.god_db.get_bullets_for_blanket(blanket_id) 1641 filtered_bullets = [ 1642 b for b in bullets 1643 if b.uuid not in config.private_zones 1644 and b.uuid not in config.fingerprint_omission_regions 1645 ] 1646 1647 # Extract fingerprint 1648 graph = self._build_graph(filtered_bullets) 1649 attention_log = self._build_attention_log(blanket_id) 1650 fingerprint = self.fingerprinter.compute_fingerprint( 1651 self.fingerprinter.extract_topology(graph), 1652 self.fingerprinter.extract_attention(attention_log), 1653 blanket_id=self._anonymize(blanket_id), 1654 timestamp=datetime.now().isoformat() 1655 ) 1656 1657 # Extract concept hashes 1658 concepts = self._identify_concepts(filtered_bullets) 1659 concept_hashes = [ 1660 self.hasher.compute_hash( 1661 self.hasher.extract_signature(c), 1662 blanket_id=self._anonymize(blanket_id), 1663 timestamp=datetime.now().isoformat() 1664 ) 1665 for c in concepts 1666 ] 1667 1668 # Collect beacon signals 1669 beacons = [] 1670 if config.resonance_beacons_enabled: 1671 beacons = self._generate_beacons(blanket_id, config) 1672 1673 return FlockContribution( 1674 blanket_id=self._anonymize(blanket_id), 1675 fingerprint=fingerprint, 1676 concept_hashes=concept_hashes, 1677 beacons=beacons, 1678 timestamp=datetime.now() 1679 ) 1680 1681 def 
check_permission( 1682 self, 1683 blanket_id: str, 1684 operation: str, 1685 target: str 1686 ) -> PermissionResult: 1687 """ 1688 Check if an operation is permitted. 1689 """ 1690 config = self.configs.get(blanket_id) 1691 1692 # System requirements cannot be overridden 1693 if operation in ['pattern_fingerprint', 'concept_hash', 'federated_gradient']: 1694 if target in config.fingerprint_omission_regions: 1695 # Check if within 20% limit 1696 total_bullets = len(self.god_db.get_bullets_for_blanket(blanket_id)) 1697 omission_count = len(config.fingerprint_omission_regions) 1698 if omission_count / total_bullets > 0.20: 1699 return PermissionResult( 1700 permitted=False, 1701 reason="Fingerprint omission limit (20%) exceeded" 1702 ) 1703 return PermissionResult(permitted=True) 1704 1705 # Beacon permissions 1706 if operation == 'beacon': 1707 if not config.resonance_beacons_enabled: 1708 return PermissionResult(permitted=False, reason="Beacons disabled") 1709 1710 if config.beacon_topics_blacklist and target in config.beacon_topics_blacklist: 1711 return PermissionResult(permitted=False, reason="Topic blacklisted") 1712 1713 if config.beacon_topics_whitelist and target not in config.beacon_topics_whitelist: 1714 return PermissionResult(permitted=False, reason="Topic not in whitelist") 1715 1716 # Phoenix permissions 1717 if operation == 'phoenix': 1718 if not config.phoenix_sharing_enabled: 1719 return PermissionResult(permitted=False, reason="Phoenix sharing disabled") 1720 1721 return PermissionResult(permitted=True) 1722 1723 def process_incoming( 1724 self, 1725 blanket_id: str, 1726 flock_data: FlockData 1727 ) -> List[RelevantPattern]: 1728 """ 1729 Process incoming flock data and identify relevant patterns. 1730 1731 CRITICAL: We receive structure, never content. 
1732 """ 1733 my_fingerprint = self.extract_for_flock(blanket_id).fingerprint 1734 1735 relevant = [] 1736 1737 # Find similar fingerprints 1738 for other_fp in flock_data.fingerprints: 1739 similarity = self.fingerprinter.similarity(my_fingerprint, other_fp) 1740 if similarity > 0.7: 1741 relevant.append(RelevantPattern( 1742 type='fingerprint_similarity', 1743 other_blanket=other_fp.blanket_id, 1744 similarity=similarity, 1745 details={'topology_sim': similarity} 1746 )) 1747 1748 # Find matching concept hashes 1749 my_hashes = self.extract_for_flock(blanket_id).concept_hashes 1750 for other_hash in flock_data.concept_hashes: 1751 for my_hash in my_hashes: 1752 similarity = self.hasher.similarity(my_hash, other_hash) 1753 if similarity > 0.7: 1754 relevant.append(RelevantPattern( 1755 type='concept_similarity', 1756 other_blanket=other_hash.blanket_id, 1757 similarity=similarity, 1758 details={'hash_sim': similarity} 1759 )) 1760 1761 # Match beacons to our attention 1762 attention = self._get_recent_attention(blanket_id) 1763 for beacon in flock_data.beacons: 1764 if self._topic_matches_attention(beacon.topic_hash, attention): 1765 relevant.append(RelevantPattern( 1766 type='beacon_match', 1767 other_blanket=beacon.blanket_id, 1768 similarity=beacon.relevance_strength, 1769 details={'topic': beacon.topic_hash} 1770 )) 1771 1772 return relevant 1773 ``` 1774 1775 --- 1776 1777 ## 9. Cultural Evolution Engine 1778 1779 ### 9.1 Purpose 1780 1781 Detects when patterns are converging across the flock and manages the process of evolving system norms - from deviance detection through codification. 1782 1783 ### 9.2 Evolution Model 1784 1785 ```python 1786 class EvolutionPhase(Enum): 1787 """ 1788 Phases of cultural evolution. 
1789 """ 1790 DEVIANCE = "deviance" # Individual divergence from norm 1791 TOLERANCE = "tolerance" # Divergence noticed but not punished 1792 SPREAD = "spread" # Multiple blankets adopting 1793 NORMALIZATION = "normalization" # Becomes unremarkable 1794 CODIFICATION = "codification" # Formal policy change 1795 1796 1797 @dataclass 1798 class EvolutionSignal: 1799 """ 1800 A signal that cultural evolution may be occurring. 1801 """ 1802 signal_id: str 1803 rule_affected: str # Which rule is being tested 1804 phase: EvolutionPhase 1805 1806 # Evidence 1807 exception_count: int # How many exceptions requested 1808 exception_approval_rate: float # What % approved 1809 blankets_involved: int # How many blankets diverging 1810 fingerprint_convergence: float # Are fingerprints clustering? 1811 1812 # Trajectory 1813 first_detected: datetime 1814 last_updated: datetime 1815 velocity: float # How fast is this progressing 1816 1817 1818 @dataclass 1819 class CulturalProposal: 1820 """ 1821 A proposal to change a system norm. 1822 """ 1823 proposal_id: str 1824 rule_to_change: str 1825 current_value: Any 1826 proposed_value: Any 1827 1828 # Evidence 1829 evolution_signal: EvolutionSignal 1830 supporting_exceptions: List[str] # Exception request IDs 1831 1832 # Process 1833 status: str # draft, review, voting, approved, rejected 1834 created_at: datetime 1835 review_deadline: Optional[datetime] 1836 votes: Dict[str, str] # blanket_id -> vote 1837 1838 # Impact 1839 affected_blankets: int 1840 reversibility: str # easy, moderate, difficult 1841 ``` 1842 1843 ### 9.3 Evolution Engine 1844 1845 ```python 1846 class CulturalEvolutionEngine: 1847 """ 1848 Detects and manages cultural evolution across the flock. 
1849 """ 1850 1851 def __init__( 1852 self, 1853 permeability: PermeabilityLayer, 1854 exception_log: ExceptionLog 1855 ): 1856 self.permeability = permeability 1857 self.exception_log = exception_log 1858 self.signals: Dict[str, EvolutionSignal] = {} 1859 self.proposals: List[CulturalProposal] = [] 1860 1861 def detect_evolution_signals(self) -> List[EvolutionSignal]: 1862 """ 1863 Scan for signs that cultural evolution is occurring. 1864 """ 1865 signals = [] 1866 1867 # Group exceptions by rule 1868 exceptions_by_rule = defaultdict(list) 1869 for exc in self.exception_log.get_recent(days=30): 1870 exceptions_by_rule[exc.rule].append(exc) 1871 1872 for rule, exceptions in exceptions_by_rule.items(): 1873 # Compute metrics 1874 count = len(exceptions) 1875 approved = sum(1 for e in exceptions if e.decision == 'approved') 1876 approval_rate = approved / count if count > 0 else 0 1877 blankets = len(set(e.blanket_id for e in exceptions)) 1878 1879 # Check thresholds 1880 if count >= 10 and approval_rate >= 0.8 and blankets >= 5: 1881 # Strong signal 1882 phase = self._determine_phase(rule, count, approval_rate, blankets) 1883 1884 signal = EvolutionSignal( 1885 signal_id=str(uuid.uuid4()), 1886 rule_affected=rule, 1887 phase=phase, 1888 exception_count=count, 1889 exception_approval_rate=approval_rate, 1890 blankets_involved=blankets, 1891 fingerprint_convergence=self._compute_convergence(rule), 1892 first_detected=min(e.timestamp for e in exceptions), 1893 last_updated=datetime.now(), 1894 velocity=self._compute_velocity(rule) 1895 ) 1896 1897 signals.append(signal) 1898 self.signals[rule] = signal 1899 1900 return signals 1901 1902 def _determine_phase( 1903 self, 1904 rule: str, 1905 count: int, 1906 approval_rate: float, 1907 blankets: int 1908 ) -> EvolutionPhase: 1909 """ 1910 Determine which phase of evolution a signal is in. 
1911 """ 1912 if count < 5: 1913 return EvolutionPhase.DEVIANCE 1914 elif count < 10 or approval_rate < 0.6: 1915 return EvolutionPhase.TOLERANCE 1916 elif blankets < 10: 1917 return EvolutionPhase.SPREAD 1918 elif approval_rate > 0.9: 1919 return EvolutionPhase.NORMALIZATION 1920 else: 1921 return EvolutionPhase.SPREAD 1922 1923 def create_proposal(self, signal: EvolutionSignal) -> CulturalProposal: 1924 """ 1925 Create a formal proposal for cultural change. 1926 """ 1927 # Only create proposals for signals in NORMALIZATION phase 1928 if signal.phase != EvolutionPhase.NORMALIZATION: 1929 raise ValueError("Signal not ready for proposal") 1930 1931 # Determine proposed change 1932 current = self._get_current_rule_value(signal.rule_affected) 1933 proposed = self._compute_proposed_value(signal) 1934 1935 proposal = CulturalProposal( 1936 proposal_id=str(uuid.uuid4()), 1937 rule_to_change=signal.rule_affected, 1938 current_value=current, 1939 proposed_value=proposed, 1940 evolution_signal=signal, 1941 supporting_exceptions=self._get_supporting_exceptions(signal), 1942 status='draft', 1943 created_at=datetime.now(), 1944 review_deadline=None, 1945 votes={}, 1946 affected_blankets=signal.blankets_involved, 1947 reversibility=self._assess_reversibility(signal.rule_affected) 1948 ) 1949 1950 self.proposals.append(proposal) 1951 return proposal 1952 1953 def submit_for_review(self, proposal_id: str, review_days: int = 30): 1954 """ 1955 Submit proposal for community review. 1956 """ 1957 proposal = next(p for p in self.proposals if p.proposal_id == proposal_id) 1958 proposal.status = 'review' 1959 proposal.review_deadline = datetime.now() + timedelta(days=review_days) 1960 1961 def record_vote(self, proposal_id: str, blanket_id: str, vote: str): 1962 """ 1963 Record a vote on a proposal. 
1964 """ 1965 proposal = next(p for p in self.proposals if p.proposal_id == proposal_id) 1966 proposal.votes[blanket_id] = vote # 'approve', 'reject', 'abstain' 1967 1968 def tally_votes(self, proposal_id: str) -> VoteResult: 1969 """ 1970 Tally votes and determine outcome. 1971 """ 1972 proposal = next(p for p in self.proposals if p.proposal_id == proposal_id) 1973 1974 approve = sum(1 for v in proposal.votes.values() if v == 'approve') 1975 reject = sum(1 for v in proposal.votes.values() if v == 'reject') 1976 abstain = sum(1 for v in proposal.votes.values() if v == 'abstain') 1977 1978 total = approve + reject 1979 threshold = 0.67 # Supermajority for fundamental changes 1980 1981 if total == 0: 1982 outcome = 'no_quorum' 1983 elif approve / total >= threshold: 1984 outcome = 'approved' 1985 proposal.status = 'approved' 1986 else: 1987 outcome = 'rejected' 1988 proposal.status = 'rejected' 1989 1990 return VoteResult( 1991 proposal_id=proposal_id, 1992 approve=approve, 1993 reject=reject, 1994 abstain=abstain, 1995 outcome=outcome 1996 ) 1997 1998 def codify_change(self, proposal_id: str): 1999 """ 2000 Implement an approved cultural change. 2001 """ 2002 proposal = next(p for p in self.proposals if p.proposal_id == proposal_id) 2003 2004 if proposal.status != 'approved': 2005 raise ValueError("Proposal not approved") 2006 2007 # Update system defaults 2008 self._update_system_default( 2009 proposal.rule_to_change, 2010 proposal.proposed_value 2011 ) 2012 2013 # Grandfather existing exceptions 2014 self._grandfather_exceptions(proposal) 2015 2016 # Notify flock 2017 self._notify_flock(proposal) 2018 2019 proposal.status = 'codified' 2020 ``` 2021 2022 --- 2023 2024 ## 10. Integration Architecture 2025 2026 ### 10.1 System Initialization 2027 2028 ```python 2029 class SovereignOS: 2030 """ 2031 Main system coordinator. 
2032 """ 2033 2034 def __init__(self, config: SystemConfig): 2035 # Core storage 2036 self.god_db = GodDatabase(config.storage_path) 2037 2038 # Engines 2039 self.free_energy = FreeEnergyEngine(self.god_db) 2040 self.resonance = ResonanceScoringEngine(self.god_db, self.free_energy) 2041 self.altitude = ComplementaryAltitudeSystem( 2042 AltitudeDetector(), 2043 self.resonance 2044 ) 2045 self.flight = FlightProtocolEngine( 2046 self.god_db, 2047 self.altitude, 2048 self.resonance 2049 ) 2050 self.attention = AttentionTracker(self.god_db) 2051 2052 # Sharing infrastructure 2053 self.concept_hasher = ConceptHasher() 2054 self.fingerprinter = GraphFingerprinter() 2055 self.permeability = PermeabilityLayer( 2056 self.god_db, 2057 self.concept_hasher, 2058 self.fingerprinter 2059 ) 2060 2061 # Evolution 2062 self.exception_log = ExceptionLog() 2063 self.evolution = CulturalEvolutionEngine( 2064 self.permeability, 2065 self.exception_log 2066 ) 2067 2068 def process_message( 2069 self, 2070 session_id: str, 2071 message: str, 2072 context: dict 2073 ) -> ProcessingResult: 2074 """ 2075 Main entry point for processing operator input. 
2076 """ 2077 # Get or create session 2078 if session_id not in self.flight.sessions: 2079 session = self.flight.start_session(context.get('blanket_id')) 2080 else: 2081 session = self.flight.sessions[session_id] 2082 2083 # Update flight state 2084 session = self.flight.update_phase(session_id, message, context) 2085 2086 # Record attention 2087 self.attention.record_attention( 2088 topic=self._extract_topic(message), 2089 modality='write', 2090 duration_ms=len(message) * 50, # Rough estimate 2091 context={'session_id': session_id, 'altitude': session.altitude_state.current} 2092 ) 2093 2094 # Get complementary response parameters 2095 complement_altitude = self.altitude.get_complement_altitude( 2096 session.altitude_state.current 2097 ) 2098 adjusted_weights = self.altitude.adjust_resonance_weights(session.altitude_state) 2099 2100 # Get top resonant items 2101 top_resonant = self.resonance.get_top_resonant( 2102 session.blanket_id, 2103 self._build_context(session, message), 2104 k=10 2105 ) 2106 2107 # Get edge predictions 2108 predictions = [] 2109 for bullet_uuid in [b.uuid for b in top_resonant[:5]]: 2110 predictions.extend(self.free_energy.predict_edges(bullet_uuid, k=3)) 2111 2112 # Check for phase transitions 2113 transitions = [ 2114 self.free_energy.detect_phase_transition(b.uuid) 2115 for b in top_resonant[:5] 2116 ] 2117 transitions = [t for t in transitions if t is not None] 2118 2119 return ProcessingResult( 2120 session=session, 2121 complement_altitude=complement_altitude, 2122 resonance_weights=adjusted_weights, 2123 top_resonant=top_resonant, 2124 edge_predictions=predictions, 2125 phase_transitions=transitions, 2126 ooda_seed=self.flight.seed_ooda_loop(session_id) if session.current_phase == FlightPhase.SAFETY else None 2127 ) 2128 ``` 2129 2130 ### 10.2 Message Flow 2131 2132 ``` 2133 ┌─────────────────────────────────────────────────────────────────────────────┐ 2134 │ MESSAGE PROCESSING FLOW │ 2135 
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  1. RECEIVE MESSAGE                                                         │
│     └── Operator sends message to system                                    │
│                                                                             │
│  2. ALTITUDE DETECTION                                                      │
│     └── AltitudeDetector.detect()                                           │
│         └── Determine: philosophical, strategic, tactical, operational      │
│                                                                             │
│  3. FLIGHT PHASE UPDATE                                                     │
│     └── FlightProtocolEngine.update_phase()                                 │
│         └── Transition: fly_high, retain, land, birth, safety               │
│                                                                             │
│  4. ATTENTION RECORDING                                                     │
│     └── AttentionTracker.record_attention()                                 │
│         └── Update attention state and trajectory                           │
│                                                                             │
│  5. COMPLEMENTARY ALTITUDE                                                  │
│     └── ComplementaryAltitudeSystem.get_complement_altitude()               │
│         └── Adjust resonance weights for altitude                           │
│                                                                             │
│  6. RESONANCE SCORING                                                       │
│     └── ResonanceScoringEngine.get_top_resonant()                           │
│         └── Get most relevant bullets in current context                    │
│                                                                             │
│  7. FREE ENERGY ANALYSIS                                                    │
│     └── FreeEnergyEngine.predict_edges()                                    │
│     └── FreeEnergyEngine.detect_phase_transition()                          │
│         └── Identify gravity wells, nagging items                           │
│                                                                             │
│  8. GENERATE RESPONSE                                                       │
│     └── Use complement altitude to shape response                           │
│     └── Include resonant items, predictions, transitions                    │
│                                                                             │
│  9. PERMEABILITY EXTRACTION (async)                                         │
│     └── PermeabilityLayer.extract_for_flock()                               │
│         └── Generate fingerprints, hashes, beacons                          │
│                                                                             │
│  10. OODA SEEDING (if phase = safety)                                       │
│      └── FlightProtocolEngine.seed_ooda_loop()                              │
│          └── Prepare for next iteration                                     │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```

---

## 11. Data Models

### 11.1 Complete Schema

```python
# See individual sections for full dataclass definitions

# Core entities
Bullet            # Atomic unit of information
Edge              # Connection between bullets
PredictionTarget  # Potential edge prediction

# Free energy
GravityWell      # Attractor state
FreeEnergyState  # Energy state of an entity
EdgeCandidate    # Potential edge with impact

# Resonance
ResonanceFactors  # Components of resonance
ResonanceScore    # Computed resonance

# Altitude
Altitude         # Enum: philosophical, strategic, tactical, operational
AltitudeState    # Current altitude assessment
AltitudeSignals  # Detection signals

# Flight
FlightPhase   # Enum: fly_high, retain, land, birth, safety
Thread        # Retained thinking thread
SessionState  # Complete session state
Artifact      # Produced artifact
OODASeed      # Seed for the next loop

# Attention
AttentionEvent  # Single attention event
AttentionState  # Current attention state
BeaconSignal    # Flock-level signal

# Permeability
PermeabilityConfig  # What flows through the membrane
FlockContribution   # What a blanket contributes
RelevantPattern     # Incoming relevant pattern

# Evolution
EvolutionPhase    # Enum: deviance through codification
EvolutionSignal   # Signal of evolution
CulturalProposal  # Formal proposal
VoteResult        # Voting outcome
```

---

## 12. API Specifications

### 12.1 Core APIs

```
# Session Management
POST   /sessions                           # Start new session
GET    /sessions/{id}                      # Get session state
PUT    /sessions/{id}/message              # Process message
DELETE /sessions/{id}                      # End session

# Bullets
POST /bullets                              # Create bullet
GET  /bullets/{uuid}                       # Get bullet
PUT  /bullets/{uuid}                       # Update bullet
GET  /bullets?blanket={id}                 # List bullets for a blanket

# Edges
POST /edges                                # Create edge
GET  /edges/{uuid}                         # Get edge
GET  /edges?source={uuid}                  # Get edges from a source

# Resonance
GET /resonance/{uuid}                      # Get resonance score
GET /resonance/top?blanket={id}&k=10       # Get top-k resonant items

# Free Energy
GET /energy/{uuid}                         # Get free-energy state
GET /predictions/{uuid}                    # Get edge predictions
GET /gravity-wells?blanket={id}            # Get gravity wells

# Permeability
GET  /permeability/config/{blanket}        # Get config
PUT  /permeability/config/{blanket}        # Update config
GET  /permeability/contribution/{blanket}  # Get flock contribution
POST /permeability/incoming                # Process incoming flock data

# Evolution
GET  /evolution/signals                    # Get evolution signals
POST /evolution/proposals                  # Create proposal
PUT  /evolution/proposals/{id}/vote        # Vote on proposal
```

---

## 13. Implementation Phases

### Phase 1: Foundation (Core Storage & Free Energy)
- God Database implementation
- Bullet and Edge models
- Basic free-energy computation
- Edge prediction (simple model)

### Phase 2: Resonance & Altitude
- Resonance scoring engine
- Weight-tuning system
- Altitude detection
- Complementary altitude system

### Phase 3: Session Management
- Flight protocol engine
- Thread retention
- OODA loop seeding
- Attention tracking

### Phase 4: Permeability
- Concept hash extraction
- Fingerprint generation
- Beacon signals
- Permission enforcement

### Phase 5: Cultural Evolution
- Exception logging
- Evolution signal detection
- Proposal management
- Voting and codification

### Phase 6: Integration & Polish
- Full message-processing flow
- API endpoints
- Performance optimization
- Testing and validation

---

*Sovereign OS Implementation Specification v0.1*
*2026-01-13*
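---

## Appendix: Data Model Sketch (Non-Normative)

Section 11.1 lists the data model types by name only. The sketch below is purely illustrative: it shows how the two enums and the `ProcessingResult` aggregate returned by `process_message` (§10.1) *might* be shaped. The enum spellings come from the §11.1 comments and the field names from the `ProcessingResult(...)` call in §10.1, but the field types here are assumptions, not canonical definitions.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any, Optional


class Altitude(Enum):
    # Values taken from the §11.1 comment: philosophical, strategic, tactical, operational
    PHILOSOPHICAL = "philosophical"
    STRATEGIC = "strategic"
    TACTICAL = "tactical"
    OPERATIONAL = "operational"


class FlightPhase(Enum):
    # Values taken from the §11.1 comment: fly_high, retain, land, birth, safety
    FLY_HIGH = "fly_high"
    RETAIN = "retain"
    LAND = "land"
    BIRTH = "birth"
    SAFETY = "safety"


@dataclass
class ProcessingResult:
    """Aggregate returned by process_message; field names match the §10.1 call site.

    The `Any` annotations stand in for the real types (SessionState, Bullet,
    PredictionTarget, OODASeed), which are defined in their own sections.
    """
    session: Any                        # SessionState
    complement_altitude: Altitude
    resonance_weights: dict[str, float]
    top_resonant: list[Any]             # list[Bullet]
    edge_predictions: list[Any]         # list[PredictionTarget]
    phase_transitions: list[Any]
    ooda_seed: Optional[Any] = None     # OODASeed; only set when phase == SAFETY
```

A response handler would then read, for example, `result.complement_altitude` to shape the reply's altitude, and check `result.ooda_seed` to decide whether a new loop was seeded.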