.openspec.yaml
1 # OpenSpec Configuration for AI System Optimization Series 2 # https://github.com/Fission-AI/OpenSpec 3 4 project: 5 name: ai-system-optimization-series 6 description: | 7 Comprehensive AI infrastructure learning repository covering TVM optimization, 8 ONNX Runtime custom operators, CUTLASS GEMM, and cuTile exploration. 9 version: "1.0.0" 10 11 # Bilingual support configuration (EN + CN) 12 localization: 13 enabled: true 14 primary_language: en 15 secondary_language: zh-CN 16 17 # Default schema for new changes (spec-driven is the default) 18 defaults: 19 schema: spec-driven 20 21 # Project-specific validation rules 22 validation: 23 require_bilingual: false # Not all changes need bilingual docs 24 require_hardware_notes: true # CUDA modules must note GPU requirements 25 26 # Capability specs organization (matches project's 4 modules) 27 specs: 28 structure: per-module 29 modules: 30 - id: tvm-optimization 31 name: TVM End-to-End Optimization 32 path: openspec/specs/tvm-optimization 33 - id: ort-custom-ops 34 name: ONNX Runtime Custom CUDA Operators 35 path: openspec/specs/ort-custom-ops 36 - id: cutlass-gemm 37 name: CUTLASS 3.x Hopper GEMM 38 path: openspec/specs/cutlass-gemm 39 - id: cutile-cuda 40 name: cuTile Next-Generation CUDA 41 path: openspec/specs/cutile-cuda 42 43 # Context injected into all OpenSpec artifacts 44 context: | 45 ## Project Context 46 47 This is an AI infrastructure learning repository demonstrating full-stack 48 optimization skills from compiler tuning to low-level kernel development. 49 50 ### Technology Stack 51 - Python 3.10+, CUDA 12.x/13.x 52 - TVM, ONNX Runtime, CUTLASS 3.x, cuTile 53 - GPU: V100 minimum, A100/H100 recommended 54 55 ### Documentation Standards 56 - Bilingual (English + Chinese) for user-facing content 57 - Hardware requirements must be noted for CUDA tasks 58 - Follow existing patterns in .kiro/specs/ for reference