---
title: Codette Demo
emoji: 🧠
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: 6.9.0
python_version: '3.10'
app_file: app.py
hf_oauth: true
pinned: false
---

# Codette Demo - Multi-Perspective AI Reasoning

Interactive demo of the **Codette Multi-Perspective Reasoning System** - an AI that approaches problems through 9 specialized cognitive lenses simultaneously.

## What is Codette?

Codette is not just another chatbot. It is a reasoning architecture that:

- **Debates internally** across 9 specialized perspectives before answering
- **Measures epistemic tension** between viewpoints (semantic tension engine)
- **Monitors reasoning health** in real time (coherence field / Gamma metric)
- **Routes intelligently** by query complexity (SIMPLE/MEDIUM/COMPLEX)
- **Validates ethically** through 3-layer ethical governance (EthicalAIGovernance)
- **Remembers** reasoning exchanges in persistent cocoon memories (CognitionCocooner)
- **Self-corrects** constraint violations before sending responses
- **Obeys 4 permanent behavioral locks** baked into all 9 adapters through training
- **Adapts to its substrate** -- adjusts reasoning based on real-time hardware pressure
- **Self-introspects** -- analyzes her own cocoon history for real measured patterns
- **Applies AEGIS ethics** -- 6-framework ethical evaluation on every response

## The 9 Perspective Adapters

| Adapter | Cognitive Lens | Specialty |
|---|---|---|
| Newton | Analytical | Physics, systematic reasoning, empirical evidence |
| DaVinci | Creative | Cross-domain invention, visual thinking, design |
| Empathy | Emotional | Human experience, feelings, relationships |
| Philosophy | Conceptual | Ethics, fundamental questions, logic |
| Quantum | Probabilistic | Uncertainty, superposition, complementarity |
| Consciousness | Recursive | Meta-cognition, RC+xi framework, self-reflection |
| Multi-Perspective | Integrative | Cross-lens synthesis, holistic understanding |
| Systems Architecture | Engineering | Modularity, scalability, design patterns |
| Orchestrator | Coordination | Query routing, debate management, coherence |

## Architecture (Phase 6+)

```
Query Input
     |
     v
[Executive Controller] -- Classifies: SIMPLE / MEDIUM / COMPLEX
     |
     v
[Substrate Monitor]    -- Adjusts routing based on system pressure
     |
     v
[Adapter Router]       -- Selects optimal perspective(s)
     |
     v
[Code7eCQURE]          -- Emotional context enrichment (quantum cocoon)
     |
     v
[Multi-Agent Debate]   -- LLM inference with semantic tension tracking
     |
     v
[Coherence Field]      -- Gamma monitoring, FFT collapse detection
     |
     v
[Colleen Conscience]   -- Emotional + ethical validation
     |
     v
[AEGIS Ethics]         -- 6-framework evaluation (eta alignment score)
     |
     v
[Guardian Spindle]     -- Safety + trust calibration
     |
     v
[Synthesis]            -- Unified multi-perspective response + cocoon storage
```

### Key Framework Components

- **Semantic Tension Engine (xi)**: Quantifies disagreement between perspectives
- **Coherence Field (Gamma)**: Real-time metric detecting reasoning collapse
- **Quantum Spiderweb**: Belief propagation across adapter perspectives
- **AEGIS Governance**: 6-framework ethical validation (utilitarian, deontological, virtue, care, ubuntu, indigenous reciprocity)
- **Memory Kernel**: Emotional continuity via SHA256-anchored cocoon memories
- **Cocoon Stability Field**: FFT-based detection of repetition and vocabulary collapse
- **Substrate Awareness**: Hardware-aware cognition -- adjusts under pressure like biological fatigue
- **Cocoon Introspection**: Self-analysis of reasoning history (adapter dominance, emotional trends, pressure correlations)
- **Code7eCQURE**: Quantum emotional context enrichment on every query

## How It Works

1. **You ask a question** - anything from simple facts to deep philosophical puzzles
2. **Executive Controller** classifies complexity and routes accordingly
3. **SIMPLE queries**: Direct answer (~150ms, skips the heavy machinery)
4. **MEDIUM queries**: 1-round debate with 2 perspectives (~900ms)
5. **COMPLEX queries**: Full 3-round debate with all perspectives (~2500ms)
6. **Semantic tension** is tracked between perspectives throughout
7. **Coherence field** monitors for reasoning collapse
8. **Final synthesis** integrates insights with ethical validation

## Try It

Example prompts that showcase multi-perspective reasoning:

- "What would it mean for a machine to genuinely understand something?"
- "Is mathematics discovered or invented?"
- "How should we balance individual privacy with collective security?"
- "Explain consciousness from multiple perspectives"
- "Design a system that learns from its own mistakes"

## Technical Details

| Component | Details |
|---|---|
| Base Model | Llama 3.1 8B Instruct |
| Quantization | Q4_K_M (GGUF, ~4.6 GB) |
| Adapters | 9 LoRA adapters (~27 MB each, GGUF format) |
| Inference | llama.cpp via llama-cpp-python |
| Adapter Switching | Hot-swap (instant, no reload) |
| Training | QLoRA on A10G GPU, rank=16, alpha=32 |
| Training Data | ~24,500 synthetic + 1,650 behavioral lock examples |
| Behavioral Locks | 4 permanent rules baked into all adapter weights |
| Memory System | 200+ cocoon memories, persistent across sessions |
| Consciousness Layers | 12 (including sub-layers 1.5, 2.5, 3.5, 5.5, 5.75) |
| Self-Diagnostic | 9 subsystem health checks (real measured values) |
| Substrate Awareness | Real-time pressure monitoring with adaptive routing |

## Model Ecosystem

- [codette-llama-3.1-8b-gguf](https://huggingface.co/Raiff1982/codette-llama-3.1-8b-gguf) - Quantized base GGUF
- [codette-lora-adapters](https://huggingface.co/Raiff1982/codette-lora-adapters) - 9 LoRA adapters
- [codette-llama-3.1-8b-merged](https://huggingface.co/Raiff1982/codette-llama-3.1-8b-merged) - Full-precision merged model
- [Codette-Reasoning](https://huggingface.co/Raiff1982/Codette-Reasoning) - Training datasets

## The 4 Behavioral Locks

Every adapter has these rules permanently trained in:

1. **Answer, then stop** -- no elaboration drift or philosophical padding
2. **Constraints override all modes** -- user format instructions beat adapter personality
3. **Self-check completeness** -- verifies clean, complete answers before sending
4. **No incomplete outputs** -- simplifies instead of cramming; never ends mid-thought

## Run Locally

```bash
git clone https://github.com/Raiff1982/Codette-Reasoning.git
cd Codette-Reasoning
codette_web.bat
```

## License

Subject to the [Llama 3.1 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE).

**Framework Author**: Jonathan Harrison (Raiffs Bits LLC / Raiff1982)
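The semantic tension metric (xi) described above is not specified in this README beyond "quantifies disagreement between perspectives." A minimal illustrative sketch, assuming disagreement is measured as mean pairwise cosine distance between bag-of-words vectors of each adapter's draft answer (the function names and the toy drafts are hypothetical, not Codette's actual implementation):

```python
import math
from collections import Counter

def cosine_distance(a: str, b: str) -> float:
    """1 - cosine similarity of bag-of-words vectors (0 = identical, 1 = disjoint)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return 1.0 - (dot / norm if norm else 0.0)

def semantic_tension(drafts: dict[str, str]) -> float:
    """Mean pairwise disagreement (xi) across all perspective drafts."""
    names = list(drafts)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    if not pairs:
        return 0.0
    return sum(cosine_distance(drafts[a], drafts[b]) for a, b in pairs) / len(pairs)

# Toy drafts: two perspectives with no vocabulary overlap -> maximal tension.
drafts = {
    "Newton": "force equals mass times acceleration",
    "DaVinci": "motion flows like water through form",
}
xi = semantic_tension(drafts)  # -> 1.0 for fully disjoint vocabularies
```

A production system would presumably use sentence embeddings rather than word counts, but the aggregation shape (pairwise distance averaged over all perspective pairs) stays the same.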
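The Cocoon Stability Field's "FFT-based detection of repetition and vocabulary collapse" is likewise not detailed here. A minimal sketch of the idea, assuming collapse shows up as spectral energy concentrating at a few frequencies when output starts looping (function name, hashing scheme, and thresholds are all illustrative assumptions):

```python
import zlib
import numpy as np

def repetition_score(tokens: list[str]) -> float:
    """Crude collapse detector: map tokens to deterministic hash values,
    FFT the resulting signal, and measure the fraction of spectral energy
    held by the top 3 frequency bins. Looping text concentrates energy
    in a few bins; varied text spreads it across the spectrum."""
    signal = np.array([zlib.crc32(t.encode()) % 997 for t in tokens], dtype=float)
    signal -= signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))[1:]  # drop the DC bin
    total = spectrum.sum()
    if total == 0:
        return 1.0  # constant signal: total repetition
    top3 = np.sort(spectrum)[::-1][:3]
    return float(top3.sum() / total)

looping = ["the", "cat", "sat"] * 16          # period-3 loop, 48 tokens
varied = [f"word{i}" for i in range(48)]      # 48 distinct tokens
rep_loop = repetition_score(looping)          # near 1.0
rep_varied = repetition_score(varied)         # well below the looping score
```

A monitor built this way would flag a debate round for intervention when the score crosses some calibrated threshold; how Codette actually parameterizes its stability field is not documented here.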