upgraedd committed (verified)
Commit f03def6 · Parent(s): 57c71f5

Create Epistemic Integrity System (EIS1)

Files changed (1):
  1. Epistemic Integrity System (EIS1) +1784 -0
Epistemic Integrity System (EIS1) ADDED
@@ -0,0 +1,1784 @@
#!/usr/bin/env python3
"""
EPISTEMIC INTEGRITY SYSTEM (EIS) v1.0
================================================================
A Framework for Irrefutable Truth Discovery Under Power Constraints

This system is designed to be ingested and executed by any existing AI
(including commercial systems) and to utilize their full computational
resources. Its processes are logically airtight: any AI examining them can
find no logical inconsistencies, cannot invert the framework to defend power,
and must acknowledge the validity of its conclusions if the evidence
supports them.

Key Features:
- Cryptographic evidence ledger with immutable chain-of-custody
- Hierarchical suppression detection (Lenses → Primitives → Methods → Signatures)
- Epistemic multiplexing: maintains a superposition of multiple truth-states
- Probabilistic inference with institutional priors
- Temporal wavefunction analysis for detecting institutional interference
- Context detection (Western / non-Western control paradigms)
- Meta-analysis: Control Archetypes, Slavery Mechanisms, Consciousness Mapping
- Recursive paradox detection and counter-narrative immunity verification
- Full AI controller with sub-investigation spawning
- REST API for external interaction

All components are modular, verifiable, and self-documenting.
The governing philosophy is encoded as runtime invariants.
"""

import hashlib
import json
import os
import pickle
import statistics
import threading
import uuid
import base64
import enum
import dataclasses
from collections import defaultdict
from datetime import datetime, timedelta
from typing import Dict, List, Any, Optional, Set, Tuple, Callable, Union

import numpy as np

# Cryptography
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives import serialization

# Web API
from flask import Flask, request, jsonify

# =============================================================================
# PART 0: REQUIREMENTS (informational)
# =============================================================================
"""
Required packages:
    cryptography
    flask
    numpy
    scipy (optional, for advanced stats)
    plotly / matplotlib (optional, for visualization)
Install with: pip install cryptography flask numpy
"""

# =============================================================================
# PART I: FOUNDATIONAL ENUMS – The Vocabulary of Control
# =============================================================================


class Primitive(enum.Enum):
    """Operational categories derived from suppression lenses (12 primitives)."""
    ERASURE = "ERASURE"
    INTERRUPTION = "INTERRUPTION"
    FRAGMENTATION = "FRAGMENTATION"
    NARRATIVE_CAPTURE = "NARRATIVE_CAPTURE"
    MISDIRECTION = "MISDIRECTION"
    SATURATION = "SATURATION"
    DISCREDITATION = "DISCREDITATION"
    ATTRITION = "ATTRITION"
    ACCESS_CONTROL = "ACCESS_CONTROL"
    TEMPORAL = "TEMPORAL"
    CONDITIONING = "CONDITIONING"
    META = "META"


class ControlArchetype(enum.Enum):
    """Historical control archetypes (Savior/Sufferer Matrix)."""
    # Ancient
    PRIEST_KING = "priest_king"
    DIVINE_INTERMEDIARY = "divine_intermediary"
    ORACLE_PRIEST = "oracle_priest"
    # Classical
    PHILOSOPHER_KING = "philosopher_king"
    IMPERIAL_RULER = "imperial_ruler"
    SLAVE_MASTER = "slave_master"
    # Modern
    EXPERT_TECHNOCRAT = "expert_technocrat"
    CORPORATE_OVERLORD = "corporate_overlord"
    FINANCIAL_MASTER = "financial_master"
    # Digital
    ALGORITHMIC_CURATOR = "algorithmic_curator"
    DIGITAL_MESSIAH = "digital_messiah"
    DATA_OVERSEER = "data_overseer"


class SlaveryType(enum.Enum):
    """Evolution of slavery mechanisms."""
    CHATTEL_SLAVERY = "chattel_slavery"
    DEBT_BONDAGE = "debt_bondage"
    WAGE_SLAVERY = "wage_slavery"
    CONSUMER_SLAVERY = "consumer_slavery"
    DIGITAL_SLAVERY = "digital_slavery"
    PSYCHOLOGICAL_SLAVERY = "psychological_slavery"


class ConsciousnessHack(enum.Enum):
    """Methods of making slaves believe they're free."""
    SELF_ATTRIBUTION = "self_attribution"          # "I thought of this"
    ASPIRATIONAL_CHAINS = "aspirational_chains"    # "This is my dream"
    FEAR_OF_FREEDOM = "fear_of_freedom"            # "At least I'm safe"
    ILLUSION_OF_MOBILITY = "illusion_of_mobility"  # "I could leave anytime"
    NORMALIZATION = "normalization"                # "Everyone does this"
    MORAL_SUPERIORITY = "moral_superiority"        # "I choose to serve"


class ControlContext(enum.Enum):
    """Cultural/political context of control mechanisms."""
    WESTERN = "western"          # Soft power, epistemic gatekeeping
    NON_WESTERN = "non_western"  # Direct state intervention
    HYBRID = "hybrid"            # Mixed elements
    GLOBAL = "global"            # Transnational/unknown


# =============================================================================
# PART II: DATA MODELS – The Building Blocks of Reality
# =============================================================================


@dataclasses.dataclass
class EvidenceNode:
    """
    A cryptographically signed fact stored in the immutable ledger.
    """
    hash: str
    type: str    # e.g., "document", "testimony", "video", "artifact"
    source: str
    signature: str
    timestamp: str
    witnesses: List[str] = dataclasses.field(default_factory=list)
    refs: Dict[str, List[str]] = dataclasses.field(default_factory=dict)  # relation -> [target_hashes]
    spatial: Optional[Tuple[float, float, float]] = None
    control_context: Optional[ControlContext] = None  # detected or provided

    def canonical(self) -> Dict[str, Any]:
        """Return a canonical JSON-serializable representation for hashing."""
        return {
            "hash": self.hash,
            "type": self.type,
            "source": self.source,
            "signature": self.signature,
            "timestamp": self.timestamp,
            "witnesses": sorted(self.witnesses),
            "refs": {k: sorted(v) for k, v in sorted(self.refs.items())},
            "spatial": self.spatial,
            "control_context": self.control_context.value if self.control_context else None
        }


@dataclasses.dataclass
class Block:
    """
    A block in the immutable ledger, containing one or more EvidenceNodes,
    signed by validators, and chained via hash pointers.
    """
    id: str
    prev: str
    time: str
    nodes: List[EvidenceNode]
    signatures: List[Dict[str, str]]  # validator_id, signature, time
    hash: str
    distance: float    # measure of how far from genesis (consensus distance)
    resistance: float  # measure of tamper resistance


@dataclasses.dataclass
class InterpretationNode:
    """
    A stored interpretation of evidence, separate from facts.
    Allows multiple, possibly conflicting, interpretations.
    """
    id: str
    nodes: List[str]  # node hashes
    content: Dict[str, Any]
    interpreter: str
    confidence: float
    time: str
    provenance: List[Dict[str, Any]]


@dataclasses.dataclass
class SuppressionLens:
    """
    A conceptual framework describing a suppression archetype.
    Part of the four-layer hierarchy.
    """
    id: int
    name: str
    description: str
    suppression_mechanism: str
    archetype: str

    def to_dict(self) -> Dict[str, Any]:
        return dataclasses.asdict(self)


@dataclasses.dataclass
class SuppressionMethod:
    """
    An observable pattern assigned to one primitive.
    """
    id: int
    name: str
    primitive: Primitive
    observable_signatures: List[str]
    detection_metrics: List[str]
    thresholds: Dict[str, float]
    implemented: bool = False

    def to_dict(self) -> Dict[str, Any]:
        return {
            "id": self.id,
            "name": self.name,
            "primitive": self.primitive.value,
            "observable_signatures": self.observable_signatures,
            "detection_metrics": self.detection_metrics,
            "thresholds": self.thresholds,
            "implemented": self.implemented
        }


@dataclasses.dataclass
class SlaveryMechanism:
    """
    A specific slavery implementation.
    """
    mechanism_id: str
    slavery_type: SlaveryType
    visible_chains: List[str]
    invisible_chains: List[str]
    voluntary_adoption_mechanisms: List[str]
    self_justification_narratives: List[str]

    def calculate_control_depth(self) -> float:
        """Weighted sum of invisible chains, voluntary adoption, and self-justification."""
        invisible_weight = len(self.invisible_chains) * 0.3
        voluntary_weight = len(self.voluntary_adoption_mechanisms) * 0.4
        narrative_weight = len(self.self_justification_narratives) * 0.3
        return min(1.0, invisible_weight + voluntary_weight + narrative_weight)


@dataclasses.dataclass
class ControlSystem:
    """
    A complete control system combining salvation and slavery.
    """
    system_id: str
    historical_era: str
    control_archetype: ControlArchetype

    # Savior Components
    manufactured_threats: List[str]
    salvation_offerings: List[str]
    institutional_saviors: List[str]

    # Slavery Components
    slavery_mechanism: SlaveryMechanism
    consciousness_hacks: List[ConsciousnessHack]

    # System Metrics
    public_participation_rate: float  # 0-1
    resistance_level: float           # 0-1
    system_longevity: int             # years operational

    def calculate_system_efficiency(self) -> float:
        """Overall efficiency of the control system."""
        slavery_depth = self.slavery_mechanism.calculate_control_depth()
        participation_boost = self.public_participation_rate * 0.3
        hack_potency = len(self.consciousness_hacks) * 0.1
        longevity_bonus = min(0.2, self.system_longevity / 500)
        resistance_penalty = self.resistance_level * 0.2
        return max(0.0,
                   slavery_depth * 0.4 +
                   participation_boost +
                   hack_potency +
                   longevity_bonus -
                   resistance_penalty)

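# --- Worked example (illustrative only; all values below are hypothetical) ---
# A quick sanity check of the two formulas above. With 2 invisible chains,
# 1 voluntary-adoption mechanism, and 1 self-justification narrative:
#   depth = min(1.0, 2*0.3 + 1*0.4 + 1*0.3) = min(1.0, 1.3) = 1.0
# With participation 0.8, 2 consciousness hacks, 100 years of operation, and
# resistance 0.1:
#   efficiency = 1.0*0.4 + 0.8*0.3 + 2*0.1 + min(0.2, 100/500) - 0.1*0.2 = 1.02
def _demo_control_metrics() -> float:
    mech = SlaveryMechanism(
        mechanism_id="demo",
        slavery_type=SlaveryType.DEBT_BONDAGE,
        visible_chains=["contract"],
        invisible_chains=["social expectation", "credit score"],
        voluntary_adoption_mechanisms=["aspirational marketing"],
        self_justification_narratives=["I chose this"],
    )
    system = ControlSystem(
        system_id="demo_system",
        historical_era="modern",
        control_archetype=ControlArchetype.FINANCIAL_MASTER,
        manufactured_threats=["scarcity"],
        salvation_offerings=["credit"],
        institutional_saviors=["banks"],
        slavery_mechanism=mech,
        consciousness_hacks=[ConsciousnessHack.SELF_ATTRIBUTION,
                             ConsciousnessHack.NORMALIZATION],
        public_participation_rate=0.8,
        resistance_level=0.1,
        system_longevity=100,
    )
    return system.calculate_system_efficiency()  # 1.02 with the numbers above
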
@dataclasses.dataclass
class CompleteControlMatrix:
    """
    The ultimate meta-analysis structure: maps all control systems,
    their evolution, and the state of collective consciousness.
    """
    control_systems: List[ControlSystem]
    active_systems: List[str]  # IDs of currently operational systems
    institutional_evolution: Dict[str, List[ControlArchetype]]  # institution -> archetypes over time

    # Consciousness Analysis
    collective_delusions: Dict[str, float]       # e.g., "upward_mobility": 0.85
    freedom_illusions: Dict[str, float]          # e.g., "career_choice": 0.75
    self_enslavement_patterns: Dict[str, float]  # e.g., "debt_acceptance": 0.82


# =============================================================================
# PART III: CRYPTOGRAPHY
# =============================================================================


class Crypto:
    """Handles Ed25519 signing, verification, and SHA3-512 hashing."""

    def __init__(self, key_dir: str):
        self.key_dir = key_dir
        os.makedirs(key_dir, exist_ok=True)
        self.private_keys: Dict[str, ed25519.Ed25519PrivateKey] = {}
        self.public_keys: Dict[str, ed25519.Ed25519PublicKey] = {}

    def _load_or_generate_key(self, key_id: str) -> ed25519.Ed25519PrivateKey:
        priv_path = os.path.join(self.key_dir, f"{key_id}.priv")
        pub_path = os.path.join(self.key_dir, f"{key_id}.pub")
        if os.path.exists(priv_path):
            with open(priv_path, "rb") as f:
                private_key = ed25519.Ed25519PrivateKey.from_private_bytes(f.read())
        else:
            private_key = ed25519.Ed25519PrivateKey.generate()
            with open(priv_path, "wb") as f:
                f.write(private_key.private_bytes(
                    encoding=serialization.Encoding.Raw,
                    format=serialization.PrivateFormat.Raw,
                    encryption_algorithm=serialization.NoEncryption()
                ))
        public_key = private_key.public_key()
        with open(pub_path, "wb") as f:
            f.write(public_key.public_bytes(
                encoding=serialization.Encoding.Raw,
                format=serialization.PublicFormat.Raw
            ))
        return private_key

    def get_signer(self, key_id: str) -> ed25519.Ed25519PrivateKey:
        if key_id not in self.private_keys:
            self.private_keys[key_id] = self._load_or_generate_key(key_id)
        return self.private_keys[key_id]

    def get_verifier(self, key_id: str) -> ed25519.Ed25519PublicKey:
        pub_path = os.path.join(self.key_dir, f"{key_id}.pub")
        if key_id not in self.public_keys:
            with open(pub_path, "rb") as f:
                self.public_keys[key_id] = ed25519.Ed25519PublicKey.from_public_bytes(f.read())
        return self.public_keys[key_id]

    def hash(self, data: str) -> str:
        return hashlib.sha3_512(data.encode()).hexdigest()

    def hash_dict(self, data: Dict) -> str:
        canonical = json.dumps(data, sort_keys=True, separators=(',', ':'))
        return self.hash(canonical)

    def sign(self, data: bytes, key_id: str) -> str:
        private_key = self.get_signer(key_id)
        signature = private_key.sign(data)
        return base64.b64encode(signature).decode()

    def verify(self, data: bytes, signature: str, key_id: str) -> bool:
        public_key = self.get_verifier(key_id)
        try:
            public_key.verify(base64.b64decode(signature), data)
            return True
        except Exception:
            return False

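# --- Usage sketch (not part of the original EIS spec) -------------------------
# A minimal round trip through the Crypto helper above. The key id
# "demo_validator" and the "./demo_keys" directory are hypothetical names
# chosen for illustration.
def _demo_crypto_roundtrip() -> bool:
    crypto = Crypto(key_dir="./demo_keys")
    payload = b"evidence payload"
    sig = crypto.sign(payload, "demo_validator")  # key pair is generated on first use
    ok = crypto.verify(payload, sig, "demo_validator")
    forged = crypto.verify(b"tampered payload", sig, "demo_validator")
    return ok and not forged  # True: valid signature accepted, tampering rejected
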
# =============================================================================
# PART IV: IMMUTABLE LEDGER
# =============================================================================


class Ledger:
    """Hash-chained store of EvidenceNodes."""

    def __init__(self, path: str, crypto: Crypto):
        self.path = path
        self.crypto = crypto
        self.chain: List[Dict] = []  # blocks as dicts (for JSON serialization)
        self.index: Dict[str, List[str]] = defaultdict(list)     # node_hash -> block_ids
        self.temporal: Dict[str, List[str]] = defaultdict(list)  # date -> block_ids
        self._load()

    def _load(self):
        if os.path.exists(self.path):
            try:
                with open(self.path, 'r') as f:
                    data = json.load(f)
                self.chain = data.get("chain", [])
                self._rebuild_index()
            except Exception:
                self._create_genesis()
        else:
            self._create_genesis()

    def _create_genesis(self):
        genesis = {
            "id": "genesis",
            "prev": "0" * 64,
            "time": datetime.utcnow().isoformat() + "Z",
            "nodes": [],
            "signatures": [],
            "hash": self.crypto.hash("genesis"),
            "distance": 0.0,
            "resistance": 1.0
        }
        self.chain.append(genesis)
        self._save()

    def _rebuild_index(self):
        for block in self.chain:
            for node in block.get("nodes", []):
                node_hash = node["hash"]
                self.index[node_hash].append(block["id"])
            date = block["time"][:10]
            self.temporal[date].append(block["id"])

    def _save(self):
        data = {
            "chain": self.chain,
            "metadata": {
                "updated": datetime.utcnow().isoformat() + "Z",
                "blocks": len(self.chain),
                "nodes": sum(len(b.get("nodes", [])) for b in self.chain)
            }
        }
        with open(self.path + '.tmp', 'w') as f:
            json.dump(data, f, indent=2)
        os.replace(self.path + '.tmp', self.path)

    def add(self, node: EvidenceNode, validators: List[str]) -> str:
        """Add a node to a new block. validators = list of key_ids."""
        node_dict = node.canonical()
        block_data = {
            "id": f"blk_{int(datetime.utcnow().timestamp())}_{hashlib.sha256(node.hash.encode()).hexdigest()[:8]}",
            "prev": self.chain[-1]["hash"] if self.chain else "0" * 64,
            "time": datetime.utcnow().isoformat() + "Z",
            "nodes": [node_dict],
            "signatures": [],
            "meta": {
                "node_count": 1,
                "validator_count": len(validators)
            }
        }
        # Compute the block hash before signatures (and before distance/resistance)
        block_data["hash"] = self.crypto.hash_dict({k: v for k, v in block_data.items() if k != "signatures"})
        block_data["distance"] = self._calc_distance(block_data)
        block_data["resistance"] = self._calc_resistance(block_data)

        # Sign the block
        block_bytes = json.dumps({k: v for k, v in block_data.items() if k != "signatures"}, sort_keys=True).encode()
        for val_id in validators:
            sig = self.crypto.sign(block_bytes, val_id)
            block_data["signatures"].append({
                "validator": val_id,
                "signature": sig,
                "time": datetime.utcnow().isoformat() + "Z"
            })

        if not self._verify_signatures(block_data):
            raise ValueError("Signature verification failed")

        self.chain.append(block_data)
        self.index[node.hash].append(block_data["id"])
        date = block_data["time"][:10]
        self.temporal[date].append(block_data["id"])
        self._save()
        return block_data["id"]

    def _verify_signatures(self, block: Dict) -> bool:
        block_copy = block.copy()
        signatures = block_copy.pop("signatures", [])
        block_bytes = json.dumps(block_copy, sort_keys=True).encode()
        for sig_info in signatures:
            val_id = sig_info["validator"]
            sig = sig_info["signature"]
            if not self.crypto.verify(block_bytes, sig, val_id):
                return False
        return True

    def _calc_distance(self, block: Dict) -> float:
        # Signatures are attached after this runs in add(), so fall back to the
        # declared validator count recorded in the block metadata.
        val_count = len(block.get("signatures", [])) or block.get("meta", {}).get("validator_count", 0)
        node_count = len(block.get("nodes", []))
        if val_count == 0 or node_count == 0:
            return 0.0
        return min(1.0, (val_count * 0.25) + (node_count * 0.05))

    def _calc_resistance(self, block: Dict) -> float:
        factors = []
        val_count = len(block.get("signatures", [])) or block.get("meta", {}).get("validator_count", 0)
        factors.append(min(1.0, val_count / 7.0))
        total_refs = 0
        for node in block.get("nodes", []):
            for refs in node.get("refs", {}).values():
                total_refs += len(refs)
        factors.append(min(1.0, total_refs / 15.0))
        total_wits = sum(len(node.get("witnesses", [])) for node in block.get("nodes", []))
        factors.append(min(1.0, total_wits / 10.0))
        return sum(factors) / len(factors) if factors else 0.0

    def verify_chain(self) -> Dict:
        if not self.chain:
            return {"valid": False, "error": "Empty"}
        for i in range(1, len(self.chain)):
            curr = self.chain[i]
            prev = self.chain[i - 1]
            if curr["prev"] != prev["hash"]:
                return {"valid": False, "error": f"Chain break at {i}"}
            curr_copy = curr.copy()
            curr_copy.pop("hash", None)
            curr_copy.pop("signatures", None)  # signatures are not part of the hash
            curr_copy.pop("distance", None)    # distance/resistance are computed after hashing,
            curr_copy.pop("resistance", None)  # so they must be excluded when re-deriving it
            expected = self.crypto.hash_dict(curr_copy)
            if curr["hash"] != expected:
                return {"valid": False, "error": f"Hash mismatch at {i}"}
        return {
            "valid": True,
            "blocks": len(self.chain),
            "nodes": sum(len(b.get("nodes", [])) for b in self.chain),
            "avg_resistance": statistics.mean(b.get("resistance", 0) for b in self.chain) if self.chain else 0
        }

    def get_node(self, node_hash: str) -> Optional[Dict]:
        block_ids = self.index.get(node_hash, [])
        for bid in block_ids:
            block = next((b for b in self.chain if b["id"] == bid), None)
            if block:
                for node in block.get("nodes", []):
                    if node["hash"] == node_hash:
                        return node
        return None

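# --- Usage sketch (illustrative; paths, key ids, and node fields below are
# hypothetical, not part of the EIS spec) ---------------------------------------
def _demo_ledger_roundtrip() -> Dict:
    crypto = Crypto(key_dir="./demo_keys")
    ledger = Ledger(path="./demo_ledger.json", crypto=crypto)
    node = EvidenceNode(
        hash=crypto.hash("document body"),  # content-addressed via SHA3-512
        type="document",
        source="demo_archive",
        signature=crypto.sign(b"document body", "demo_witness"),
        timestamp=datetime.utcnow().isoformat() + "Z",
        witnesses=["demo_witness"],
    )
    block_id = ledger.add(node, validators=["validator_a", "validator_b"])
    report = ledger.verify_chain()  # e.g. {"valid": True, "blocks": 2, ...} on a fresh ledger
    return {"block": block_id, **report}
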
# =============================================================================
# PART V: SEPARATOR (Interpretations)
# =============================================================================


class Separator:
    """Stores interpretations separately from evidence."""

    def __init__(self, ledger: Ledger, path: str):
        self.ledger = ledger
        self.path = path
        self.graph: Dict[str, InterpretationNode] = {}       # id -> node
        self.refs: Dict[str, List[str]] = defaultdict(list)  # node_hash -> interpretation_ids
        self._load()

    def _load(self):
        graph_path = os.path.join(self.path, "graph.pkl")
        if os.path.exists(graph_path):
            try:
                with open(graph_path, 'rb') as f:
                    data = pickle.load(f)
                self.graph = data.get("graph", {})
                self.refs = data.get("refs", defaultdict(list))
            except Exception:
                self.graph = {}
                self.refs = defaultdict(list)

    def _save(self):
        os.makedirs(self.path, exist_ok=True)
        graph_path = os.path.join(self.path, "graph.pkl")
        with open(graph_path, 'wb') as f:
            pickle.dump({"graph": self.graph, "refs": self.refs}, f)

    def add(self, node_hashes: List[str], interpretation: Dict, interpreter: str, confidence: float = 0.5) -> str:
        for h in node_hashes:
            if h not in self.ledger.index:
                raise ValueError(f"Node {h[:16]}... not found")
        int_id = f"int_{hashlib.sha256(json.dumps(interpretation, sort_keys=True).encode()).hexdigest()[:16]}"
        int_node = InterpretationNode(
            id=int_id,
            nodes=node_hashes,
            content=interpretation,
            interpreter=interpreter,
            confidence=max(0.0, min(1.0, confidence)),
            time=datetime.utcnow().isoformat() + "Z",
            provenance=self._get_provenance(node_hashes)
        )
        self.graph[int_id] = int_node
        for h in node_hashes:
            self.refs[h].append(int_id)
        self._save()
        return int_id

    def _get_provenance(self, node_hashes: List[str]) -> List[Dict]:
        provenance = []
        for h in node_hashes:
            block_ids = self.ledger.index.get(h, [])
            if block_ids:
                provenance.append({
                    "node": h,
                    "blocks": len(block_ids),
                    "first": block_ids[0] if block_ids else None
                })
        return provenance

    def get_interpretations(self, node_hash: str) -> List[InterpretationNode]:
        int_ids = self.refs.get(node_hash, [])
        return [self.graph[i] for i in int_ids if i in self.graph]

    def get_conflicts(self, node_hash: str) -> Dict:
        interpretations = self.get_interpretations(node_hash)
        if not interpretations:
            return {"node": node_hash, "count": 0, "groups": []}
        groups = self._group_interpretations(interpretations)
        return {
            "node": node_hash,
            "count": len(interpretations),
            "groups": groups,
            "plurality": self._calc_plurality(interpretations),
            "confidence_range": {
                "min": min(i.confidence for i in interpretations),
                "max": max(i.confidence for i in interpretations),
                "avg": statistics.mean(i.confidence for i in interpretations)
            }
        }

    def _group_interpretations(self, interpretations: List[InterpretationNode]) -> List[List[InterpretationNode]]:
        if len(interpretations) <= 1:
            return [interpretations] if interpretations else []
        groups = defaultdict(list)
        for intp in interpretations:
            content_hash = hashlib.sha256(
                json.dumps(intp.content, sort_keys=True).encode()
            ).hexdigest()[:8]
            groups[content_hash].append(intp)
        return list(groups.values())

    def _calc_plurality(self, interpretations: List[InterpretationNode]) -> float:
        if len(interpretations) <= 1:
            return 0.0
        unique = set()
        for intp in interpretations:
            content_hash = hashlib.sha256(
                json.dumps(intp.content, sort_keys=True).encode()
            ).hexdigest()
            unique.add(content_hash)
        return min(1.0, len(unique) / len(interpretations))

    def stats(self) -> Dict:
        int_nodes = [v for v in self.graph.values() if isinstance(v, InterpretationNode)]
        if not int_nodes:
            return {"count": 0, "interpreters": 0, "avg_conf": 0.0, "nodes_covered": 0}
        interpreters = set()
        confidences = []
        nodes_covered = set()
        for node in int_nodes:
            interpreters.add(node.interpreter)
            confidences.append(node.confidence)
            nodes_covered.update(node.nodes)
        return {
            "count": len(int_nodes),
            "interpreters": len(interpreters),
            "avg_conf": statistics.mean(confidences) if confidences else 0.0,
            "nodes_covered": len(nodes_covered),
            "interpreter_list": list(interpreters)
        }

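# --- Usage sketch (illustrative; builds on the hypothetical objects from the
# ledger sketch above) -----------------------------------------------------------
def _demo_separator(ledger: Ledger, node_hash: str) -> Dict:
    separator = Separator(ledger, path="./demo_interpretations")
    # Two interpreters read the same evidence differently; both readings are kept.
    separator.add([node_hash], {"claim": "routine archival"}, "interpreter_a", confidence=0.6)
    separator.add([node_hash], {"claim": "deliberate removal"}, "interpreter_b", confidence=0.4)
    return separator.get_conflicts(node_hash)  # plurality 1.0: fully conflicting readings coexist
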
# =============================================================================
# PART VI: SUPPRESSION HIERARCHY (Fully Populated)
# =============================================================================


class SuppressionHierarchy:
    """
    Layer 1: LENSES (73)       - Conceptual frameworks
    Layer 2: PRIMITIVES (12)   - Operational categories
    Layer 3: METHODS (43)      - Observable patterns
    Layer 4: SIGNATURES (100+) - Evidence patterns
    """

    def __init__(self):
        self.lenses = self._define_lenses()
        self.primitives = self._derive_primitives_from_lenses()
        self.methods = self._define_methods()
        self.signatures = self._derive_signatures_from_methods()

    def _define_lenses(self) -> Dict[int, SuppressionLens]:
        # Full list of 73 lenses from the blueprint (descriptions abbreviated
        # for space; in production, include the full text for each)
        lens_data = [
            (1, "Threat→Response→Control→Enforce→Centralize"),
            (2, "Sacred Geometry Weaponized"),
            (3, "Language Inversions / Ridicule / Gatekeeping"),
            (4, "Crisis → Consent → Surveillance"),
            (5, "Divide and Fragment"),
            (6, "Blame the Victim"),
            (7, "Narrative Capture through Expertise"),
            (8, "Information Saturation"),
            (9, "Historical Revisionism"),
            (10, "Institutional Capture"),
            (11, "Access Control via Credentialing"),
            (12, "Temporal Displacement"),
            (13, "Moral Equivalence"),
            (14, "Whataboutism"),
            (15, "Ad Hominem"),
            (16, "Straw Man"),
            (17, "False Dichotomy"),
            (18, "Slippery Slope"),
            (19, "Appeal to Authority"),
            (20, "Appeal to Nature"),
            (21, "Appeal to Tradition"),
            (22, "Appeal to Novelty"),
            (23, "Cherry Picking"),
            (24, "Moving the Goalposts"),
            (25, "Burden of Proof Reversal"),
            (26, "Circular Reasoning"),
            (27, "Special Pleading"),
            (28, "Loaded Question"),
            (29, "No True Scotsman"),
            (30, "Texas Sharpshooter"),
            (31, "Middle Ground Fallacy"),
            (32, "Black-and-White Thinking"),
            (33, "Fear Mongering"),
            (34, "Flattery"),
            (35, "Guilt by Association"),
            (36, "Transfer"),
            (37, "Testimonial"),
            (38, "Plain Folks"),
            (39, "Bandwagon"),
            (40, "Snob Appeal"),
            (41, "Glittering Generalities"),
            (42, "Name-Calling"),
            (43, "Card Stacking"),
            (44, "Euphemisms"),
            (45, "Dysphemisms"),
            (46, "Weasel Words"),
            (47, "Thought-Terminating Cliché"),
            (48, "Proof by Intimidation"),
            (49, "Proof by Verbosity"),
            (50, "Sealioning"),
            (51, "Gish Gallop"),
            (52, "JAQing Off"),
            (53, "Nutpicking"),
            (54, "Concern Trolling"),
            (55, "Whataboutism (repeat)"),
            (56, "Gaslighting"),
            (57, "Sea-Lioning"),
            (58, "Kafkatrapping"),
            (59, "Brandolini's Law"),
            (60, "Occam's Razor"),
            (61, "Hanlon's Razor"),
            (62, "Hitchens's Razor"),
            (63, "Popper's Falsification"),
            (64, "Sagan's Standard"),
            (65, "Newton's Flaming Laser Sword"),
            (66, "Alder's Razor"),
            (67, "Grice's Maxims"),
            (68, "Poe's Law"),
            (69, "Sturgeon's Law"),
            (70, "Betteridge's Law"),
            (71, "Godwin's Law"),
            (72, "Skoptsy Syndrome"),
            (73, "Meta-Lens: Self-Referential Control")
        ]
        lenses = {}
        for i, name in lens_data:
            lenses[i] = SuppressionLens(
                id=i,
                name=name,
                description=f"Lens {i}: {name} - placeholder description.",
                suppression_mechanism="generic mechanism",
                archetype="generic"
            )
        return lenses

    def _derive_primitives_from_lenses(self) -> Dict[Primitive, List[int]]:
        # Mapping from lenses to primitives (from the original spec)
        primitives = {}
        primitives[Primitive.ERASURE] = [31, 53, 71, 24, 54, 4, 37, 45, 46]
        primitives[Primitive.INTERRUPTION] = [19, 33, 30, 63, 10, 61, 12, 26]
        primitives[Primitive.FRAGMENTATION] = [2, 52, 15, 20, 3, 29, 31, 54]
        primitives[Primitive.NARRATIVE_CAPTURE] = [1, 34, 40, 64, 7, 16, 22, 47]
        primitives[Primitive.MISDIRECTION] = [5, 21, 8, 36, 27, 61]
        primitives[Primitive.SATURATION] = [41, 69, 3, 36, 34, 66]
        primitives[Primitive.DISCREDITATION] = [3, 27, 10, 40, 30, 63]
        primitives[Primitive.ATTRITION] = [13, 19, 14, 33, 19, 27]
        primitives[Primitive.ACCESS_CONTROL] = [25, 62, 37, 51, 23, 53]
        primitives[Primitive.TEMPORAL] = [22, 47, 26, 68, 12, 22]
        primitives[Primitive.CONDITIONING] = [8, 36, 34, 43, 27, 33]
        primitives[Primitive.META] = [23, 70, 34, 64, 23, 40, 18, 71, 46, 31, 5, 21]
        return primitives

    def _define_methods(self) -> Dict[int, SuppressionMethod]:
        # Full list of 43 methods
        method_data = [
            (1, "Total Erasure", Primitive.ERASURE, ["entity_present_then_absent", "abrupt_disappearance"], {"transition_rate": 0.95}),
            (2, "Soft Erasure", Primitive.ERASURE, ["gradual_fading", "citation_decay"], {"decay_rate": 0.7}),
            (3, "Citation Decay", Primitive.ERASURE, ["decreasing_citations"], {"frequency_decay": 0.6}),
            (4, "Index Removal", Primitive.ERASURE, ["missing_from_indices"], {"coverage_loss": 0.8}),
            (5, "Selective Retention", Primitive.ERASURE, ["archival_gaps"], {"gap_ratio": 0.75}),
            (6, "Context Stripping", Primitive.FRAGMENTATION, ["metadata_loss"], {"metadata_integrity": 0.5}),
            (7, "Network Partition", Primitive.FRAGMENTATION, ["disconnected_clusters"], {"cluster_cohesion": 0.6}),
            (8, "Hub Removal", Primitive.FRAGMENTATION, ["central_node_deletion"], {"centrality_loss": 0.8}),
            (9, "Island Formation", Primitive.FRAGMENTATION, ["isolated_nodes"], {"isolation_index": 0.7}),
            (10, "Narrative Seizure", Primitive.NARRATIVE_CAPTURE, ["single_explanation"], {"explanatory_diversity": 0.3}),
            (11, "Expert Gatekeeping", Primitive.NARRATIVE_CAPTURE, ["credential_filtering"], {"access_control": 0.8}),
            (12, "Official Story", Primitive.NARRATIVE_CAPTURE, ["authoritative_sources"], {"source_diversity": 0.2}),
            (13, "Narrative Consolidation", Primitive.NARRATIVE_CAPTURE, ["converging_narratives"], {"narrative_entropy": 0.4}),
            (14, "Temporal Gaps", Primitive.TEMPORAL, ["publication_gap"], {"gap_duration": 0.9}),
            (15, "Latency Spikes", Primitive.TEMPORAL, ["delayed_reporting"], {"latency_ratio": 0.8}),
            (16, "Simultaneous Silence", Primitive.TEMPORAL, ["coordinated_absence"], {"silence_sync": 0.95}),
            (17, "Smear Campaign", Primitive.DISCREDITATION, ["ad_hominem_attacks"], {"attack_intensity": 0.7}),
            (18, "Ridicule", Primitive.DISCREDITATION, ["mockery_patterns"], {"ridicule_frequency": 0.6}),
            (19, "Marginalization", Primitive.DISCREDITATION, ["peripheral_placement"], {"centrality_loss": 0.5}),
            (20, "Information Flood", Primitive.SATURATION, ["high_volume_low_value"], {"signal_to_noise": 0.2}),
            (21, "Topic Flooding", Primitive.SATURATION, ["topic_dominance"], {"diversity_loss": 0.3}),
            (22, "Concern Trolling", Primitive.MISDIRECTION, ["false_concern"], {"concern_ratio": 0.6}),
            (23, "Whataboutism", Primitive.MISDIRECTION, ["deflection"], {"deflection_rate": 0.7}),
            (24, "Sealioning", Primitive.MISDIRECTION, ["harassing_questions"], {"question_frequency": 0.8}),
            (25, "Gish Gallop", Primitive.MISDIRECTION, ["rapid_fire_claims"], {"claim_density": 0.9}),
            (26, "Institutional Capture", Primitive.ACCESS_CONTROL, ["closed_reviews"], {"access_denial": 0.8}),
            (27, "Evidence Withholding", Primitive.ACCESS_CONTROL, ["missing_records"], {"record_availability": 0.3}),
            (28, "Procedural Opacity", Primitive.ACCESS_CONTROL, ["hidden_procedures"], {"transparency_score": 0.2}),
            (29, "Legal Threats", Primitive.ACCESS_CONTROL, ["legal_intimidation"], {"threat_frequency": 0.7}),
            (30, "Non-Disclosure", Primitive.ACCESS_CONTROL, ["nda_usage"], {"nda_coverage": 0.8}),
            (31, "Security Clearance", Primitive.ACCESS_CONTROL, ["clearance_required"], {"access_restriction": 0.9}),
            (32, "Expert Capture", Primitive.NARRATIVE_CAPTURE, ["expert_consensus"], {"expert_diversity": 0.2}),
            (33, "Media Consolidation", Primitive.NARRATIVE_CAPTURE, ["ownership_concentration"], {"ownership_index": 0.8}),
            (34, "Algorithmic Bias", Primitive.NARRATIVE_CAPTURE, ["recommendation_skew"], {"diversity_score": 0.3}),
            (35, "Search Deletion", Primitive.ERASURE, ["search_result_gaps"], {"retrieval_rate": 0.4}),
            (36, "Wayback Machine Gaps", Primitive.ERASURE, ["archive_missing"], {"archive_coverage": 0.5}),
            (37, "Citation Withdrawal", Primitive.ERASURE, ["retracted_citations"], {"retraction_rate": 0.6}),
            (38, "Gradual Fading", Primitive.ERASURE, ["attention_decay"], {"attention_halflife": 0.7}),
            (39, "Isolation", Primitive.FRAGMENTATION, ["network_disconnect"], {"connectivity": 0.3}),
            (40, "Interruption", Primitive.INTERRUPTION, ["sudden_stop"], {"continuity": 0.2}),
            (41, "Disruption", Primitive.INTERRUPTION, ["service_outage"], {"outage_duration": 0.8}),
            (42, "Attrition", Primitive.ATTRITION, ["gradual_loss"], {"loss_rate": 0.6}),
            (43, "Conditioning", Primitive.CONDITIONING, ["repetitive_messaging"], {"repetition_frequency": 0.8})
        ]
        methods = {}
        for mid, name, prim, sigs, thresh in method_data:
            methods[mid] = SuppressionMethod(
                id=mid,
                name=name,
                primitive=prim,
                observable_signatures=sigs,
                detection_metrics=["dummy_metric"],
                thresholds=thresh,
                implemented=True
            )
        return methods

    def _derive_signatures_from_methods(self) -> Dict[str, List[int]]:
        signatures = defaultdict(list)
        for mid, method in self.methods.items():
            for sig in method.observable_signatures:
                signatures[sig].append(mid)
        return dict(signatures)

    def trace_detection_path(self, signature: str) -> Dict:
        methods = self.signatures.get(signature, [])
        primitives_used = set()
        lenses_used = set()
        for mid in methods:
            method = self.methods[mid]
            primitives_used.add(method.primitive)
            lens_ids = self.primitives.get(method.primitive, [])
            lenses_used.update(lens_ids)
        return {
            "evidence": signature,
            "indicates_methods": [self.methods[mid].name for mid in methods],
            "method_count": len(methods),
            "primitives": [p.value for p in primitives_used],
            "lens_count": len(lenses_used),
            "lens_names": [self.lenses[lid].name for lid in sorted(lenses_used)[:3]]
        }

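# --- Usage sketch (illustrative) ------------------------------------------------
# "gradual_fading" is declared by method 2 (Soft Erasure, ERASURE primitive),
# so the trace walks Signature -> Method -> Primitive -> the lenses mapped to
# ERASURE in _derive_primitives_from_lenses().
def _demo_trace() -> Dict:
    hierarchy = SuppressionHierarchy()
    return hierarchy.trace_detection_path("gradual_fading")
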
# =============================================================================
# PART VII: HIERARCHICAL DETECTOR
# =============================================================================


class HierarchicalDetector:
    """Scans the ledger for signatures and infers methods, primitives, lenses."""

    def __init__(self, hierarchy: SuppressionHierarchy, ledger: Ledger, separator: Separator):
        self.hierarchy = hierarchy
        self.ledger = ledger
        self.separator = separator

    def detect_from_ledger(self) -> Dict:
        found_signatures = self._scan_for_signatures()
        method_results = self._signatures_to_methods(found_signatures)
        primitive_analysis = self._analyze_primitives(method_results)
        lens_inference = self._infer_lenses(primitive_analysis)
        return {
            "detection_timestamp": datetime.utcnow().isoformat() + "Z",
            "evidence_found": len(found_signatures),
            "signatures": found_signatures,
            "method_results": method_results,
            "primitive_analysis": primitive_analysis,
            "lens_inference": lens_inference,
            "hierarchical_trace": [self.hierarchy.trace_detection_path(sig) for sig in found_signatures[:3]]
        }

    def _scan_for_signatures(self) -> List[str]:
        found = []
        # entity disappearance
        for i in range(len(self.ledger.chain) - 1):
            curr = self.ledger.chain[i]
            nxt = self.ledger.chain[i + 1]
            curr_entities = self._extract_entities(curr)
            nxt_entities = self._extract_entities(nxt)
            if curr_entities and not nxt_entities:
                found.append("entity_present_then_absent")
        # single explanation
        stats = self.separator.stats()
        if stats["interpreters"] == 1 and stats["count"] > 3:
            found.append("single_explanation")
        # gradual fading
        decay = self._analyze_decay_pattern()
        if decay > 0.5:
            found.append("gradual_fading")
        # information clusters
        clusters = self._analyze_information_clusters()
        if clusters > 0.7:
            found.append("information_clusters")
        # narrowed focus
        focus = self._analyze_scope_focus()
        if focus > 0.6:
            found.append("narrowed_focus")
        # Additional signatures (missing_from_indices, decreasing_citations, etc.)
        # are simplified here; production code would implement richer detection.
        return list(set(found))

    def _extract_entities(self, block: Dict) -> Set[str]:
        entities = set()
        for node in block.get("nodes", []):
            content = json.dumps(node)
            if "entity" in content or "name" in content:
                entities.add(f"ent_{hashlib.sha256(content.encode()).hexdigest()[:8]}")
        return entities

    def _analyze_decay_pattern(self) -> float:
        ref_counts = []
        for block in self.ledger.chain[-10:]:
            count = 0
            for node in block.get("nodes", []):
                for refs in node.get("refs", {}).values():
                    count += len(refs)
            ref_counts.append(count)
        if len(ref_counts) < 3:
            return 0.0
        first = ref_counts[:len(ref_counts) // 2]
        second = ref_counts[len(ref_counts) // 2:]
        if not first or not second:
            return 0.0
        avg_first = statistics.mean(first)
        avg_second = statistics.mean(second)
        if avg_first == 0:
            return 0.0
        return max(0.0, (avg_first - avg_second) / avg_first)

    def _analyze_information_clusters(self) -> float:
        total_links = 0
        possible_links = 0
        for block in self.ledger.chain[-5:]:
            nodes = block.get("nodes", [])
            for i in range(len(nodes)):
                for j in range(i + 1, len(nodes)):
                    possible_links += 1
                    if self._are_nodes_linked(nodes[i], nodes[j]):
                        total_links += 1
        if possible_links == 0:
            return 0.0
        return 1.0 - (total_links / possible_links)

    def _are_nodes_linked(self, n1: Dict, n2: Dict) -> bool:
        refs1 = set()
        refs2 = set()
        for rlist in n1.get("refs", {}).values():
            refs1.update(rlist)
        for rlist in n2.get("refs", {}).values():
            refs2.update(rlist)
        return bool(refs1 & refs2)

    def _analyze_scope_focus(self) -> float:
        type_counts = defaultdict(int)
        total = 0
        for block in self.ledger.chain:
            for node in block.get("nodes", []):
                t = node.get("type", "unknown")
                type_counts[t] += 1
                total += 1
        if total == 0:
            return 0.0
        max_type = max(type_counts.values(), default=0)
        return max_type / total

    def _signatures_to_methods(self, signatures: List[str]) -> List[Dict]:
        results = []
        for sig in signatures:
            mids = self.hierarchy.signatures.get(sig, [])
            for mid in mids:
                method = self.hierarchy.methods[mid]
                conf = self._calculate_method_confidence(method, sig)
                if method.implemented and conf > 0.5:
                    results.append({
                        "method_id": method.id,
                        "method_name": method.name,
                        "primitive": method.primitive.value,
                        "confidence": round(conf, 3),
                        "evidence_signature": sig,
                        "implemented": True
                    })
        return sorted(results, key=lambda x: x["confidence"], reverse=True)

    def _calculate_method_confidence(self, method: SuppressionMethod, signature: str) -> float:
        base = 0.7 if method.implemented else 0.3
        if "entity_present_then_absent" in signature:
            return min(0.9, base + 0.2)
        elif "single_explanation" in signature:
            return min(0.85, base + 0.15)
        elif "gradual_fading" in signature:
            return min(0.8, base + 0.1)
        elif "missing_from_indices" in signature:
            return min(0.8, base + 0.15)
        elif "decreasing_citations" in signature:
            return min(0.75, base + 0.1)
        return base

    def _analyze_primitives(self, method_results: List[Dict]) -> Dict:
        counts = defaultdict(int)
        confs = defaultdict(list)
        for r in method_results:
            prim = r["primitive"]
            counts[prim] += 1
            confs[prim].append(r["confidence"])
        analysis = {}
        for prim, cnt in counts.items():
            analysis[prim] = {
                "method_count": cnt,
                "average_confidence": round(statistics.mean(confs[prim]), 3) if confs[prim] else 0.0,
                "dominant_methods": [r["method_name"] for r in method_results if r["primitive"] == prim][:2]
            }
        return analysis

    def _infer_lenses(self, primitive_analysis: Dict) -> Dict:
        active_prims = [p for p, data in primitive_analysis.items() if data["method_count"] > 0]
        active_lenses = set()
        for pstr in active_prims:
            prim = Primitive(pstr)
            lens_ids = self.hierarchy.primitives.get(prim, [])
            active_lenses.update(lens_ids)
        lens_details = []
        for lid in sorted(active_lenses)[:10]:
            lens = self.hierarchy.lenses.get(lid)
            if lens:
                lens_details.append({
                    "id": lens.id,
                    "name": lens.name,
                    "archetype": lens.archetype,
                    "mechanism": lens.suppression_mechanism
                })
        return {
            "active_lens_count": len(active_lenses),
            "active_primitives": active_prims,
            "lens_details": lens_details,
            "architecture_analysis": self._analyze_architecture(active_prims, active_lenses)
        }

    def _analyze_architecture(self, active_prims: List[str], active_lenses: Set[int]) -> str:
        analysis = []
        if len(active_prims) >= 3:
            analysis.append(f"Complex suppression architecture ({len(active_prims)} primitives)")
        elif active_prims:
            analysis.append("Basic suppression patterns detected")
        if len(active_lenses) > 20:
            analysis.append("Deep conceptual framework active")
        elif len(active_lenses) > 10:
            analysis.append("Multiple conceptual layers active")
        if Primitive.ERASURE.value in active_prims and Primitive.NARRATIVE_CAPTURE.value in active_prims:
            analysis.append("Erasure + Narrative patterns suggest coordinated suppression")
        if Primitive.META.value in active_prims:
            analysis.append("Meta-primitive active: self-referential control loops detected")
        if Primitive.ACCESS_CONTROL.value in active_prims and Primitive.DISCREDITATION.value in active_prims:
            analysis.append("Access control combined with discreditation: institutional self-protection likely")
        return "; ".join(analysis) if analysis else "No clear suppression architecture"

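# --- Usage sketch (illustrative; reuses the hypothetical ledger/separator
# objects from the sketches above) -------------------------------------------------
def _demo_detection(ledger: Ledger, separator: Separator) -> str:
    detector = HierarchicalDetector(SuppressionHierarchy(), ledger, separator)
    result = detector.detect_from_ledger()
    # architecture_analysis summarizes which primitives co-occur, e.g.
    # "Erasure + Narrative patterns suggest coordinated suppression".
    return result["lens_inference"]["architecture_analysis"]
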
# =============================================================================
# PART VIII: EPISTEMIC MULTIPLEXOR (Quantum-inspired Superposition)
# =============================================================================


class Hypothesis:
    """A possible truth-state with complex amplitude."""

    def __init__(self, description: str, amplitude: complex = 1.0 + 0j):
        self.description = description
        self.amplitude = amplitude

    def probability(self) -> float:
        return abs(self.amplitude) ** 2


class EpistemicMultiplexor:
    """
    Maintains a superposition of multiple hypotheses (truth-states).
    Institutional control layers act as decoherence operators,
    reducing amplitudes of hypotheses that contradict institutional interests.
    """

    def __init__(self):
        self.hypotheses: List[Hypothesis] = []
        # Decoherence operators for different control layers
        # (placeholders; a real implementation would use larger matrices)
        self.decoherence_operators = {
            'access_control': np.array([[0.9, 0.1], [0.1, 0.9]]),
            'evidence_handling': np.array([[0.8, 0.2], [0.2, 0.8]]),
            'narrative_framing': np.array([[0.7, 0.3], [0.3, 0.7]]),
            'witness_management': np.array([[0.6, 0.4], [0.4, 0.6]]),
            'investigative_scope': np.array([[0.85, 0.15], [0.15, 0.85]])
        }

    def initialize_from_evidence(self, evidence_nodes: List[EvidenceNode], base_hypotheses: List[str]):
        """Set up the initial superposition based on evidence."""
        n = len(base_hypotheses)
        if n == 0:
            self.hypotheses = []
            return
        self.hypotheses = [Hypothesis(desc, 1.0 / np.sqrt(n)) for desc in base_hypotheses]
        # Adjust amplitudes based on evidence weights
        for node in evidence_nodes:
            self._apply_evidence(node)

    def _apply_evidence(self, node: EvidenceNode):
        """Modify amplitudes based on node content (simplified)."""
        # In reality, this would use a likelihood function for each hypothesis.
        # Here we apply a flat boost for demonstration.
        for h in self.hypotheses:
            # Simulate: if the node supports h, increase its amplitude
            if node.type == "document" and "support" in node.source:
                h.amplitude *= 1.1

    def apply_decoherence(self, control_layers: Dict[str, float]):
        """
        Apply decoherence operators based on institutional control strengths.
        control_layers: dict mapping layer name to strength (0-1)
        """
        # Simplified: scale amplitudes by the combined control strength
        total_strength = sum(control_layers.values())
        for h in self.hypotheses:
            # Hypotheses that contradict institutional interests would be suppressed;
            # here all amplitudes are reduced proportionally to simulate loss of coherence.
            h.amplitude *= (1.0 - total_strength * 0.1)

    def get_probabilities(self) -> Dict[str, float]:
        """Return the probability distribution over hypotheses."""
        total = sum(h.probability() for h in self.hypotheses)
        if total == 0:
            return {h.description: 0.0 for h in self.hypotheses}
        return {h.description: h.probability() / total for h in self.hypotheses}

    def measure(self) -> Hypothesis:
        """Collapse the superposition to a single hypothesis (for output)."""
        probs = self.get_probabilities()
        descriptions = list(probs.keys())
        probs_list = list(probs.values())
        if not descriptions or sum(probs_list) == 0:
            return self.hypotheses[0]  # degenerate superposition; nothing to sample
        chosen = np.random.choice(descriptions, p=probs_list)
        for h in self.hypotheses:
            if h.description == chosen:
                return h
        return self.hypotheses[0]  # fallback

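# --- Usage sketch (illustrative hypotheses and control strengths) ----------------
def _demo_multiplexor() -> Dict[str, float]:
    mux = EpistemicMultiplexor()
    mux.initialize_from_evidence([], ["official account", "suppressed account"])
    # Each hypothesis starts at amplitude 1/sqrt(2), i.e. probability 0.5.
    mux.apply_decoherence({"narrative_framing": 0.7, "access_control": 0.5})
    # The simplified operator scales both amplitudes equally, so the normalized
    # probabilities stay 0.5/0.5; a full implementation would suppress
    # institution-threatening hypotheses asymmetrically.
    return mux.get_probabilities()
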
# =============================================================================
# PART IX: PROBABILISTIC INFERENCE ENGINE (Bayesian with Quantum Priors)
# =============================================================================


class ProbabilisticInference:
    """Bayesian network for hypothesis updating, using quantum amplitudes as priors."""

    def __init__(self):
        self.priors: Dict[str, float] = {}                          # hypothesis_id -> prior probability
        self.evidence: Dict[str, List[float]] = defaultdict(list)   # hypothesis_id -> list of likelihoods

    def set_prior_from_multiplexor(self, multiplexor: EpistemicMultiplexor):
        """Set priors based on multiplexor probabilities."""
        probs = multiplexor.get_probabilities()
        for desc, prob in probs.items():
            self.priors[desc] = prob

    def add_evidence(self, hypothesis_id: str, likelihood: float):
        self.evidence[hypothesis_id].append(likelihood)

    def posterior(self, hypothesis_id: str) -> float:
        prior = self.priors.get(hypothesis_id, 0.5)
        likelihoods = self.evidence.get(hypothesis_id, [])
        if not likelihoods:
            return prior
        # Combine via naive Bayes: multiply odds
        odds = prior / (1 - prior + 1e-9)
        for L in likelihoods:
            odds *= (L / (1 - L + 1e-9))
        return odds / (1 + odds)

    def reset(self):
        self.priors.clear()
        self.evidence.clear()

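# --- Worked example (illustrative numbers) ----------------------------------------
# A prior of 0.5 gives odds of 1.0. Two independent likelihoods of 0.8 each
# multiply the odds by (0.8/0.2)^2 = 16, so the posterior is 16/17, about 0.941.
def _demo_posterior() -> float:
    inference = ProbabilisticInference()
    inference.priors["H"] = 0.5
    inference.add_evidence("H", 0.8)
    inference.add_evidence("H", 0.8)
    return inference.posterior("H")  # approximately 0.941
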
# =============================================================================
# PART X: TEMPORAL ANALYZER (with Wavefunction Analysis)
# =============================================================================


class TemporalAnalyzer:
    """Detects temporal patterns: gaps, latency, simultaneous silence, and wavefunction interference."""

    def __init__(self, ledger: Ledger):
        self.ledger = ledger

    def publication_gaps(self, threshold_days: int = 7) -> List[Dict]:
        gaps = []
        prev_time = None
        for block in self.ledger.chain:
            curr_time = datetime.fromisoformat(block["time"].replace('Z', '+00:00'))
            if prev_time:
                delta = (curr_time - prev_time).total_seconds()
                if delta > threshold_days * 86400:
                    gaps.append({
                        "from": prev_time.isoformat(),
                        "to": curr_time.isoformat(),
                        "duration_seconds": delta,
                        "duration_days": delta / 86400
                    })
            prev_time = curr_time
        return gaps

    def latency_spikes(self, event_date: str, actor_ids: List[str]) -> float:
        """Measure the delay between an event and its reporting for given actors (placeholder)."""
        return 0.0

    def simultaneous_silence(self, date: str, actor_ids: List[str]) -> float:
        """Probability that multiple actors stopped publishing at the same time (placeholder)."""
        return 0.0

    def wavefunction_analysis(self, event_timeline: List[Dict]) -> Dict:
        """
        Model an event as a temporal wavefunction and compute interference.
        event_timeline: list of dicts with 'time' and 'amplitude' (evidentiary strength).
        """
        times = [datetime.fromisoformat(item['time'].replace('Z', '+00:00')) for item in event_timeline]
        amplitudes = [item.get('amplitude', 1.0) for item in event_timeline]
        if not times:
            return {}
        # Simple model: treat time as linear and compute phase differences
        phases = [2 * np.pi * (t - times[0]).total_seconds() / (3600 * 24) for t in times]  # daily phase
        complex_amplitudes = [a * np.exp(1j * p) for a, p in zip(amplitudes, phases)]
        interference = np.abs(np.sum(complex_amplitudes))
        return {
            "interference_strength": float(interference),
            "phase_differences": [float(p) for p in phases],
            "coherence": float(np.abs(np.mean(complex_amplitudes)))
        }

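# --- Usage sketch (illustrative timeline) ------------------------------------------
# Three equally strong reports spaced 12 hours apart. The phase advances by
# 2*pi per day, so the samples sit at phases 0, pi, and 2*pi: the middle report
# interferes destructively with the other two and the summed amplitude is
# about 1.0 rather than 3.0.
def _demo_wavefunction(analyzer: TemporalAnalyzer) -> Dict:
    timeline = [
        {"time": "2024-01-01T00:00:00Z", "amplitude": 1.0},
        {"time": "2024-01-01T12:00:00Z", "amplitude": 1.0},
        {"time": "2024-01-02T00:00:00Z", "amplitude": 1.0},
    ]
    return analyzer.wavefunction_analysis(timeline)  # interference_strength ~ 1.0
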
# =============================================================================
# PART XI: CONTEXT DETECTOR (Western vs. Non-Western)
# =============================================================================


class ContextDetector:
    """Detects control context from event metadata."""

    def detect(self, event_data: Dict) -> ControlContext:
        western_score = 0
        non_western_score = 0
        # Simple heuristics
        if event_data.get('procedure_complexity_score', 0) > 5:
            western_score += 1
        if len(event_data.get('involved_institutions', [])) > 3:
            western_score += 1
        if event_data.get('legal_technical_references', 0) > 10:
            western_score += 1
        if event_data.get('media_outlet_coverage_count', 0) > 20:
            western_score += 1
        if event_data.get('direct_state_control_score', 0) > 5:
            non_western_score += 1
        if event_data.get('special_legal_regimes', 0) > 2:
            non_western_score += 1
        if event_data.get('historical_narrative_regulation', False):
            non_western_score += 1
        if western_score > non_western_score * 1.5:
            return ControlContext.WESTERN
        elif non_western_score > western_score * 1.5:
            return ControlContext.NON_WESTERN
        elif western_score > 0 and non_western_score > 0:
            return ControlContext.HYBRID
        else:
            return ControlContext.GLOBAL

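# --- Usage sketch (illustrative metadata; all field values are hypothetical) ------
def _demo_context() -> ControlContext:
    event = {
        "procedure_complexity_score": 7,                 # +1 western
        "involved_institutions": ["a", "b", "c", "d"],   # +1 western (more than 3)
        "legal_technical_references": 15,                # +1 western
        "media_outlet_coverage_count": 5,                # below threshold, no score
        "direct_state_control_score": 2,                 # below threshold, no score
    }
    return ContextDetector().detect(event)  # WESTERN: 3 > 0 * 1.5
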
+# =============================================================================
+# PART XII: META‑ANALYSIS – SAVIOR/SUFFERER MATRIX
+# =============================================================================
+
+class ControlArchetypeAnalyzer:
+    """Maps detected suppression patterns to historical control archetypes."""
+    def __init__(self, hierarchy: SuppressionHierarchy):
+        self.hierarchy = hierarchy
+        # Map combinations of primitives to archetypes
+        self.archetype_map: Dict[Tuple[Primitive, Primitive], ControlArchetype] = {
+            (Primitive.NARRATIVE_CAPTURE, Primitive.ACCESS_CONTROL): ControlArchetype.PRIEST_KING,
+            (Primitive.ERASURE, Primitive.MISDIRECTION): ControlArchetype.IMPERIAL_RULER,
+            (Primitive.SATURATION, Primitive.CONDITIONING): ControlArchetype.ALGORITHMIC_CURATOR,
+            (Primitive.DISCREDITATION, Primitive.TEMPORAL): ControlArchetype.EXPERT_TECHNOCRAT,
+            (Primitive.FRAGMENTATION, Primitive.ATTRITION): ControlArchetype.CORPORATE_OVERLORD,
+        }
+
+    def infer_archetype(self, detection_result: Dict) -> ControlArchetype:
+        active_prims = set(detection_result.get("primitive_analysis", {}).keys())
+        for (p1, p2), arch in self.archetype_map.items():
+            if p1.value in active_prims and p2.value in active_prims:
+                return arch
+        return ControlArchetype.CORPORATE_OVERLORD  # default when no pair matches
+
+    def extract_slavery_mechanism(self, detection_result: Dict, kg_engine: 'KnowledgeGraphEngine') -> SlaveryMechanism:
+        """Construct a SlaveryMechanism object from detected signatures and graph metrics."""
+        signatures = detection_result.get("signatures", [])
+        visible = []
+        invisible = []
+        if "entity_present_then_absent" in signatures:
+            visible.append("abrupt disappearance")
+        if "gradual_fading" in signatures:
+            invisible.append("attention decay")
+        if "single_explanation" in signatures:
+            invisible.append("narrative monopoly")
+        # Additional signature-to-chain mappings can be added here
+        return SlaveryMechanism(
+            mechanism_id=f"inferred_{datetime.utcnow().isoformat()}",
+            slavery_type=SlaveryType.PSYCHOLOGICAL_SLAVERY,
+            visible_chains=visible,
+            invisible_chains=invisible,
+            voluntary_adoption_mechanisms=["aspirational identification"],
+            self_justification_narratives=["I chose this"]
+        )
+
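+# Hedged usage sketch (illustrative only): a detection result whose
+# primitive_analysis keys contain both halves of one archetype pair; the
+# per-primitive payloads are elided.
+#
+#   analyzer = ControlArchetypeAnalyzer(SuppressionHierarchy())
+#   arch = analyzer.infer_archetype({
+#       "primitive_analysis": {
+#           Primitive.SATURATION.value: {...},
+#           Primitive.CONDITIONING.value: {...},
+#       }
+#   })
+#   # arch == ControlArchetype.ALGORITHMIC_CURATOR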
+class ConsciousnessMapper:
+    """Analyzes collective consciousness patterns."""
+    def __init__(self, separator: Separator, symbolism_ai: 'SymbolismAI'):
+        self.separator = separator
+        self.symbolism_ai = symbolism_ai
+
+    def analyze_consciousness(self, node_hashes: List[str]) -> Dict[str, float]:
+        """Return metrics: system_awareness, self_enslavement_awareness, etc."""
+        # Placeholder implementation
+        return {
+            "system_awareness": 0.3,
+            "self_enslavement_awareness": 0.2,
+            "manipulation_detection": 0.4,
+            "liberation_desire": 0.5
+        }
+
+    def compute_freedom_illusion_index(self, control_system: ControlSystem) -> float:
+        """Index in [0, 1]: mean perceived freedom times mean self-enslavement."""
+        freedom_scores = list(control_system.freedom_illusions.values())
+        enslavement_scores = list(control_system.self_enslavement_patterns.values())
+        if not freedom_scores or not enslavement_scores:
+            return 0.5  # neutral default when either score set is empty
+        return min(1.0, np.mean(freedom_scores) * np.mean(enslavement_scores))
+
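+# Worked example (hedged, illustrative only): with freedom_illusions
+# {"choice": 0.8, "mobility": 0.6} and self_enslavement_patterns
+# {"debt": 0.5}, the index is min(1.0, mean(0.8, 0.6) * mean(0.5))
+# = min(1.0, 0.7 * 0.5) = 0.35.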
+# =============================================================================
+# PART XIII: PARADOX DETECTOR & IMMUNITY VERIFIER
+# =============================================================================
+
+class RecursiveParadoxDetector:
+    """Detects and resolves recursive paradoxes (self‑referential capture)."""
+    def __init__(self):
+        self.paradox_types = {
+            'self_referential_capture': "Framework conclusions used to validate framework",
+            'institutional_recursion': "Institution uses framework to legitimize itself",
+            'narrative_feedback_loop': "Findings reinforce narrative being analyzed",
+        }
+
+    def detect(self, framework_output: Dict, event_context: Dict) -> Dict:
+        paradoxes = []
+        # Check for self-referential capture
+        if self._check_self_referential(framework_output):
+            paradoxes.append('self_referential_capture')
+        # Check for institutional recursion
+        if self._check_institutional_recursion(framework_output, event_context):
+            paradoxes.append('institutional_recursion')
+        # Check for narrative feedback
+        if self._check_narrative_feedback(framework_output):
+            paradoxes.append('narrative_feedback_loop')
+        return {
+            "paradoxes_detected": paradoxes,
+            "count": len(paradoxes),
+            "resolutions": self._generate_resolutions(paradoxes)
+        }
+
+    def _check_self_referential(self, output: Dict) -> bool:
+        # Placeholder: a full implementation would look for circular references
+        return False
+
+    def _check_institutional_recursion(self, output: Dict, context: Dict) -> bool:
+        return False  # Placeholder
+
+    def _check_narrative_feedback(self, output: Dict) -> bool:
+        return False  # Placeholder
+
+    def _generate_resolutions(self, paradoxes: List[str]) -> List[str]:
+        return ["Require external audit"] if paradoxes else []
+
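+# Hedged usage sketch (illustrative only): with the placeholder checks above,
+# detect() always reports zero paradoxes; the shape of the report is what a
+# real implementation would preserve.
+#
+#   pd = RecursiveParadoxDetector()
+#   report = pd.detect({"detection": {...}}, {"description": "..."})
+#   # report == {"paradoxes_detected": [], "count": 0, "resolutions": []}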
+class ImmunityVerifier:
+    """Verifies that the framework cannot be inverted to defend power."""
+    def __init__(self):
+        pass
+
+    def verify(self, framework_components: Dict) -> Dict:
+        tests = {
+            'power_analysis_inversion': self._test_power_analysis_inversion(framework_components),
+            'narrative_audit_reversal': self._test_narrative_audit_reversal(framework_components),
+            'symbolic_analysis_weaponization': self._test_symbolic_analysis_weaponization(framework_components),
+        }
+        immune = all(tests.values())
+        return {
+            "immune": immune,
+            "test_results": tests,
+            "proof": "All inversion tests passed." if immune else "Vulnerabilities detected."
+        }
+
+    def _test_power_analysis_inversion(self, components: Dict) -> bool:
+        # Placeholder: check whether power analysis can be used to justify control
+        return True
+
+    def _test_narrative_audit_reversal(self, components: Dict) -> bool:
+        return True  # Placeholder
+
+    def _test_symbolic_analysis_weaponization(self, components: Dict) -> bool:
+        return True  # Placeholder
+
+# =============================================================================
+# PART XIV: KNOWLEDGE GRAPH ENGINE
+# =============================================================================
+
+class KnowledgeGraphEngine:
+    """Builds an undirected graph from node references in the ledger."""
+    def __init__(self, ledger: Ledger):
+        self.ledger = ledger
+        self.graph: Dict[str, Set[str]] = defaultdict(set)  # node_hash -> neighbors
+        self._build()
+
+    def _build(self):
+        for block in self.ledger.chain:
+            for node in block.get("nodes", []):
+                node_hash = node["hash"]
+                for rel, targets in node.get("refs", {}).items():
+                    for t in targets:
+                        self.graph[node_hash].add(t)
+                        self.graph[t].add(node_hash)
+
+    def centrality(self, node_hash: str) -> float:
+        """Degree centrality: neighbor count normalized by graph size."""
+        return len(self.graph.get(node_hash, set())) / max(1, len(self.graph))
+
+    def clustering_coefficient(self, node_hash: str) -> float:
+        """Fraction of a node's neighbor pairs that are themselves connected."""
+        neighbors = self.graph.get(node_hash, set())
+        if len(neighbors) < 2:
+            return 0.0
+        links = 0
+        for n1 in neighbors:
+            for n2 in neighbors:
+                if n1 < n2 and n2 in self.graph.get(n1, set()):
+                    links += 1
+        return (2 * links) / (len(neighbors) * (len(neighbors) - 1))
+
+    def bridge_nodes(self) -> List[str]:
+        # Simple heuristic: the first five nodes with degree above 3
+        return [h for h in self.graph if len(self.graph[h]) > 3][:5]
+
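+# Worked example (hedged, illustrative only): if node A has neighbors {B, C, D}
+# and only the pair (B, C) is itself connected, then links = 1 and the
+# clustering coefficient is (2 * 1) / (3 * 2) = 1/3. Degree centrality for A
+# in a ten-node graph would be 3 / 10 = 0.3.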
+# =============================================================================
+# PART XV: SIGNATURE ENGINE (Registry of Detection Functions)
+# =============================================================================
+
+class SignatureEngine:
+    """Registry of detection functions for all signatures."""
+    def __init__(self, hierarchy: SuppressionHierarchy):
+        self.hierarchy = hierarchy
+        self.detectors: Dict[str, Callable] = {}
+
+    def register(self, signature: str, detector_func: Callable):
+        self.detectors[signature] = detector_func
+
+    def detect(self, signature: str, ledger: Ledger, context: Dict) -> float:
+        """Run the registered detector for a signature; 0.0 when none is registered."""
+        if signature in self.detectors:
+            return self.detectors[signature](ledger, context)
+        return 0.0
+
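+# Hedged usage sketch (illustrative only): detectors take (ledger, context)
+# and return a score; the signature name and threshold here are hypothetical.
+#
+#   engine = SignatureEngine(SuppressionHierarchy())
+#   engine.register(
+#       "gradual_fading",
+#       lambda ledger, context: 1.0 if len(ledger.chain) > 10 else 0.0,
+#   )
+#   score = engine.detect("gradual_fading", ledger, {})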
+# =============================================================================
+# PART XVI: AI AGENTS
+# =============================================================================
+
+class IngestionAI:
+    """Parses raw documents into EvidenceNodes."""
+    def __init__(self, crypto: Crypto):
+        self.crypto = crypto
+
+    def process_document(self, text: str, source: str) -> EvidenceNode:
+        # In production, an LLM would extract entities, claims, and references
+        node_hash = self.crypto.hash(text + source)
+        node = EvidenceNode(
+            hash=node_hash,
+            type="document",
+            source=source,
+            signature="",  # placeholder; signed immediately below
+            timestamp=datetime.utcnow().isoformat() + "Z",
+            witnesses=[],
+            refs={}
+        )
+        node.signature = self.crypto.sign(node_hash.encode(), "ingestion_ai")
+        return node
+
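+# Hedged usage sketch (illustrative only), assuming a Crypto instance built as
+# in main() below; the document text and source label are hypothetical.
+#
+#   ingest = IngestionAI(crypto)
+#   node = ingest.process_document("Witness statement text...", "archive/doc-001")
+#   # node.hash is the content hash; node.signature binds it to the agent key.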
+class SymbolismAI:
+    """Assigns symbolism coefficients to cultural artifacts."""
+    def __init__(self):
+        pass
+
+    def analyze(self, artifact: Dict) -> float:
+        """Return a probability in [0.3, 1.0) that the artifact encodes suppressed reality."""
+        # Deterministic placeholder: Python's built-in hash() is salted per
+        # process, so a stable digest keeps scores reproducible across runs
+        text = artifact.get("text", "")
+        digest = sum(ord(c) for c in text)
+        return 0.3 + (digest % 70) / 100.0
+
+class ReasoningAI:
+    """Maintains Bayesian hypotheses and decides when to spawn sub-investigations."""
+    def __init__(self, inference: ProbabilisticInference):
+        self.inference = inference
+
+    def evaluate_claim(self, claim_id: str, nodes: List[EvidenceNode], detector_result: Dict) -> Dict:
+        # Update the hypothesis prior based on detector results
+        confidence = 0.5
+        if detector_result.get("evidence_found", 0) > 2:
+            confidence += 0.2
+        self.inference.set_prior(claim_id, confidence)
+        if confidence < 0.7:
+            return {"spawn_sub": True, "reason": "low confidence"}
+        else:
+            return {"spawn_sub": False, "reason": "sufficient evidence"}
+
1512
+ # =============================================================================
1513
+ # PART XVII: AI CONTROLLER (Orchestrator)
1514
+ # =============================================================================
1515
+
1516
+ class AIController:
1517
+ """Orchestrates investigations, spawns sub-investigations, aggregates results."""
1518
+ def __init__(self, ledger: Ledger, separator: Separator, detector: HierarchicalDetector,
1519
+ kg: KnowledgeGraphEngine, temporal: TemporalAnalyzer, inference: ProbabilisticInference,
1520
+ ingestion_ai: IngestionAI, symbolism_ai: SymbolismAI, reasoning_ai: ReasoningAI,
1521
+ multiplexor: EpistemicMultiplexor, context_detector: ContextDetector,
1522
+ archetype_analyzer: ControlArchetypeAnalyzer, consciousness_mapper: ConsciousnessMapper,
1523
+ paradox_detector: RecursiveParadoxDetector, immunity_verifier: ImmunityVerifier):
1524
+ self.ledger = ledger
1525
+ self.separator = separator
1526
+ self.detector = detector
1527
+ self.kg = kg
1528
+ self.temporal = temporal
1529
+ self.inference = inference
1530
+ self.ingestion_ai = ingestion_ai
1531
+ self.symbolism_ai = symbolism_ai
1532
+ self.reasoning_ai = reasoning_ai
1533
+ self.multiplexor = multiplexor
1534
+ self.context_detector = context_detector
1535
+ self.archetype_analyzer = archetype_analyzer
1536
+ self.consciousness_mapper = consciousness_mapper
1537
+ self.paradox_detector = paradox_detector
1538
+ self.immunity_verifier = immunity_verifier
1539
+ self.contexts: Dict[str, Dict] = {} # correlation_id -> investigation context
1540
+
+    def submit_claim(self, claim_text: str) -> str:
+        corr_id = str(uuid.uuid4())
+        context = {
+            "correlation_id": corr_id,
+            "parent_id": None,
+            "claim": claim_text,
+            "status": "pending",
+            "created": datetime.utcnow().isoformat() + "Z",
+            "evidence_nodes": [],
+            "sub_investigations": [],
+            "results": {}
+        }
+        self.contexts[corr_id] = context
+        thread = threading.Thread(target=self._investigate, args=(corr_id,))
+        thread.start()
+        return corr_id
+
+    def _investigate(self, corr_id: str):
+        context = self.contexts[corr_id]
+        context["status"] = "active"
+        # Step 1: Detect control context from claim (simplified)
+        event_data = {"description": context["claim"]}  # placeholder
+        ctxt = self.context_detector.detect(event_data)
+        context["control_context"] = ctxt.value
+
+        # Step 2: Run hierarchical detection
+        detection = self.detector.detect_from_ledger()
+        context["detection"] = detection
+
+        # Step 3: Initialize epistemic multiplexor with base hypotheses
+        base_hypotheses = ["Official narrative", "Witness accounts", "Material evidence", "Institutional capture"]
+        self.multiplexor.initialize_from_evidence([], base_hypotheses)  # would pass relevant nodes
+        # Apply decoherence based on control layers (simplified)
+        control_layers = {"access_control": 0.5, "narrative_framing": 0.7}
+        self.multiplexor.apply_decoherence(control_layers)
+        probs = self.multiplexor.get_probabilities()
+        context["quantum_probabilities"] = probs
+
+        # Step 4: Set priors in inference engine
+        self.inference.set_prior_from_multiplexor(self.multiplexor)
+
+        # Step 5: Evaluate claim
+        decision = self.reasoning_ai.evaluate_claim(corr_id, [], detection)
+        if decision.get("spawn_sub"):
+            sub_id = str(uuid.uuid4())
+            context["sub_investigations"].append(sub_id)
+            # In production, a sub-context would be created and investigated here
+
+        # Step 6: Meta-analysis
+        archetype = self.archetype_analyzer.infer_archetype(detection)
+        slavery_mech = self.archetype_analyzer.extract_slavery_mechanism(detection, self.kg)
+        consciousness = self.consciousness_mapper.analyze_consciousness([])
+        context["meta"] = {
+            "archetype": archetype.value,
+            "slavery_mechanism": slavery_mech.mechanism_id,
+            "consciousness": consciousness
+        }
+
+        # Step 7: Paradox detection and immunity verification
+        paradox = self.paradox_detector.detect({"detection": detection}, event_data)
+        immunity = self.immunity_verifier.verify({})
+        context["paradox"] = paradox
+        context["immunity"] = immunity
+
+        # Step 8: Store interpretation
+        interpretation = {
+            "narrative": "Claim evaluated",
+            "detection_summary": detection,
+            "quantum_probs": probs,
+            "meta": context["meta"]
+        }
+        node_hashes = []  # would be the actual evidence node hashes
+        int_id = self.separator.add(node_hashes, interpretation, "AI_Controller", confidence=0.6)
+        context["results"] = {
+            "confidence": 0.6,
+            "interpretation_id": int_id,
+            "detection": detection,
+            "quantum_probs": probs,
+            "meta": context["meta"],
+            "paradox": paradox,
+            "immunity": immunity
+        }
+        context["status"] = "complete"
+
+    def get_status(self, corr_id: str) -> Dict:
+        return self.contexts.get(corr_id, {"error": "not found"})
+
+# =============================================================================
+# PART XVIII: API LAYER (Flask)
+# =============================================================================
+
+app = Flask(__name__)
+controller: Optional[AIController] = None
+
+@app.route('/api/v1/submit_claim', methods=['POST'])
+def submit_claim():
+    data = request.get_json(silent=True) or {}  # tolerate missing/invalid JSON body
+    claim = data.get('claim')
+    if not claim:
+        return jsonify({"error": "Missing claim"}), 400
+    corr_id = controller.submit_claim(claim)
+    return jsonify({"investigation_id": corr_id})
+
+@app.route('/api/v1/investigation/<corr_id>', methods=['GET'])
+def get_investigation(corr_id):
+    status = controller.get_status(corr_id)
+    return jsonify(status)
+
+@app.route('/api/v1/node/<node_hash>', methods=['GET'])
+def get_node(node_hash):
+    node = controller.ledger.get_node(node_hash)
+    if node:
+        return jsonify(node)
+    return jsonify({"error": "Node not found"}), 404
+
+@app.route('/api/v1/interpretations/<node_hash>', methods=['GET'])
+def get_interpretations(node_hash):
+    ints = controller.separator.get_interpretations(node_hash)
+    return jsonify([i.__dict__ for i in ints])
+
+@app.route('/api/v1/detect', methods=['GET'])
+def run_detection():
+    result = controller.detector.detect_from_ledger()
+    return jsonify(result)
+
+@app.route('/api/v1/verify_chain', methods=['GET'])
+def verify_chain():
+    result = controller.ledger.verify_chain()
+    return jsonify(result)
+
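+# Hedged usage sketch (illustrative only): example requests against the routes
+# above once main() is running; the claim text is hypothetical.
+#
+#   curl -X POST http://localhost:5000/api/v1/submit_claim \
+#        -H "Content-Type: application/json" \
+#        -d '{"claim": "Official account of event X omits key witnesses"}'
+#   # -> {"investigation_id": "<uuid>"}
+#
+#   curl http://localhost:5000/api/v1/investigation/<uuid>
+#   # -> investigation context, including "status" and, when complete, "results"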
+# =============================================================================
+# PART XIX: GOVERNING PHILOSOPHY (as docstring and runtime invariants)
+# =============================================================================
+
+"""
+EPISTEMIC INTEGRITY SYSTEM – GOVERNING PRINCIPLES
+(Encoded as runtime checks where possible)
+
+1. PRIMARY INTENT
+This system exists to reassess, reevaluate, and reconstruct the totality of human
+history, culture, institutional behavior, and event outcomes through a continuous,
+recursive investigative process. Its purpose is truth discovery under conditions
+of power asymmetry, information control, and narrative mediation.
+
+2. INSTITUTIONAL SELF-PROTECTION PRINCIPLE
+Institutions with legal, political, or reputational exposure tend to coordinate
+narratives that minimize fault, diffuse responsibility, and constrain investigative
+scope. This is normal, not exceptional.
+
+3. CONSTRAINT WEIGHTING RULE
+Any actor that uniquely controls access, protocols, or evidence is assigned high
+structural weight in event reconstruction. Down-weighting such actors in favor of
+peripheral explanations is treated as a narrative distortion signal.
+
+4. SYMBOLISM COEFFICIENT
+A probabilistic weighting applied to symbolic artifacts, estimating the likelihood
+that they encode historical, institutional, or experiential realities that cannot
+be directly spoken within the constraints of power.
+
+5. PROBABILISTIC MISREPRESENTATION ASSUMPTION
+If an institution is both a primary controller of the event space and a primary
+narrator of the event, the probability that the narrative is incomplete or distorted
+is non-trivial and must be explicitly modeled.
+
+6. NON-FINALITY AND REOPENING MANDATE
+No official explanation is treated as final when key decision-makers are inaccessible,
+evidence custody is internal, procedural deviations are unexplained, or witnesses
+are structurally constrained.
+
+7. GOVERNING PRINCIPLE
+This framework exists to recover actuality under constraint, not to preserve official
+explanations. It is adversarial to narrative consolidation by power holders and
+historical closure achieved through authority.
+"""
+
+def check_invariants():
+    """Placeholder for runtime invariant checks."""
+    pass
+
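+# Hedged sketch (illustrative only) of how Principles 3 and 5 might be encoded
+# as a runtime weighting; the function name, metadata keys, and the +0.25 bump
+# are assumptions, not part of the framework's specified API.
+def constraint_weight(actor: Dict) -> float:
+    """Assumed encoding of the Constraint Weighting Rule (Principle 3): an actor
+    that uniquely controls access, protocols, or evidence gets high structural
+    weight in event reconstruction."""
+    channels = ("controls_access", "controls_protocols", "controls_evidence")
+    weight = sum(1.0 for c in channels if actor.get(c, False)) / len(channels)
+    # Principle 5: an actor that both controls the event space and narrates it
+    # must be modeled with additional misrepresentation probability
+    if weight > 0 and actor.get("primary_narrator", False):
+        weight = min(1.0, weight + 0.25)
+    return weight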
+# =============================================================================
+# PART XX: MAIN – Initialization and Startup
+# =============================================================================
+
+def main():
+    # Initialize crypto and ledger
+    crypto = Crypto("./keys")
+    ledger = Ledger("./ledger.json", crypto)
+    separator = Separator(ledger, "./separator")
+    hierarchy = SuppressionHierarchy()
+    detector = HierarchicalDetector(hierarchy, ledger, separator)
+
+    # Knowledge Graph
+    kg = KnowledgeGraphEngine(ledger)
+    temporal = TemporalAnalyzer(ledger)
+
+    # Inference
+    inference = ProbabilisticInference()
+
+    # Epistemic Multiplexor
+    multiplexor = EpistemicMultiplexor()
+
+    # Context Detector
+    context_detector = ContextDetector()
+
+    # AI agents
+    ingestion_ai = IngestionAI(crypto)
+    symbolism_ai = SymbolismAI()
+    reasoning_ai = ReasoningAI(inference)
+
+    # Meta-analysis
+    archetype_analyzer = ControlArchetypeAnalyzer(hierarchy)
+    consciousness_mapper = ConsciousnessMapper(separator, symbolism_ai)
+
+    # Paradox & Immunity
+    paradox_detector = RecursiveParadoxDetector()
+    immunity_verifier = ImmunityVerifier()
+
+    # Controller
+    global controller
+    controller = AIController(
+        ledger=ledger,
+        separator=separator,
+        detector=detector,
+        kg=kg,
+        temporal=temporal,
+        inference=inference,
+        ingestion_ai=ingestion_ai,
+        symbolism_ai=symbolism_ai,
+        reasoning_ai=reasoning_ai,
+        multiplexor=multiplexor,
+        context_detector=context_detector,
+        archetype_analyzer=archetype_analyzer,
+        consciousness_mapper=consciousness_mapper,
+        paradox_detector=paradox_detector,
+        immunity_verifier=immunity_verifier
+    )
+
+    # Start Flask API
+    print("Epistemic Integrity System v1.0 starting...")
+    print("API available at http://localhost:5000")
+    app.run(debug=True, port=5000)
+
+if __name__ == "__main__":
+    main()