Taylor committed on
Commit c1fa6bf · 1 parent: e1618af

feat: Act 4 -- Five Bules personality demo


Five personality profiles on fork/race/fold/vent/interfere axes:
- Explorer: high temp (0.8-1.6), wide top-p (0.98), forks broadly
- Builder: low temp (0.2-0.5), narrow top-p (0.70), folds tightly
- Creative: medium-high temp, aggressive C3 (threshold=2), races freely
- Anxious: medium temp, frequent C3 (threshold=2), interferes early
- Balanced: standard glossolalia, phi convergence

Four swappable models:
- buleyean-smollm2 (360M, void-trained)
- base-smollm2 (360M, standard instruct)
- buleyean-qwen (0.5B, void-trained)
- base-qwen (0.5B, standard instruct)

All five personalities generate in parallel. Each result appears
as soon as it finishes. Amber accent.

THM-FIVE-BULE-PERSONALITY + THM-PHI-ATTRACTOR (Lean 4).
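
A sketch of the decoder math all five profiles share (matching glossolaliaMerge
in aether-server.mjs below: z = raw logits, tau ranges over the profile's temps,
H = Shannon entropy, V = vocab size), plus the golden-ratio identity the demo
copy cites:

```latex
p_\tau = \mathrm{softmax}(z/\tau), \qquad
w_\tau = \max\!\left(1 - \frac{H(p_\tau)}{\ln V},\; 10^{-8}\right), \qquad
\bar{p} = \frac{\sum_\tau w_\tau \, p_\tau}{\sum_\tau w_\tau}
```

```latex
\varphi^2 = \varphi + 1 \;\Rightarrow\;
\varphi_{\mathrm{inv}} = \varphi - 1 = \frac{1}{\varphi} = \frac{\sqrt{5}-1}{2} \approx 0.618
```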

Files changed (6)
  1. Dockerfile +18 -0
  2. README.md +11 -7
  3. aether-server.mjs +679 -0
  4. app.py +199 -0
  5. requirements.txt +2 -0
  6. simd-kernels.wasm +3 -0
Dockerfile ADDED
@@ -0,0 +1,18 @@
+ FROM python:3.11-slim
+
+ RUN apt-get update && apt-get install -y curl && \
+     curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \
+     apt-get install -y nodejs && \
+     rm -rf /var/lib/apt/lists/*
+
+ WORKDIR /app
+
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ COPY app.py aether-server.mjs simd-kernels.wasm ./
+
+ RUN mkdir -p /tmp/hf_cache
+ ENV PYTHONUNBUFFERED=1 HF_HOME=/tmp/hf_cache
+ EXPOSE 7860
+ CMD ["python", "app.py"]
README.md CHANGED
@@ -1,10 +1,14 @@
  ---
- title: Five Bules
- emoji: 🐠
- colorFrom: blue
- colorTo: red
+ title: Five Bules - Personality as Void Walking
+ emoji: "\U0001F3AD"
+ colorFrom: yellow
+ colorTo: yellow
  sdk: docker
- pinned: false
+ app_port: 7860
+ pinned: true
+ models:
+   - bartowski/SmolLM2-360M-Instruct-GGUF
+   - forkjoin-ai/buleyean-smollm2-360m
+   - bartowski/Qwen2.5-0.5B-Instruct-GGUF
+   - forkjoin-ai/buleyean-qwen2.5-0.5b
  ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
aether-server.mjs ADDED
@@ -0,0 +1,679 @@
+ /**
+  * Aether Inference Server with Glossolalia Decoder
+  *
+  * SmolLM2-360M / Qwen2.5-0.5B inference using WASM SIMD kernels.
+  * Endpoints:
+  *   /generate-standard    -- standard top-p sampling
+  *   /generate-glossolalia -- temperature-ensemble fork/race/fold
+  *   /generate-metacog     -- glossolalia + C2/C3 metacognitive monitoring
+  *   /generate-personality -- glossolalia + C3 under a personality profile
+  */
+
+ import { createServer } from 'http';
+ import { readFileSync, existsSync } from 'fs';
+ import { execSync } from 'child_process';
+ import { fileURLToPath } from 'url';
+ import { dirname, join } from 'path';
+
+ const __dirname = dirname(fileURLToPath(import.meta.url));
+ const PORT = parseInt(process.env.AETHER_PORT || '7861');
+
+ // ─── Model Configs ──────────────────────────────────────────────────────────
+ const CONFIGS = {
+   'smollm2-360m': {
+     hiddenDim: 960, numLayers: 32, numHeads: 15, numKvHeads: 5,
+     headDim: 64, intermediateSize: 2560, vocabSize: 49152,
+     ropeTheta: 100000.0, rmsNormEps: 1e-5, eosToken: 2,
+   },
+   'qwen2.5-0.5b': {
+     hiddenDim: 896, numLayers: 24, numHeads: 14, numKvHeads: 2,
+     headDim: 64, intermediateSize: 4864, vocabSize: 151936,
+     ropeTheta: 1000000.0, rmsNormEps: 1e-6, eosToken: 151645, // <|im_end|>
+   },
+ };
+
+ // Five Bule Personality Profiles (THM-FIVE-BULE-PERSONALITY)
+ // Each personality is a position on the fork/race/fold/vent/interfere axes
+ const PERSONALITIES = {
+   explorer: { temps: [0.8, 1.2, 1.6], topP: 0.98, absorbingThreshold: 5, label: 'Explorer -- forks broadly, high temperature diversity' },
+   builder:  { temps: [0.2, 0.3, 0.5], topP: 0.70, absorbingThreshold: 4, label: 'Builder -- folds tightly, low temperature, precise' },
+   creative: { temps: [0.6, 1.0, 1.4], topP: 0.95, absorbingThreshold: 2, label: 'Creative -- races freely, aggressive C3 perturbation' },
+   anxious:  { temps: [0.4, 0.6, 0.8], topP: 0.85, absorbingThreshold: 2, label: 'Anxious -- interferes early, cautious, frequent C3' },
+   balanced: { temps: [0.4, 0.7, 1.0], topP: 0.90, absorbingThreshold: 3, label: 'Balanced -- standard glossolalia, phi convergence' },
+ };
+
+ // Default config (overridden per-model)
+ let C = CONFIGS['qwen2.5-0.5b'];
+ let kvDim = C.numKvHeads * C.headDim;
+ let gqaRatio = C.numHeads / C.numKvHeads;
+
+ // ─── WASM SIMD ──────────────────────────────────────────────────────────────
+ let simd = null;
+
+ async function loadSIMD() {
+   const p = join(__dirname, 'simd-kernels.wasm');
+   if (!existsSync(p)) return null;
+   try {
+     const { instance } = await WebAssembly.instantiate(readFileSync(p), {
+       env: { expf: Math.exp, tanhf: Math.tanh, powf: Math.pow },
+     });
+     const w = instance.exports; w.resetHeap(65536);
+     const mem = w.memory;
+     const hf = () => new Float32Array(mem.buffer);
+     const cp = (ptr, f) => hf().set(f, ptr >> 2);
+     const rd = (ptr, n) => hf().slice(ptr >> 2, (ptr >> 2) + n);
+     const wrap = (fn) => (...args) => { const s = w.getHeapPtr(); try { return fn(s, ...args); } finally { w.resetHeap(s); } };
+     console.log('[Aether] WASM SIMD loaded');
+     return {
+       matVec: wrap((s, mat, vec, rows, cols) => {
+         if (mat.byteLength > 100_000_000) return matVecJS(mat, vec, rows, cols);
+         const mP=w.allocate(mat.byteLength),vP=w.allocate(vec.byteLength),rP=w.allocate(rows*4);
+         cp(mP,mat);cp(vP,vec);w.matVecSimdBatch4(mP,vP,rP,rows,cols);return rd(rP,rows);
+       }),
+       rmsNorm: wrap((s,x,wt,eps) => {
+         const xP=w.allocate(x.byteLength),wP=w.allocate(wt.byteLength),rP=w.allocate(x.byteLength);
+         cp(xP,x);cp(wP,wt);w.rmsNormSimd(xP,wP,rP,x.length,eps);return rd(rP,x.length);
+       }),
+       softmax: wrap((s,x) => {
+         const xP=w.allocate(x.byteLength),rP=w.allocate(x.byteLength);
+         cp(xP,x);w.softmaxSimd(xP,rP,x.length);return rd(rP,x.length);
+       }),
+       fusedSiluMul: wrap((s,g,u) => {
+         const gP=w.allocate(g.byteLength),uP=w.allocate(u.byteLength),rP=w.allocate(g.byteLength);
+         cp(gP,g);cp(uP,u);w.fusedSiluMul(gP,uP,rP,g.length);return rd(rP,g.length);
+       }),
+       add: wrap((s,a,b) => {
+         const aP=w.allocate(a.byteLength),bP=w.allocate(b.byteLength),rP=w.allocate(a.byteLength);
+         cp(aP,a);cp(bP,b);w.addSimd(aP,bP,rP,a.length);return rd(rP,a.length);
+       }),
+     };
+   } catch(e) { console.warn('[Aether] WASM failed:',e.message); return null; }
+ }
+
+ // ─── JS Fallbacks ───────────────────────────────────────────────────────────
+ function matVecJS(m,v,rows,cols){const o=new Float32Array(rows);for(let r=0;r<rows;r++){let s=0;const off=r*cols;for(let c=0;c<cols;c++)s+=m[off+c]*v[c];o[r]=s;}return o;}
+ function rmsNormJS(x,w,eps){let ss=0;for(let i=0;i<x.length;i++)ss+=x[i]*x[i];ss=1/Math.sqrt(ss/x.length+eps);const o=new Float32Array(x.length);for(let i=0;i<x.length;i++)o[i]=x[i]*ss*w[i];return o;}
+ function softmaxJS(x){let mx=-Infinity;for(let i=0;i<x.length;i++)if(x[i]>mx)mx=x[i];const o=new Float32Array(x.length);let s=0;for(let i=0;i<x.length;i++){o[i]=Math.exp(x[i]-mx);s+=o[i];}for(let i=0;i<x.length;i++)o[i]/=s;return o;}
+ function fusedSiluMulJS(g,u){const o=new Float32Array(g.length);for(let i=0;i<g.length;i++){const v=g[i];o[i]=(v/(1+Math.exp(-v)))*u[i];}return o;}
+ function addJS(a,b){const o=new Float32Array(a.length);for(let i=0;i<a.length;i++)o[i]=a[i]+b[i];return o;}
+ const op = () => ({ matVec:simd?.matVec||matVecJS, rmsNorm:simd?.rmsNorm||rmsNormJS, softmax:simd?.softmax||softmaxJS, fusedSiluMul:simd?.fusedSiluMul||fusedSiluMulJS, add:simd?.add||addJS });
+
+ // ─── Q8_0 Dequant ───────────────────────────────────────────────────────────
+ function fp16(lo,hi){const h=lo|(hi<<8),s=(h>>15)&1,e=(h>>10)&0x1f,f=h&0x3ff;if(e===0)return f===0?0:(s?-1:1)*(f/1024)*Math.pow(2,-14);if(e===31)return 0;return(s?-1:1)*Math.pow(2,e-15)*(1+f/1024);}
+ function dequantQ8(data,n){const o=new Float32Array(n),nb=Math.ceil(n/32);for(let b=0;b<nb;b++){const off=b*34,sc=fp16(data[off],data[off+1]);const cnt=Math.min(32,n-b*32);for(let i=0;i<cnt;i++){const v=data[off+2+i];o[b*32+i]=(v>127?v-256:v)*sc;}}return o;}
+ function dequantByType(data,n,type){if(type===0)return new Float32Array(data.buffer,data.byteOffset,n);if(type===8)return dequantQ8(data,n);if(type===1){const o=new Float32Array(n);for(let i=0;i<n;i++)o[i]=fp16(data[i*2],data[i*2+1]);return o;}return dequantQ8(data,n);}
+
+ // ─── GGUF Parser ────────────────────────────────────────────────────────────
+ const MAGIC=0x46554747;const BSZ={2:32,3:32,6:32,7:32,8:32,9:32,10:256,11:256,12:256,13:256,14:256,15:256};const BBY={2:18,3:20,6:22,7:24,8:34,9:36,10:84,11:110,12:144,13:176,14:210,15:292};const TSZ={0:4,1:2,16:1,17:2,18:4,19:8,20:8};
+ function csz(d,t){let n=1n;for(const x of d)n*=x;const b=BSZ[t];if(b&&BBY[t])return Math.ceil(Number(n)/b)*BBY[t];return Math.ceil(Number(n)*(TSZ[t]??4));}
+ function rs(b,o){const l=Number(b.readBigUInt64LE(o));return{v:b.subarray(o+8,o+8+l).toString('utf8'),o:o+8+l};}
+ function rv(b,o,t){switch(t){case 0:return{v:b.readUInt8(o),o:o+1};case 1:return{v:b.readInt8(o),o:o+1};case 2:return{v:b.readUInt16LE(o),o:o+2};case 3:return{v:b.readInt16LE(o),o:o+2};case 4:return{v:b.readUInt32LE(o),o:o+4};case 5:return{v:b.readInt32LE(o),o:o+4};case 6:return{v:b.readFloatLE(o),o:o+4};case 7:return{v:b.readUInt8(o)!==0,o:o+1};case 8:{const r=rs(b,o);return{v:r.v,o:r.o};}case 10:return{v:b.readBigUInt64LE(o),o:o+8};case 11:return{v:b.readBigInt64LE(o),o:o+8};case 12:return{v:b.readDoubleLE(o),o:o+8};case 9:{const at=b.readUInt32LE(o),al=Number(b.readBigUInt64LE(o+4));let co=o+12;const a=[];for(let i=0;i<al;i++){const r=rv(b,co,at);a.push(r.v);co=r.o;}return{v:a,o:co};}default:throw new Error(`Unknown GGUF type ${t}`);}}
+ function parseGGUF(buf){let o=0;if(buf.readUInt32LE(o)!==MAGIC)throw new Error('Not GGUF');o+=4;o+=4;const tc=Number(buf.readBigUInt64LE(o));o+=8;const kc=Number(buf.readBigUInt64LE(o));o+=8;let align=32;for(let i=0;i<kc;i++){const{v:k,o:o1}=rs(buf,o);o=o1;const vt=buf.readUInt32LE(o);o+=4;const{v,o:o2}=rv(buf,o,vt);o=o2;if(k==='general.alignment')align=Number(v);}const tensors=[];for(let i=0;i<tc;i++){const{v:name,o:o1}=rs(buf,o);o=o1;const nd=buf.readUInt32LE(o);o+=4;const dims=[];for(let d=0;d<nd;d++){dims.push(buf.readBigUInt64LE(o));o+=8;}const type=buf.readUInt32LE(o);o+=4;const offset=buf.readBigUInt64LE(o);o+=8;tensors.push({name,dims,type,offset,size:csz(dims,type),numElements:Number(dims.reduce((a,b)=>a*b,1n))});}return{tensors,dataOffset:Math.ceil(o/align)*align};}
+
+ // ─── BPE Tokenizer ──────────────────────────────────────────────────────────
+ class Tok{constructor(j){const m=j.model||{};this.vocab=m.vocab||{};this.rev={};for(const[t,id]of Object.entries(this.vocab))this.rev[id]=t;this.mr={};for(const[i,mg]of(m.merges||[]).entries())this.mr[mg]=i;this.added={};if(j.added_tokens)for(const t of j.added_tokens)this.added[t.content]=t.id;}
+ encode(text){const sp=/<\|[^|]+\|>/g;const parts=[];let last=0,m;while((m=sp.exec(text))!==null){if(m.index>last)parts.push({t:text.slice(last,m.index),s:false});parts.push({t:m[0],s:true});last=m.index+m[0].length;}if(last<text.length)parts.push({t:text.slice(last),s:false});const tokens=[];for(const p of parts){if(p.s){const id=this.added[p.t]??this.vocab[p.t];if(id!==undefined)tokens.push(id);continue;}const words=p.t.match(/\S+|\s+/g)||[];for(const w of words){let syms=[];for(const ch of w){if(this.vocab[ch]!==undefined)syms.push(ch);else for(const b of Buffer.from(ch,'utf8'))syms.push(`<0x${b.toString(16).toUpperCase().padStart(2,'0')}>`)}while(syms.length>1){let best=Infinity,bi=-1;for(let i=0;i<syms.length-1;i++){const r=this.mr[`${syms[i]} ${syms[i+1]}`];if(r!==undefined&&r<best){best=r;bi=i;}}if(bi===-1)break;syms.splice(bi,2,syms[bi]+syms[bi+1]);}for(const s of syms){const id=this.vocab[s]??this.added[s];if(id!==undefined)tokens.push(id);}}}return tokens;}
+ decode(tokens){const p=[];for(const t of tokens){const s=this.rev[t];if(s&&s.startsWith('<0x')&&s.endsWith('>'))p.push(String.fromCharCode(parseInt(s.slice(3,-1),16)));else if(s&&!s.startsWith('<|'))p.push(s);}return p.join('').replace(/Ġ/g,' ').replace(/Ċ/g,'\n');}}
+
+ // ─── RoPE (LLaMA style: ADJACENT pairs) ─────────────────────────────────────
+ function applyRoPE(x, headDim, position, theta) {
+   for (let i = 0; i < headDim; i += 2) {
+     const freq = 1.0 / Math.pow(theta, (2 * (i/2)) / headDim);
+     const angle = position * freq;
+     const cos = Math.cos(angle), sin = Math.sin(angle);
+     const x0 = x[i], x1 = x[i + 1];
+     x[i] = x0 * cos - x1 * sin;
+     x[i + 1] = x0 * sin + x1 * cos;
+   }
+ }
+
+ // ─── Models ─────────────────────────────────────────────────────────────────
+ const models = {};
+ let activeModel = null;
+
+ function loadModel(name, ggufPath, tokPath, configName) {
+   const cfg = CONFIGS[configName] || CONFIGS['smollm2-360m'];
+   console.log(`[Aether] Loading ${name} (${configName}: ${cfg.numLayers}L, ${cfg.hiddenDim}d)...`);
+   const t0=Date.now();const buf=readFileSync(ggufPath);const parsed=parseGGUF(buf);
+   console.log(`[Aether] Parsed ${parsed.tensors.length} tensors in ${Date.now()-t0}ms`);
+   const tokenizer=new Tok(JSON.parse(readFileSync(tokPath,'utf8')));
+   const byName={};for(const t of parsed.tensors)byName[t.name]=t;
+   function get(nm){const t=byName[nm];if(!t)return null;const raw=new Uint8Array(buf.buffer,buf.byteOffset+parsed.dataOffset+Number(t.offset),t.size);return dequantByType(raw,t.numElements,t.type);}
+   console.log('[Aether] Dequantizing...');const tokenEmbd=get('token_embd.weight');const layers=[];
+   for(let i=0;i<cfg.numLayers;i++){if(i%8===0)console.log(`[Aether] Layer ${i}/${cfg.numLayers}`);layers.push({an:get(`blk.${i}.attn_norm.weight`),fn:get(`blk.${i}.ffn_norm.weight`),qw:get(`blk.${i}.attn_q.weight`),kw:get(`blk.${i}.attn_k.weight`),vw:get(`blk.${i}.attn_v.weight`),ow:get(`blk.${i}.attn_output.weight`),gw:get(`blk.${i}.ffn_gate.weight`),uw:get(`blk.${i}.ffn_up.weight`),dw:get(`blk.${i}.ffn_down.weight`)});}
+   const outNorm=get('output_norm.weight');let outWeight=get('output.weight');if(!outWeight){console.log('[Aether] Tied embeddings');outWeight=tokenEmbd;}
+   const loadTime=Date.now()-t0;
+   console.log(`[Aether] ${name} loaded in ${(loadTime/1000).toFixed(1)}s`);
+   models[name]={tokenEmbd,layers,outNorm,outWeight,tokenizer,loadTime,name,config:cfg};
+   return models[name];
+ }
+
+ function getModel(name) {
+   return models[name] || models['base'] || Object.values(models)[0];
+ }
+
+ // ─── Forward Pass (returns raw logits) ──────────────────────────────────────
+ function forwardPass(prompt, modelName) {
+   const o = op();
+   const model = getModel(modelName);
+   const mc = model.config; // model-specific config
+   const mcKvDim = mc.numKvHeads * mc.headDim;
+   const mcGqaRatio = mc.numHeads / mc.numKvHeads;
+   const chatPrompt = `<|im_start|>user\n${prompt}<|im_end|>\n<|im_start|>assistant\n`;
+   const inputTokens = model.tokenizer.encode(chatPrompt);
+
+   return {
+     inputTokens, config: mc,
+     step(allToks, kvC, diag) {
+       const pos = allToks.length - 1;
+       const tid = allToks[allToks.length - 1];
+       const x0 = model.tokenEmbd.slice(tid*mc.hiddenDim,(tid+1)*mc.hiddenDim);
+       let x = x0;
+       const layerNorms = diag ? [] : null;
+       const attnEntropies = diag ? [] : null;
+
+       for (let l=0;l<mc.numLayers;l++) {
+         const ly=model.layers[l];
+         const xPrev = x;
+         const normed=o.rmsNorm(x,ly.an,mc.rmsNormEps);
+         const q=o.matVec(ly.qw,normed,mc.hiddenDim,mc.hiddenDim);
+         const k=o.matVec(ly.kw,normed,mcKvDim,mc.hiddenDim);
+         const v=o.matVec(ly.vw,normed,mcKvDim,mc.hiddenDim);
+         for(let h=0;h<mc.numHeads;h++)applyRoPE(q.subarray(h*mc.headDim,(h+1)*mc.headDim),mc.headDim,pos,mc.ropeTheta);
+         for(let h=0;h<mc.numKvHeads;h++)applyRoPE(k.subarray(h*mc.headDim,(h+1)*mc.headDim),mc.headDim,pos,mc.ropeTheta);
+         kvC[l].k.push(new Float32Array(k));kvC[l].v.push(new Float32Array(v));
+         const seqLen=kvC[l].k.length;const attnOut=new Float32Array(mc.hiddenDim);
+         const headEntropies = diag ? [] : null;
+         for(let h=0;h<mc.numHeads;h++){const kvH=Math.floor(h/mcGqaRatio);const qH=q.subarray(h*mc.headDim,(h+1)*mc.headDim);const scores=new Float32Array(seqLen);
+           for(let s=0;s<seqLen;s++){const kH=kvC[l].k[s].subarray(kvH*mc.headDim,(kvH+1)*mc.headDim);let dot=0;for(let d=0;d<mc.headDim;d++)dot+=qH[d]*kH[d];scores[s]=dot/Math.sqrt(mc.headDim);}
+           const w=softmaxJS(scores);
+           if (diag) { let he=0; for(let s=0;s<seqLen;s++) if(w[s]>1e-10) he-=w[s]*Math.log(w[s]); headEntropies.push(Math.round(he*1000)/1000); }
+           for(let s=0;s<seqLen;s++){const vH=kvC[l].v[s].subarray(kvH*mc.headDim,(kvH+1)*mc.headDim);const wt=w[s];for(let d=0;d<mc.headDim;d++)attnOut[h*mc.headDim+d]+=wt*vH[d];}}
+         if (diag) attnEntropies.push(headEntropies);
+         const projected=o.matVec(ly.ow,attnOut,mc.hiddenDim,mc.hiddenDim);const postAttn=o.add(x,projected);
+         const ffnIn=o.rmsNorm(postAttn,ly.fn,mc.rmsNormEps);const gate=o.matVec(ly.gw,ffnIn,mc.intermediateSize,mc.hiddenDim);
+         const up=o.matVec(ly.uw,ffnIn,mc.intermediateSize,mc.hiddenDim);const activated=o.fusedSiluMul(gate,up);
+         const down=o.matVec(ly.dw,activated,mc.hiddenDim,mc.intermediateSize);x=o.add(postAttn,down);
+
+         if (diag) {
+           let norm=0, delta=0, prevNorm=0;
+           for(let i=0;i<mc.hiddenDim;i++) { norm+=x[i]*x[i]; delta+=(x[i]-xPrev[i])**2; prevNorm+=xPrev[i]*xPrev[i]; }
+           layerNorms.push({ norm: Math.round(Math.sqrt(norm)*100)/100, residual: prevNorm>0 ? Math.round(Math.sqrt(delta/prevNorm)*1000)/1000 : 0 });
+         }
+       }
+       const finalNormed=o.rmsNorm(x,model.outNorm,mc.rmsNormEps);
+       const logits = o.matVec(model.outWeight,finalNormed,mc.vocabSize,mc.hiddenDim);
+       return { logits, layerNorms, attnEntropies };
+     }
+   };
+ }
+
+ // ─── Sampling Functions ─────────────────────────────────────────────────────
+
+ function sampleStandard(logits, temperature = 0.7, topP = 0.9) {
+   const o = op();
+   const scaled = new Float32Array(logits.length);
+   for (let i = 0; i < logits.length; i++) scaled[i] = logits[i] / temperature;
+   const probs = o.softmax(scaled);
+   // Top-p
+   const indexed = Array.from(probs).map((p,i)=>({p,i})).sort((a,b)=>b.p-a.p);
+   let cumP = 0;
+   const candidates = [];
+   for (const {p,i} of indexed) { cumP += p; candidates.push({p,i}); if (cumP >= topP) break; }
+   const total = candidates.reduce((s,c) => s+c.p, 0);
+   const r = Math.random() * total;
+   let acc = 0;
+   for (const {p,i} of candidates) { acc += p; if (r < acc) return i; }
+   return candidates[0].i;
+ }
+
+ function glossolaliaMerge(rawLogits, temperatures = [0.4, 0.7, 1.0]) {
+   const V = rawLogits.length;
+   const logV = Math.log(V);
+   const agents = [];
+
+   for (const tau of temperatures) {
+     const scaled = new Float32Array(V);
+     for (let i = 0; i < V; i++) scaled[i] = rawLogits[i] / Math.max(tau, 0.01);
+     const probs = softmaxJS(scaled);
+
+     // Shannon entropy
+     let h = 0;
+     for (let i = 0; i < V; i++) { const p = probs[i]; if (p > 1e-12) h -= p * Math.log(p); }
+
+     // Deficit weight: low entropy = high confidence = high weight
+     const w = Math.max(1.0 - h / logV, 1e-8); // the sliver
+
+     // Top-5 for diagnostics
+     const top5 = Array.from(probs).map((p,i)=>({p,i})).sort((a,b)=>b.p-a.p).slice(0,5);
+
+     agents.push({ probs, entropy: h, weight: w, tau, top5 });
+   }
+
+   // Merge: weighted average
+   const totalW = agents.reduce((s,a) => s + a.weight, 0);
+   const merged = new Float32Array(V);
+   for (const a of agents) {
+     const nw = a.weight / totalW;
+     for (let i = 0; i < V; i++) merged[i] += nw * a.probs[i];
+   }
+
+   return { merged, agents, totalW };
+ }
+
+ function sampleGlossolalia(logits) {
+   const { merged, agents } = glossolaliaMerge(logits);
+   const indexed = Array.from(merged).map((p,i)=>({p,i})).sort((a,b)=>b.p-a.p);
+   let cumP = 0;
+   const candidates = [];
+   for (const {p,i} of indexed) { cumP += p; candidates.push({p,i}); if (cumP >= 0.95) break; }
+   const total = candidates.reduce((s,c) => s+c.p, 0);
+   const r = Math.random() * total;
+   let acc = 0;
+   for (const {p,i} of candidates) { acc += p; if (r < acc) return { tokenId: i, agents, merged }; }
+   return { tokenId: candidates[0].i, agents, merged };
+ }
+
+ // ─── C2/C3 Metacognitive Monitoring ─────────────────────────────────────────
+ // C2: Detect entropy regime collapse (>50% drop in 3-token window)
+ // C3: Detect absorbing states + apply diversity perturbation
+
+ function detectRegimeChange(state) {
+   const h = state.entropyHistory;
+   if (h.length < 3) return false;
+   const recent = h.slice(-3);
+   const older = h.slice(-6, -3);
+   if (older.length === 0) return false;
+   const recentMean = recent.reduce((a, b) => a + b, 0) / recent.length;
+   const olderMean = older.reduce((a, b) => a + b, 0) / older.length;
+   return olderMean > 0 && recentMean < olderMean * 0.5;
+ }
+
+ function metacognitiveC3(logits, state, selectedToken, absorbingThreshold = 3) {
+   // Update repeat tracking
+   if (selectedToken === state.lastToken) state.repeatCount++;
+   else { state.repeatCount = 0; state.lastToken = selectedToken; }
+
+   const isAbsorbing = state.repeatCount >= absorbingThreshold;
+   const isRegimeCollapse = detectRegimeChange(state);
+
+   if (!isAbsorbing && !isRegimeCollapse) {
+     return { logits, perturbed: false, reason: null };
+   }
+
+   // Perturbation: eta scales with repetition depth
+   const eta = 0.1 * (1 + state.repeatCount);
+   const perturbed = new Float32Array(logits.length);
+   let totalOther = 0;
+   for (let i = 0; i < logits.length; i++) if (i !== selectedToken && logits[i] > 0) totalOther += logits[i];
+   const redistributionMass = Math.abs(logits[selectedToken]) * eta;
+
+   for (let i = 0; i < logits.length; i++) {
+     if (i === selectedToken) perturbed[i] = logits[i] * (1 - eta);
+     else if (totalOther > 0 && logits[i] > 0) perturbed[i] = logits[i] + redistributionMass * (logits[i] / totalOther);
+     else perturbed[i] = logits[i];
+   }
+
+   state.perturbationCount++;
+   return {
+     logits: perturbed,
+     perturbed: true,
+     reason: isAbsorbing ? `absorbing(${state.repeatCount} repeats)` : 'regime_collapse',
+     eta,
+   };
+ }
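// Worked illustration of the perturbation above (hypothetical values): with
// repeatCount = 3, eta = 0.1 * (1 + 3) = 0.4. The repeated token's logit is
// scaled by 1 - 0.4 = 0.6, and a mass of |logits[selectedToken]| * 0.4 is
// spread over the other positive logits in proportion to their share of
// totalOther, flattening the absorbing state without zeroing it out.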
+
+ function sampleWithMetacog(rawLogits, metacogState) {
+   const { merged, agents } = glossolaliaMerge(rawLogits);
+
+   // Compute merged entropy for C2 tracking
+   let mergedEntropy = 0;
+   for (let i = 0; i < merged.length; i++) { const p = merged[i]; if (p > 1e-12) mergedEntropy -= p * Math.log(p); }
+   metacogState.entropyHistory.push(mergedEntropy);
+
+   // First sample from merged (pre-C3) to detect what token would be chosen
+   const indexed = Array.from(merged).map((p,i)=>({p,i})).sort((a,b)=>b.p-a.p);
+   let cumP = 0, candidates = [];
+   for (const {p,i} of indexed) { cumP += p; candidates.push({p,i}); if (cumP >= 0.95) break; }
+   let total = candidates.reduce((s,c) => s+c.p, 0);
+   let r = Math.random() * total, acc = 0, preC3Token = candidates[0].i;
+   for (const {p,i} of candidates) { acc += p; if (r < acc) { preC3Token = i; break; } }
+
+   // C3: check and potentially perturb
+   const c3 = metacognitiveC3(rawLogits, metacogState, preC3Token);
+
+   let finalToken = preC3Token;
+   if (c3.perturbed) {
+     // Re-merge with perturbed logits
+     const { merged: remerged } = glossolaliaMerge(c3.logits);
+     const ridx = Array.from(remerged).map((p,i)=>({p,i})).sort((a,b)=>b.p-a.p);
+     cumP = 0; candidates = [];
+     for (const {p,i} of ridx) { cumP += p; candidates.push({p,i}); if (cumP >= 0.95) break; }
+     total = candidates.reduce((s,c) => s+c.p, 0);
+     r = Math.random() * total; acc = 0;
+     for (const {p,i} of candidates) { acc += p; if (r < acc) { finalToken = i; break; } }
+   }
+
+   return {
+     tokenId: finalToken, agents, merged, mergedEntropy,
+     c3: { perturbed: c3.perturbed, reason: c3.reason, eta: c3.eta,
+           preC3Token, repeatCount: metacogState.repeatCount,
+           perturbationCount: metacogState.perturbationCount },
+   };
+ }
+
+ // ─── Generation Loops ───────────────────────────────────────────────────────
+
+ function generateStandard(prompt, maxTokens = 8192, modelName = 'buleyean') {
+   const t0 = performance.now();
+   const model = getModel(modelName);
+   const fwd = forwardPass(prompt, modelName);
+   const allTokens = [...fwd.inputTokens];
+   const kvC = Array.from({length:model.config.numLayers},()=>({k:[],v:[]}));
+   const tokenTimes = [];
+
+   // Prefill
+   for (let i = 0; i < fwd.inputTokens.length; i++) {
+     fwd.step(allTokens.slice(0, i+1), kvC, false);
+   }
+
+   const perTokenInfo = [];
+
+   // Decode
+   for (let i = 0; i < maxTokens; i++) {
+     const ts = performance.now();
+     const { logits, layerNorms, attnEntropies } = fwd.step(allTokens, kvC, true);
+     const o2 = op();
+     const scaled = new Float32Array(logits.length);
+     for (let j = 0; j < logits.length; j++) scaled[j] = logits[j] / 0.7;
+     const probs = o2.softmax(scaled);
+
+     const chosen = sampleStandard(logits);
+     const chosenProb = probs[chosen];
+     const perplexity = chosenProb > 0 ? -Math.log2(chosenProb) : 99;
+
+     // Vocab coverage: tokens with >0.1% probability
+     let vocabCoverage = 0;
+     for (let j = 0; j < probs.length; j++) if (probs[j] > 0.001) vocabCoverage++;
+
+     // Top-5
+     const top5 = Array.from(probs).map((p,j)=>({p,i:j})).sort((a,b)=>b.p-a.p).slice(0,5)
+       .map(t => ({ token: model.tokenizer.decode([t.i]), prob: Math.round(t.p*1000)/1000 }));
+
+     tokenTimes.push(performance.now() - ts);
+     perTokenInfo.push({ perplexity: Math.round(perplexity*100)/100, chosenProb: Math.round(chosenProb*1000)/1000, vocabCoverage, top5, layerNorms, attnEntropies });
+
+     if (chosen === model.config.eosToken) break;
+     allTokens.push(chosen);
+   }
+
+   const genTokens = allTokens.slice(fwd.inputTokens.length);
+   const totalTime = performance.now() - t0;
+   const avgMs = tokenTimes.length > 0 ? tokenTimes.reduce((a,b)=>a+b,0)/tokenTimes.length : 0;
+
+   return {
+     text: model.tokenizer.decode(genTokens), tokens: genTokens.length,
+     totalTimeMs: Math.round(totalTime), avgTokenMs: Math.round(avgMs),
+     mode: 'standard', temperature: 0.7, topP: 0.9,
+     tokenDiagnostics: perTokenInfo,
+   };
+ }
+
+ function generateGlossolalia(prompt, maxTokens = 8192, modelName = 'buleyean') {
+   const t0 = performance.now();
+   const model = getModel(modelName);
+   const fwd = forwardPass(prompt, modelName);
+   const allTokens = [...fwd.inputTokens];
+   const kvC = Array.from({length:model.config.numLayers},()=>({k:[],v:[]}));
+   const tokenTimes = [];
+   const perTokenDiag = [];
+
+   // Prefill
+   for (let i = 0; i < fwd.inputTokens.length; i++) {
+     fwd.step(allTokens.slice(0, i+1), kvC, false);
+   }
+
+   // Decode with Glossolalia
+   for (let i = 0; i < maxTokens; i++) {
+     const ts = performance.now();
+     const { logits, layerNorms, attnEntropies } = fwd.step(allTokens, kvC, true);
+     const { tokenId, agents } = sampleGlossolalia(logits);
+
+     // Token-level perplexity from merged distribution
+     const { merged } = glossolaliaMerge(logits);
+     const chosenProb = merged[tokenId] || 0;
+     const perplexity = chosenProb > 0 ? -Math.log2(chosenProb) : 99;
+     let vocabCoverage = 0;
+     for (let j = 0; j < merged.length; j++) if (merged[j] > 0.001) vocabCoverage++;
+
+     tokenTimes.push(performance.now() - ts);
+
+     perTokenDiag.push({
+       agents: agents.map(a => ({
+         tau: a.tau, entropy: Math.round(a.entropy*1000)/1000, weight: Math.round(a.weight*1000)/1000,
+         top3: a.top5.slice(0,3).map(t => ({ token: model.tokenizer.decode([t.i]), prob: Math.round(t.p*1000)/1000 })),
+       })),
+       perplexity: Math.round(perplexity*100)/100,
+       chosenProb: Math.round(chosenProb*1000)/1000,
+       vocabCoverage,
+       layerNorms,
+       attnEntropies,
+     });
+
+     if (tokenId === model.config.eosToken) break;
+     allTokens.push(tokenId);
+   }
+
+   const genTokens = allTokens.slice(fwd.inputTokens.length);
+   const totalTime = performance.now() - t0;
+   const avgMs = tokenTimes.length > 0 ? tokenTimes.reduce((a,b)=>a+b,0)/tokenTimes.length : 0;
+
+   return {
+     text: model.tokenizer.decode(genTokens), tokens: genTokens.length,
+     totalTimeMs: Math.round(totalTime), avgTokenMs: Math.round(avgMs),
+     mode: 'glossolalia', temperatures: [0.4, 0.7, 1.0],
+     diagnostics: perTokenDiag,
+   };
+ }
+
+ // ─── Metacog Generation (Glossolalia + C2/C3) ───────────────────────────────
+
+ function generateMetacog(prompt, maxTokens = 8192, modelName = 'buleyean') {
+   const t0 = performance.now();
+   const model = getModel(modelName);
+   const fwd = forwardPass(prompt, modelName);
+   const allTokens = [...fwd.inputTokens];
+   const kvC = Array.from({length:model.config.numLayers},()=>({k:[],v:[]}));
+   const tokenTimes = [];
+   const perTokenDiag = [];
+
+   // Metacognitive state (persists across tokens)
+   const metacogState = { repeatCount: 0, lastToken: -1, entropyHistory: [], perturbationCount: 0 };
+
+   // Prefill
+   for (let i = 0; i < fwd.inputTokens.length; i++) {
+     fwd.step(allTokens.slice(0, i+1), kvC, false);
+   }
+
+   // Decode with Glossolalia + C2/C3
+   for (let i = 0; i < maxTokens; i++) {
+     const ts = performance.now();
+     const { logits, layerNorms } = fwd.step(allTokens, kvC, true);
+     const result = sampleWithMetacog(logits, metacogState);
+
+     const chosenProb = result.merged[result.tokenId] || 0;
+     const perplexity = chosenProb > 0 ? -Math.log2(chosenProb) : 99;
+     let vocabCoverage = 0;
+     for (let j = 0; j < result.merged.length; j++) if (result.merged[j] > 0.001) vocabCoverage++;
+
+     tokenTimes.push(performance.now() - ts);
+
+     perTokenDiag.push({
+       agents: result.agents.map(a => ({
+         tau: a.tau, entropy: Math.round(a.entropy*1000)/1000, weight: Math.round(a.weight*1000)/1000,
+         top3: a.top5.slice(0,3).map(t => ({ token: model.tokenizer.decode([t.i]), prob: Math.round(t.p*1000)/1000 })),
+       })),
+       perplexity: Math.round(perplexity*100)/100,
+       chosenProb: Math.round(chosenProb*1000)/1000,
+       vocabCoverage,
+       layerNorms,
+       c3: result.c3,
+       mergedEntropy: Math.round(result.mergedEntropy*1000)/1000,
+     });
+
+     if (result.tokenId === model.config.eosToken) break;
+     allTokens.push(result.tokenId);
+   }
+
+   const genTokens = allTokens.slice(fwd.inputTokens.length);
+   const totalTime = performance.now() - t0;
+   const avgMs = tokenTimes.length > 0 ? tokenTimes.reduce((a,b)=>a+b,0)/tokenTimes.length : 0;
+
+   return {
+     text: model.tokenizer.decode(genTokens), tokens: genTokens.length,
+     totalTimeMs: Math.round(totalTime), avgTokenMs: Math.round(avgMs),
+     mode: 'metacog', temperatures: [0.4, 0.7, 1.0],
+     diagnostics: perTokenDiag,
+     metacogSummary: {
+       totalPerturbations: metacogState.perturbationCount,
+       finalRepeatCount: metacogState.repeatCount,
+       entropyHistory: metacogState.entropyHistory.map(h => Math.round(h*1000)/1000),
+     },
+   };
+ }
+
+ // ─── Personality Generation ─────────────────────────────────────────────────
+
+ function generatePersonality(prompt, maxTokens = 8192, modelName = 'buleyean-smollm2', personalityName = 'balanced') {
+   const personality = PERSONALITIES[personalityName] || PERSONALITIES.balanced;
+   const model = getModel(modelName);
+   const mc = model.config;
+   const t0 = performance.now();
+   const fwd = forwardPass(prompt, modelName);
+   const allTokens = [...fwd.inputTokens];
+   const kvC = Array.from({length:mc.numLayers},()=>({k:[],v:[]}));
+   const tokenTimes = [];
+   const metacogState = { repeatCount: 0, lastToken: -1, entropyHistory: [], perturbationCount: 0 };
+
+   for (let i = 0; i < fwd.inputTokens.length; i++) fwd.step(allTokens.slice(0,i+1), kvC, false);
+
+   for (let i = 0; i < maxTokens; i++) {
+     const ts = performance.now();
+     const { logits } = fwd.step(allTokens, kvC, false);
+     const { merged } = glossolaliaMerge(logits, personality.temps);
+
+     let mergedEntropy = 0;
+     for (let j = 0; j < merged.length; j++) { const p = merged[j]; if (p > 1e-12) mergedEntropy -= p * Math.log(p); }
+     metacogState.entropyHistory.push(mergedEntropy);
+
+     const indexed = Array.from(merged).map((p,j)=>({p,i:j})).sort((a,b)=>b.p-a.p);
+     let cumP = 0, candidates = [];
+     for (const {p,i} of indexed) { cumP += p; candidates.push({p,i}); if (cumP >= personality.topP) break; }
+     let total = candidates.reduce((s,c) => s+c.p, 0);
+     let r = Math.random() * total, acc = 0, preC3Token = candidates[0].i;
+     for (const {p,i} of candidates) { acc += p; if (r < acc) { preC3Token = i; break; } }
+
+     const c3 = metacognitiveC3(logits, metacogState, preC3Token, personality.absorbingThreshold);
+     let finalToken = preC3Token;
+     if (c3.perturbed) {
+       const { merged: rm } = glossolaliaMerge(c3.logits, personality.temps);
+       const ri = Array.from(rm).map((p,j)=>({p,i:j})).sort((a,b)=>b.p-a.p);
+       cumP = 0; candidates = [];
+       for (const {p,i} of ri) { cumP += p; candidates.push({p,i}); if (cumP >= personality.topP) break; }
+       total = candidates.reduce((s,c) => s+c.p, 0);
+       r = Math.random() * total; acc = 0;
+       for (const {p,i} of candidates) { acc += p; if (r < acc) { finalToken = i; break; } }
+     }
+
+     tokenTimes.push(performance.now() - ts);
+     if (finalToken === mc.eosToken) break;
+     allTokens.push(finalToken);
+   }
+
+   const genTokens = allTokens.slice(fwd.inputTokens.length);
+   const totalTime = performance.now() - t0;
+   const avgMs = tokenTimes.length > 0 ? tokenTimes.reduce((a,b)=>a+b,0)/tokenTimes.length : 0;
+
+   return {
+     text: model.tokenizer.decode(genTokens), tokens: genTokens.length,
+     totalTimeMs: Math.round(totalTime), avgTokenMs: Math.round(avgMs),
+     mode: 'personality', personality: personalityName,
+     personalityLabel: personality.label,
+     temperatures: personality.temps,
+     modelName,
+     metacogSummary: { totalPerturbations: metacogState.perturbationCount },
+   };
+ }
+
+ // ─── HTTP Server ────────────────────────────────────────────────────────────
+ const server = createServer((req, res) => {
+   let body = '';
+   req.on('data', c => body += c);
+   req.on('end', () => {
+     try {
+       if (req.url === '/health') {
+         res.writeHead(200,{'Content-Type':'application/json'});
+         res.end(JSON.stringify({status:'ok',models:Object.keys(models),personalities:Object.keys(PERSONALITIES),simd:!!simd}));
+         return;
+       }
+       if (req.method !== 'POST') { res.writeHead(404); res.end(); return; }
+       const { prompt, max_tokens, model, personality } = JSON.parse(body);
+       const mn = model || 'buleyean-smollm2';
+       let result;
+       if (req.url === '/generate-personality') result = generatePersonality(prompt, max_tokens||128, mn, personality||'balanced');
+       else if (req.url === '/generate-standard') result = generateStandard(prompt, max_tokens||128, mn);
+       else if (req.url === '/generate-glossolalia') result = generateGlossolalia(prompt, max_tokens||128, mn);
+       else if (req.url === '/generate-metacog') result = generateMetacog(prompt, max_tokens||128, mn);
+       else { res.writeHead(404); res.end(); return; }
+       res.writeHead(200, { 'Content-Type': 'application/json' });
+       res.end(JSON.stringify(result));
+     } catch (e) {
+       console.error('[Aether]', e);
+       res.writeHead(500, { 'Content-Type': 'application/json' });
+       res.end(JSON.stringify({ error: e.message }));
+     }
+   });
+ });
+
+ // ─── Model Registry ─────────────────────────────────────────────────────────
+ const MODEL_REGISTRY = [
+   { name: 'buleyean-smollm2', repo: 'forkjoin-ai/buleyean-smollm2-360m', file: 'buleyean-smollm2-360m-q8_0.gguf', tokRepo: 'HuggingFaceTB/SmolLM2-360M-Instruct', config: 'smollm2-360m' },
+   { name: 'base-smollm2', repo: 'bartowski/SmolLM2-360M-Instruct-GGUF', file: 'SmolLM2-360M-Instruct-Q8_0.gguf', tokRepo: 'HuggingFaceTB/SmolLM2-360M-Instruct', config: 'smollm2-360m' },
+   { name: 'buleyean-qwen', repo: 'forkjoin-ai/buleyean-qwen2.5-0.5b', file: 'buleyean-qwen2.5-0.5b-q8_0.gguf', tokRepo: 'Qwen/Qwen2.5-0.5B-Instruct', config: 'qwen2.5-0.5b' },
+   { name: 'base-qwen', repo: 'bartowski/Qwen2.5-0.5B-Instruct-GGUF', file: 'Qwen2.5-0.5B-Instruct-Q8_0.gguf', tokRepo: 'Qwen/Qwen2.5-0.5B-Instruct', config: 'qwen2.5-0.5b' },
+ ];
+
+ function dl(repo, file) {
+   const local = `/tmp/hf_cache/${file}`;
+   if (existsSync(local)) return local;
+   console.log(`[Aether] Downloading ${repo}/${file}...`);
+   execSync(`python3 -c "from huggingface_hub import hf_hub_download; hf_hub_download('${repo}', '${file}', cache_dir='/tmp/hf_cache', local_dir='/tmp/hf_cache')"`, { stdio: 'inherit' });
+   return local;
+ }
+
+ function dlTok(repo) {
+   const local = `/tmp/hf_cache/tokenizer-${repo.replace(/\//g,'-')}.json`;
+   if (existsSync(local)) return local;
+   console.log(`[Aether] Downloading tokenizer from ${repo}...`);
+   execSync(`python3 -c "from huggingface_hub import hf_hub_download; p=hf_hub_download('${repo}', 'tokenizer.json'); import shutil; shutil.copy(p, '${local}')"`, { stdio: 'inherit' });
+   return local;
+ }
+
+ async function main() {
+   simd = await loadSIMD();
+
+   // Load all models that fit in memory (load sequentially, keep all)
+   for (const m of MODEL_REGISTRY) {
+     try {
+       const gguf = dl(m.repo, m.file);
+       const tok = dlTok(m.tokRepo);
+       loadModel(m.name, gguf, tok, m.config);
+     } catch (e) {
+       console.error(`[Aether] Failed to load ${m.name}: ${e.message}`);
+     }
+   }
+
+   server.listen(PORT, '127.0.0.1', () => console.log(`[Aether] http://127.0.0.1:${PORT} (SIMD: ${!!simd}, models: ${Object.keys(models).join(', ')})`));
+ }
+
+ main().catch(e => { console.error('[Aether] Fatal:', e); process.exit(1); });
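
A minimal client sketch against this server (assumes it is already listening on
its default port 7861; endpoint paths, payload fields, and response fields
match the handlers above):

```python
import json
import urllib.request

AETHER = "http://127.0.0.1:7861"

def post(path, payload):
    # POST JSON to the Aether sidecar and decode the JSON reply.
    req = urllib.request.Request(
        f"{AETHER}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=600) as resp:
        return json.loads(resp.read())

# /health lists the loaded models and the five personalities.
health = json.loads(urllib.request.urlopen(f"{AETHER}/health", timeout=5).read())
print(health["models"], health["personalities"])

# One personality run: same payload shape app.py's call_personality() sends.
out = post("/generate-personality", {
    "prompt": "Tell me about yourself.",
    "max_tokens": 64,
    "model": "buleyean-smollm2",
    "personality": "explorer",
})
print(out["personalityLabel"], "--", out["tokens"], "tokens")
print(out["text"])
```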
app.py ADDED
@@ -0,0 +1,199 @@
+ """
+ Five Bules -- Personality as Void Walking
+ Act 4: Five personality profiles on fork/race/fold/vent/interfere axes.
+
+ Same model, same prompt. Different personality = different decoder config.
+ Explorer forks broadly. Builder folds tightly. Creative races freely.
+ Anxious interferes early. Balanced converges to phi.
+ """
+
+ import gradio as gr
+ import json
+ import os
+ import time
+ import subprocess
+ import urllib.request
+ import urllib.error
+ import select
+ from concurrent.futures import ThreadPoolExecutor, as_completed
+
+ print("[Five Bules] Starting Aether...", flush=True)
+ aether_proc = subprocess.Popen(
+     ["node", "aether-server.mjs"],
+     env={**os.environ, "AETHER_PORT": "7861"},
+     stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
+ )
+
+ print("[Five Bules] Waiting for Aether...", flush=True)
+ for attempt in range(300):
+     try:
+         req = urllib.request.Request("http://127.0.0.1:7861/health")
+         resp = urllib.request.urlopen(req, timeout=2)
+         health = json.loads(resp.read())
+         if health.get("status") == "ok" and health.get("models"):
+             print(f"[Five Bules] Aether ready (models: {health.get('models')})", flush=True)
+             break
+     except Exception:
+         pass
+     if aether_proc.stdout and select.select([aether_proc.stdout], [], [], 0)[0]:
+         line = aether_proc.stdout.readline()
+         if line: print(f"  {line.decode().strip()}", flush=True)
+     time.sleep(1)
+ else:
+     print("[Five Bules] WARNING: Aether not ready after 300s", flush=True)
+
+
+ def call_personality(prompt, max_tokens, model, personality):
+     try:
+         data = json.dumps({"prompt": prompt, "max_tokens": max_tokens, "model": model, "personality": personality}).encode()
+         req = urllib.request.Request("http://127.0.0.1:7861/generate-personality", data=data, headers={"Content-Type": "application/json"})
+         resp = urllib.request.urlopen(req, timeout=600)
+         return json.loads(resp.read())
+     except urllib.error.HTTPError as e:
+         body = e.read().decode() if e.fp else str(e)
+         try: detail = json.loads(body).get("error", body[:300])
+         except Exception: detail = body[:300]
+         return {"text": f"[Error: {detail}]", "tokens": 0, "totalTimeMs": 0, "avgTokenMs": 0}
+     except Exception as e:
+         return {"text": f"[Error: {e}]", "tokens": 0, "totalTimeMs": 0, "avgTokenMs": 0}
+
+
+ def run_all(prompt, max_tokens, model_name):
+     if not prompt or not prompt.strip():
+         yield "", "", "", "", "", ""
+         return
+
+     max_tokens = int(max_tokens)
+     personas = ["explorer", "builder", "creative", "anxious", "balanced"]
+     results = {p: [None] for p in personas}
+
+     def run(p):
+         results[p][0] = call_personality(prompt, max_tokens, model_name, p)
+
+     def fmt(r):
+         return r["text"] if r else "generating..."
+
+     def stats(r):
+         if not r: return "running..."
+         return f'{r["tokens"]} tok / {r["totalTimeMs"]/1000:.1f}s / C3: {r.get("metacogSummary",{}).get("totalPerturbations",0)}'
+
+     def build():
+         texts = tuple(fmt(results[p][0]) for p in personas)
+         diag_lines = []
+         for p in personas:
+             r = results[p][0]
+             if r:
+                 diag_lines.append(f"[{p.upper()}] {stats(r)}")
+                 diag_lines.append(f"  temps={r.get('temperatures','?')} | {r.get('personalityLabel','')}")
+                 diag_lines.append("")
+         return texts + ("\n".join(diag_lines),)
+
+     with ThreadPoolExecutor(max_workers=5) as pool:
+         futures = {pool.submit(run, p): p for p in personas}
+         for future in as_completed(futures):
+             future.result()
+             yield build()
+     yield build()
+
+
+ CSS = """
+ .gradio-container { max-width: 1200px !important; margin: 0 auto !important; }
+ .gradio-container, .dark { background: #09090b !important; }
+ #hero { text-align: center; padding: 2rem 0 1rem; }
+ #hero h1 { font-size: 2.5rem; font-weight: 300; letter-spacing: -0.02em; color: #fafafa; margin: 0; }
+ #hero .accent { color: #f59e0b; }
+ #hero .subtitle { color: #71717a; font-size: 0.95rem; margin-top: 0.5rem; }
+ .response-card { background: #0c0c0f !important; border: 1px solid #1f1f23 !important; border-radius: 8px !important; }
+ .response-card textarea { background: #0c0c0f !important; border: none !important; color: #e4e4e7 !important; font-size: 0.9rem !important; line-height: 1.5 !important; }
+ .p-label { font-size: 0.75rem !important; text-transform: uppercase !important; letter-spacing: 0.05em !important; font-weight: 600 !important; }
+ #prompt-input > label > span { display: none !important; }
+ #prompt-input textarea { background: #111114 !important; border: 1px solid #1f1f23 !important; border-radius: 8px !important; color: #fafafa !important; font-size: 1rem !important; padding: 1rem !important; }
+ #prompt-input textarea:focus { border-color: #f59e0b !important; }
+ #gen-btn { background: #f59e0b !important; border: none !important; border-radius: 8px !important; font-weight: 500 !important; color: #09090b !important; }
+ .prompt-chip { background: #111114 !important; border: 1px solid #1f1f23 !important; border-radius: 6px !important; color: #a1a1aa !important; font-size: 0.85rem !important; }
+ .prompt-chip:hover { border-color: #f59e0b !important; color: #fafafa !important; }
+ #footer { text-align: center; padding: 2rem 0; border-top: 1px solid #1f1f23; margin-top: 2rem; }
+ #footer p { color: #52525b; font-size: 0.8rem; }
+ #footer a { color: #f59e0b; text-decoration: none; }
+ footer.svelte-1ax1toq { display: none !important; }
+ .built-with { display: none !important; }
+ """
+
+ with gr.Blocks(css=CSS, theme=gr.themes.Base(primary_hue="yellow", neutral_hue="zinc"), title="Five Bules") as demo:
+
+     gr.HTML("""
+     <div id="hero">
+       <h1>The Five <span class="accent">Bules</span></h1>
+       <p class="subtitle">Personality as void walking. Same model, same prompt, five decoder configurations.<br/>
+       Each personality is a position on the fork/race/fold/vent/interfere axes.<br/>
+       THM-FIVE-BULE-PERSONALITY -- all five converge to &phi;<sub>inv</sub> &approx; 0.618.</p>
+     </div>
+     """)
+
+     with gr.Row():
+         prompt = gr.Textbox(elem_id="prompt-input", placeholder="Tell me about yourself.", lines=2, label="Prompt", show_label=False, interactive=True, scale=4)
+         with gr.Column(scale=1):
+             model_choice = gr.Dropdown(
+                 choices=["buleyean-smollm2", "base-smollm2", "buleyean-qwen", "base-qwen"],
+                 value="buleyean-smollm2", label="Model",
+             )
+             max_tok = gr.Slider(minimum=8, maximum=8192, value=64, step=1, label="Max tokens")
+
+     btn = gr.Button("Generate All Five", elem_id="gen-btn", variant="primary")
+
+     with gr.Row(equal_height=True):
+         with gr.Column():
+             gr.HTML('<p class="p-label" style="color:#3b82f6">Explorer -- forks broadly</p>')
+             explorer_out = gr.Textbox(lines=8, show_label=False, interactive=False, elem_classes=["response-card"])
+         with gr.Column():
+             gr.HTML('<p class="p-label" style="color:#22c55e">Builder -- folds tightly</p>')
+             builder_out = gr.Textbox(lines=8, show_label=False, interactive=False, elem_classes=["response-card"])
+         with gr.Column():
+             gr.HTML('<p class="p-label" style="color:#a855f7">Creative -- races freely</p>')
+             creative_out = gr.Textbox(lines=8, show_label=False, interactive=False, elem_classes=["response-card"])
+
+     with gr.Row(equal_height=True):
+         with gr.Column():
+             gr.HTML('<p class="p-label" style="color:#ef4444">Anxious -- interferes early</p>')
+             anxious_out = gr.Textbox(lines=8, show_label=False, interactive=False, elem_classes=["response-card"])
+         with gr.Column():
+             gr.HTML('<p class="p-label" style="color:#f59e0b">Balanced -- phi convergence</p>')
+             balanced_out = gr.Textbox(lines=8, show_label=False, interactive=False, elem_classes=["response-card"])
+         with gr.Column():
+             gr.HTML('<p class="p-label" style="color:#71717a">Diagnostics</p>')
+             diag_out = gr.Textbox(lines=8, show_label=False, interactive=False, elem_classes=["response-card"])
+
+     outputs = [explorer_out, builder_out, creative_out, anxious_out, balanced_out, diag_out]
+     inputs = [prompt, max_tok, model_choice]
+
+     def run(p, mt, m):
+         for vals in run_all(p, mt, m):
+             yield vals
+
+     btn.click(run, inputs, outputs)
+     prompt.submit(run, inputs, outputs)
+
+     gr.HTML('<p style="color:#52525b; font-size:0.8rem; margin-top:1.5rem; margin-bottom:0.5rem;">Try these:</p>')
+     with gr.Row():
+         for p in ["Tell me about yourself.", "What scares you?", "Describe the perfect day.", "How do you handle failure?"]:
+             gr.Button(p, size="sm", elem_classes=["prompt-chip"]).click(
+                 fn=lambda x=p: x, outputs=[prompt]
+             ).then(fn=run, inputs=inputs, outputs=outputs)
+
+     gr.HTML("""
+     <div id="footer">
+       <p style="color:#a1a1aa; font-size:0.85rem; margin-bottom:0.5rem;">
+         SmolLM2-360M + Qwen2.5-0.5B &middot; Aether WASM-SIMD &middot; Two architectures, five personalities
+       </p>
+       <p>
+         <a href="https://forkracefold.com/">Whitepaper</a> &middot;
+         <a href="https://huggingface.co/spaces/forkjoin-ai/the-void">The Void</a> &middot;
+         <a href="https://huggingface.co/spaces/forkjoin-ai/glossolalia">Glossolalia</a> &middot;
+         <a href="https://huggingface.co/spaces/forkjoin-ai/metacog">Metacog</a>
+       </p>
+       <p style="margin-top:1rem;">Personality = 5 measurable distances &middot; THM-PHI-ATTRACTOR &middot;
+       <a href="https://forkracefold.com/">&phi;&sup2; = &phi; + 1</a></p>
+     </div>
+     """)
+
+ if __name__ == "__main__":
+     demo.launch(server_name="0.0.0.0", server_port=7860)
requirements.txt ADDED
@@ -0,0 +1,2 @@
+ gradio>=5.0.0,<6.0.0
+ huggingface-hub>=0.26.0
simd-kernels.wasm ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a05084c8998119797c6e80927678ce007e3285b78c6e7e8feee223ca4bb13636
+ size 14553