feat: RAG fully wired — FTS5 + vector hybrid retrieval, no train needed for now
USER STRATEGY: "No rush to train yet; spend 1-3 months collecting heavy data, RAG covers it in the meantime."
CRITICAL FIX: we had FTS5 self-ingest but never retrieved from it.
training-pairs were indexed every 15 min into FTS5, yet call_agent
ignored the index. The self-improvement loop was broken — orchestrate
never learned from prior runs. This change adds the missing wire.
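For context, the FTS5 side is maintained by surrogate-self-ingest.sh, which is not part of this diff; the retrieval code below assumes a virtual table named pairs with prompt/response/source columns. A minimal sketch of that assumed shape (illustrative only, inferred from the SELECT in rag_retrieve.py, not the actual self-ingest code):

import sqlite3

# Assumed FTS5 shape (not from this diff) — table "pairs" with the columns the retriever queries.
con = sqlite3.connect("self-ingest.db")
con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS pairs USING fts5(prompt, response, source)")
con.execute(
    "INSERT INTO pairs (prompt, response, source) VALUES (?, ?, ?)",
    ("add retry with backoff to the crawler", "example patch text", "training-pairs.jsonl"),
)
con.commit()
# FTS5's built-in bm25 rank is more negative for better matches, so ORDER BY rank = best first.
rows = con.execute(
    "SELECT prompt, source, rank FROM pairs WHERE pairs MATCH ? ORDER BY rank LIMIT 3",
    ('"retry" OR "backoff"',),
).fetchall()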
NEW: bin/lib/rag_retrieve.py
- Hybrid retrieval: FTS5 keyword + vector cosine via Ollama nomic-embed
- Reciprocal rank fusion of both rankings (top-3 fused; toy example after this list)
- 60s cache (avoid repeat queries during multi-stage pipeline)
- Returns markdown 'Similar past work' block for prompt injection
- Empty string if no good matches — no false positives
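Reciprocal rank fusion here is the standard 1/(k+rank) sum with k=60, the same constant _fuse uses; a toy walk-through with made-up document keys:

# Toy RRF illustration (hypothetical doc ids; k=60 as in rag_retrieve._fuse)
fts_ranking = ["doc_a", "doc_b", "doc_c"]   # keyword hits, best first
vec_ranking = ["doc_b", "doc_a", "doc_d"]   # cosine hits, best first

scores: dict[str, float] = {}
for ranking in (fts_ranking, vec_ranking):
    for rank, doc in enumerate(ranking):
        scores[doc] = scores.get(doc, 0.0) + 1.0 / (60 + rank)

top3 = sorted(scores, key=scores.get, reverse=True)[:3]
# doc_a and doc_b appear in both rankings, so they outrank doc_c / doc_d, which appear in only one.
print(top3)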
NEW: bin/rag-vector-builder.sh (cron: fires when M % 30 == 12, i.e. every 30 min)
- Reads new training-pairs since last run
- Embeds via Ollama nomic-embed-text (already in pull list)
- Stores in ~/.surrogate/state/rag-vectors.db (numpy float32 bytes; round-trip sketch after this list)
- LRU eviction at 50K vectors (cap, configurable)
- Incremental offset tracking, gentle batches of 500
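The "numpy bytes" column is just a float32 vector serialized with tobytes() and read back with frombuffer(); a small round-trip sketch (768 dims is an assumption for nomic-embed-text):

import numpy as np

emb = np.random.rand(768).astype(np.float32)    # stand-in for an Ollama embedding
blob = emb.tobytes()                            # what the builder stores in the BLOB column
back = np.frombuffer(blob, dtype=np.float32)    # what _vec_search reads back
assert np.array_equal(emb, back)

# Cosine scoring with the same normalization _vec_search applies; ~1.0 for identical vectors.
q = emb / (np.linalg.norm(emb) + 1e-9)
cos = float(np.dot(q, back / (np.linalg.norm(back) + 1e-9)))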
WIRED INTO ORCHESTRATE:
- Every stage prompt now includes:
RESEARCH_CONTEXT (web search) +
PRD_CONTEXT (surrogate.md) +
REPO_CONTEXT (tree-sitter symbol map) +
RAG_CONTEXT (NEW: similar past work from FTS+vec; population sketch below)
- Total prompt size: 50-130KB (still well within Qwen3-Coder's 262k-token window)
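The orchestrate hunk further down only shows where ${RAG_CONTEXT} lands in the prompt; how it gets filled is not in that hunk. A sketch of the retrieval call the pipeline presumably makes before each LLM request (build_stage_prompt is a hypothetical helper and the sys.path handling is an assumption; in the shell script the equivalent is capturing rag_retrieve.py stdout into RAG_CONTEXT):

import sys
sys.path.insert(0, "bin/lib")           # assumed location of the new module
from rag_retrieve import retrieve_similar

def build_stage_prompt(task: str, research: str, prd: str, repo_map: str) -> str:
    # retrieve_similar returns "" when nothing similar is indexed yet, so the prompt is unchanged.
    rag_context = retrieve_similar(task, top_k=3, max_kb=10)
    return "\n".join([task, research, prd, repo_map, rag_context])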
CADENCE:
- Self-ingest FTS5: every 15 min (was)
- RAG vec builder: every 30 min (NEW)
- Orchestrate retrieval: every call (cached 60s)
THE LONG-HAUL PLAN (per user strategy):
- Month 1: collect ~100M trainable pairs via 145+ static sources + crawlers
- Month 2: discoverer + crawler grow to ~200M, central dedup keeps unique
- Month 3: evaluate readiness, decide v0 train timing
- Throughout: RAG retrieves from accumulated corpus — no train needed
Surrogate IMPROVES OVER TIME without retrain — every successful orchestrate
run feeds FTS+vec → next run retrieves it as 'similar past work'.
Effective tier even WITHOUT train:
Qwen3-Coder-480B (base) + RAG over 100M pairs corpus
→ ~75-80% SWE-Bench projected (close to top OSS w/ train)
→ NOT frontier, but tier-1 for niche specialization
- bin/hermes-status-server.py +1 -1
- bin/lib/rag_retrieve.py +189 -0
- bin/rag-vector-builder.sh +111 -0
- bin/surrogate-orchestrate.sh +1 -0
- start.sh +3 -1
bin/hermes-status-server.py
@@ -166,7 +166,7 @@ def log_tail(name: str, lines: int = 100) -> PlainTextResponse:
 "auto-orchestrate-loop", "training-push", "ollama", "discord-bot",
 "hermes-discord-bot", "surrogate-research-loop", "surrogate-research-apply",
 "surrogate-dev-loop", "domain-scrape-loop", "github-domain-scrape",
-"qwen-coder", "git-clone", "git-pull", "redis", "auto-orchestrate-continuous", "dataset-enrich", "hf-dataset-discoverer", "dedup-bootstrap", "github-agentic-crawler", "ollama-pull-granite", "synthetic-data", "self-ingest", "scrape-sre-postmortems", "refresh-cve-feed",
+"qwen-coder", "git-clone", "git-pull", "redis", "rag-vector-builder", "auto-orchestrate-continuous", "dataset-enrich", "hf-dataset-discoverer", "dedup-bootstrap", "github-agentic-crawler", "ollama-pull-granite", "synthetic-data", "self-ingest", "scrape-sre-postmortems", "refresh-cve-feed",
 "ollama-pull-coder", "ollama-pull-devstral", "ollama-pull-fallback",
 "ollama-pull-yicoder", "ollama-pull-embed", "ollama-pull-light",
 }
bin/lib/rag_retrieve.py (new file)
@@ -0,0 +1,189 @@
"""
RAG retrieval — query FTS5 + vector index for similar past work, inject as context.

Hybrid retrieval:
1. FTS5 keyword match over training-pairs (fast, exact matches)
2. Optional: vector semantic via nomic-embed-text + sqlite-vec (semantic intent)
3. Reciprocal rank fusion of both → top-K to inject

Usage from orchestrate's call_agent BEFORE LLM call:
    from rag_retrieve import retrieve_similar
    context = retrieve_similar(prompt, top_k=3, max_kb=10)
    # inject `context` into prompt as 'Similar past work:'

Cache hits within 60s window (avoid repeat queries during multi-stage pipeline).
"""
from __future__ import annotations
import hashlib
import json
import os
import sqlite3
import sys
import time
import urllib.request
from pathlib import Path

HOME = Path(os.environ.get("HOME", "/home/hermes"))
FTS_DB = HOME / ".surrogate/state/self-ingest.db"
VEC_DB = HOME / ".surrogate/state/rag-vectors.db"
CACHE_DIR = HOME / ".surrogate/state/rag-cache"
CACHE_DIR.mkdir(parents=True, exist_ok=True)

OLLAMA_EMBED_URL = "http://127.0.0.1:11434/api/embeddings"
EMBED_MODEL = "nomic-embed-text"


def _cache_get(key: str) -> str | None:
    cf = CACHE_DIR / f"{key}.txt"
    if cf.exists() and (time.time() - cf.stat().st_mtime) < 60:
        return cf.read_text()
    return None


def _cache_put(key: str, value: str) -> None:
    cf = CACHE_DIR / f"{key}.txt"
    cf.write_text(value)


def _hash_key(query: str, top_k: int) -> str:
    return hashlib.md5(f"{query[:500]}|{top_k}".encode()).hexdigest()[:12]


def _fts_search(query: str, top_k: int = 5) -> list[tuple[str, str, float, str]]:
    """Returns [(prompt, response, score, source), ...] from FTS5 index."""
    if not FTS_DB.exists():
        return []
    # Sanitize query for FTS5 — extract keywords, drop stopwords
    import re
    words = re.findall(r'\b[a-zA-Z][a-zA-Z0-9_-]{2,}\b', query)
    stop = {"the", "and", "for", "with", "from", "this", "that", "what",
            "when", "where", "how", "why", "which", "into", "your"}
    keywords = [w for w in words if w.lower() not in stop][:10]
    if not keywords:
        return []
    fts_query = " OR ".join(f'"{kw}"' for kw in keywords)

    try:
        with sqlite3.connect(str(FTS_DB), timeout=3) as c:
            rows = c.execute(
                "SELECT prompt, response, rank, source FROM pairs "
                "WHERE pairs MATCH ? "
                "ORDER BY rank LIMIT ?",
                (fts_query, top_k * 2)
            ).fetchall()
        return [(r[0], r[1], -float(r[2]), r[3]) for r in rows[:top_k]]
    except Exception as e:
        print(f"FTS error: {e}", file=sys.stderr)
        return []


def _embed_query(text: str) -> list[float] | None:
    """Get embedding for a query via Ollama nomic-embed-text."""
    try:
        body = json.dumps({"model": EMBED_MODEL, "prompt": text[:2000]}).encode()
        req = urllib.request.Request(OLLAMA_EMBED_URL, data=body,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=8) as r:
            return json.load(r).get("embedding") or None
    except Exception:
        return None


def _vec_search(query_vec: list[float], top_k: int = 5) -> list[tuple[str, str, float, str]]:
    """Vector cosine search via sqlite — fallback to numpy if no sqlite-vec."""
    if not VEC_DB.exists() or not query_vec:
        return []
    try:
        import numpy as np
        with sqlite3.connect(str(VEC_DB), timeout=3) as c:
            rows = c.execute(
                "SELECT prompt, response, embedding, source FROM vectors LIMIT 50000"
            ).fetchall()
        if not rows:
            return []
        q = np.array(query_vec, dtype=np.float32)
        q /= (np.linalg.norm(q) + 1e-9)
        scored: list[tuple[str, str, float, str]] = []
        for prompt, response, emb_blob, src in rows:
            emb = np.frombuffer(emb_blob, dtype=np.float32)
            if emb.shape[0] != q.shape[0]:
                continue
            cos = float(np.dot(q, emb / (np.linalg.norm(emb) + 1e-9)))
            scored.append((prompt, response, cos, src))
        scored.sort(key=lambda x: -x[2])
        return scored[:top_k]
    except Exception as e:
        print(f"Vec search err: {e}", file=sys.stderr)
        return []


def _fuse(fts: list, vec: list, top_k: int = 3) -> list[tuple[str, str, str, float]]:
    """Reciprocal rank fusion — combine FTS + vec rankings."""
    seen: dict[str, dict] = {}
    for rank, (prompt, response, _, src) in enumerate(fts):
        key = prompt[:100]
        seen.setdefault(key, {"prompt": prompt, "response": response, "source": src,
                              "rrf": 0.0})
        seen[key]["rrf"] += 1.0 / (60 + rank)
    for rank, (prompt, response, _, src) in enumerate(vec):
        key = prompt[:100]
        seen.setdefault(key, {"prompt": prompt, "response": response, "source": src,
                              "rrf": 0.0})
        seen[key]["rrf"] += 1.0 / (60 + rank)
    ranked = sorted(seen.values(), key=lambda x: -x["rrf"])
    return [(r["prompt"], r["response"], r["source"], r["rrf"]) for r in ranked[:top_k]]


def retrieve_similar(query: str, top_k: int = 3, max_kb: int = 10) -> str:
    """Returns markdown-formatted 'Similar past work' block to inject in prompt.
    Empty string if no good matches."""
    if not query or len(query) < 30:
        return ""
    cache_key = _hash_key(query, top_k)
    cached = _cache_get(cache_key)
    if cached is not None:
        return cached

    # Run both retrievals in parallel (best-effort)
    import concurrent.futures
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as ex:
        fts_fut = ex.submit(_fts_search, query, top_k)
        # Vec retrieval optional — only if Ollama embeddings available
        vec_fut = ex.submit(lambda: _vec_search(_embed_query(query) or [], top_k))
        try:
            fts_results = fts_fut.result(timeout=5)
        except Exception:
            fts_results = []
        try:
            vec_results = vec_fut.result(timeout=10)
        except Exception:
            vec_results = []

    fused = _fuse(fts_results, vec_results, top_k)
    if not fused:
        _cache_put(cache_key, "")
        return ""

    out_parts = ["### Similar past work (from training-pairs.jsonl):\n"]
    budget = max_kb * 1024
    for i, (p, r, src, score) in enumerate(fused, 1):
        chunk = f"\n#### Match {i} (source: {src}, score: {score:.3f})\n"
        chunk += f"**Q:** {p[:600]}\n"
        chunk += f"**A:** {r[:1200]}\n"
        if len(chunk) > budget:
            break
        out_parts.append(chunk)
        budget -= len(chunk)

    out = "".join(out_parts)
    _cache_put(cache_key, out)
    return out


if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("usage: rag_retrieve.py <query>", file=sys.stderr)
        sys.exit(2)
    q = " ".join(sys.argv[1:])
    print(retrieve_similar(q, top_k=3))
bin/rag-vector-builder.sh (new file)
@@ -0,0 +1,111 @@
#!/usr/bin/env bash
# RAG vector index builder — runs every 30 min, embeds new training pairs.
#
# Reads training-pairs.jsonl, embeds prompts via Ollama nomic-embed-text,
# stores in ~/.surrogate/state/rag-vectors.db (SQLite + numpy bytes).
#
# Incremental: tracks offset, only embeds NEW pairs since last run.
# Caps at 50K vectors total (LRU eviction by ts) to keep index small + fast.
set -uo pipefail
set -a; source "$HOME/.hermes/.env" 2>/dev/null; set +a

LOG="$HOME/.surrogate/logs/rag-vector-builder.log"
SRC="$HOME/.surrogate/training-pairs.jsonl"
DB="$HOME/.surrogate/state/rag-vectors.db"
OFFSET_FILE="$HOME/.surrogate/.rag-vec-offset"
MAX_VECTORS="${RAG_MAX_VECTORS:-50000}"
BATCH_SIZE="${RAG_BATCH:-500}"
mkdir -p "$(dirname "$LOG")" "$(dirname "$DB")"

[[ ! -f "$SRC" ]] && { echo "[$(date +%H:%M:%S)] no source" | tee -a "$LOG"; exit 0; }

# Wait for Ollama nomic-embed to be available
for i in 1 2 3 4 5; do
    if curl -sS --max-time 3 http://127.0.0.1:11434/api/tags 2>/dev/null | grep -q "nomic-embed-text"; then
        break
    fi
    [[ $i -eq 5 ]] && { echo "[$(date +%H:%M:%S)] nomic-embed-text not loaded — skip" | tee -a "$LOG"; exit 0; }
    sleep 5
done

CUR=$(wc -l < "$SRC" | tr -d ' ')
PREV=$(cat "$OFFSET_FILE" 2>/dev/null || echo 0)
NEW=$(( CUR - PREV ))
[[ $NEW -le 0 ]] && { echo "[$(date +%H:%M:%S)] no new pairs (offset=$PREV total=$CUR)" >> "$LOG"; exit 0; }

# Process at most BATCH_SIZE per run (gentle on Ollama)
TAKE=$NEW
[[ $TAKE -gt $BATCH_SIZE ]] && TAKE=$BATCH_SIZE
echo "[$(date +%H:%M:%S)] embedding $TAKE / $NEW pairs" | tee -a "$LOG"

# Feed the embedding script on fd 3 so stdin stays attached to the sed pipe
# (a plain heredoc on stdin would override the pipe and the script would see no pairs).
sed -n "$((PREV + 1)),$((PREV + TAKE))p" "$SRC" | python3 /dev/fd/3 "$DB" "$MAX_VECTORS" 3<<'PYEOF' >> "$LOG" 2>&1
import sys, json, sqlite3, urllib.request, time, hashlib
import numpy as np

db, max_vec = sys.argv[1], int(sys.argv[2])
con = sqlite3.connect(db, timeout=10)
con.execute("""
CREATE TABLE IF NOT EXISTS vectors (
    hash TEXT PRIMARY KEY,
    prompt TEXT NOT NULL,
    response TEXT NOT NULL,
    embedding BLOB NOT NULL,
    source TEXT,
    ts INTEGER NOT NULL
)""")
con.execute("CREATE INDEX IF NOT EXISTS idx_ts ON vectors(ts)")

embedded = skipped = errs = 0

def embed(text: str):
    body = json.dumps({"model":"nomic-embed-text","prompt":text[:2000]}).encode()
    req = urllib.request.Request("http://127.0.0.1:11434/api/embeddings",
                                 data=body, headers={"Content-Type":"application/json"})
    with urllib.request.urlopen(req, timeout=15) as r:
        emb = json.load(r).get("embedding")
    if not emb: return None
    return np.array(emb, dtype=np.float32).tobytes()

for line in sys.stdin:
    try:
        d = json.loads(line)
    except Exception:
        skipped += 1; continue
    p = (d.get("prompt") or d.get("instruction") or "")[:2000]
    r = (d.get("response") or d.get("output") or "")[:6000]
    if not p or len(p) < 30: skipped += 1; continue
    src = d.get("source", "?")
    ts = int(d.get("ts", time.time()))

    h = hashlib.md5(p[:500].encode()).hexdigest()[:16]
    if con.execute("SELECT 1 FROM vectors WHERE hash=?", (h,)).fetchone():
        skipped += 1; continue

    try:
        emb_bytes = embed(p)
        if emb_bytes is None: errs += 1; continue
        con.execute("INSERT OR IGNORE INTO vectors VALUES (?,?,?,?,?,?)",
                    (h, p, r, emb_bytes, src, ts))
        embedded += 1
    except Exception:
        errs += 1
        if errs > 10: break  # Ollama down

con.commit()

# LRU eviction if over cap
total = con.execute("SELECT COUNT(*) FROM vectors").fetchone()[0]
if total > max_vec:
    excess = total - max_vec
    con.execute("DELETE FROM vectors WHERE hash IN "
                "(SELECT hash FROM vectors ORDER BY ts ASC LIMIT ?)", (excess,))
    con.commit()
    print(f"  LRU evicted {excess} oldest vectors (cap={max_vec})")

print(f"  embedded={embedded} skipped={skipped} errs={errs} total={total}")
PYEOF

NEW_OFFSET=$(( PREV + TAKE ))
echo "$NEW_OFFSET" > "$OFFSET_FILE"
echo "[$(date +%H:%M:%S)] vector batch done · offset → $NEW_OFFSET" | tee -a "$LOG"
bin/surrogate-orchestrate.sh
@@ -111,6 +111,7 @@ $prompt
 ${RESEARCH_CONTEXT}
 ${PRD_CONTEXT}
 ${REPO_CONTEXT}
+${RAG_CONTEXT}
 
 === Working context ===
 CWD: $(pwd)
start.sh
@@ -274,8 +274,10 @@ while true; do
 # Every 60 min: dataset enrich (pulls fresh public datasets, dedups, uploads to HF)
 # (was 4h — accelerated to drain 96-dataset queue ASAP per user request)
 [[ $((M % 60)) -eq 5 ]] && bash ~/.surrogate/bin/dataset-enrich.sh >> "$LOG" 2>&1 &
-# Every 15 min: self-ingest training-pairs into FTS index (closes
+# Every 15 min: self-ingest training-pairs into FTS index (closes self-improvement)
 [[ $((M % 15)) -eq 0 ]] && bash ~/.surrogate/bin/surrogate-self-ingest.sh >> "$LOG" 2>&1 &
+# Every 30 min: build vector embeddings index (RAG semantic search)
+[[ $((M % 30)) -eq 12 ]] && bash ~/.surrogate/bin/rag-vector-builder.sh >> "$LOG" 2>&1 &
 # Every 30 min: synthetic data generation (REWORK→APPROVE DPO + distilabel rewrite)
 [[ $((M % 30)) -eq 7 ]] && bash ~/.surrogate/bin/synthetic-data-from-rework.sh >> "$LOG" 2>&1 &
 # Daily 04:00 UTC: refresh CVE feed (NVD + CISA KEV) → security-knowledge dataset