fix(discord-bot): direct 11-provider LLM chain (drop surrogate CLI dep) 6b77b52 axentx-dev-bot committed 7 days ago
v18: simplify hub naming to surrogate-1-{SIZE}B-v1.5 (owner directive) 8aaeb2d Ashira Pitchayapakayakul committed 7 days ago
v18: add Qwen3.5/3.6 aliases (2026-04 releases) → Qwen3.5-9B = sweet spot for T4×2 4e9d4f7 Ashira Pitchayapakayakul committed 7 days ago
v18: BASE_MODEL alias resolver → short names just work 02e2084 Ashira Pitchayapakayakul committed 7 days ago
v18-safe-defaults: flip SUR_LORA_INIT=loftq + DISABLE_AL=1 as defaults e3077e1 Ashira Pitchayapakayakul committed 7 days ago
v18b: add GLM-family Kaggle-feasible bases (glm-4-9b-chat, 4.7-Flash) 9a0b1b3 Ashira Pitchayapakayakul committed 7 days ago
v18(round6): Phase 78-96 + Thai/gen-intel datasets + Qwen3 base option 78dd635 Ashira Pitchayapakayakul committed 7 days ago
v17(catchup): 18 phases + multi-teacher distill + 5-step merge + EAGLE-3 + TTC 31fcd69 Ashira Pitchayapakayakul committed 7 days ago
v16(round4-mega): 21 new phases + 30 datasets + 9 tokens + Granite/RoPE-NoPE/YaFSDP ab97ba7 Ashira Pitchayapakayakul committed 7 days ago
v15(round3): 13 new phases + 25 datasets + 20 swarm tokens + Kimi/DS/GLM techniques 1ba4971 Ashira Pitchayapakayakul committed 7 days ago
fix(bundle): prefer HF_TOKEN_PRO_WRITE to dodge HF_TOKEN 2500-req/5min cap f17fac0 Ashira Pitchayapakayakul committed 7 days ago
v14(unified): Phase -1 ingest + Phase 23-25 daemon/auto-feat/multicloud c311886 Ashira Pitchayapakayakul committed 7 days ago
v13(into-model): 22 phases + 30+ datasets + multi-agent tokens + frontier kernels dc702a4 Ashira Pitchayapakayakul committed 7 days ago
v12(into-model): wire ALL techniques as 14 env-toggle training phases 1bfa3c7 Ashira Pitchayapakayakul committed 7 days ago
v11(into-model): add 9 ingest datasets + Phase 0 hygiene + TruthRL ternary GRPO a71a56a Ashira Pitchayapakayakul committed 7 days ago
pivot(v9): SRE-specialist trainer prep → knowledge corpora + 6 role personas cc2fe17 Ashira Pitchayapakayakul committed 7 days ago
feat(overnight): scoring rubric + report generator + multi-window orchestrator 326e0f0 Ashira Pitchayapakayakul committed 8 days ago
feat(release): add file:/local-path support + arkship to allowlist 8b4b1b5 Ashira Pitchayapakayakul committed 8 days ago
fix(v8/kaggle): bootstrap Kaggle Secrets → os.environ before SFT setup ea2749e Ashira Pitchayapakayakul committed 8 days ago
feat(v8/lora-init): add loftq+pissa hybrid + corda modes (init can't co-exist trivially) 53e31e9 Ashira Pitchayapakayakul committed 8 days ago
feat(v8+autonomy): research-driven trainer + 4 daemons + 9-layer safety gate 4e166c6 Ashira Pitchayapakayakul committed 8 days ago
feat(v7): Spectrum-lite + Magpie + active-learning + full swap-and-bench chain dddf626 Ashira Pitchayapakayakul committed 8 days ago
feat(v1.1-extended): pivot to 7B base + EXTENDED stack on Kaggle T4×2 1e1e228 Ashira Pitchayapakayakul committed 8 days ago
feat(post-bench): 3-branch decision pipeline ready before bench fires b5f6808 Ashira Pitchayapakayakul committed 8 days ago
perf(harvest): bump worker fleet 1→16 + push cadence 2× → use the headroom 7d05ef5 Ashira Pitchayapakayakul committed 8 days ago
feat(civo): L40S 48GB training launcher with auto-teardown f89906d Ashira Pitchayapakayakul committed 8 days ago
fix(watcher+bench): retarget 14B v1.5-mid (Kaggle T4×2 reality) a12a88b Ashira Pitchayapakayakul committed 8 days ago
fix(kaggle-trainer): pick 14B (not 32B) for T4×2 → OOM diagnosis 332bf66 Ashira Pitchayapakayakul committed 8 days ago
feat(synth-puller): round-robin between 2 PRO ZeroGPU endpoints c14673b Ashira Pitchayapakayakul committed 8 days ago
fix(auto-bench-watcher): bash structure (else clause was nested wrong) 6223f7d Ashira Pitchayapakayakul committed 8 days ago
fix(auto-bench-watcher): single-API-call check (avoid pipe-stdout dup) 990093c Ashira Pitchayapakayakul committed 8 days ago
feat(auto-bench): watcher fires bench-v1-vs-v15 on first v1.5 checkpoint a84b5be Ashira Pitchayapakayakul committed 8 days ago
feat(kaggle-trainer): hardware-aware base model auto-pick 1fcf6ae Ashira Pitchayapakayakul committed 8 days ago
fix(kaggle-trainer): defensive schema handling → drop interleave_datasets 3a42737 Ashira Pitchayapakayakul committed 8 days ago
rename: drop '-lora-' segment from all model names + capitalize v1.5 size b772ad8 Ashira Pitchayapakayakul committed 8 days ago
feat(eval): axentx-eval-50 → 50-prompt in-domain DevSecOps eval suite ac7d68c Ashira Pitchayapakayakul committed 8 days ago
feat(v1.5): rewrite Kaggle train.py with full R1-R12 technique stack 6492b88 Ashira Pitchayapakayakul committed 8 days ago
feat(harvest): lift source-side length caps 6K/8K → 100K/200K chars e161478 Ashira Pitchayapakayakul committed 8 days ago
feat(v1.5): 32B SFT config + 3-way benchmark (v1 vs base32B vs v1.5) 8056cbe Ashira Pitchayapakayakul committed 8 days ago
fix: disable chutes ladder + zero-gpu cold-start retry 7fd3e2c Ashira Pitchayapakayakul committed 8 days ago
fix(zero-gpu-bridge): /api/predict → /call/respond SSE polling b2d3ead Ashira Pitchayapakayakul committed 8 days ago
fix(synth-puller): switch /run/ → /call/<api>/<event_id> SSE poll c1bfbce Ashira Pitchayapakayakul committed 8 days ago
fix(bridges+cron): repair 3 broken bridges + wire build-data-pipeline weekly 72f6b7f Ashira Pitchayapakayakul committed 8 days ago
feat(rounds-7-12-clean-no-tokens): SHARD + ZeroGPU + personae + monitor 47f02de Ashira Pitchayapakayakul committed 8 days ago
feat(burst-but-dont-die): adaptive auto-scaler + tighter discoverer cycle ff2fbf3 Ashira Pitchayapakayakul committed 8 days ago
fix(oom-permanent): memory guard + redis cap + heavy-task gating d8d7a71 Ashira Pitchayapakayakul committed 8 days ago
feat(regress-cron): daily 11:00 UTC regression-test on both Space + anchor 8c00e62 Ashira Pitchayapakayakul committed 8 days ago
feat(round12-tier2-regress): GSPO + CodeScaler stubs + 10-step regression suite e2c9041 Ashira Pitchayapakayakul committed 8 days ago
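The Kaggle Secrets bootstrap (ea2749e) and the HF_TOKEN_PRO_WRITE preference (f17fac0) can be sketched roughly as follows. This is a minimal sketch, not the repo's actual code: the secret names, the no-clobber rule, and the off-Kaggle fallback are assumptions; only `kaggle_secrets.UserSecretsClient.get_secret` is the real Kaggle API.

```python
import os

def bootstrap_kaggle_secrets(names):
    """Copy Kaggle Secrets into os.environ before any SFT/trainer setup.

    Sketch of the idea behind ea2749e; details are assumed, not verbatim.
    """
    try:
        from kaggle_secrets import UserSecretsClient  # only importable on Kaggle
    except ImportError:
        return []  # not running on Kaggle: leave the environment untouched
    client = UserSecretsClient()
    loaded = []
    for name in names:
        if os.environ.get(name):
            continue  # never clobber an explicitly set env var
        try:
            os.environ[name] = client.get_secret(name)
            loaded.append(name)
        except Exception:
            pass  # secret not attached to this notebook
    return loaded

def resolve_hf_token():
    # Prefer HF_TOKEN_PRO_WRITE to dodge the HF_TOKEN 2500-req/5min cap (f17fac0).
    return os.environ.get("HF_TOKEN_PRO_WRITE") or os.environ.get("HF_TOKEN")

# Run the bootstrap before any Hugging Face call so auth just works:
bootstrap_kaggle_secrets(["HF_TOKEN", "HF_TOKEN_PRO_WRITE"])
token = resolve_hf_token()
```

Bootstrapping first and resolving second means the same `resolve_hf_token()` works both on Kaggle (secrets injected) and locally (plain env vars).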
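The BASE_MODEL alias resolver (02e2084, "short names just work") presumably reduces to a lookup table with pass-through for anything already fully qualified. A hypothetical sketch; the alias table is illustrative (`THUDM/glm-4-9b-chat` is a real hub id, the Qwen3.5 entry is an assumed path), and the repo's real table will differ:

```python
# Illustrative alias table: short names -> full hub ids. Entries are examples,
# not the repo's actual mapping; the Qwen3.5 hub path is an assumption.
BASE_MODEL_ALIASES = {
    "glm-4-9b-chat": "THUDM/glm-4-9b-chat",
    "qwen3.5-9b": "Qwen/Qwen3.5-9B",
}

def resolve_base_model(name: str) -> str:
    key = name.strip().lower()
    # Known short alias -> full hub id; unknown or already-qualified
    # names (anything with an org prefix) pass through unchanged.
    return BASE_MODEL_ALIASES.get(key, name)

print(resolve_base_model("GLM-4-9B-Chat"))    # THUDM/glm-4-9b-chat
print(resolve_base_model("org/custom-model"))  # org/custom-model
```

Case-folding the key is what makes "short names just work" regardless of how the env var is typed.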