Fix prompt truncation in inference_eval.py: max_seq_length 768 -> 2048 1217c1d InosLihka committed 12 days ago
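The truncation fix above can be sketched in miniature. This is a hypothetical, self-contained illustration (the names `truncate_ids`, `OLD_MAX_SEQ_LENGTH`, and `NEW_MAX_SEQ_LENGTH` are not from the repo); `inference_eval.py`'s actual tokenizer call is not shown in this log.

```python
# Hypothetical sketch: the prompt is capped at max_seq_length tokens
# before inference, so a 768-token cap silently cut long chain-of-thought
# prompts short. Token IDs here are simulated integers.
OLD_MAX_SEQ_LENGTH = 768
NEW_MAX_SEQ_LENGTH = 2048

def truncate_ids(token_ids, max_seq_length):
    # Keep at most max_seq_length tokens, mirroring tokenizer truncation.
    return token_ids[:max_seq_length]

long_prompt_ids = list(range(1500))  # a 1500-token prompt
assert len(truncate_ids(long_prompt_ids, OLD_MAX_SEQ_LENGTH)) == 768   # truncated
assert len(truncate_ids(long_prompt_ids, NEW_MAX_SEQ_LENGTH)) == 1500  # intact
```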
Fix max_new_tokens for CoT format + add eval-only HF Jobs script b9c9b8f InosLihka committed 12 days ago
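Why a CoT format forces a larger `max_new_tokens`: the model emits its reasoning before the final answer, so the decode budget must cover both. A minimal sketch with an assumed helper (`generation_budget` and all numbers are illustrative, not taken from the commit diff):

```python
# Hypothetical sketch: with a chain-of-thought output format the total
# decode budget must cover reasoning tokens + answer tokens, plus a
# small safety margin for formatting tokens.
def generation_budget(reasoning_tokens: int, answer_tokens: int, margin: int = 32) -> int:
    # Total max_new_tokens = reasoning + answer + margin.
    return reasoning_tokens + answer_tokens + margin

max_new_tokens = generation_budget(reasoning_tokens=512, answer_tokens=64)
assert max_new_tokens == 608
```

Sizing `max_new_tokens` for the answer alone is the classic failure mode here: generation stops mid-reasoning and the grader never sees a final answer.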
Algorithm Distillation: grader v2 with belief_accuracy + SFT pipeline ece0bbe InosLihka committed 12 days ago
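A `belief_accuracy` metric plausibly measures how often the agent's inferred hidden profile matches the true one. A hypothetical sketch under that assumption; the repo's actual grader v2 may compute it differently:

```python
# Hypothetical sketch of belief_accuracy: fraction of episodes where the
# agent's predicted hidden profile equals the true hidden profile.
def belief_accuracy(predicted_profiles, true_profiles):
    assert len(predicted_profiles) == len(true_profiles), "length mismatch"
    hits = sum(p == t for p, t in zip(predicted_profiles, true_profiles))
    return hits / len(true_profiles)

acc = belief_accuracy(["A", "B", "C", "A"], ["A", "B", "A", "A"])
assert acc == 0.75  # 3 of 4 beliefs correct
```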
env: meta-RL refactor (continuous profiles, action+belief, adaptation grader) ecbe0d8 InosLihka Claude Opus 4.7 (1M context) committed 13 days ago
Rebuild as Life Simulator: 5 meters, 3 hidden profiles, GRPO training pipeline cc6473a InosLihka Claude Sonnet 4.6 committed 14 days ago
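The GRPO pipeline named above centers on group-relative advantages: rewards for a group of rollouts from the same prompt are normalized against the group's own mean and standard deviation, replacing a learned value baseline. A minimal sketch of that core step only (function name and constants are illustrative; the repo's actual training code is not shown in this log):

```python
# Hypothetical sketch of GRPO's group-relative advantage computation:
# each rollout's reward is centered and scaled by its group's statistics.
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    # eps guards against division by zero when all rewards are equal.
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

advs = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
assert abs(sum(advs)) < 1e-6  # zero-mean within the group
assert advs[0] > 0 and advs[1] < 0  # above-average rollouts get positive advantage
```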