Fix saliency (len(models) bug), cache demo LLM outputs 3f4b5ab verified Yatsuiii committed 2 days ago
LLM: vLLM on AMD MI300X (OpenAI-compat) with HF fallback e83206b verified Yatsuiii committed 2 days ago
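The fallback pattern named in the commit above (vLLM's OpenAI-compatible endpoint as the primary backend, Hugging Face as the fallback) can be sketched roughly as follows. This is an illustrative assumption, not the repo's actual code; the function name and endpoint URL are hypothetical:

```python
from typing import Callable

def generate_with_fallback(
    prompt: str,
    primary: Callable[[str], str],
    fallback: Callable[[str], str],
) -> str:
    """Try the primary backend first (e.g. a vLLM server exposing the
    OpenAI-compatible API on an AMD MI300X box); on any failure, fall
    back to a secondary backend (e.g. huggingface_hub.InferenceClient).
    Hypothetical sketch of the pattern, not the Space's real wiring."""
    try:
        return primary(prompt)
    except Exception:
        # Primary unreachable or errored: degrade gracefully to fallback.
        return fallback(prompt)

# Hypothetical wiring (requires the `openai` package; URL is an assumption):
#   client = openai.OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
#   primary = lambda p: client.chat.completions.create(
#       model="Qwen/Qwen2.5-7B-Instruct",
#       messages=[{"role": "user", "content": p}],
#   ).choices[0].message.content
```

The try/except keeps the demo usable even when the GPU endpoint is down, which matches the "graceful LLM fallback" theme of the later commits.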
Update AUC to 0.7288 (K=32 retrain), LLM via InferenceClient d84066a verified Yatsuiii committed 2 days ago
LLM: switch to HF InferenceClient (merged model, always-on) b050a20 verified Yatsuiii committed 2 days ago
Ensemble UI: vote summary + p(ASD) histogram + collapsed per-site details c15db7e verified Yatsuiii committed 3 days ago
Fix: saliency CPU timeout (cap 5 models), 17→20 sites, architecture 4→20 models, stale text c721f8e verified Yatsuiii committed 3 days ago
Fix: /20 verdict count, graceful LLM fallback on CPU, AMD badge ff6bc7a verified Yatsuiii committed 3 days ago
5x LLM improvements: saliency grounding, anti-hallucination system prompt, temp 0.1, n_subjects, per-network scores in prompt da0634d verified Yatsuiii committed 3 days ago
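The grounding change in the commit above (subject count and per-network saliency scores injected into the prompt, paired with a low temperature to curb hallucination) might look roughly like this. All names and wording here are illustrative assumptions, not the Space's actual prompt:

```python
def build_grounded_prompt(network_scores: dict[str, float], n_subjects: int) -> str:
    """Format per-network saliency scores into the LLM prompt so the
    model cites the supplied numbers instead of inventing findings.
    Hypothetical sketch; network names and phrasing are assumptions."""
    # Rank networks by saliency, most influential first.
    ranked = sorted(network_scores.items(), key=lambda kv: kv[1], reverse=True)
    lines = [f"- {net}: saliency {score:.3f}" for net, score in ranked]
    return (
        f"Analysis of {n_subjects} subjects.\n"
        "Per-network saliency scores (higher = more influential):\n"
        + "\n".join(lines)
        + "\nOnly reference networks and scores listed above; "
        "do not introduce findings that are not in this list."
    )

# The anti-hallucination pairing would then sample with a low temperature,
# e.g. passing temperature=0.1 to the chat-completion call (an assumption
# matching the commit message, not verified against the repo).
```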
Wire Qwen2.5-7B LoRA (AMD MI300X) into analysis report 9466bf3 verified Yatsuiii committed 3 days ago
20-site LOSO: all 20 CC200 checkpoints, AUC=0.7260, 20-model ensemble 1133ab9 verified Yatsuiii committed 4 days ago