# CoLAR Gemma 3-4B Flawed Fictions SFT

This repository stores CoLAR exports in a Hugging Face-compatible layout. The repo root works for standard Transformers loading, and `extra_state.pt` preserves the latent head for latent decoding.
## Current Revision

- Current tag: `last-ew6ob26g`
- Stage: supervised fine-tuning final checkpoint
- Task: Flawed Fictions continuity error detection
- Compare slug: `gemma3_colar_sft_last_ew6ob26g`
## Tagged Checkpoints

| Tag | Local reference | Status |
|---|---|---|
| `best-epoch02-step35-val_loss=2.5829` | best step35 export | tagged checkpoint |
| `last-ew6ob26g` | canonical last export | current commit |
## Files

- HF model files at the repo root for standard decoding
- `extra_state.pt` for CoLAR latent decoding
- `export_meta.json` from the local export
- `latent_metadata.json` with archival provenance
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "agurung/colar-gemma-3-4b-ff-sft",
    revision="last-ew6ob26g",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "agurung/colar-gemma-3-4b-ff-sft",
    revision="last-ew6ob26g",
)
```

For latent decoding, download the same revision and use `extra_state.pt` together with the model files at the repo root.
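The latent-decoding setup can be sketched as below. The internal structure of `extra_state.pt` is defined by the CoLAR training code, so this helper only loads the raw saved state; the `LATENT_FILES` constant and the `load_latent_state` helper name are ours, not part of the export.

```python
from pathlib import Path

# Files in this repo needed on top of the standard HF model files
LATENT_FILES = ["extra_state.pt", "export_meta.json", "latent_metadata.json"]

def load_latent_state(export_dir):
    """Load the CoLAR latent-head state saved alongside the HF export.

    Assumes `export_dir` contains a downloaded snapshot of this repo,
    e.g. via huggingface_hub.snapshot_download(
        "agurung/colar-gemma-3-4b-ff-sft", revision="last-ew6ob26g").
    """
    import torch  # deferred so the helper stays importable without torch

    path = Path(export_dir) / "extra_state.pt"
    # extra_state.pt is a torch pickle; map to CPU so loading works anywhere
    return torch.load(path, map_location="cpu")
```

The loaded state is then handed to the CoLAR latent-decoding code alongside the standard model loaded with `from_pretrained` above.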
## Notes

- This is the canonical Gemma 3 CoLAR post-SFT export used in the ablation materials.