EXL3 4.0bpw H6 quant (quantized with exllamav3 0.0.12)
Original: sam-paech/gemma-3-27b-it-antislop
A fine-tune of google/gemma-3-27b-it using the antislop method described in this paper: https://arxiv.org/abs/2510.15061
The pipeline identifies the model's unique slop (over-represented words and phrases compared to human writing), generates a preference training set, and trains out the slop with our FTPO training algorithm.
https://github.com/sam-paech/auto-antislop
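To give a sense of the identification step, here is a minimal illustrative sketch, not the actual auto-antislop code: it ranks words by how over-represented they are in sampled model output relative to a human-writing baseline. The file names and the simple frequency-ratio scoring are assumptions for illustration; see the repo above for the real pipeline.

```python
# Illustrative slop-identification sketch (assumed, not the auto-antislop
# implementation): score each word by how over-represented it is in model
# output relative to a human-writing baseline corpus.
from collections import Counter
import re

def word_freqs(text: str) -> Counter:
    """Lowercased word counts, normalized to frequencies."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    return Counter({w: c / total for w, c in Counter(words).items()})

def slop_scores(model_text: str, human_text: str, eps: float = 1e-9) -> dict:
    """Ratio of model frequency to human frequency; a high ratio marks a slop candidate."""
    model_f, human_f = word_freqs(model_text), word_freqs(human_text)
    return {w: f / (human_f.get(w, 0.0) + eps) for w, f in model_f.items()}

# Hypothetical input files: sampled model generations and a human baseline.
scores = slop_scores(open("model_samples.txt").read(),
                     open("human_baseline.txt").read())
ranked = sorted(scores.items(), key=lambda kv: -kv[1])
print(ranked[:20])  # words the model over-uses float to the top
```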
This process makes the model's most common slop words & phrases much less frequent, with minimal degradation to its overall capabilities.
It won't remove slop entirely. The technique only targets over-represented words & phrases, not stylistic or thematic slop.
This model should serve as a good base for further fine-tuning.