Scalably Extracting Latent Representations of Users Collection Models and datasets for "Scalably Extracting Latent Representations of Users" • 10 items • Updated Mar 9 • 1
QuantLRM: Quantization of Large Reasoning Models via Fine-Tuning Signals Paper • 2602.02581 • Published Jan 31 • 10
Functionality-Oriented LLM Merging on the Fisher--Rao Manifold Paper • 2603.04972 • Published Mar 5 • 3
GPT-2 models fine-tuned on tasks from the GLUE Benchmark Collection If you find these models helpful, consider citing [our paper](https://arxiv.org/abs/2406.03280) • 7 items • Updated Aug 27, 2024 • 3
Scaling Latent Reasoning via Looped Language Models Paper • 2510.25741 • Published Oct 29, 2025 • 229
story writing favourites Collection Models I personally liked for generating stories in the past. Not a recommendation; most of these are outdated. • 17 items • Updated Mar 2 • 97
Unsloth 4-bit Dynamic Quants Collection Unsloth's Dynamic 4-bit quants selectively skip quantizing certain parameters, greatly improving accuracy while using <10% more VRAM than BnB 4-bit • 28 items • Updated 11 days ago • 96
miscii-14b-dev Collection Known stable releases of the miscii-1020-based models • 3 items • Updated Mar 2 • 2