Ed Addario (eaddario) · PRO
86 followers · 31 following
EAddario
AI & ML interests
Finding ways to optimize LLMs' inference performance in resource-constrained environments (e.g. commodity hardware, desktops, laptops, mobiles, edge devices, etc.)
Recent Activity
Replied to their own post about 19 hours ago:
Experimental global target bits-per-weight quantization of Qwen/Qwen3.5-4B and Qwen/Qwen3.5-9B. Unlike standard llama.cpp quantizations that rely on fixed type heuristics (e.g. Q4_K_M), the Target BPW approach optimizes per-tensor precision where it matters most and produces high-quality models that meet a precise global file-size target.
Key advantages:
- VRAM maximization: can generate high-quality models sized exactly to fit hardware constraints (e.g. fitting the model into exactly 24 GB of VRAM).
- Data-driven precision: the quantization mix is determined by actual weight error sensitivity rather than hardcoded rules, often yielding better PPL/KLD vs. size trade-offs.
Full benchmarks (PPL, KLD, ARC, MMLU, etc.) and methodology are in the model cards:
https://huggingface.co/eaddario/Qwen3.5-4B-GGUF
https://huggingface.co/eaddario/Qwen3.5-9B-GGUF
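The post does not include the implementation, but the idea it describes (spend a fixed global bit budget where measured sensitivity is highest) can be illustrated with a minimal Python sketch. Everything below is an assumption for illustration only: the names (`TensorInfo`, `QUANT_TYPES`, `choose_mix`), the approximate bits-per-weight figures, and the greedy allocation strategy are not the author's actual Target BPW code.

```python
# Hypothetical sketch of a "target bits-per-weight" allocation: pick a quant
# type per tensor so the total size meets a global BPW target, upgrading the
# most error-sensitive tensors first. Illustrative only, not the real tool.
from dataclasses import dataclass

# Candidate quant types with rough bits-per-weight costs (assumed values).
QUANT_TYPES = {"Q3_K": 3.4, "Q4_K": 4.5, "Q5_K": 5.5, "Q6_K": 6.6, "Q8_0": 8.5}

@dataclass
class TensorInfo:
    name: str
    n_weights: int        # number of weights in the tensor
    sensitivity: float    # assumed measured error increase per bit removed

def choose_mix(tensors: list[TensorInfo], target_bpw: float) -> dict[str, str]:
    """Greedy allocation: start every tensor at the cheapest type, then upgrade
    tensors in order of sensitivity while the global bit budget allows it."""
    total_weights = sum(t.n_weights for t in tensors)
    budget_bits = target_bpw * total_weights

    order = sorted(QUANT_TYPES.items(), key=lambda kv: kv[1])   # low -> high bpw
    choice = {t.name: order[0][0] for t in tensors}             # start at lowest
    used_bits = sum(order[0][1] * t.n_weights for t in tensors)

    for t in sorted(tensors, key=lambda t: t.sensitivity, reverse=True):
        for qname, bpw in order[1:]:
            extra = (bpw - QUANT_TYPES[choice[t.name]]) * t.n_weights
            if used_bits + extra <= budget_bits:
                used_bits += extra
                choice[t.name] = qname
            else:
                break
    return choice

if __name__ == "__main__":
    # Made-up tensor sizes and sensitivities, just to exercise the function.
    demo = [
        TensorInfo("blk.0.attn_q.weight", 4_194_304, 0.8),
        TensorInfo("blk.0.ffn_down.weight", 11_534_336, 2.3),
        TensorInfo("output.weight", 77_824_000, 3.1),
    ]
    print(choose_mix(demo, target_bpw=5.0))
```

Under these assumptions, the most sensitive tensors end up at higher-precision types while the overall file size stays at or below the requested bits-per-weight target, which is the trade-off the post describes.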
Posted an update 2 days ago:
https://huggingface.co/datasets/eaddario/imatrix-calibration datasets updated to include Southeast Asian languages (Burmese, Filipino, Indonesian, Thai & Vietnamese).
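As a loose usage illustration only: a calibration text file for llama.cpp's importance-matrix tool could be assembled from this dataset along the lines below. The split and column names ("train", "text") are guesses, not the dataset's documented layout.

```python
# Hedged sketch: pull calibration text from eaddario/imatrix-calibration and
# write it to a plain-text file for llama.cpp's llama-imatrix tool.
from datasets import load_dataset

ds = load_dataset("eaddario/imatrix-calibration", split="train")  # assumed split name

with open("calibration.txt", "w", encoding="utf-8") as f:
    for row in ds:
        f.write(row["text"] + "\n")  # assumed column name

# The resulting file can then be fed to llama.cpp, e.g.:
#   llama-imatrix -m model.gguf -f calibration.txt -o model.imatrix
```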
New activity 2 days ago on eaddario/imatrix-calibration:
"Great collection, I'm using it for my little project."
eaddario's datasets (1)
eaddario/imatrix-calibration · Viewer · Updated 3 days ago · 299 · 14.7k · 37