
HLWQ Gemma Models

updated 10 days ago

Google's Gemma family quantized with HLWQ (Hadamard-Lloyd quantization) · formerly the PolarQuant Gemma collection
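The collection page does not document HLWQ itself beyond the "Hadamard-Lloyd" expansion, but that name suggests the familiar two-step recipe: rotate weights with a Hadamard transform to spread outliers, then apply a Lloyd-Max (scalar k-means) quantizer. A minimal sketch of that generic recipe, with every name and parameter here being illustrative rather than taken from the actual HLWQ implementation:

```python
import numpy as np

def hadamard(n):
    """Normalized Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def lloyd_max(x, bits=5, iters=20):
    """1-D Lloyd-Max quantizer: alternate nearest-centroid assignment
    and centroid update (k-means on scalars)."""
    levels = 2 ** bits
    centroids = np.quantile(x, np.linspace(0, 1, levels))
    for _ in range(iters):
        idx = np.abs(x[:, None] - centroids[None, :]).argmin(axis=1)
        for k in range(levels):
            if np.any(idx == k):
                centroids[k] = x[idx == k].mean()
    return centroids[idx], idx, centroids

# Toy weight matrix: rotate, quantize at 5 bits, then undo the rotation.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64))
H = hadamard(64)
W_rot = W @ H                              # Hadamard rotation spreads outliers
W_q, idx, codebook = lloyd_max(W_rot.ravel(), bits=5)
W_hat = W_q.reshape(W.shape) @ H.T         # dequantize: inverse rotation
err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
```

Because the Hadamard matrix is orthogonal, the reconstruction error equals the quantization error in the rotated domain; at 5 bits the relative error on a Gaussian weight matrix stays small. The actual HLWQ models likely quantize per-group or, per the note below, per-expert.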

  • caiovicentino1/Gemma-4-31B-it-HLWQ-Q5

    Text Generation • Updated 9 days ago • 1.58k • 4

  • caiovicentino1/Gemma-4-31B-it-HLWQ-Q5-Vision

    Image-Text-to-Text • Updated 9 days ago • 337 • 7

  • caiovicentino1/Gemma-4-26B-A4B-it-HLWQ-Q5

    Image-Text-to-Text • 27B • Updated 9 days ago • 372 • 7

  • caiovicentino1/Gemma-4-31B-Claude-Opus-HLWQ-Q5-Vision

    Image-Text-to-Text • Updated 9 days ago • 836 • 17

  • caiovicentino1/Gemopus-4-26B-A4B-it-HLWQ-Q5

    Image-Text-to-Text • Updated 9 days ago • 144 • 3

    Note: HLWQ Q5 · 16.6 GB · Gemma-4 26B-A4B MoE (≈27B total params) · per-expert quantization · consumer-GPU ready
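The listed 16.6 GB is consistent with a back-of-envelope estimate for 5-bit weights over ~26B parameters. The small overhead factor below (for per-group scales/codebooks) is our assumption, not a figure from the page:

```python
def q5_weight_size_gb(n_params, bits=5, overhead=0.02):
    # Rough estimate: bits/8 bytes per parameter, plus an assumed
    # ~2% overhead for quantization scales/codebooks.
    return n_params * bits / 8 * (1 + overhead) / 1e9

size = q5_weight_size_gb(26e9)  # roughly 16.6 GB, matching the listed size
```

This is weights only; KV cache and activations add to the runtime footprint, which is why the unified weights-Q5 + KV-cache-Q3 collection above exists as a separate track.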
