---
title: Bonsai 1-bit GPU
emoji: 🌿
colorFrom: green
colorTo: blue
sdk: docker
app_port: 7860
suggested_hardware: l40sx1
pinned: true
short_description: Run 1-bit Bonsai LLMs on GPUs
models:
  - prism-ml/Bonsai-8B-gguf
  - prism-ml/Bonsai-4B-gguf
  - prism-ml/Bonsai-1.7B-gguf
---

# Bonsai Demo

Interactive demo for Bonsai — the first commercially viable 1-bit LLMs, by PrismML.

Bonsai models run at true 1-bit precision — every weight is a single bit. An 8B model fits in 1.15 GB, a 1.7B model in just 240 MB. Small enough to run in a browser, on a phone, or on any laptop — while remaining competitive with full-precision models on benchmarks.
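The size figures follow almost directly from the 1-bit storage. A quick back-of-envelope check (the gap between the raw number and the quoted footprint presumably comes from embeddings, norms, and quantization scales kept at higher precision — an assumption on our part, not a detail stated in this README):

```python
def one_bit_size_gb(n_params: float) -> float:
    """Raw weight storage in decimal GB if every weight is a single bit."""
    return n_params / 8 / 1e9  # bits -> bytes -> GB

# 8B parameters at 1 bit each: ~1.0 GB raw, vs. the quoted 1.15 GB on disk.
print(one_bit_size_gb(8e9))    # 1.0

# 1.7B parameters: ~0.21 GB raw, vs. the quoted 240 MB on disk.
print(one_bit_size_gb(1.7e9))  # 0.2125
```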

This demo will be available for a limited time (approximately 1–2 weeks). Enjoy it while it lasts!

## Highlights

Bonsai-8B fits in 1.15 GB (14x smaller than FP16) and generates at ~330 tok/s on an L40S (6.3x faster than FP16). Scores 70.5 average across 6 benchmark tasks, competitive with full-precision 8B models.

## Models

- [prism-ml/Bonsai-8B-gguf](https://huggingface.co/prism-ml/Bonsai-8B-gguf) (1.15 GB)
- [prism-ml/Bonsai-4B-gguf](https://huggingface.co/prism-ml/Bonsai-4B-gguf)
- [prism-ml/Bonsai-1.7B-gguf](https://huggingface.co/prism-ml/Bonsai-1.7B-gguf) (240 MB)
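Since the checkpoints are GGUF files, you can also try them locally with llama.cpp's `llama-server` after the demo goes offline. A sketch (the `.gguf` filename inside each repo is illustrative — check the repo's file list for the actual name):

```shell
# Download a Bonsai checkpoint from the Hub.
huggingface-cli download prism-ml/Bonsai-1.7B-gguf --local-dir bonsai-1.7b

# Serve it with llama-server; the built-in chat UI appears at
# http://localhost:7860 (the same port this Space uses).
llama-server -m bonsai-1.7b/*.gguf --host 0.0.0.0 --port 7860
```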

## Resources

## Privacy

- We do not log any messages. Chat content is never stored on the server.
- This demo uses the built-in llama-server UI, which saves your conversation history in your browser's local storage only. Clearing your browser cache will erase it.
- That said, please do not submit sensitive, private, or confidential information in your messages.

## Fair Use

We've allocated multiple GPUs to keep this demo responsive, but resources are shared across all users. Under heavy load you may experience slower responses or brief queuing. Please be mindful of usage and avoid sending large bursts of automated requests so everyone can enjoy the demo.