# Colab benchmark + chat script for Qwen3-8B-AWQ
Copy the full script into a Colab notebook, then:
1. Run the first cell: `!pip install torch transformers datasets accelerate autoawq huggingface_hub -q`
2. Run the remaining cells in order.
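For reference, loading the AWQ checkpoint boils down to something like the sketch below. `load_model` is a hypothetical helper name, and the exact arguments in the actual script may differ; AWQ-quantized checkpoints load through the standard `transformers` API without extra quantization flags.

```python
# Minimal sketch of loading Qwen/Qwen3-8B-AWQ on a Colab T4.
# Hypothetical helper; the real script's loading code may differ.
MODEL_ID = "Qwen/Qwen3-8B-AWQ"

def load_model(model_id: str = MODEL_ID):
    # Imports live inside the function so this file stays importable
    # even where transformers is not installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" places the quantized weights on the GPU.
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",
        torch_dtype="auto",
    )
    return tokenizer, model
```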
The script benchmarks GSM8K (20 samples) and code generation (10 problems),
then drops into an interactive chat loop with ARIA monitoring.
Model: Qwen/Qwen3-8B-AWQ (same Qwen3-8B architecture as prism-ml/Ternary-Bonsai-8B)
GPU: T4 (16GB VRAM) — fits comfortably with AWQ 4-bit quantization (~5.7GB)
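The ~5.7GB figure is consistent with back-of-the-envelope math: 8B parameters at 4 bits each is roughly 3.7 GiB of quantized weights, with the remainder covering higher-precision layers (embeddings, norms), activations, and the KV cache. A rough estimate:

```python
# Back-of-the-envelope VRAM estimate for an 8B model at 4-bit (AWQ).
params = 8e9
weight_bytes = params * 4 / 8      # 4 bits per parameter
weight_gib = weight_bytes / 2**30  # quantized weights in GiB
# The gap up to the observed ~5.7GB comes from higher-precision
# layers, activations, and the KV cache, which grow with sequence
# length and batch size.
print(f"{weight_gib:.1f} GiB of quantized weights")
```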