Add Colab benchmark + chat script for Qwen3-8B-AWQ
colab/README.md ADDED (+10 -0)
@@ -0,0 +1,10 @@
+Copy the full content of the script to Colab. Run in order:
+
+1. First cell: `!pip install torch transformers datasets accelerate autoawq huggingface_hub -q`
+2. Then run all remaining cells in order.
+
+The script benchmarks GSM8K (20 samples) and code generation (10 problems),
+then drops into an interactive chat loop with ARIA monitoring.
+
+Model: Qwen/Qwen3-8B-AWQ (same Qwen3-8B architecture as prism-ml/Ternary-Bonsai-8B)
+GPU: T4 (16GB VRAM); fits comfortably with AWQ 4-bit quantization (~5.7GB)
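The ~5.7GB figure is consistent with back-of-the-envelope math: 4-bit weights for the model's ~8.2B parameters come to roughly 4.3GB, and AWQ's per-group scales and zero-points, plus non-quantized layers and runtime buffers, account for the remainder. A rough sanity check (the parameter count and per-group overhead are approximations, not values measured from this checkpoint):

```python
def awq_weight_gb(n_params: float, bits: int = 4, group_size: int = 128) -> float:
    """Approximate GPU memory for AWQ-quantized weights, in GB.

    Each group of `group_size` weights stores an fp16 scale and a packed
    zero-point, adding a small per-group overhead to the 4-bit weights.
    """
    weight_bytes = n_params * bits / 8
    # ~3 extra bytes per group (fp16 scale + packed zero) is a rough estimate
    overhead_bytes = (n_params / group_size) * 3
    return (weight_bytes + overhead_bytes) / 1e9

size = awq_weight_gb(8.2e9)  # Qwen3-8B has ~8.2B parameters
print(f"~{size:.1f} GB of quantized weights")  # roughly 4.3 GB
```

That leaves well over 10GB of the T4's 16GB for the KV cache and activations, which is why the AWQ build fits comfortably while an fp16 build (~16GB of weights alone) would not.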