# SmartChild-1.1B--Q4_K_M (2026 Edition)

> "He's back... and he's local. lol."
This is a custom-quantized version of TinyLlama-1.1B, tuned to revive the spirit and utility of the legendary SmarterChild AIM bot. While not quite as snarky as the OG, the model keeps a fun, snappy, upbeat tone. It is not 100% factually reliable with dates or names, but it works well for introspection, simple advice, fake inspirational quotes, idea generation, simple coding examples, recipes, lyric generation, storytelling, brainstorming, and all other manner of silly musings. It may hallucinate, talk to itself, or "slop out" on low-context prompts like "hey!".
## Why this model is different
Unlike a standard 1.1B quant, this model was processed using a custom Importance Matrix (imatrix). The training data for the imatrix was hand-curated to preserve:
- **Classic AIM Dialect:** High retention of 2000s-era slang (`rofl`, `lmao`, `brb`, `s/l/r`).
- **Logical Flow:** Inclusion of `wllama.js` source code and logic puzzles in the imatrix training data to keep the model coherent at low bitrates.
- **Modern Awareness:** Contextual data for 2026, including local-first AI and edge-computing concepts.
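For reference, an imatrix-based quant like this one is typically produced with llama.cpp's `llama-imatrix` and `llama-quantize` tools. The commands below are a sketch of that pipeline; the file names are illustrative, not the actual files used for this release.

```sh
# 1. Build an importance matrix from the hand-curated calibration text.
./llama-imatrix -m tinyllama-1.1b-f16.gguf -f calibration.txt -o imatrix.dat

# 2. Quantize to Q4_K_M, using the imatrix to protect important weights.
./llama-quantize --imatrix imatrix.dat tinyllama-1.1b-f16.gguf tinyllama-1.1b-Q4_K_M.gguf Q4_K_M
```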
## Quantization Details
- Base Model: TinyLlama-1.1B-Chat-v1.0
- Quantization: Q4_K_M
- Format: GGUF
- Size: ~668 MB
- Context Length: 2048 tokens
## Perplexity Benchmarks

The following results were generated using `llama-perplexity` on the `wikitext-2-raw/wiki.test.raw` dataset.
| Model | Precision | Perplexity (PPL) | Δ PPL |
|---|---|---|---|
| TinyLlama-1.1B (Baseline) | F16 | 19.5532 | - |
| TinyLlama-1.1B (Quant) | Q4_K_M | 19.9509 | +0.3977 |
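The numbers above can be reproduced with llama.cpp's `llama-perplexity` tool; the local paths below are illustrative.

```sh
# Evaluate perplexity of the quant on the wikitext-2 test split.
./llama-perplexity -m tinyllama-1.1b-Q4_K_M.gguf -f wiki.test.raw -c 2048
```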
## Evaluation Verdict
For a model as small as TinyLlama (1.1B), this is a highly successful quantization. Smaller models are inherently "fragile": they have fewer parameters to represent complex information, so reducing bit depth usually causes a significant accuracy drop. A delta of +0.3977 PPL indicates that the Q4_K_M method preserves the vast majority of the model's reasoning capabilities while reducing the memory footprint by approximately 85% relative to full (F32) precision.
## Hardware Performance (Apple M2)
- Throughput: 943.52 tokens/sec (Prompt Eval)
- Memory Usage: ~636 MiB RAM for model weights.
## Usage

### In Browser (Wllama)
This model is optimized for web environments. Try it out at the SmartChild Space.
```js
// Direct download link for the GGUF weights.
const MODEL_URL = "https://huggingface.co/macwhisperer/smartchild/resolve/main/tinyllama-1.1b-Q4_K_M.gguf?download=true";
// Use with Wllama.js for local-first inference.
```
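A minimal loading sketch, assuming the `@wllama/wllama` package and its `loadModelFromUrl`/`createCompletion` API; the WASM paths, context size, and sampling options are illustrative and should be adjusted to your bundler setup.

```javascript
import { Wllama } from "@wllama/wllama";

// Paths where your bundler serves the wllama WASM binaries (illustrative).
const CONFIG_PATHS = {
  "single-thread/wllama.wasm": "/esm/single-thread/wllama.wasm",
  "multi-thread/wllama.wasm": "/esm/multi-thread/wllama.wasm",
};

const MODEL_URL =
  "https://huggingface.co/macwhisperer/smartchild/resolve/main/tinyllama-1.1b-Q4_K_M.gguf?download=true";

async function chat(prompt) {
  const wllama = new Wllama(CONFIG_PATHS);
  // Streams the ~668 MB GGUF into browser memory.
  await wllama.loadModelFromUrl(MODEL_URL, { n_ctx: 2048 });
  // Generate a short reply in classic SmarterChild style.
  return await wllama.createCompletion(prompt, { nPredict: 128 });
}

chat("hey! got any fake inspirational quotes?").then(console.log);
```

Because inference runs entirely client-side, the model is downloaded once and no prompt data leaves the browser.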