Pocket Atlas 0.8B

"A towel is about the most massively useful thing an interstellar hitchhiker can have." — Douglas Adams

Pocket Atlas is your towel for ideas.

A fine-tuned Qwen3.5-0.8B that lives on your device and explains anything — clearly, warmly, and precisely. Ask it about quantum decoherence, the social contract, or why your sourdough collapsed. It will tell you what it is, why it matters, how it works, give you an example, and leave you with something worth remembering.

No internet. No cloud. No waiting. Just answers.


The Format

Pocket Atlas answers every question in a consistent 5-part structure:

What it is       — the honest one-sentence definition
Why it matters   — why any human being should care
How it works     — the mechanism, without the jargon
Simple example   — something you can picture
Key takeaway     — the thing worth remembering

Small enough to run on a MacBook, an iPhone via Ollama, or a Raspberry Pi — and explains the world better than most textbooks.
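Ollama is mentioned above but has no quick start of its own; a minimal Modelfile sketch gets a local GGUF running (the file name and system prompt here are illustrative — point FROM at whichever quant you downloaded):

```
# Modelfile — FROM points at a locally downloaded GGUF (name is illustrative)
FROM ./pocket-atlas-q4_k_m.gguf
SYSTEM """You explain ideas clearly, warmly, and precisely. Use this format: What it is, Why it matters, How it works, Simple example, Key takeaway."""
```

Then build and run it with `ollama create pocket-atlas -f Modelfile` followed by `ollama run pocket-atlas "Explain entropy."`.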


Files

File                       Size     Quantization   Best for
pocket-atlas-q4_k_m.gguf   493 MB   Q4_K_M         Ollama, phones, low RAM — recommended
pocket-atlas-q8_0.gguf     812 MB   Q8_0           Higher quality, 8 GB+ RAM
pocket-atlas-f16.gguf      1.5 GB   F16            Maximum quality, 16 GB+ RAM
model.safetensors-*        1.7 GB   BF16           MLX / transformers / fine-tuning
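As a rule of thumb, pick the largest quant that fits comfortably in memory. A minimal sketch of that choice — file names come from the table above, but the RAM thresholds are my own rough assumptions:

```python
def pick_quant(ram_gb: float) -> str:
    """Suggest a Pocket Atlas file for the available RAM (thresholds are rough)."""
    if ram_gb >= 16:
        return "pocket-atlas-f16.gguf"     # ~1.5 GB file, maximum quality
    if ram_gb >= 8:
        return "pocket-atlas-q8_0.gguf"    # ~812 MB file, higher quality
    return "pocket-atlas-q4_k_m.gguf"      # ~493 MB file, recommended default

print(pick_quant(4))  # pocket-atlas-q4_k_m.gguf
```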

Quick Start

MLX (Apple Silicon — recommended)

pip install mlx-lm

from mlx_lm import load, generate

model, tokenizer = load("cetusian/pocket-atlas-0.8b")

messages = [
    {"role": "system", "content": "You explain ideas clearly, warmly, and precisely. Use this format: What it is, Why it matters, How it works, Simple example, Key takeaway."},
    {"role": "user", "content": "Explain the Fermi paradox."}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=False)
response = generate(model, tokenizer, prompt=prompt, max_tokens=600, verbose=True)

Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "cetusian/pocket-atlas-0.8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [
    {"role": "system", "content": "You explain ideas clearly, warmly, and precisely. Use this format: What it is, Why it matters, How it works, Simple example, Key takeaway."},
    {"role": "user", "content": "Explain entropy."}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=False)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=600)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
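Because every answer follows the same five headings, post-processing is easy. A sketch that splits a response into its parts — it assumes the headings appear verbatim at the start of each section, which the model is trained toward but not guaranteed to do:

```python
import re

SECTIONS = ["What it is", "Why it matters", "How it works", "Simple example", "Key takeaway"]

def split_answer(text: str) -> dict:
    """Split a Pocket Atlas answer into its five named sections."""
    pattern = "(" + "|".join(SECTIONS) + ")"
    parts = re.split(pattern, text, flags=re.IGNORECASE)
    out = {}
    # re.split with a capture group yields [preamble, heading, body, heading, body, ...]
    for heading, body in zip(parts[1::2], parts[2::2]):
        out[heading.strip().lower()] = body.strip(" :—-\n")
    return out

answer = "What it is: a test. Key takeaway: remember this."
print(split_answer(answer)["key takeaway"])  # remember this.
```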

Example Output

Prompt: Explain the Fermi paradox.

What It Is — The Fermi paradox is the puzzling observation that, despite the high probability of extraterrestrial civilizations existing, we have found no evidence of them.

Why It Matters — It forces us to confront deep questions: Is life rare? Do civilizations destroy themselves? Are we being ignored — or are we simply not listening hard enough?

How It Works — Physicist Enrico Fermi reasoned: the universe is 13.8 billion years old, there are hundreds of billions of stars like our Sun, many with Earth-like planets — so where is everyone? The math says they should be everywhere. The silence says otherwise.

Simple Example — Imagine a huge party where statistically thousands of people should be in the kitchen, but you walk in and it's completely empty. You start to wonder: did everyone leave early? Was there never a party? Are you the only one who showed up?

Key Takeaway — The Fermi paradox isn't a proof that we're alone — it's a proof that something in our assumptions is wrong. Which assumption? That's the question that keeps astronomers up at night.


Prompting Guide

Pocket Atlas understands several modes:

# Standard explanation
"Explain [concept]."

# Explain like I'm 5
"Explain [concept] like I'm 5."

# Analogy only
"Explain [concept] using only an analogy."

# Misconceptions
"What do most people get wrong about [concept]?"

# Real-world applications
"Give 3 surprising real-world applications of [concept]."
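The modes above are just string templates, so they are easy to wrap in a helper; the function and mode names here are my own, not part of the model:

```python
MODES = {
    "standard": "Explain {c}.",
    "eli5": "Explain {c} like I'm 5.",
    "analogy": "Explain {c} using only an analogy.",
    "misconceptions": "What do most people get wrong about {c}?",
    "applications": "Give 3 surprising real-world applications of {c}.",
}

def prompt_for(concept: str, mode: str = "standard") -> str:
    """Fill the chosen prompt template with a concept."""
    return MODES[mode].format(c=concept)

print(prompt_for("entropy", "eli5"))  # Explain entropy like I'm 5.
```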

Training Details

Base model        unsloth/Qwen3.5-0.8B
Method            LoRA fine-tuning (PEFT)
Dataset           cetusian/atlas-pages
LoRA rank         16
LoRA alpha        16
Target modules    q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
Sequence length   2048
Batch size        64 (2× A100 80GB, torchrun DDP)
Epochs            1
Learning rate     2e-4 (cosine decay)
Optimizer         AdamW 8-bit
Precision         bf16
Thinking mode     Disabled — direct answers only
Train loss        2.147
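With rank 16, LoRA adds two small matrices (A: d_in×r, B: r×d_out) per targeted linear layer, so the trainable count per layer is r·(d_in + d_out). A quick sketch of that arithmetic — the hidden dimension below is an illustrative assumption, not Qwen3.5-0.8B's actual shape:

```python
def lora_params(d_in: int, d_out: int, r: int = 16) -> int:
    """Trainable parameters LoRA adds to one linear layer: A (d_in x r) + B (r x d_out)."""
    return r * (d_in + d_out)

# Illustrative only — this dimension is an assumption, not the model's real shape.
hidden = 1024
print(lora_params(hidden, hidden))  # 32768 trainable params for one square projection
```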

Dataset Composition (~18k examples)

Source                              Count    What it teaches
Atlas Pages — 5-part explanations   ~6,400   The house format: structured, warm, precise
arXiv abstracts                     ~8,000   Technical compression — dense research → plain language
XSum news summaries                 ~3,000   Radical brevity — one sentence, complete thought
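The mix above can be sanity-checked with quick arithmetic (counts from the table; percentages rounded):

```python
counts = {"atlas_pages": 6400, "arxiv": 8000, "xsum": 3000}
total = sum(counts.values())
print(total)  # 17400 — i.e. the "~18k" in the heading
for name, n in counts.items():
    print(f"{name}: {100 * n / total:.0f}%")  # 37%, 46%, 17%
```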

Performance

Tested on Apple M-series:

Device                    Framework   Speed       RAM
MacBook (Apple Silicon)   MLX         ~52 tok/s   ~1.6 GB
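At ~52 tok/s, the arithmetic for a full answer is simple — using the max_tokens value from the Quick Start as the worst case:

```python
tokens, speed = 600, 52  # max_tokens from Quick Start, tok/s from the table above
print(f"{tokens / speed:.1f} s")  # ~11.5 s for a complete 5-part answer
```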

Don't panic.
