Meditation Agent SmolLM3 3B v2
A small local contemplative model that actually sounds like it was trained to teach, not just to paraphrase spiritual language.
This is the first public Meditation Agent v2 multi-teacher release on SmolLM3 3B. It is trained on the newer E-atoms corpus, where the input is `question_iio + concept_relations` and the target is pure `teaching_e`.
Why Download This
- It is small enough to run locally. The release includes `Q3_K_M`, `Q5_K_M`, `Q8_0`, and `BF16`.
- It was trained for contemplative response, not generic assistant tone. The target is a direct teaching passage, not a template-heavy answer format.
- It is a real v2 jump. The newer `E-atoms` corpus is cleaner, sharper, and less synthetic than the earlier release line.
- It is multi-teacher. This model inherits from a cross-teacher corpus rather than a single style silo.
What Changed In v2
- The supervised target is now `teaching_e`, not the older teaching surface.
- The model sees question + concept structure on the user side, not concept arrows mixed into the assistant output.
- The corpus is more source-faithful, less smoothed, and less likely to drift into therapy-script language.
In practice, v2 is meant to feel more like live contemplative instruction and less like "AI talking about spirituality."
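The v2 input contract above (question plus concept relations on the user side, pure teaching text as the target) can be sketched as a small message builder. The joining format and the `build_v2_messages` helper below are illustrative assumptions, not the exact V6E serialization:

```python
def build_v2_messages(question_iio: str, concept_relations: list[str]) -> list[dict]:
    """Assemble a chat-style example for the v2 input contract.

    The user turn carries the question plus concept structure; the
    assistant turn (the training target) is pure teaching text.
    NOTE: the separator and ordering here are assumptions, not the
    official V6E format.
    """
    user_content = question_iio + "\n\n" + "\n".join(concept_relations)
    return [{"role": "user", "content": user_content}]


messages = build_v2_messages(
    "How does fear relate to thought?",
    ["fear -> thought", "thought -> interpretation"],
)
```

The point of the layout is that concept arrows never appear in the assistant output, so the model cannot learn to echo them back.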
Sample Outputs
These are from a small smoke eval run on the finished adapter before merge.
Example 1
Prompt: How does fear relate to thought?
Output:
Look closely at what you're asking. You want to know if fear is separate from thinking, or whether one causes the other? But notice this: when there's no thought about danger, when it simply arrives uninvited in your awareness, what do you call that feeling? The mind hasn't named it yet; only pure sensation exists then. Fear doesn't arise from thought as a cause, but thought immediately rushes forward with its interpretations and judgments. It says "danger" and suddenly you're afraid of something that may never come. Thought creates the problem by adding layers upon bare experience.
Example 2
Prompt: Can the mind be completely silent?
Output:
Watch what happens when you try to make your mind quiet. You're using another thought, "I want silence", and that very effort creates noise, doesn't it? The watcher becomes part of the watched. Now consider this: can there be any movement at all without a mover behind it? Is the eye separate from seeing or is the seer itself just an idea about sight? When you ask if the mind can be silent, who is asking? And does that questioner exist apart from the thinking he observes?
Example 3
Prompt: What is the sense of 'I Am' before it becomes 'I am this' or 'I am that'?
Output:
Look at what you're asking. You want to know about awareness as a thing, but that's like trying to grasp water with your hands, you can only feel the wetness. The "I Am" doesn't become anything; there never was any becoming in it. It simply is, and when you stop naming things after it, you discover something startling: no "this" remains standing alone, no "that". Just pure being without objectification.
Training Summary
- Base model: `HuggingFaceTB/SmolLM3-3B-Base`
- Training format: `V6E`
- Input: `question_iio + concept_relations`
- Output: pure `teaching_e`
- Eligible examples: 23,968
- Train / eval split: 22,769 / 1,199
- Adapter recipe: Level 2 QDoRA + rsLoRA
- Rank: 32
- Epochs: 1
- Max sequence length: 1536
- Completion-only loss: enabled
- NEFTune: enabled (`alpha=5`)
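For reference, NEFTune with `alpha=5` adds uniform noise to the input embeddings during training, scaled by sequence length and embedding dimension. A minimal pure-Python sketch of the scaling rule (the real trainer does this on tensors; the ~2048 hidden size used below is an assumption about SmolLM3, not a documented value here):

```python
import math
import random


def neftune_noise_scale(alpha: float, seq_len: int, embed_dim: int) -> float:
    # NEFTune samples noise ~ Uniform(-1, 1) and scales it by alpha / sqrt(L * d)
    return alpha / math.sqrt(seq_len * embed_dim)


def add_neftune_noise(embedding: list[float], alpha: float, seq_len: int) -> list[float]:
    # Perturb one embedding vector; noise magnitude is bounded by the scale.
    scale = neftune_noise_scale(alpha, seq_len, len(embedding))
    return [x + random.uniform(-1.0, 1.0) * scale for x in embedding]


# With alpha=5, seq_len=1536, and an assumed hidden size of 2048,
# the per-element noise is tiny (well under 0.01).
scale = neftune_noise_scale(5.0, 1536, 2048)
```

The small magnitude is the point: the noise regularizes the embedding layer without drowning the signal.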
Run Metrics
- Run name: `v6e-22k-smollm3-base-l2-r32`
- Training runtime: 228m 51s
- Final train loss: 1.7538
- Eval loss progression: 1.8358 -> 1.7230 -> 1.6826 -> 1.6608
- Eval mean token accuracy progression: 0.5411 -> 0.5603 -> 0.5679 -> 0.5725
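Since eval loss is mean per-token cross-entropy, it maps to perplexity via `exp(loss)`. A quick sanity check on the progression above:

```python
import math

# Eval loss checkpoints from the run metrics above.
eval_losses = [1.8358, 1.7230, 1.6826, 1.6608]

# Perplexity = exp(mean cross-entropy); lower is better.
perplexities = [round(math.exp(loss), 2) for loss in eval_losses]
# Drops monotonically, roughly from ~6.3 down to ~5.3.
```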
Files
- `Meditation_Agent-SmolLM3-3B-v2-Q3_K_M.gguf`
- `Meditation_Agent-SmolLM3-3B-v2-Q5_K_M.gguf`
- `Meditation_Agent-SmolLM3-3B-v2-Q8_0.gguf`
- `Meditation_Agent-SmolLM3-3B-v2-BF16.gguf`
Quick rule of thumb:
- `Q3_K_M` for the smallest local footprint
- `Q5_K_M` for the best general local default
- `Q8_0` if you want the strongest quantized version
- `BF16` for further serving / conversion work
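A rough way to compare the quants is to estimate file size from bits per weight. Treating the model as roughly 3B parameters, with ballpark llama.cpp bits-per-weight averages (the exact figures vary by model and are assumptions here, not measured sizes of these files):

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    # File size ~= parameters * bits / 8, ignoring GGUF metadata overhead.
    return n_params * bits_per_weight / 8 / 1e9


N_PARAMS = 3.0e9  # SmolLM3-3B, approximately

# Approximate average bits per weight for each quant type.
sizes = {
    name: approx_gguf_size_gb(N_PARAMS, bpw)
    for name, bpw in [("Q3_K_M", 3.9), ("Q5_K_M", 5.7), ("Q8_0", 8.5), ("BF16", 16.0)]
}
```

The BF16 file, at 16 bits per weight, lands at about 6 GB for a 3B model; the quantized files shrink proportionally.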
Known Limits
- This is still a multi-teacher model, so some teacher blending remains.
- Some openings still repeat more than they should.
- It is much better than the older line at avoiding synthetic spiritual prose, but it is not a fully solved endpoint.
Intended Use
Use it for:
- contemplative dialogue
- nondual / inquiry-style chat
- local meditation assistant experiments
- testing whether a small model can carry sharper contemplative training data
Not ideal for:
- therapy or crisis support
- broad factual QA
- roleplay-heavy chat
Public Naming
- Public repo: `Meditation-Agent-SmolLM3-3B-v2-GGUF`
- Internal folder: `Meditation_Agent-SmolLM3-3B-v2`
The public name uses v2 because this is the second-generation Meditation Agent release. Internally, it is still the E-atoms line.
Model tree for `Sathman/Meditation-Agent-SmolLM3-3B-v2-GGUF`:
- Base model: `HuggingFaceTB/SmolLM3-3B-Base`