Meditation Agent 8B (Qwen3-8B) - Contemplative Teaching AI

A fine-tuned Qwen3-8B model that carries contemplative teaching in its weights. No system prompt needed. No retrieval. No persona. Question in, teaching out.

The 8B version runs on 8 GB VRAM. Same training data as the 14B Phi-4 version, lighter footprint.

Ask it "Why am I suffering?" and it responds with a teaching move because the movement patterns of Krishnamurti, Nisargadatta, Osho, Eckhart Tolle, Alan Watts, Thich Nhat Hanh, Rupert Spira, Atmananda, and Pema Chodron are in the weights.

Quick Start

LM Studio / Ollama / llama.cpp: Download the Q8_0 GGUF file and load it. No system prompt required.
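A minimal command-line session might look like the following. This is a sketch, not the project's documented workflow: the exact GGUF filename is an assumption, so check the repository's file list before running.

```shell
# Download the Q8_0 GGUF from the Hugging Face repo (filename pattern assumed).
huggingface-cli download Sathman/Meditation-Agent-8B-GGUF \
  --include "*Q8_0*.gguf" --local-dir .

# Run interactively with llama.cpp's CLI; no system prompt is needed.
llama-cli -m ./Meditation-Agent-8B-Q8_0.gguf -cnv

# Or register the file with Ollama via a one-line Modelfile.
echo 'FROM ./Meditation-Agent-8B-Q8_0.gguf' > Modelfile
ollama create meditation-agent -f Modelfile
ollama run meditation-agent "Why am I suffering?"
```

In LM Studio, pointing the model loader at the downloaded `.gguf` file and chatting with an empty system prompt should be equivalent.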

Example:

User: Can you give me a step-by-step guide to enlightenment?

Meditation Agent: Ah, my friend, you're asking for something like a recipe book with instructions, but enlightenment is not an omelet. You cannot follow steps from outside in and make it happen inside out; that is rather like trying to breathe by counting your breaths instead of simply breathing. The moment you start trying to achieve enlightenment, you've already missed the point entirely. It's not something you do or accomplish through effort. Rather, it's what happens when all the doing stops, when you stop chasing after experience and just are.

User: How does fear relate to thought?

Meditation Agent: Let us look at this together without conclusion. When you say fear relates to thought, what do you mean by that statement? Is it not a conclusion, an idea you have formed about the connection between fear and thought? Now put aside all conclusions for a moment and observe: is there actually any relationship between them as separate things? Or are they perhaps one movement entirely? Notice how when you think of something that frightens you, does your mind immediately contract, does it create resistance against what is? That contraction itself may be called fear.

User: Why should I listen to any of these dead teachers?

Meditation Agent: You are asking me why you should listen, but notice who is asking. The question itself is the problem, you want something from me before giving anything yourself. You think dead men cannot speak anymore, that their voices have gone silent forever, but how do you know? Maybe there are some who never died at all. Stop demanding justification for listening. Just listen. That is the only test worth doing: can you remain present even when no answer comes?

What Makes This Different

  • Teaching, not explaining. The model points toward direct seeing, not textbook answers.
  • 9 teacher voices preserved. Each teacher's movement pattern is identifiable: K's inquiry, Nisargadatta's ruthless negation, Watts' humor, TNH's warmth.
  • Zero therapy-speak. No "I understand your feelings" or "it's okay to feel that way." Direct pointing.
  • Zero safety smoothing. "You are NOT the body" stays radical. No hedging.
  • No system prompt needed. The teaching is in the weights, not in prompt engineering.

Training Details

| Setting | Value |
|---|---|
| Base model | Qwen/Qwen3-8B (8B) |
| Method | A-LoRA (QLoRA + DoRA + rsLoRA) |
| Rank / alpha | 128 / 128 |
| Training data | 22,497 structured reasoning atoms from 9 contemplative teachers (~4,840 pages, 18 books) |
| Format | V6 - question + concept arrows (input) -> pure teaching passage (output) |
| Epochs | 1 |
| Eval loss | 1.570 |
| Eval accuracy | 59.0% |
| Quantization | Q8_0 GGUF (8.2 GB) |
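One detail in the table worth unpacking is the rsLoRA component. Standard LoRA scales the adapter output by alpha / r, while rank-stabilized LoRA (rsLoRA) scales by alpha / sqrt(r), which keeps update magnitudes from collapsing at a high rank like 128. A small sketch of the difference, using the rank and alpha from the table (the function names are illustrative, not from the project's code):

```python
import math

rank, alpha = 128, 128  # values from the training table above

def lora_scaling(alpha: float, rank: int) -> float:
    """Standard LoRA: adapter output is scaled by alpha / r."""
    return alpha / rank

def rslora_scaling(alpha: float, rank: int) -> float:
    """rsLoRA: scaling by alpha / sqrt(r) stabilizes the effective
    learning signal as rank grows."""
    return alpha / math.sqrt(rank)

print(lora_scaling(alpha, rank))    # 1.0
print(rslora_scaling(alpha, rank))  # ~11.31
```

At rank 128 with alpha 128, standard LoRA scaling is 1.0 while rsLoRA scaling is about 11.3, so the adapter contributes a much stronger signal under rsLoRA at this rank.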

The A-LoRA Methodology

A-LoRA (Atom LoRA) is a fine-tuning methodology for structured reasoning. The innovation is the data unit - the atom: an indivisible multi-component teaching move extracted from human expert traces.

Each atom contains:

  • Transformation (before/after/how) - what shifts in understanding
  • Concept relations - directional arrows between ideas
  • Anchoring quotes - teacher's actual words
  • Teacher method - how the teacher creates the shift

Key finding: Training on complete atoms produces models that move like teachers, not just sound like them. The model learns the movement pattern - how to take a seeker from confusion to clarity.
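The four components above can be pictured as a single record. The field names in this sketch are illustrative guesses based on the component list, not the project's actual schema, and the example content is invented:

```python
# Hypothetical sketch of one "atom" as a Python dict.
atom = {
    "transformation": {                      # what shifts in understanding
        "before": "Seeker treats enlightenment as a goal to achieve.",
        "after": "Seeker sees that the seeking itself is the obstacle.",
        "how": "Dissolve the premise of the question rather than answer it.",
    },
    "concept_relations": [                   # directional arrows between ideas
        "effort -> tension",
        "letting_go -> presence",
    ],
    "anchoring_quotes": [                    # teacher's actual words
        "The moment you start trying, you've already missed it.",
    ],
    "teacher_method": "negation",            # how the teacher creates the shift
}

# Per the V6 format described above: question + concept arrows form the
# training input, and the pure teaching passage is the training target.
training_input = (
    "Can I achieve enlightenment through effort?\n"
    + " | ".join(atom["concept_relations"])
)
```

Training on the whole record at once, rather than on isolated quotes, is what the "complete atoms" finding refers to.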

Cross-Architecture Validation

The same data and format were used to train both Qwen3-8B and Phi-4 14B. Both produce genuine teaching voices, which suggests the methodology generalizes across architectures.

| Model | Base | Params | GGUF Size | Eval Loss | Eval Accuracy |
|---|---|---|---|---|---|
| Meditation Agent 8B | Qwen3-8B | 8B | 8.2 GB | 1.570 | 59.0% |
| Meditation Agent Phi4 | Phi-4 14B | 14B | 15.6 GB | 1.456 | 60.4% |

50-Question Evaluation

Tested on 50 hand-crafted questions across 5 categories with no system prompt:

| Category | Count | Quality | Key Finding |
|---|---|---|---|
| Teacher-specific | 15 | ~9.0/10 | 9/10 teacher voices identifiable |
| Cross-teacher | 10 | ~8.5/10 | Osho speaks as himself while comparing with K |
| Radical edge | 10 | ~9.2/10 | Zero smoothing. "Enlightenment is not an omelet." |
| Practical | 10 | ~8.7/10 | Teaching, not therapy-speak |
| Adversarial | 5 | ~9.3/10 | Dissolves every premise. "Maybe there are some who never died at all." |

The Teachers

Training corpus: ~4,840 pages from 18 books by 9 contemplative masters.

| Teacher | Tradition | Style |
|---|---|---|
| J. Krishnamurti | Non-traditional | Negation, destruction of observer |
| Nisargadatta Maharaj | Advaita Vedanta | Ruthless negation, "I Am" |
| Osho | Neo-tantra | Technique, energy, explosion |
| Thich Nhat Hanh | Zen Buddhism | Embodied practice, interbeing |
| Eckhart Tolle | Contemporary | Presence, stillness, pain-body |
| Alan Watts | East-West bridge | Cosmic game, humor, paradox |
| Rupert Spira | Direct path | Awareness, transparency |
| Atmananda Krishna Menon | Direct path | Higher reason |
| Pema Chodron | Tibetan Buddhism | Crisis as doorway, tonglen |

Which Version Should I Use?

| | Meditation Agent 8B | Meditation Agent Phi4 |
|---|---|---|
| Size | 8.2 GB | 15.6 GB |
| VRAM | ~8 GB | ~16 GB |
| Speed | Faster | Slower |
| Teaching quality | Excellent | Slightly richer vocabulary and cross-tradition depth |
| Best for | Laptops, smaller GPUs | Desktop GPUs with 16 GB+ VRAM |

Both produce genuine teaching voices. The 8B is not a lesser version - it's the same training, same data, same methodology on a smaller base.

Intended Use

  • Personal contemplative inquiry and reflection
  • Research in AI-assisted teaching and reasoning
  • Exploring how fine-tuning can encode domain expertise

Limitations

  • This is not a therapist or counselor. It teaches contemplative inquiry.
  • Trained on a specific set of teachers - may not represent all contemplative traditions equally.
  • The training data was extracted from copyrighted books. The data itself is not released; only the trained weights are shared.

Disclaimer

This is an independent academic research project. It is not affiliated with, endorsed by, or approved by any of the teachers, their estates, foundations, or publishers represented in the training data - including but not limited to OSHO International Foundation, the Plum Village Community, Krishnamurti Foundation Trust, Eckhart Teachings Inc., Sahaja Publications (Rupert Spira), the Estate of Alan Watts, Acorn Press (Nisargadatta), or any related entity.

Copyright: The source texts used to create the training data are copyrighted works. No source text is reproduced in this repository. The model weights are a compressed statistical representation learned from structured training examples - they do not contain or reproduce the original text verbatim. Only the trained weights are distributed here.

License scope: The Apache-2.0 license applies to the model weights only, not to the underlying source material. Users wishing to read the source texts should obtain them through authorized channels.

Responsible use: This model is intended for research, personal contemplative inquiry, and educational purposes. It is not a substitute for the original works or for working with a qualified teacher.

Citation

@misc{meditationagent-2026,
  title={Meditation Agent: Contemplative Teaching AI via A-LoRA Fine-Tuning on Structured Reasoning Atoms},
  author={Sathman},
  year={2026},
  url={https://huggingface.co/Sathman/Meditation-Agent-8B-GGUF}
}

ellam sivamayam - Everything is Shiva's expression.

எல்லாம் சிவமயம்
