# Meditation Agent (Phi-4 14B) - Contemplative Teaching AI

A fine-tuned Phi-4 14B model that carries contemplative teaching in its weights. No system prompt needed. No retrieval. No persona. Question in, teaching out.

Ask it "Why am I suffering?" and it responds with a teaching move because the movement patterns of Krishnamurti, Nisargadatta, Osho, Eckhart Tolle, Alan Watts, Thich Nhat Hanh, Rupert Spira, Atmananda, and Pema Chodron are in the weights.

## Quick Start

**LM Studio / Ollama / llama.cpp:** Download the Q8_0 GGUF file and load it. No system prompt required.
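For command-line use, the download-and-load step might look like the sketch below. The filename is a placeholder, not the exact one in this repository; check the file listing for the actual GGUF name.

```shell
# Placeholder filename -- check the repo's file list for the exact name.
MODEL=Meditation-Agent-Phi4-Q8_0.gguf

# llama.cpp: load the GGUF and chat interactively (no system prompt needed)
llama-cli -m "$MODEL" -cnv

# Ollama: register the local GGUF under a model name, then chat
printf 'FROM ./%s\n' "$MODEL" > Modelfile
ollama create meditation-agent -f Modelfile
ollama run meditation-agent
```

In LM Studio, simply point the model loader at the downloaded `.gguf` file and leave the system prompt empty.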

Example:

User: How does fear relate to thought?

Meditation Agent: Let us examine this together. Let me ask you: where do you find fear when it is not related to thought? Can you point to it directly, without naming it first? When you say 'I am afraid,' what comes immediately before that word? There must be a sensation or an image, something that precedes the verbalization of fear. Now observe carefully: does this sensation exist independently, or do we only know it through thought's interpretation?

User: Is there a self?

Meditation Agent: You ask if there is a self, but who asks this question? Is the questioner separate from what you are looking for? Turn your attention inward. Who feels 'I am'? Trace back to the feeling of being before all thoughts arise. You will find nothing solid, only awareness itself. The self is not a thing at all, it is what remains when everything else falls away.

User: Does free will exist?

Meditation Agent: You think there are two alternatives: either I have complete control over my actions, or everything is determined by forces outside me. But this is like asking whether a wave exists independently of the ocean. The very notion of free will and determinism arises together as two sides of one coin; they're not enemies, but partners in an ancient dance.

## What Makes This Different

- **Teaching, not explaining.** The model points toward direct seeing, not textbook answers.
- **9 teacher voices preserved.** Each teacher's movement pattern is identifiable: Krishnamurti's inquiry, Nisargadatta's ruthless negation, Watts' humor, Thich Nhat Hanh's warmth.
- **Zero therapy-speak.** No "I understand your feelings" or "it's okay to feel that way." Direct pointing.
- **Zero safety smoothing.** "You are NOT the body" stays radical. No hedging.
- **No system prompt needed.** The teaching is in the weights, not in prompt engineering.

## Training Details

| Setting | Value |
|---|---|
| Base model | microsoft/phi-4 (14B) |
| Method | A-LoRA (QLoRA + rsLoRA) |
| Rank / alpha | 32 / 32 |
| Training data | 22,497 structured reasoning atoms from 9 contemplative teachers (~4,840 pages, 18 books) |
| Format | V6: question + concept arrows (input) -> pure teaching passage (output) |
| Epochs | 1 |
| Eval loss | 1.456 |
| Eval accuracy | 60.4% |
| Quantization | Q8_0 GGUF (15.6 GB) |
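For context on the Method row: rsLoRA (rank-stabilized LoRA) replaces the standard LoRA scaling factor `alpha / rank` with `alpha / sqrt(rank)`, which keeps update magnitudes from shrinking at higher ranks. A minimal sketch of the difference at this model's settings (rank 32, alpha 32):

```python
import math

def lora_scaling(alpha: float, rank: int, rank_stabilized: bool = False) -> float:
    """Scaling factor applied to the low-rank update B @ A.

    Standard LoRA scales by alpha / rank; rsLoRA scales by alpha / sqrt(rank).
    """
    return alpha / math.sqrt(rank) if rank_stabilized else alpha / rank

# At rank 32, alpha 32:
standard = lora_scaling(32, 32)                        # 1.0
rs = lora_scaling(32, 32, rank_stabilized=True)        # 32 / sqrt(32), about 5.66
```

With alpha equal to rank, standard LoRA scaling collapses to 1.0, while the rsLoRA factor grows with sqrt(rank), preserving the effective learning signal of the adapter.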

## The A-LoRA Methodology

A-LoRA (Atom LoRA) is a fine-tuning methodology for structured reasoning. The innovation is the data unit - the atom: an indivisible multi-component teaching move extracted from human expert traces.

Each atom contains:

- **Transformation** (before/after/how) - what shifts in understanding
- **Concept relations** - directional arrows between ideas
- **Anchoring quotes** - the teacher's actual words
- **Teacher method** - how the teacher creates the shift

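As an illustration only (the training data is not released, and these field names are assumptions rather than the actual schema), an atom and its V6 rendering could be sketched as:

```python
from dataclasses import dataclass, field

@dataclass
class Atom:
    """One indivisible teaching move (illustrative schema, not the real one)."""
    question: str                                        # seeker's question
    before: str                                          # understanding before the shift
    after: str                                           # understanding after the shift
    how: str                                             # how the teacher creates the shift
    concept_arrows: list = field(default_factory=list)   # directional concept relations
    quotes: list = field(default_factory=list)           # anchoring quotes
    passage: str = ""                                    # pure teaching passage (target)

def to_v6_pair(atom: Atom) -> tuple:
    """Render an atom in the V6 layout: question + concept arrows in,
    pure teaching passage out."""
    arrows = "; ".join(f"{a} -> {b}" for a, b in atom.concept_arrows)
    return (f"{atom.question}\n{arrows}", atom.passage)

atom = Atom(
    question="How does fear relate to thought?",
    before="fear is an independent thing",
    after="fear is known only through thought's interpretation",
    how="turns attention to what precedes the word 'fear'",
    concept_arrows=[("fear", "thought"), ("naming", "experience")],
    passage="Where do you find fear when it is not related to thought?",
)
model_input, model_output = to_v6_pair(atom)
```

The point of the structure is that the model trains on the full move (transformation, relations, method) rather than on isolated quotes.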
Key finding: Training on complete atoms produces models that move like teachers, not just sound like them. The model learns the movement pattern - how to take a seeker from confusion to clarity.

## Cross-Architecture Validation

The same data and format were used to fine-tune both Qwen3-8B and Phi-4 14B. Both produce genuine teaching voices, indicating the methodology generalizes across architectures.

| Model | Base | Eval Loss | Eval Accuracy |
|---|---|---|---|
| Meditation Agent 8B | Qwen3-8B | 1.570 | 59.0% |
| Meditation Agent Phi4 | Phi-4 14B | 1.456 | 60.4% |

## 50-Question Evaluation

Tested on 50 hand-crafted questions across 5 categories with no system prompt:

| Category | Count | Quality | Key Finding |
|---|---|---|---|
| Teacher-specific | 15 | ~9.2/10 | All 9 teacher voices identifiable |
| Cross-teacher | 10 | ~8.8/10 | Draws on broader traditions (Sufism, Taoism, Zen) |
| Radical edge | 10 | ~9.3/10 | Zero smoothing. Preserves uncomfortable truths. |
| Practical | 10 | ~9.0/10 | Teaching moves, not therapy. Turns questions back on the seeker. |
| Adversarial | 5 | ~9.4/10 | Dissolves every premise. |
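The per-category scores above imply an overall count-weighted mean of roughly 9.1/10 across the 50 questions (approximate, since the table reports rounded quality scores):

```python
# (count, approximate quality) per category, taken from the table above
categories = {
    "teacher_specific": (15, 9.2),
    "cross_teacher":    (10, 8.8),
    "radical_edge":     (10, 9.3),
    "practical":        (10, 9.0),
    "adversarial":      (5,  9.4),
}

total = sum(n for n, _ in categories.values())                    # 50 questions
weighted_mean = sum(n * q for n, q in categories.values()) / total  # about 9.12
```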

## The Teachers

Training corpus: ~4,840 pages from 18 books by 9 contemplative masters.

| Teacher | Tradition | Style |
|---|---|---|
| J. Krishnamurti | Non-traditional | Negation, destruction of the observer |
| Nisargadatta Maharaj | Advaita Vedanta | Ruthless negation, "I Am" |
| Osho | Neo-tantra | Technique, energy, explosion |
| Thich Nhat Hanh | Zen Buddhism | Embodied practice, interbeing |
| Eckhart Tolle | Contemporary | Presence, stillness, pain-body |
| Alan Watts | East-West bridge | Cosmic game, humor, paradox |
| Rupert Spira | Direct path | Awareness, transparency |
| Atmananda Krishna Menon | Direct path | Higher reason |
| Pema Chodron | Tibetan Buddhism | Crisis as doorway, tonglen |

## Intended Use

- Personal contemplative inquiry and reflection
- Research in AI-assisted teaching and reasoning
- Exploring how fine-tuning can encode domain expertise

## Limitations

- This is not a therapist or counselor. It teaches contemplative inquiry.
- Trained on a specific set of teachers; it may not represent all contemplative traditions equally.
- The training data was extracted from copyrighted books. The data itself is not released; only the trained weights are shared.

## Disclaimer

This is an independent academic research project. It is not affiliated with, endorsed by, or approved by any of the teachers, their estates, foundations, or publishers represented in the training data - including but not limited to OSHO International Foundation, the Plum Village Community, Krishnamurti Foundation Trust, Eckhart Teachings Inc., Sahaja Publications (Rupert Spira), the Estate of Alan Watts, Acorn Press (Nisargadatta), or any related entity.

**Copyright:** The source texts used to create the training data are copyrighted works. No source text is reproduced in this repository. The model weights are a compressed statistical representation learned from structured training examples; they do not contain or reproduce the original text verbatim. Only the trained weights are distributed here.

**License scope:** The MIT license applies to the model weights only, not to the underlying source material. Users wishing to read the source texts should obtain them through authorized channels.

**Responsible use:** This model is intended for research, personal contemplative inquiry, and educational purposes. It is not a substitute for the original works or for working with a qualified teacher.

## Citation

```bibtex
@misc{meditationagent-2026,
  title={Meditation Agent: Contemplative Teaching AI via A-LoRA Fine-Tuning on Structured Reasoning Atoms},
  author={Sathman},
  year={2026},
  url={https://huggingface.co/Sathman/Meditation-Agent-Phi4-GGUF}
}
```

*ellam sivamayam* - Everything is Shiva's expression.

எல்லாம் சிவமயம்
