---
title: README
emoji: 🤖
colorFrom: gray
colorTo: blue
sdk: static
pinned: false
---
# Tinman Lab
### Autonomous Machines. Second-Order Systems.
AGENT MEMORY · ADVERSARIAL SAFETY · AGENTIC ECONOMY · PERCEPTION SYSTEMS · APPLIED RESEARCH
---
## Disposition Distillation
Tinman Lab develops **Disposition Distillation (DD)** – a multi-teacher distillation methodology that trains *how a model behaves* directly into the weights rather than a system prompt. DD models plan before acting, acknowledge uncertainty, verify their own reasoning, and know what they don't know.
- **4-stage all-MIT pipeline** – Kimi K2.5 → GLM-5 → MiniMax M2.7 → GLM-5
- **7 behavioral dispositions** – Eager, Deliberate, Adversarial, Curious, Self-Improving, Humble, Persistent
- **On-device focus** – 0.6B to 2B parameters, quantized for mobile and edge deployment
- **100% open training data** – MIT-licensed teachers only, zero proprietary model outputs
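As a rough illustration of the multi-teacher idea (not the lab's actual training code), a student can be nudged toward a weighted blend of several teachers' next-token distributions via a KL objective. The temperature, teacher weights, and vocabulary size below are all hypothetical:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def multi_teacher_kl(student_logits, teacher_logits_list, weights, temperature=2.0):
    """KL(blended teachers || student) at one token position.

    The blend is a convex combination of each teacher's softened
    distribution; the student is trained to minimize this divergence.
    """
    p_student = softmax(student_logits, temperature)
    p_blend = sum(w * softmax(t, temperature)
                  for w, t in zip(weights, teacher_logits_list))
    eps = 1e-12  # avoid log(0)
    return float(np.sum(p_blend * (np.log(p_blend + eps) - np.log(p_student + eps))))

# Toy example: 5-token vocabulary, three hypothetical teachers
rng = np.random.default_rng(0)
student = rng.normal(size=5)
teachers = [rng.normal(size=5) for _ in range(3)]
loss = multi_teacher_kl(student, teachers, weights=[0.4, 0.3, 0.3])
```

In a real pipeline this loss would be summed over sequence positions and mixed with a standard language-modeling loss; the sketch only shows the per-position term.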
## Models
| Model | Size | Description |
|-------|------|-------------|
| [tinman-code-0.6B](https://huggingface.co/Tinman-Lab/tinman-code-0.6B) | 418 MB | Coding assistant with meta-cognitive awareness – plans, verifies, flags uncertainty |
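A minimal usage sketch for the model above, assuming it ships with a standard `transformers` causal-LM interface (the `generate_completion` helper and its defaults here are illustrative, not part of the model card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Tinman-Lab/tinman-code-0.6B"

def generate_completion(prompt: str, max_new_tokens: int = 256) -> str:
    """Download the model on first call and return a text completion."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Check the model card for the exact prompt format and any quantized variants intended for edge deployment.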
## Links
- [Website](https://tinmanlab.com)
- [GitHub](https://github.com/tinmanlabsl/)
- Research Paper – coming soon