---
title: README
emoji: 🔩
colorFrom: gray
colorTo: blue
sdk: static
pinned: false
---
<div align="center">
# Tinman Lab
### Autonomous Machines. Second-Order Systems.
<sub>AGENT MEMORY · ADVERSARIAL SAFETY · AGENTIC ECONOMY · PERCEPTION SYSTEMS · APPLIED RESEARCH</sub>
---
</div>
## Disposition Distillation
Tinman Lab develops **Disposition Distillation (DD)** — a multi-teacher distillation methodology that trains *how a model behaves* into weights, not system prompts. DD models plan before acting, acknowledge uncertainty, verify their own reasoning, and know what they don't know.
- **4-stage all-MIT pipeline** — Kimi K2.5 → GLM-5 → MiniMax M2.7 → GLM-5
- **7 behavioral dispositions** — Eager, Deliberate, Adversarial, Curious, Self-Improving, Humble, Persistent
- **On-device focus** — 0.6B to 2B parameters, quantized for mobile and edge deployment
- **100% open training data** — MIT-licensed teachers only, zero proprietary model outputs
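
The DD training objective itself isn't specified beyond the teacher lineup above, so the following is only an illustrative sketch: a generic multi-teacher knowledge-distillation loss in PyTorch. Every name here (`multi_teacher_kd_loss`, the per-teacher weights, the temperature) is a hypothetical stand-in, not the actual DD training code.

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, weights, T=2.0):
    """Hypothetical multi-teacher distillation loss, not the published DD objective.

    student_logits:      (batch, seq, vocab) student outputs
    teacher_logits_list: one logits tensor per teacher, same shape as student_logits
    weights:             per-teacher mixing weights (should sum to 1)
    T:                   softmax temperature
    """
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    loss = 0.0
    for w, t_logits in zip(weights, teacher_logits_list):
        p_teacher = F.softmax(t_logits / T, dim=-1)
        # KL(teacher || student), scaled by T^2 as in the standard distillation recipe
        loss = loss + w * F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T**2
    return loss
```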
## Models
| Model | Size | Description |
|-------|------|-------------|
| [tinman-code-0.6B](https://huggingface.co/Tinman-Lab/tinman-code-0.6B) | 418 MB | Coding assistant with meta-cognitive awareness — plans, verifies, flags uncertainty |
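
A minimal usage sketch, assuming tinman-code-0.6B loads through the standard `transformers` causal-LM interface (the prompt and generation settings are illustrative, not a documented recipe):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tinman-Lab/tinman-code-0.6B"

# Standard Hugging Face loading path; assumes the repo ships a causal LM checkpoint.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```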
## Links
- [Website](https://tinmanlab.com)
- [GitHub](https://github.com/tinmanlabsl/)
- Research Paper — coming soon