Tinman Lab
Autonomous Machines. Second-Order Systems.
AGENT MEMORY · ADVERSARIAL SAFETY · AGENTIC ECONOMY · PERCEPTION SYSTEMS · APPLIED RESEARCH
We build on-device AI systems that reason, remember, and self-correct — small models designed to run autonomously at the edge with calibrated uncertainty and adversarial robustness.
Research Areas
- Agent Memory — Encrypted semantic memory infrastructure for persistent agent context
- Adversarial Safety — Multi-agent stress-testing and trust verification for autonomous systems
- Perception Systems — On-device vision, voice, and multimodal understanding
- Disposition Distillation — Training behavioral tendencies (planning, uncertainty acknowledgment, self-verification) into sub-billion-parameter model weights via multi-teacher distillation from exclusively MIT-licensed teacher models
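As an illustrative sketch of the multi-teacher setup, the core idea is to train a small student against the consensus distribution of several teachers. The snippet below is not Tinman Lab's training code; it assumes each teacher emits logits over a shared vocabulary and shows the standard temperature-scaled distillation objective against the averaged teacher distribution.

```python
import math

def softmax(logits, temperature=2.0):
    # Temperature-scaled softmax over a single logit vector.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def multi_teacher_distill_loss(student_logits, teacher_logits_list, temperature=2.0):
    """Cross-entropy between the mean teacher distribution and the student.

    Minimizing this pulls the student's distribution toward the teachers'
    consensus; it differs from the KL divergence only by a constant
    (the entropy of the mean teacher distribution).
    """
    n = len(teacher_logits_list)
    teacher_probs = [softmax(t, temperature) for t in teacher_logits_list]
    # Average the teachers' probability distributions token-by-token.
    mean_teacher = [sum(p[i] for p in teacher_probs) / n
                    for i in range(len(student_logits))]
    student_log_probs = [math.log(p) for p in softmax(student_logits, temperature)]
    return -sum(q * lp for q, lp in zip(mean_teacher, student_log_probs))
```

In a full pipeline the target behaviors (planning, self-verification, and so on) would first be elicited from the teachers via prompting, and a loss of this shape applied per token over their output distributions.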
Research Releases
| Model | Size | Description |
| --- | --- | --- |
| tinman-code-0.6B | 418 MB | Research release — Disposition Distillation proof-of-concept on Qwen3-0.6B |
Links
- Website
- GitHub
- Research Paper — coming soon