---
title: README
emoji: 🔩
colorFrom: gray
colorTo: blue
sdk: static
pinned: false
---

# Tinman Lab

Autonomous Machines. Second-Order Systems.

AGENT MEMORY · ADVERSARIAL SAFETY · AGENTIC ECONOMY · PERCEPTION SYSTEMS · APPLIED RESEARCH


## Disposition Distillation

Tinman Lab develops Disposition Distillation (DD), a multi-teacher distillation methodology that trains how a model behaves into its weights rather than into system prompts. DD models plan before acting, acknowledge uncertainty, verify their own reasoning, and know what they don't know. A generic sketch of the underlying distillation step appears after the list below.

- **4-stage all-MIT pipeline** — Kimi K2.5 → GLM-5 → MiniMax M2.7 → GLM-5
- **7 behavioral dispositions** — Eager, Deliberate, Adversarial, Curious, Self-Improving, Humble, Persistent
- **On-device focus** — 0.6B to 2B parameters, quantized for mobile and edge deployment
- **100% open training data** — MIT-licensed teachers only, zero proprietary model outputs
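
The staged, disposition-specific recipe above is Tinman Lab's own and is not spelled out in this card. As a rough illustration of the mechanism it builds on, the sketch below shows a generic multi-teacher logit-distillation training step in PyTorch. The function name, the teacher-averaging scheme, and the temperature value are assumptions made for the example, not details of the DD pipeline.

```python
# Illustrative multi-teacher knowledge-distillation step (generic KD sketch,
# not the actual Disposition Distillation pipeline).
import torch
import torch.nn.functional as F

def distill_step(student_logits: torch.Tensor,
                 teacher_logits: list[torch.Tensor],
                 temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between a blend of frozen teachers and the student.

    student_logits: [batch, seq, vocab]
    teacher_logits: list of [batch, seq, vocab] tensors from frozen teachers
    """
    # Average the teacher distributions in probability space.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits]
    ).mean(dim=0)

    # Student distribution at the same temperature.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)

    # Standard KD loss with temperature scaling.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    return loss * (temperature ** 2)
```

A staged pipeline like the one listed above would typically swap teachers (and training data) between stages rather than blend them all in a single step; the blend here is only the simplest multi-teacher variant.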

## Models

| Model | Size | Description |
|---|---|---|
| tinman-code-0.6B | 418 MB | Coding assistant with meta-cognitive awareness — plans, verifies, flags uncertainty |
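
As a usage illustration, the snippet below loads the coding model with the standard `transformers` causal-LM API. The repo id `TinmanLabSL/tinman-code-0.6B` and compatibility with `AutoModelForCausalLM` are assumptions based on this card, not confirmed details.

```python
# Hypothetical usage sketch; the repo id is assumed from the model name above.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TinmanLabSL/tinman-code-0.6B"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```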

## Links