TinmanLabSL committed on
Commit 8e19556 · verified · 1 Parent(s): 908a382

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +34 -3
README.md CHANGED
@@ -1,10 +1,41 @@
  ---
  title: README
- emoji: 👁
  colorFrom: gray
- colorTo: purple
  sdk: static
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
  title: README
+ emoji: 🔩
  colorFrom: gray
+ colorTo: blue
  sdk: static
  pinned: false
  ---

+ <div align="center">
+
+ # Tinman Lab
+
+ ### Autonomous Machines. Second-Order Systems.
+
+ <sub>AGENT MEMORY · ADVERSARIAL SAFETY · AGENTIC ECONOMY · PERCEPTION SYSTEMS · APPLIED RESEARCH</sub>
+
+ ---
+
+ </div>
+
+ ## Disposition Distillation
+
+ Tinman Lab develops **Disposition Distillation (DD)** — a multi-teacher distillation methodology that trains *how a model behaves* into weights, not system prompts. DD models plan before acting, acknowledge uncertainty, verify their own reasoning, and know what they don't know.
+
+ - **4-stage all-MIT pipeline** — Kimi K2.5 → GLM-5 → MiniMax M2.7 → GLM-5
+ - **7 behavioral dispositions** — Eager, Deliberate, Adversarial, Curious, Self-Improving, Humble, Persistent
+ - **On-device focus** — 0.6B to 2B parameters, quantized for mobile and edge deployment
+ - **100% open training data** — MIT-licensed teachers only, zero proprietary model outputs
+
+ ## Models
+
+ | Model | Size | Description |
+ |-------|------|-------------|
+ | [tinman-code-0.6B](https://huggingface.co/Tinman-Lab/tinman-code-0.6B) | 418 MB | Coding assistant with meta-cognitive awareness — plans, verifies, flags uncertainty |
+
+ ## Links
+
+ - [Website](https://tinmanlab.com)
+ - [GitHub](https://github.com/tinmanlabsl/)
+ - Research Paper — coming soon