wpferrell committed · verified
Commit 0685c73 · Parent: 60c34c1

Add model card README

Files changed (1): README.md (+143 lines, new file)
---
language:
- en
license: apache-2.0
tags:
- emotion-detection
- emotional-intelligence
- mental-health
- wellbeing
- psychology
- text-classification
- multi-label-classification
- deberta
- distillation
pipeline_tag: text-classification
library_name: resonance-layer
---

# Resonance — Emotional Intelligence Layer for LLMs

**resonance-layer** sits between your users and your LLM. It reads the emotion behind what someone writes and injects that context before the LLM responds.

```python
from resonance import Resonance

r = Resonance(user_id="your-user-id")
context = r.process("I've been so anxious about this")
llm.chat(system=context.to_prompt(), message=message)  # llm and message: your existing client and user text
```

That's it. The LLM now knows the emotional state, psychological signals, and longitudinal pattern for this user — before it says a word.

---

## The problem this solves

Text doesn't carry emotion. When someone types *"I'm fine"* or *"whatever, doesn't matter"* — the LLM sees words. It has no idea if that person is exhausted, shutting down, or genuinely okay.

Resonance gives it that context, continuously, per user, without any extra effort from the developer or the user.

---

## What this model detects

This is a custom 86M-parameter student model distilled from three specialist teachers, trained on real human emotional expression across 29 datasets (~740K rows).

**18 output heads across 4 groups:**

| Group | Heads |
|---|---|
| Core | Primary emotion (7-class), VAD continuous, CNN local patterns, confidence calibration |
| Frameworks | Secondary emotion (21-class TONE), PERMA ×5, Window of Tolerance, reappraisal/suppression, Wise Mind |
| Ethics | Crisis detection, alexithymia screening |
| SDT | Autonomy, competence, relatedness |

All framework scores are **continuous outputs**, not binary flags — grounded in the clinical research each framework comes from.

**What makes it different from general emotion models:**

The three specialist teachers each cover a distinct psychological blind spot:
- **T1 (DeBERTa-v3-large + CNN):** shame/guilt separation with PoliGuilt typing — the distinction most models collapse or miss entirely
- **T2 (XLNet-large):** anger and complex social context
- **T3 (ELECTRA-large):** fear, PERMA wellbeing, and efficiency

---

## Installation

```bash
pip install resonance-layer
```

Weights download automatically on first use (~700MB). Fully local after that — no API call, no data leaving your infrastructure.

---

## Quick start

```python
from resonance import Resonance

r = Resonance(user_id="alice")
result = r.process("I keep second-guessing everything I do")

print(result.primary_emotion)   # e.g. "anxiety"
print(result.vad)               # valence, arousal, dominance
print(result.perma)             # PERMA scores across 5 dimensions
print(result.crisis_detected)   # always check this first
print(result.to_prompt())       # ready-made system prompt context
```

**If `crisis_detected` is True, surface a crisis resource immediately.** This is non-negotiable — it's in the ethics documentation and built into the API contract.

---

## Longitudinal learning

Resonance learns in two ways per user:

1. **Passive** — every call to `r.process()` updates the user's local profile. Patterns, suppression signals, regulation style — all accumulate silently. The LLM gets richer context with every conversation.
2. **Active** — explicit corrections (e.g. a user tapping a different emotion chip) feed directly into per-user reinforcement and adjust future detections for them specifically.

Both paths are local. No opt-in required.
104
+
105
+ ---
106
+
107
+ ## Upgrading from v1
108
+
109
+ ```bash
110
+ pip install --upgrade resonance-layer
111
+ ```
112
+
113
+ No API changes. Drop-in replacement.

---

## Honest limitations

- Shame F1 is the lowest of any class — it's the hardest psychological distinction to learn
- Sadness detection is weaker than anger, fear, and joy
- Some high-arousal distress is misclassified as surprise
- All gaps are documented in the repo

---

## Architecture

- **Student:** 86M-parameter DeBERTa-v3-base, 18 active heads
- **Teachers:** DeBERTa-v3-large+CNN (T1), XLNet-large (T2), ELECTRA-large (T3)
- **Distillation:** Multi-teacher knowledge distillation with per-head weighting, mutual information maximisation, born-again networks, uncertainty-aware loss
- **Training data:** 29 datasets, ~740K rows, 7 locked dataset rules (commercial licence, institutional/peer-reviewed, real human expression, framework-mapped, not too narrow, no gated access, no severe class imbalance)

---

## Links

- **PyPI:** [resonance-layer](https://pypi.org/project/resonance-layer/)
- **GitHub:** [wpferrell/Resonance](https://github.com/wpferrell/Resonance)
- **Docs & landing page:** [resonance-layer.com](https://resonance-layer.com)

---

*Named after Jody. She walks into a room and just knows. That's the standard.*