---
language:
- en
license: apache-2.0
tags:
- emotion-detection
- emotional-intelligence
- mental-health
- wellbeing
- psychology
- text-classification
- multi-label-classification
- deberta
- distillation
pipeline_tag: text-classification
library_name: resonance-layer
---

# Resonance — Emotional Intelligence Layer for LLMs

**resonance-layer** sits between your users and your LLM. It reads the emotion behind what someone writes and injects that context before the LLM responds.

```python
from resonance import Resonance

r = Resonance(user_id="your-user-id")

message = "I've been so anxious about this"
context = r.process(message)

# `llm` stands in for your existing chat client
llm.chat(system=context.to_prompt(), message=message)
```

That's it. The LLM now knows the emotional state, psychological signals, and longitudinal pattern for this user — before it says a word.

---

## The problem this solves

Text doesn't carry emotion. When someone types *"I'm fine"* or *"whatever, doesn't matter"* — the LLM sees words. It has no idea if that person is exhausted, shutting down, or genuinely okay.

Resonance gives it that context, continuously, per user, without any extra effort from the developer or the user.

---

## What this model detects

This is a custom 86M parameter student model distilled from three specialist teachers, trained on real human emotional expression across 29 datasets (~740K rows).

**18 output heads across 4 groups:**

| Group | Heads |
|---|---|
| Core | Primary emotion (7-class), VAD continuous, CNN local patterns, confidence calibration |
| Frameworks | Secondary emotion (21-class TONE), PERMA ×5, Window of Tolerance, reappraisal/suppression, Wise Mind |
| Ethics | Crisis detection, alexithymia screening |
| SDT | Autonomy, competence, relatedness |

All framework scores are **continuous outputs**, not binary flags — grounded in the clinical research each framework comes from.
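
Continuous scores leave the thresholds to you. A purely illustrative sketch of turning them into a prompt line (the dimension names and the 0.0–1.0 range here are assumptions, not the library's actual schema):

```python
def describe_perma(scores: dict) -> str:
    """Turn continuous PERMA scores into a short prompt line.

    `scores` maps PERMA dimension names to model outputs; the names and
    the 0.0-1.0 range are illustrative assumptions, not the real schema.
    """
    low = [name for name, value in scores.items() if value < 0.3]
    if not low:
        return "Wellbeing signals look stable across PERMA dimensions."
    return "Low PERMA signals: " + ", ".join(sorted(low))


print(describe_perma({
    "positive_emotion": 0.21,   # hypothetical model outputs
    "engagement": 0.55,
    "relationships": 0.62,
    "meaning": 0.48,
    "accomplishment": 0.27,
}))  # Low PERMA signals: accomplishment, positive_emotion
```

Because the outputs are continuous, the cut-off (0.3 above) is an application choice, not something the model dictates.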

**What makes it different from general emotion models:**

The three specialist teachers each cover a distinct psychological blind spot:

- **T1 (DeBERTa-v3-large + CNN):** shame/guilt separation with PoliGuilt typing — the distinction most models collapse or miss entirely
- **T2 (XLNet-large):** anger and complex social context
- **T3 (ELECTRA-large):** fear, PERMA wellbeing, and efficiency

---

## Installation

```bash
pip install resonance-layer
```

Weights download automatically on first use (~700MB). Fully local after that — no API call, no data leaving your infrastructure.

---

## Quick start

```python
from resonance import Resonance

r = Resonance(user_id="alice")
result = r.process("I keep second-guessing everything I do")

print(result.primary_emotion)   # e.g. "anxiety"
print(result.vad)               # valence, arousal, dominance
print(result.perma)             # PERMA scores across 5 dimensions
print(result.crisis_detected)   # always check this first
print(result.to_prompt())       # ready-made system prompt context
```

**If `crisis_detected` is True, surface a crisis resource immediately.** This is non-negotiable — it's in the ethics documentation and built into the API contract.
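
One way to honour that contract is a hard gate in front of the LLM call. A sketch of the pattern, with a placeholder resource string and a placeholder `respond` callable (neither is part of the library):

```python
CRISIS_RESOURCE = (
    "It sounds like you're carrying something heavy right now. "
    "If you're in the US, you can call or text the 988 Suicide & Crisis "
    "Lifeline at 988, any time."
)


def gated_reply(result, respond):
    """Short-circuit to a crisis resource before any LLM call.

    `result` is the object returned by `r.process()`; `respond` is your
    normal LLM path, taking the system-prompt context as its argument.
    """
    if result.crisis_detected:
        return CRISIS_RESOURCE
    return respond(result.to_prompt())
```

Keeping the check in a single chokepoint like this makes it impossible for a new code path to forget it.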

---

## Longitudinal learning

Resonance learns in two ways per user:

1. **Passive** — every call to `r.process()` updates the user's local profile. Patterns, suppression signals, regulation style — all accumulate silently. The LLM gets richer context with every conversation.
2. **Active** — explicit corrections (e.g. a user tapping a different emotion chip) feed directly into per-user reinforcement and adjust future detections for them specifically.

Both paths are local. No opt-in required.
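
The card doesn't specify the internal update rule, but the passive path amounts to folding each detection into a rolling per-user baseline. One standard way to maintain such a baseline is an exponential moving average, sketched here purely for illustration (not the library's actual mechanism):

```python
class EmotionProfile:
    """Illustrative per-user baseline: an exponential moving average
    over per-emotion scores. Not the library's real implementation."""

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha        # weight given to the newest observation
        self.baseline: dict = {}  # emotion name -> smoothed score

    def update(self, scores: dict) -> None:
        for emotion, value in scores.items():
            prev = self.baseline.get(emotion, value)
            self.baseline[emotion] = (1 - self.alpha) * prev + self.alpha * value
```

A sustained deviation from a user's own baseline (anxiety running well above their norm for weeks, say) is exactly the kind of longitudinal signal that richer prompt context can carry.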

---

## Upgrading from v1

```bash
pip install --upgrade resonance-layer
```

No API changes. Drop-in replacement.

---

## Honest limitations

- Shame F1 is the lowest of any class — it's the hardest psychological distinction to learn
- Sadness detection is weaker than anger, fear, and joy
- Some high-arousal distress gets classified as surprise
- All gaps are documented in the repo

---

## Architecture

- **Student:** 86M parameter DeBERTa-v3-base, 18 active heads
- **Teachers:** DeBERTa-v3-large+CNN (T1), XLNet-large (T2), ELECTRA-large (T3)
- **Distillation:** Multi-teacher knowledge distillation with per-head weighting, mutual information maximisation, born-again networks, uncertainty-aware loss
- **Training data:** 29 datasets, ~740K rows, 7 locked dataset rules (commercial licence, institutional/peer-reviewed, real human expression, framework-mapped, not too narrow, no gated access, no severe class imbalance)
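
Per-head weighting means each head contributes its own scaled soft-target loss. A dependency-free sketch of that one piece of the objective (the weights and temperature are made-up values, and the real training code also includes hard labels and the uncertainty-aware term):

```python
import math


def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]


def kl_divergence(p, q):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)


def distillation_loss(teacher_logits, student_logits, head_weights, temperature=2.0):
    """Weighted sum of per-head KL between teacher and student soft targets."""
    total = 0.0
    for head, weight in head_weights.items():
        p = softmax(teacher_logits[head], temperature)   # teacher soft targets
        q = softmax(student_logits[head], temperature)   # student predictions
        total += weight * kl_divergence(p, q)
    return total
```

The loss is zero only when the student matches every weighted head; raising a head's weight (e.g. for crisis detection) makes its mismatches cost more during training.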

---

## Links

- **PyPI:** [resonance-layer](https://pypi.org/project/resonance-layer/)
- **GitHub:** [wpferrell/Resonance](https://github.com/wpferrell/Resonance)
- **Docs & landing page:** [resonance-layer.com](https://resonance-layer.com)

---

*Named after Jody. She walks into a room and just knows. That's the standard.*