SamanthaStorm committed on
Commit 4a965dc · verified · 1 Parent(s): b3ba831

Upload IntentAnalyzer v1.0 - Multi-Label Communication Intent Detection Model

Files changed (3):
  1. README.md +7 -7
  2. modeling_intent.py +1 -1
  3. pytorch_model.bin +3 -1
README.md CHANGED
@@ -55,7 +55,7 @@ IntentAnalyzer is a state-of-the-art multi-label text classification model desig
 The model detects 6 different intent categories (multi-label):
 
 1. **🧌 Trolling** - Deliberately provocative or disruptive communication
-2. **🚫 Dismissive** - Shutting down conversation or avoiding engagement
+2. **🚫 Dismissive** - Shutting down conversation or avoiding engagement
 3. **🎭 Manipulative** - Using emotional coercion, guilt, or pressure tactics
 4. **🌋 Emotionally Reactive** - Overwhelmed by emotion, not thinking clearly
 5. **✅ Constructive** - Good faith engagement and dialogue
@@ -69,7 +69,7 @@ The model detects 6 different intent categories (multi-label):
 
 ### Per-Category Performance
 - **Trolling**: F1=0.943 (P=0.976, R=0.911)
-- **Dismissive**: F1=0.850 (P=0.964, R=0.761)
+- **Dismissive**: F1=0.850 (P=0.964, R=0.761)
 - **Manipulative**: F1=0.907 (P=0.867, R=0.951)
 - **Emotionally Reactive**: F1=0.939 (P=0.931, R=0.947)
 - **Constructive**: F1=0.989 (P=0.978, R=1.000)
@@ -89,7 +89,7 @@ class MultiLabelIntentClassifier(nn.Module):
         self.bert = AutoModel.from_pretrained(model_name)
         self.dropout = nn.Dropout(0.3)
         self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)
-
+
     def forward(self, input_ids, attention_mask):
         outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
         pooled_output = outputs.last_hidden_state[:, 0]
@@ -110,18 +110,18 @@ intent_categories = ['trolling', 'dismissive', 'manipulative', 'emotionally_reac
 
 def predict_intent(text, threshold=0.5):
     inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128)
-
+
     with torch.no_grad():
         outputs = model(inputs['input_ids'], inputs['attention_mask'])
         probabilities = torch.sigmoid(outputs).numpy()[0]
-
+
     # Return predictions above threshold
     predictions = {}
     for i, category in enumerate(intent_categories):
         prob = probabilities[i]
         if prob > threshold:
             predictions[category] = prob
-
+
     return predictions
 
 # Example usage
@@ -155,7 +155,7 @@ The model was trained on a carefully curated dataset of 1,226 examples with:
 - **Relationship Counseling**: Understand communication patterns
 - **Content Moderation**: Flag problematic intent patterns
 
-### Research Applications
+### Research Applications
 - **Psychology**: Study communication patterns and intentions
 - **Linguistics**: Analyze pragmatic aspects of language
 - **Social Sciences**: Understanding online discourse patterns
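The README hunks above show `predict_intent` scoring each category with an independent sigmoid and keeping every label above a threshold. That multi-label decoding step can be sketched without the model itself; the sketch below is illustrative, not part of the repo, and uses made-up logits. It lists only the five category names visible in this diff (the `intent_categories` line is truncated, so the sixth label is not shown):

```python
import math

# The five labels visible in the diff; the sixth is cut off in the hunk.
INTENT_CATEGORIES = [
    "trolling", "dismissive", "manipulative",
    "emotionally_reactive", "constructive",
]

def sigmoid(x: float) -> float:
    """Logistic function: maps a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def decode_multilabel(logits, threshold=0.5):
    """Mimic predict_intent's post-processing: each label is scored
    independently, and every label whose probability exceeds the
    threshold is returned. Unlike softmax, several labels can fire
    at once, which is what makes the model multi-label."""
    return {
        category: sigmoid(logit)
        for category, logit in zip(INTENT_CATEGORIES, logits)
        if sigmoid(logit) > threshold
    }

# Hypothetical logits for illustration only:
# trolling, manipulative, and constructive clear the 0.5 threshold.
preds = decode_multilabel([2.0, -1.5, 0.4, -3.0, 1.2])
```

The thresholding (rather than argmax) is the reason a single message can be flagged as, say, both dismissive and manipulative.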
modeling_intent.py CHANGED
@@ -9,7 +9,7 @@ class MultiLabelIntentClassifier(nn.Module):
         self.bert = AutoModel.from_pretrained(model_name)
         self.dropout = nn.Dropout(0.3)
         self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)
-
+
     def forward(self, input_ids, attention_mask):
         outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
         pooled_output = outputs.last_hidden_state[:, 0]  # Use [CLS] token
pytorch_model.bin CHANGED
@@ -1 +1,3 @@
-version https://git-lfs.github.com/spec/v1 oid sha256:2d06c315ccc3129b4860283640d34e976d89dea0b1d026118358e53854e43e32 size 265508834
+version https://git-lfs.github.com/spec/v1
+oid sha256:76ff6f53405352d5cdb384c2abb6e685d03cea834654311c19d79ec3984a7e66
+size 265508834
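The `pytorch_model.bin` change replaces a malformed single-line Git LFS pointer with the standard three-line form (one `key value` pair per line, `version` first). A pointer in that form is trivially parseable; the parser below is a minimal sketch for illustration, not part of the repo:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict of its key/value lines.

    The LFS spec requires the 'version' key on the first line; 'oid'
    and 'size' follow, one pair per line, separated by a single space.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    if not fields.get("version", "").startswith("https://git-lfs.github.com/spec/"):
        raise ValueError("not a Git LFS pointer file")
    return fields

# The corrected pointer committed above.
pointer_text = """\
version https://git-lfs.github.com/spec/v1
oid sha256:76ff6f53405352d5cdb384c2abb6e685d03cea834654311c19d79ec3984a7e66
size 265508834
"""
fields = parse_lfs_pointer(pointer_text)
```

Collapsing the three lines onto one, as in the old pointer, breaks this line-oriented format, which is presumably why the file was re-uploaded.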