Model Card for LeTG/llama-3p1-8B-psyop-analysis
This is a fine-tuned version of Llama-3.1-8B, optimized for detecting and analyzing psychological coercion and PSYOP techniques in text. The model was trained on 10,137 annotated examples from YouTube interview transcripts to identify manipulation tactics, target audiences, and sentiment patterns.

Use cases: content moderation, media literacy analysis, and interview transcript analysis.
- Developed by: Thomas Giroux
- Model type: Fine-tuned Language Model
- Language(s) (NLP): English
- Finetuned from model: meta-llama/Llama-3.1-8B
Training Details
Training data: psychological-coercion-identification dataset (12.7k examples)
Training procedure: LoRA fine-tuning for 2 epochs
Hyperparameters:
- Learning rate: 2e-4
- Batch size: 4 (per device)
- LoRA rank: 8
- Training loss: 1.071 (final)
- Validation loss: 1.075 (final)
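For a sense of how lightweight the adapter above is, the sketch below estimates the number of trainable parameters LoRA rank 8 adds on top of the frozen 8B base. It assumes adapters on the attention `q_proj`/`v_proj` matrices only (the common PEFT default for Llama-style models; the actual target modules used for this model are not stated in this card).

```python
# Sketch: trainable parameters added by LoRA at rank 8 on Llama-3.1-8B,
# assuming adapters only on q_proj and v_proj (an assumption, not
# confirmed by this model card).

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    # LoRA augments a frozen weight W (d_out x d_in) with B @ A,
    # where A is (rank x d_in) and B is (d_out x rank).
    return rank * d_in + d_out * rank

HIDDEN = 4096   # Llama-3.1-8B hidden size
KV_DIM = 1024   # 8 KV heads x 128 head dim (grouped-query attention)
LAYERS = 32
RANK = 8

per_layer = lora_params(HIDDEN, HIDDEN, RANK) + lora_params(HIDDEN, KV_DIM, RANK)
total = per_layer * LAYERS
print(total)  # roughly 3.4M trainable params against ~8B frozen
```

Under these assumptions only about 0.04% of the model's parameters are updated, which is why LoRA fine-tuning at rank 8 is feasible with a per-device batch size of 4.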
Framework versions:
- PEFT 0.18.1