empathetic-qwen3-8b

An empathetic conversational AI fine-tuned for supportive, understanding responses.

Model Description

This model is a LoRA fine-tune of Qwen3-8B, trained on:

  • EmpatheticDialogues: Emotional conversation dataset
  • ESConv: Emotional support conversations with strategy labels
  • GoEmotions: Multi-label emotion classification
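
These three corpora use different schemas, so multi-task SFT requires mapping them into a common chat format first. A minimal sketch of that mapping — the field names here are illustrative assumptions, not the card author's actual preprocessing:

```python
# Convert an EmpatheticDialogues-style record into chat messages.
# The input field name ("utterances") is an assumption for illustration;
# the real datasets use their own column names.
def to_chat(example):
    messages = [{"role": "system",
                 "content": "You are an empathetic, supportive friend."}]
    # Alternating speaker/listener turns map to user/assistant roles.
    for i, utterance in enumerate(example["utterances"]):
        role = "user" if i % 2 == 0 else "assistant"
        messages.append({"role": role, "content": utterance})
    return messages

sample = {"utterances": ["I'm nervous about my exam.",
                         "That sounds stressful. What worries you most?"]}
print(to_chat(sample))
```

The same shape works for ESConv (whose strategy labels could be folded into the assistant turns) and for GoEmotions reframed as a labeling task.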

Training Details

  • Base Model: Qwen3-8B (4-bit quantized)
  • Method: Multi-task SFT with auxiliary heads
  • Training: QLoRA with Unsloth optimization
  • Hardware: Kaggle T4 GPU
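
The card does not publish hyperparameters. The values below are a sketch of typical QLoRA settings for fitting an 8B model on a single 16 GB T4 — every number is an assumption, not taken from this model's training run:

```python
# Hypothetical QLoRA adapter settings, in the style of a peft LoraConfig.
# None of these values come from the model card; they are common defaults
# for QLoRA on a memory-constrained GPU.
lora_config = {
    "r": 16,                      # low-rank dimension of the adapters
    "lora_alpha": 16,             # scaling factor on the adapter output
    "lora_dropout": 0.0,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj",
                       "gate_proj", "up_proj", "down_proj"],
    "load_in_4bit": True,         # base weights in 4-bit to save memory
}

# Why this fits on a T4: each targeted weight matrix of shape
# (d_out, d_in) adds only r * (d_in + d_out) trainable parameters.
def lora_params(d_in, d_out, r):
    return r * (d_in + d_out)

print(lora_params(4096, 4096, lora_config["r"]))  # one square projection
```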

Usage

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "Someet24/empathetic-qwen3-8b",
    max_seq_length=1024,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch Unsloth into inference mode

# Generate a response
messages = [
    {"role": "system", "content": "You are an empathetic, supportive friend."},
    {"role": "user", "content": "I'm feeling really anxious about tomorrow."},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to("cuda")

# temperature only takes effect when sampling is enabled
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Intended Use

  • Emotional support conversations
  • Mental wellness chatbots
  • Empathetic dialogue systems

Limitations

  • Not a replacement for professional mental health support
  • May not handle crisis situations appropriately
  • English only
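
Because the model may not handle crisis situations appropriately, a deployment would normally put a safety layer in front of it. A deliberately minimal keyword-based sketch of such a pre-filter — the keyword list and fallback text are illustrative assumptions, and a real system should use a proper safety classifier with locale-appropriate crisis resources:

```python
# Minimal illustrative crisis pre-filter. This is NOT part of the model;
# a production system should use a dedicated safety classifier and
# region-specific crisis hotline information.
CRISIS_KEYWORDS = ("suicide", "kill myself", "self-harm", "end my life")

def route_message(user_text):
    lowered = user_text.lower()
    if any(kw in lowered for kw in CRISIS_KEYWORDS):
        # Escalate instead of letting the model respond.
        return "crisis", ("I'm really sorry you're feeling this way. "
                          "Please reach out to a crisis line or a "
                          "mental health professional right away.")
    return "model", None

label, reply = route_message("I'm feeling really anxious about tomorrow.")
print(label)  # "model": safe to pass to the fine-tuned model
```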

Citation

@misc{empathetic-qwen3,
  author = {Someet24},
  title = {Empathetic Qwen3-8B},
  year = {2026},
  publisher = {HuggingFace},
}