
# Real Human Conversations Worldwide — Multilingual Dataset

A massive, unified dataset of real human conversations from across the world, collected from publicly available sources. This is NOT synthetic customer-service data — it contains genuine human dialogue from Reddit, Discord, Twitch, Usenet, Telegram, therapy sessions, and more.

## 📊 Stats

  • Total rows: 1,609,769
  • Total size: ~470 MB (zstd-compressed Parquet)
  • Format: Parquet
  • Languages: English, Russian, Italian, Japanese, Korean

## 🌍 Languages

| Language | Rows | Size | Sources |
|----------|------|------|---------|
| English | 1,084,282 | 98.3 MB | Discord, Reddit, Twitch, therapy, YouTube mix |
| Russian | 200,000 | 34.1 MB | Telegram chats |
| Italian | 198,508 | 318.3 MB | Usenet newsgroups |
| Japanese | 100,000 | 10.4 MB | Text conversations |
| Korean | 26,979 | 9.1 MB | Everyday chat |

## 📁 Dataset Format

All data is unified into a consistent schema:

| Column | Type | Description |
|--------|------|-------------|
| `text` | string | Conversation / dialogue / transcript text |
| `source` | string | Origin dataset name |
| `language` | string | ISO language code |
| `domain` | string | Conversation type |
| `turns` | int | Number of dialogue turns |
| `metadata` | string | JSON string with extra info |
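Because the `metadata` column is stored as a JSON string rather than a nested struct, it needs one `json.loads` call per row. A minimal sketch using an invented example row (the field names follow the schema above; the metadata contents are assumptions, not values from the dataset):

```python
import json

# Hypothetical row matching the unified schema; values are illustrative,
# not taken from the actual dataset.
row = {
    "text": "A: hey, how's it going?\nB: not bad, you?",
    "source": "mookiezi/Discord-Dialogues",
    "language": "en",
    "domain": "chat",
    "turns": 2,
    "metadata": json.dumps({"channel": "general"}),
}

# The metadata column is a JSON-encoded string, so decode it before use.
meta = json.loads(row["metadata"])
print(meta["channel"])  # → general
```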

## 🚀 Usage

```python
from datasets import load_dataset

# Load everything
ds = load_dataset("asdf98/human-chats-worldwide", split="train")

# By language
en = ds.filter(lambda x: x["language"] == "en")
ru = ds.filter(lambda x: x["language"] == "ru")
it = ds.filter(lambda x: x["language"] == "it")
ja = ds.filter(lambda x: x["language"] == "ja")
ko = ds.filter(lambda x: x["language"] == "ko")
```
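The same pattern extends to filtering on any other schema column, or several at once. A local sketch of a combined predicate over invented rows, so it runs without downloading the split (the sample rows are assumptions for illustration only):

```python
# Invented sample rows following the dataset schema (illustration only).
rows = [
    {"text": "short hello", "language": "en", "turns": 2},
    {"text": "longer thread", "language": "en", "turns": 6},
    {"text": "privet", "language": "ru", "turns": 3},
]

# Combine conditions the same way you would inside ds.filter(...).
def long_english(x):
    return x["language"] == "en" and x["turns"] >= 4

selected = [r for r in rows if long_english(r)]
print(len(selected))  # → 1
```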

## 🔗 Sources

All data comes from publicly available Hugging Face datasets:

| File | Source | Type | Language |
|------|--------|------|----------|
| `discord_dialogues.parquet` | mookiezi/Discord-Dialogues | Real Discord chats | en |
| `reddit_comments.parquet` | HuggingFaceGECLM/REDDIT_comments | Reddit threads | en |
| `reddit_confessions.parquet` | SocialGrep/one-million-reddit-confessions | Real confessions | en |
| `reddit_questions.parquet` | SocialGrep/one-million-reddit-questions | AskReddit | en |
| `reddit_youtube_mix.parquet` | fsteig/conversations-30gb | Reddit+YouTube mix | en |
| `russian_dialogues.parquet` | Den4ikAI/russian_dialogues_2 | Telegram chats | ru |
| `italian_usenet.parquet` | mii-community/UsenetArchiveIT-conversations | Usenet forums | it |
| `twitch_chat.parquet` | lparkourer10/twitch_chat | Live stream chat | en |
| `mental_health.parquet` | Amod/mental_health_counseling_conversations | Therapy sessions | en |
| `japanese_text.parquet` | izumi-lab/llm-japanese-dataset | Japanese conversations | ja |
| `korean_conversations.parquet` | jojo0217/korean_safe_conversation | Everyday Korean chat | ko |

## ⚠️ Notes

  • All data is publicly posted by real humans on public platforms
  • No private messages or non-consensual data
  • Filtered to remove [deleted], [removed], and very short posts
  • Some content may be informal, colloquial, or contain strong language
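The cleaning step described above can be sketched as a simple predicate. The placeholder strings come from the note itself; the minimum-length threshold is an assumption, not the value used to build this dataset:

```python
# Placeholders named in the notes; MIN_CHARS is an assumed cutoff for
# "very short" posts, not the actual value used during dataset construction.
PLACEHOLDERS = {"[deleted]", "[removed]"}
MIN_CHARS = 20

def keep(text: str) -> bool:
    """Return True if a post survives the placeholder/length filter."""
    t = text.strip()
    return t not in PLACEHOLDERS and len(t) >= MIN_CHARS

print(keep("[deleted]"))                                # → False
print(keep("ok"))                                       # → False
print(keep("This is a longer post that survives it."))  # → True
```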