# Real Human Conversations Worldwide — Multilingual Dataset

A unified dataset of over 1.6 million **real human conversations** from around the world, collected from publicly available sources. This is NOT synthetic customer-service data: it contains genuine human dialogue from Reddit, Discord, Twitch, Usenet, Telegram, therapy sessions, and more.

## 📊 Stats

- **Total rows**: 1,609,769
- **Total size**: ~470 MB (zstd-compressed Parquet)
- **Format**: Parquet
- **Languages**: English, Russian, Italian, Japanese, Korean

## 🌍 Languages

| Language | Rows | Size | Sources |
|----------|------|------|---------|
| English | 1,084,282 | 98.3 MB | Discord, Reddit, Twitch, therapy, YouTube mix |
| Russian | 200,000 | 34.1 MB | Telegram chats |
| Italian | 198,508 | 318.3 MB | Usenet newsgroups |
| Japanese | 100,000 | 10.4 MB | Text conversations |
| Korean | 26,979 | 9.1 MB | Everyday chat |
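
If you want to sanity-check these per-language counts yourself, a quick tally over the `language` column reproduces the table (a minimal sketch; note it downloads and caches the full ~470 MB split first):

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("asdf98/human-chats-worldwide", split="train")

# Tally rows per ISO language code; this should match the table above.
print(Counter(ds["language"]))
```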

## 📁 Dataset Format

All data is unified into a single consistent schema:

| Column | Type | Description |
|--------|------|-------------|
| `text` | string | Conversation / dialogue / transcript text |
| `source` | string | Origin dataset name |
| `language` | string | ISO language code |
| `domain` | string | Conversation type |
| `turns` | int | Number of dialogue turns |
| `metadata` | string | JSON with extra info |
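
Because `metadata` is stored as a JSON string rather than a struct, it needs to be parsed per row. A minimal sketch (the exact keys inside `metadata` depend on the originating dataset and are not guaranteed):

```python
import json

from datasets import load_dataset

ds = load_dataset("asdf98/human-chats-worldwide", split="train")

row = ds[0]
print(row["source"], row["language"], row["domain"], row["turns"])

# `metadata` is a JSON string; which keys it holds varies by source dataset.
meta = json.loads(row["metadata"]) if row["metadata"] else {}
print(meta)
```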

## 🚀 Usage

```python
from datasets import load_dataset

# Load everything
ds = load_dataset("asdf98/human-chats-worldwide", split="train")

# By language
en = ds.filter(lambda x: x["language"] == "en")
ru = ds.filter(lambda x: x["language"] == "ru")
it = ds.filter(lambda x: x["language"] == "it")
ja = ds.filter(lambda x: x["language"] == "ja")
ko = ds.filter(lambda x: x["language"] == "ko")
```
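
If you'd rather not download and cache the full ~470 MB up front, streaming mode iterates over the Parquet shards lazily, and filtering works the same way on the streamed view:

```python
from datasets import load_dataset

# Stream instead of materializing the whole dataset in the local cache.
stream = load_dataset("asdf98/human-chats-worldwide", split="train", streaming=True)

# Peek at the first few Korean rows without loading anything else.
ko_stream = stream.filter(lambda x: x["language"] == "ko")
for row in ko_stream.take(3):
    print(row["text"][:100])
```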

## 🔗 Sources

All data comes from publicly available Hugging Face datasets:

| File | Source | Type | Language |
|------|--------|------|----------|
| `discord_dialogues.parquet` | mookiezi/Discord-Dialogues | Real Discord chats | en |
| `reddit_comments.parquet` | HuggingFaceGECLM/REDDIT_comments | Reddit threads | en |
| `reddit_confessions.parquet` | SocialGrep/one-million-reddit-confessions | Real confessions | en |
| `reddit_questions.parquet` | SocialGrep/one-million-reddit-questions | AskReddit | en |
| `reddit_youtube_mix.parquet` | fsteig/conversations-30gb | Reddit+YouTube mix | en |
| `russian_dialogues.parquet` | Den4ikAI/russian_dialogues_2 | Telegram chats | ru |
| `italian_usenet.parquet` | mii-community/UsenetArchiveIT-conversations | Usenet forums | it |
| `twitch_chat.parquet` | lparkourer10/twitch_chat | Live stream chat | en |
| `mental_health.parquet` | Amod/mental_health_counseling_conversations | Therapy sessions | en |
| `japanese_text.parquet` | izumi-lab/llm-japanese-dataset | Japanese conversations | ja |
| `korean_conversations.parquet` | jojo0217/korean_safe_conversation | Everyday Korean chat | ko |
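
Since each source lives in its own Parquet file, you can also load a single file directly with `data_files` instead of filtering the merged split. A sketch, assuming the files sit at the repository root under the names shown in the table above:

```python
from datasets import load_dataset

# Load only the Russian Telegram portion, skipping the other files.
ru = load_dataset(
    "asdf98/human-chats-worldwide",
    data_files="russian_dialogues.parquet",
    split="train",
)
print(len(ru))
```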

## ⚠️ Notes

- All data is **publicly posted** by real humans on public platforms
- No private messages or non-consensual data
- Filtered to remove `[deleted]`, `[removed]`, and very short posts
- Some content may be informal, colloquial, or contain strong language
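
The third note describes cleaning already applied to the published data; if you ingest additional raw Reddit data yourself, an equivalent filter might look like the sketch below (hypothetical; the 20-character threshold is illustrative, not the exact cutoff used here):

```python
from datasets import load_dataset

ds = load_dataset("asdf98/human-chats-worldwide", split="train")

def keep(example, min_chars=20):
    """Drop placeholder posts and very short texts."""
    text = example["text"].strip()
    return text not in ("[deleted]", "[removed]") and len(text) >= min_chars

# The published data is already filtered, so re-running this should be a no-op.
clean = ds.filter(keep)
print(len(clean))
```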