asdf98 committed
Commit 9a01b20 · verified · 1 Parent(s): f9f72b5

Upload README.md

Files changed (1)
  1. README.md +66 -41
README.md CHANGED
@@ -1,49 +1,74 @@
- ---
- tags:
- - ml-intern
- ---
- # Real Human Conversations Multilingual Dataset
-
- A unified dataset of real human conversations from across the world.
-
- ## Languages
- - English: Reddit, Discord, Twitch, therapy, Usenet
- - Russian: Telegram chats
- - Italian: Usenet newsgroups
- - Japanese: Speech transcriptions
- - Korean: Everyday conversations
-
- ## Format (Parquet)
  | Column | Type | Description |
- | text | string | Conversation text |
- | source | string | Origin dataset |
- | language | string | ISO code |
- | domain | string | Conversation type |
- | turns | int | Dialogue turns |
- | metadata | string | Extra JSON info |
-
- ## Sources
- - mookiezi/Discord-Dialogues
- - HuggingFaceGECLM/REDDIT_comments
- - fsteig/conversations-30gb
- - SocialGrep/one-million-reddit-confessions
- - Den4ikAI/russian_dialogues_2
- - mii-community/UsenetArchiveIT-conversations
- - japanese-asr/whisper_transcriptions.reazon_speech_all
- - jojo0217/korean_safe_conversation
- - lparkourer10/twitch_chat
- - Amod/mental_health_counseling_conversations
-
- ## Usage
  ```python
  from datasets import load_dataset
  ds = load_dataset("asdf98/human-chats-worldwide", split="train")
  ```

- <!-- ml-intern-provenance -->
- ## Generated by ML Intern
-
- This dataset repository was generated by [ML Intern](https://github.com/huggingface/ml-intern), an agent for machine learning research and development on the Hugging Face Hub.
-
- - Try ML Intern: https://smolagents-ml-intern.hf.space
- - Source code: https://github.com/huggingface/ml-intern

+ # Real Human Conversations Worldwide — Multilingual Dataset
+
+ A massive, unified dataset of **real human conversations** from across the world, collected from publicly available sources. This is NOT synthetic customer-service data — it contains genuine human dialogue from Reddit, Discord, Twitch, Usenet, Telegram, therapy sessions, and more.
+
+ ## 📊 Stats
+
+ - **Total rows**: 1,609,769
+ - **Total size**: ~470 MB (zstd-compressed Parquet)
+ - **Format**: Parquet
+ - **Languages**: English, Russian, Italian, Japanese, Korean
+
+ ## 🌍 Languages
+
+ | Language | Rows | Size | Sources |
+ |----------|------|------|---------|
+ | English | 1,084,282 | 98.3 MB | Discord, Reddit, Twitch, therapy, YouTube mix |
+ | Russian | 200,000 | 34.1 MB | Telegram chats |
+ | Italian | 198,508 | 318.3 MB | Usenet newsgroups |
+ | Japanese | 100,000 | 10.4 MB | Text conversations |
+ | Korean | 26,979 | 9.1 MB | Everyday chat |
+
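As a quick sanity check (not part of the dataset card itself), the per-language row counts in the table above can be verified against the stated total:

```python
# Cross-check the per-language row counts against the stated total.
rows = {
    "en": 1_084_282,  # English
    "ru": 200_000,    # Russian
    "it": 198_508,    # Italian
    "ja": 100_000,    # Japanese
    "ko": 26_979,     # Korean
}
total = sum(rows.values())
assert total == 1_609_769  # matches the "Total rows" figure in the Stats section
```

The per-language sizes likewise sum to roughly 470 MB, consistent with the stated total size.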
+ ## 📁 Dataset Format
+
+ All data is unified into a consistent schema:
+
  | Column | Type | Description |
+ |--------|------|-------------|
+ | `text` | string | Conversation / dialogue / transcript text |
+ | `source` | string | Origin dataset name |
+ | `language` | string | ISO language code |
+ | `domain` | string | Conversation type |
+ | `turns` | int | Number of dialogue turns |
+ | `metadata` | string | JSON with extra info |
+
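Because `metadata` is stored as a JSON string rather than a struct, it needs a `json.loads` before use. A minimal sketch on a hypothetical row that follows the schema above (the keys inside `metadata` are illustrative — the README does not document the actual fields, so inspect real rows):

```python
import json

# Hypothetical row following the schema above; the keys inside
# `metadata` are illustrative, not documented by this dataset card.
row = {
    "text": "hey, anyone around?\nyeah, what's up?",
    "source": "mookiezi/Discord-Dialogues",
    "language": "en",
    "domain": "chat",
    "turns": 2,
    "metadata": '{"channel": "general"}',
}

meta = json.loads(row["metadata"])  # decode the JSON-encoded string column
assert isinstance(meta, dict)
print(meta.get("channel"))
```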
+ ## 🚀 Usage
+
  ```python
  from datasets import load_dataset
+
+ # Load everything
  ds = load_dataset("asdf98/human-chats-worldwide", split="train")
+
+ # By language
+ en = ds.filter(lambda x: x["language"] == "en")
+ ru = ds.filter(lambda x: x["language"] == "ru")
+ it = ds.filter(lambda x: x["language"] == "it")
+ ja = ds.filter(lambda x: x["language"] == "ja")
+ ko = ds.filter(lambda x: x["language"] == "ko")
  ```
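Each `filter` call above scans the full dataset once; the same per-language split can be done in a single pass. A standard-library sketch on a toy in-memory sample (so it runs without downloading the dataset — real rows would come from `load_dataset` as shown above):

```python
from collections import defaultdict

# Toy rows following the dataset schema; real rows come from load_dataset.
sample = [
    {"text": "hi", "language": "en"},
    {"text": "привет", "language": "ru"},
    {"text": "ciao", "language": "it"},
    {"text": "hello again", "language": "en"},
]

by_lang = defaultdict(list)
for row in sample:  # one pass instead of one filter() per language
    by_lang[row["language"]].append(row)

assert len(by_lang["en"]) == 2
```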

+ ## 🔗 Sources
+
+ All data comes from publicly available Hugging Face datasets:
+
+ | File | Source | Type | Language |
+ |------|--------|------|----------|
+ | `discord_dialogues.parquet` | mookiezi/Discord-Dialogues | Real Discord chats | en |
+ | `reddit_comments.parquet` | HuggingFaceGECLM/REDDIT_comments | Reddit threads | en |
+ | `reddit_confessions.parquet` | SocialGrep/one-million-reddit-confessions | Real confessions | en |
+ | `reddit_questions.parquet` | SocialGrep/one-million-reddit-questions | AskReddit | en |
+ | `reddit_youtube_mix.parquet` | fsteig/conversations-30gb | Reddit+YouTube mix | en |
+ | `russian_dialogues.parquet` | Den4ikAI/russian_dialogues_2 | Telegram chats | ru |
+ | `italian_usenet.parquet` | mii-community/UsenetArchiveIT-conversations | Usenet forums | it |
+ | `twitch_chat.parquet` | lparkourer10/twitch_chat | Live stream chat | en |
+ | `mental_health.parquet` | Amod/mental_health_counseling_conversations | Therapy sessions | en |
+ | `japanese_text.parquet` | izumi-lab/llm-japanese-dataset | Japanese conversations | ja |
+ | `korean_conversations.parquet` | jojo0217/korean_safe_conversation | Everyday Korean chat | ko |
+
+ ## ⚠️ Notes
+
+ - All data is **publicly posted** by real humans on public platforms
+ - No private messages or non-consensual data
+ - Filtered to remove `[deleted]`, `[removed]`, and very short posts
+ - Some content may be informal, colloquial, or contain strong language
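The filtering described in the Notes can be sketched as a simple predicate. The minimum-length threshold below is a guess — the card does not state the actual cutoff used:

```python
MIN_CHARS = 10  # hypothetical threshold; the actual cutoff is not documented

def keep(text: str) -> bool:
    """Drop placeholder bodies and very short posts, as the Notes describe."""
    stripped = text.strip()
    if stripped in ("[deleted]", "[removed]"):
        return False
    return len(stripped) >= MIN_CHARS

assert not keep("[deleted]")
assert not keep("ok")
assert keep("long enough to be a real message")
```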