# Normalized Datasets for Korean LLM
Stage 0.5 — the normalization output of the Keural Korean LLM pretraining pipeline. Raw text from 19 source datasets has been converted into a single unified JSONL schema. No filtering has been applied; this is the clean, structured raw form.
## Quick Stats
| Metric | Value |
|---|---|
| Total documents normalized | ~533,984,997 |
| Number of source datasets | 19 |
| Domains | English · Korean · Code · Science |
| Format | JSONL (one JSON object per line) |
| Schema version | v1 |
| Pipeline stage | Stage 0.5 (after download, before filtering) |
| Last updated | 2026-04-09 |
## Where This Fits in the Pipeline
```mermaid
flowchart LR
    A["Stage 0\nRaw Download\n~1.5 TB"] --> B["Stage 0.5\nNormalization\n← YOU ARE HERE\n~534M docs"]
    B --> C["Stage 1\nFiltering\nmkd-chanwoo/filtered-datasets-for-koreanLLM\n~293M docs"]
    C --> D["Stage 2\nDedup + Shard\nmkd-chanwoo/keural-datasets\n~329M docs / ~220B tokens"]
    style B fill:#f0a500,color:#000
```
## What Is Normalization? (For Beginners)
Each source dataset stores text in a different format and field name.
For example, Gutenberg stores its text under `TEXT`, CCNews under `plain_text`, and StarCoderData under `content`.
Normalization solves this by:
- Reading each source dataset in its original format
- Extracting the text from its specific field
- Writing everything into one unified schema with the same field names
- Adding metadata such as `domain`, `language`, `doc_id`, and `license`
After normalization, all downstream stages (filtering, deduplication) work on one consistent format regardless of the original source.
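To make that concrete, here is a minimal sketch of the reshaping step in Python. The field mappings mirror the tables below; the function and dictionary names are illustrative assumptions, not the pipeline's actual code.

```python
import json

# Per-source mapping: which original field becomes the unified `text`.
# (Three of the 19 mappings, copied from the tables below.)
TEXT_FIELD = {
    "gutenberg": "TEXT",
    "ccnews": "plain_text",
    "starcoderdata": "content",
}

def normalize_row(source_name: str, index: int, row: dict, meta: dict) -> str:
    """Reshape one source row into the unified schema.

    The text is copied verbatim; only the structure around it changes.
    """
    doc = {
        "doc_id": f"{source_name}_{index}",
        "source_name": source_name,
        "domain": meta["domain"],        # english / korean / code / science
        "language": meta["language"],    # en / ko
        "text": row[TEXT_FIELD[source_name]],
        "url": row.get("url"),           # null when the source has no URL
        "license": meta["license"],
        "source_file": meta["source_file"],
        "source_index": index,
        "timestamp": row.get("timestamp"),
        "processing_version": "v1",
    }
    return json.dumps(doc, ensure_ascii=False)  # one JSONL line
```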
## Source Datasets & Field Mappings
Each row shows which dataset was used, where it came from, which source field was mapped to `text`, and how many documents were written after normalization.
### English Domain

| Dataset Key | HuggingFace Source | Source Field → `text` | Docs Normalized | Language |
|---|---|---|---|---|
| `gutenberg` | sedthh/gutenberg_english | `TEXT` | 48,284 | en |
| `openwebtext` | Skylion007/openwebtext | `text` | 8,013,769 | en |
| `ccnews` | stanford-oval/ccnews | `plain_text` | 82,416,441 | en |
| `falcon-refinedweb` | tiiuae/falcon-refinedweb | `content` | 78,887,040 | en |
| `fineweb` | HuggingFaceFW/fineweb | `text` | 81,595,324 | en |
| `wikipedia` | wikimedia/wikipedia (config: `20231101.en`) | `text` | 6,407,814 | en |
### Korean Domain

| Dataset Key | Source | Source Field → `text` | Docs Normalized | Language |
|---|---|---|---|---|
| `namuwiki` | heegyu/namuwiki-extracted | `text` | 565,293 | ko |
| `wikipedia_ko` | lcw99/wikipedia-korean-20240501 | `text` | 515,425 | ko |
| `oscar_ko_only` | lcw99/oscar-ko-only | `text` | 3,675,420 | ko |
| `korean_webtext` | HAERAE-HUB/KOREAN-WEBTEXT | `text` | 1,284,878 | ko |
| `aihub_modu` | AIHub — Korean government open data (local) | parsed from structured AIHUB format | 58,997 | ko |
| `aihub_books` | AIHub — Korean government open data (local) | parsed from structured AIHUB format | 5,823 | ko |
| `aihub_online_colloquial` | AIHub — Korean government open data (local) | parsed from structured AIHUB format | 22,859 | ko |
### Code Domain

| Dataset Key | HuggingFace Source | Source Field → `text` | Docs Normalized | Language |
|---|---|---|---|---|
| `github-top-code` | ronantakizawa/github-top-code | `content` | 1,121,474 | en (code) |
| `codeparrot_clean` | codeparrot/codeparrot-clean | `content` | 5,361,374 | en (code) |
| `starcoderdata` | bigcode/starcoderdata | `content` | 104,640,054 | en (code) |
### Science Domain

| Dataset Key | HuggingFace Source | Source Field → `text` | Docs Normalized | Language |
|---|---|---|---|---|
| `arxiv` | KiteFishAI/arxiv-tex-corpus-full | `text` | 1,089,469 | en |
| `open-web-math` | open-web-math/open-web-math | `text` | 6,315,233 | en |
| `peS2o` | allenai/peS2o | `text` | 151,960,046 | en |
## Total Documents by Domain
| Domain | Docs Normalized | % of Total |
|---|---|---|
| English | ~257,368,672 | 48.2% |
| Science | ~159,364,748 | 29.8% |
| Code | ~111,122,902 | 20.8% |
| Korean | ~6,128,675 | 1.1% |
| Total | ~533,984,997 | 100% |
## Normalization Process — Step by Step
```mermaid
flowchart TD
    A["Source File\n(HuggingFace / AIHub)"] --> B["Read original format\n(JSONL / Parquet / CSV / AIHUB)"]
    B --> C["Extract text from\ndataset-specific field\n(TEXT / text / content / plain_text)"]
    C --> D["Assign doc_id\n= source_name + '_' + index"]
    D --> E["Compute char_count\nand tokens_count\n(keural SentencePiece tokenizer)"]
    E --> F["Write unified JSONL\nwith full metadata"]
    F --> G["Update checkpoint\n(resumable mid-stream)"]
```
Normalization is resumable. Each dataset tracks `line_index` and `doc_id` in a checkpoint file (`stage1_savepoint.json`), so if the process is interrupted, it resumes exactly where it left off.
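The checkpoint mechanics might look roughly like the sketch below. The file name comes from this card; the savepoint layout and helper names are assumptions, not the pipeline's actual code.

```python
import json
import os

SAVEPOINT = "stage1_savepoint.json"  # name per this card; layout assumed

def load_savepoint() -> dict:
    """Per-dataset progress, e.g. {"ccnews": {"line_index": 1234}}."""
    if os.path.exists(SAVEPOINT):
        with open(SAVEPOINT, encoding="utf-8") as f:
            return json.load(f)
    return {}

def save_savepoint(state: dict) -> None:
    """Write atomically so a crash mid-write cannot corrupt the savepoint."""
    tmp = SAVEPOINT + ".tmp"
    with open(tmp, "w", encoding="utf-8") as f:
        json.dump(state, f)
    os.replace(tmp, SAVEPOINT)

# On restart, skip rows that were already normalized:
state = load_savepoint()
start_line = state.get("ccnews", {}).get("line_index", 0)
```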
What normalization does NOT do:
- ❌ Does not modify or clean text content
- ❌ Does not filter documents
- ❌ Does not deduplicate
- ❌ Does not re-encode or translate
- ✅ Only reshapes structure and adds metadata
## Unified Document Schema
Every document in this repository follows this exact schema:
```json
{
  "doc_id": "gutenberg_000000042",
  "source_name": "gutenberg",
  "domain": "english",
  "language": "en",
  "text": "The full original text of the document...",
  "url": "https://source-url.com/page (if available, else null)",
  "license": "Public Domain",
  "source_file": "data/raw/gutenberg/train-00000-of-00001.parquet",
  "source_index": 42,
  "timestamp": "2026-03-15T08:22:11Z (if available in source, else null)",
  "processing_version": "v1"
}
```
### Field Descriptions

| Field | Type | Description |
|---|---|---|
| `doc_id` | string | Unique ID: `{source_name}_{source_index}` |
| `source_name` | string | Dataset key (e.g. `gutenberg`, `ccnews`) |
| `domain` | string | One of: `english`, `korean`, `code`, `science` |
| `language` | string | ISO 639-1 code: `en` or `ko` |
| `text` | string | Raw document text (unmodified from source) |
| `url` | string \| null | Original URL if provided by the source dataset |
| `license` | string | Source dataset license |
| `source_file` | string | Local path of the source file it was read from |
| `source_index` | int | Row index within that source file |
| `timestamp` | string \| null | Publication date/time if available in the source |
| `processing_version` | string | Pipeline version (`v1`) |
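Because the format is plain JSONL, a shard can be consumed with nothing but the standard library. A minimal reader (the file path is a placeholder, not an actual path in this repo):

```python
import json

# Placeholder path; point it at any normalized JSONL shard.
with open("normalized/gutenberg.jsonl", encoding="utf-8") as f:
    for line in f:
        doc = json.loads(line)
        assert doc["processing_version"] == "v1"
        print(doc["doc_id"], doc["domain"], len(doc["text"]))
```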
## Normalization Statistics (Seen vs Written)
"Seen" = total rows read from source. "Written" = successfully normalized. Rows not written are rows that failed to parse (malformed JSON, empty text, encoding errors).
| Dataset | Seen | Written | Normalized Rate |
|---|---|---|---|
| aihub_books | 5,974 | 5,823 | 97.5% |
| aihub_modu | 117,994 | 58,997 | 50.0% (deduped at read) |
| aihub_online_colloquial | 45,894 | 22,859 | 49.8% (deduped at read) |
| arxiv | 1,089,469 | 1,089,469 | 100% |
| ccnews | 82,416,441 | 82,416,441 | 100% |
| codeparrot_clean | 5,361,374 | 5,361,374 | 100% |
| falcon-refinedweb | 78,888,470 | 78,887,040 | ~100% |
| fineweb | 81,595,324 | 81,595,324 | 100% |
| github-top-code | 1,122,139 | 1,121,474 | ~100% |
| gutenberg | 48,285 | 48,284 | ~100% |
| korean_webtext | 1,284,879 | 1,284,878 | ~100% |
| namuwiki | 565,293 | 565,293 | 100% |
| open-web-math | 6,315,233 | 6,315,233 | 100% |
| openwebtext | 8,013,769 | 8,013,769 | 100% |
| oscar_ko_only | 3,675,421 | 3,675,420 | ~100% |
| peS2o | 151,960,046 | 151,960,046 | 100% |
| starcoderdata | 104,640,054 | 104,640,054 | 100% |
| wikipedia (en) | 6,407,814 | 6,407,814 | 100% |
| wikipedia_ko | 515,425 | 515,425 | 100% |
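The rate column is simply written divided by seen; for example, the aihub_books row:

```python
seen, written = 5_974, 5_823    # aihub_books row above
print(f"{written / seen:.1%}")  # → 97.5%
```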
## Tokenizer Used for Token Counting

All `tokens_count` values in this dataset are computed using the Keural SentencePiece tokenizer:
- Model: `mkd-ai/keural-tokenizer`
- Type: SentencePiece (Unigram)
- Vocabulary file: `keural_tokenizer.vocab`
- Model file: `keural_tokenizer.model`
This is the same tokenizer used by the Keural LLM model.
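With the sentencepiece library, a `tokens_count` value can in principle be reproduced as below, assuming `keural_tokenizer.model` has been downloaded locally.

```python
import sentencepiece as spm

# Model file name per this card; a local copy is assumed.
sp = spm.SentencePieceProcessor(model_file="keural_tokenizer.model")

def tokens_count(text: str) -> int:
    """Number of SentencePiece pieces in one document's text field."""
    return len(sp.encode(text, out_type=int))

print(tokens_count("안녕하세요, Keural!"))
```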
## Download & Processing Timeline
| Event | Date (KST) |
|---|---|
| Download of first datasets begins | 2026-04-01 |
| Normalization of first batch complete | 2026-04-08 |
| All 19 datasets normalized | 2026-04-09 |
| Upload to this HuggingFace repo | 2026-04-09 ~09:43 KST |
| Last updated | 2026-04-10 |
## Raw Download Sizes (Stage 0)
| Dataset | Raw Download Size |
|---|---|
| peS2o | 286.58 GB |
| aihub_specialized_corpus | 166.18 GB |
| fineweb | 149.97 GB |
| starcoderdata | 149.65 GB |
| ccnews | 148.52 GB |
| falcon-refinedweb | 127.93 GB |
| aihub_books | 110.98 GB |
| aihub_modu | 38.16 GB |
| open-web-math | 25.55 GB |
| openwebtext | 22.53 GB |
| aihub_online_colloquial | 17.66 GB |
| codeparrot_clean | 11.93 GB |
| wikipedia (en) | 10.83 GB |
| gutenberg | 10.01 GB |
| oscar_ko_only | 6.49 GB |
| korean_webtext | 4.17 GB |
| wikipedia_ko | 1.63 GB |
## Licenses
This dataset contains content from multiple sources with mixed licenses. Each source retains its original license.
| Dataset | License |
|---|---|
| gutenberg | Public Domain |
| openwebtext | CC0 1.0 |
| ccnews | CC-BY 4.0 |
| falcon-refinedweb | Falcon License (TII) |
| fineweb | ODC-By 1.0 |
| wikipedia (en) | CC-BY-SA 3.0 |
| namuwiki | CC-BY-NC-SA 3.0 |
| wikipedia_ko | CC-BY-SA 3.0 |
| oscar_ko_only | CC0 1.0 |
| korean_webtext | CC-BY 4.0 |
| aihub_modu | AIHub Open License (Korean government open data) |
| aihub_books | AIHub Open License |
| aihub_online_colloquial | AIHub Open License |
| github-top-code | Various open source (see source repo) |
| codeparrot_clean | OpenRAIL |
| starcoderdata | BigCode OpenRAIL-M |
| arxiv | CC-BY 4.0 |
| open-web-math | CC-BY 4.0 |
| peS2o | CC-BY 4.0 |
⚠️ License Notice: This repository inherits mixed licenses from its source datasets. Please review the license of each individual source before commercial or research use.
## Related Repositories
| Repo | Stage | Description |
|---|---|---|
| This repo | Stage 0.5 | Normalized raw data |
| mkd-chanwoo/filtered-datasets-for-koreanLLM | Stage 1 | Quality + language + toxicity filtered |
| mkd-chanwoo/keural-datasets | Stage 2 | Final deduplicated + sharded production data |
| mkd-chanwoo/simplemodel-270M | Model | LLM trained on this pipeline's output |