TatCorp-222M (Private)
TatCorp-222M is the largest open corpus of Tatar-language texts prepared for research in Natural Language Processing (NLP) and language modelling. The corpus is provided in Parquet/JSONL format and optimised for streaming and for training large language models.
📥 Access and License
- Repository: arabovs-ai-lab/TatCorp_222M (private on Hugging Face Hub)
- Access: available upon request for academic and non-commercial research after signing a Data Use Agreement (DUA).
- Redistribution: redistribution of source texts is prohibited without explicit permission from the rights holders.
- Contact: cool.araby@gmail.com
🗂 Structure
- Format: JSONL (one record per line) and sharded Parquet
- Total records: 541,543
- Splits: train (530,712 / ~98%), validation (5,415 / ~1%), test (5,416 / ~1%)
- Fields:
id,url,title,content,category,date,lang,source,license
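Each JSONL line is one record carrying the fields above. A minimal sketch of parsing a single line, with invented values purely for illustration:

```python
import json

# One synthetic JSONL line following the documented schema
# (the values are illustrative, not taken from the corpus).
line = json.dumps({
    "id": "doc-000001",
    "url": "https://example.org/article",
    "title": "Мисал мәкалә",
    "content": "Татар телендәге текст...",
    "category": "News",
    "date": "2024-05-01",
    "lang": "tt",
    "source": "example.org",
    "license": "unknown",
})

record = json.loads(line)
print(record["title"], "-", record["category"])
```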
📊 Detailed Statistics
Key Metrics
- Total volume: 541,543 documents
- Tokens (text): ~222,312,958
- Tokens (titles): ~1,680,896
- Total tokens: ~223,993,854
- Average document length: 410.6 tokens
- Unique vocabulary: ~2,200,895 words
- Categories: 482 unique categories
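Metrics of this kind (total tokens, vocabulary size, mean document length) can be reproduced with simple whitespace tokenisation. A sketch over a toy document list; the figures above of course require the full corpus:

```python
def corpus_stats(texts):
    """Total tokens, unique vocabulary, and mean document length
    using naive whitespace tokenisation."""
    total = 0
    vocab = set()
    for text in texts:
        tokens = text.split()
        total += len(tokens)
        vocab.update(tokens)
    return {
        "total_tokens": total,
        "unique_words": len(vocab),
        "avg_doc_len": total / len(texts) if texts else 0.0,
    }

docs = ["татар теле бик матур", "татар теле", "бу корпус зур"]
stats = corpus_stats(docs)
print(stats)
```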
Data Completeness
| Metric | Count | Percentage |
|---|---|---|
| Records with URL | 541,543 | 100.0% |
| Records with category | 541,193 | 99.94% |
| Records with title | 541,087 | 99.92% |
| Records with text | 541,472 | 99.99% |
| Records with date | 90,475 | 16.7% |
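The completeness figures above are non-null shares per field. A sketch of how such a check might look over a list of records (synthetic data for illustration):

```python
def field_coverage(records, field):
    """Share of records where `field` is present and non-empty."""
    filled = sum(1 for r in records if r.get(field))
    return filled / len(records) if records else 0.0

records = [
    {"url": "https://a", "date": "2021-01-01"},
    {"url": "https://b", "date": None},
    {"url": "https://c"},
]
print(f"url:  {field_coverage(records, 'url'):.1%}")
print(f"date: {field_coverage(records, 'date'):.1%}")
```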
Distribution by Source (TOP-10)
| # | Domain | Documents | Share | Content Type |
|---|---|---|---|---|
| 1 | wikipedia.org | 456,070 | 84.2% | Encyclopaedic |
| 2 | matbugat.ru | 44,880 | 8.3% | News |
| 3 | intertat.tatar | 19,529 | 3.6% | News/Cultural |
| 4 | azatliq.org | 8,180 | 1.5% | News |
| 5 | vk.com | 6,796 | 1.3% | Social network |
| 6 | shahrikazan.ru | 2,399 | 0.4% | Regional |
| 7 | tatar-inform.tatar | 1,524 | 0.3% | News |
| 8 | mamadysh-rt.ru | 1,212 | 0.2% | Regional |
| 9 | belgech.ru | 833 | 0.2% | Blog |
| 10 | vatantat.ru | 119 | 0.02% | Cultural |
Total unique domains: 11
Distribution by Categories (TOP-15)
| # | Category | Documents | Share |
|---|---|---|---|
| 1 | Wikipedia | 456,071 | 84.3% |
| 2 | Matbugat.ru | 11,886 | 2.2% |
| 3 | Economics | 6,488 | 1.2% |
| 4 | Shähri Kazan | 6,276 | 1.2% |
| 5 | Vatanım Tatarstan | 6,270 | 1.2% |
| 6 | Yañalıqlar (News) | 4,191 | 0.8% |
| 7 | Ömet (Hope) | 2,128 | 0.4% |
| 8 | Intertat.ru | 1,991 | 0.4% |
| 9 | Culture | 1,975 | 0.4% |
| 10 | Tatarstan Youth | 1,591 | 0.3% |
| 11 | Fän (Science) | 1,333 | 0.2% |
| 12 | Sälämätlek (Health) | 1,193 | 0.2% |
| 13 | Sport | 1,086 | 0.2% |
| 14 | Mädäniät (Culture) | 898 | 0.2% |
| 15 | Tormış (Life) | 895 | 0.2% |
Total unique categories: 482
Document Length Statistics
| Metric | Titles (title) | Texts (content) |
|---|---|---|
| Total tokens | 1,680,896 | 222,312,958 |
| Mean | 3.1 | 410.6 |
| Median | 3 | ~250 |
| Minimum | 0 | 1 |
| Maximum | 26 | 608,353 |
| Standard deviation | 2.6 | 3,301.9 |
| Unique words | 344,151 | 2,196,225 |
Temporal Distribution
- Dated documents: 90,475 (16.7% of the corpus)
- Range: 2007-2025
- Majority: 2020-2025 (over 50,000 documents)
- Most recent year: 2025 (14,213 documents)
🎯 Key Insights
- Wikipedia dominance - 84.2% of the corpus consists of encyclopaedic articles
- Content recency - most dated documents are from 2020-2025
- Large length variance - a standard deviation of 3,301.9 tokens reflects a heavy-tailed length distribution with some very long documents
- Rich vocabulary - ~2.2 million unique words
⚖️ Ethics and PII
- Source: publicly available web content
- PII removal: automatic and selective manual removal of personal data applied. Complete removal is not guaranteed — additional verification is recommended for sensitive use cases
- Collection scripts: not published for legal/ethical reasons. Post-processing code (normalisation, deduplication, filtering) is available
🔗 Models Trained on This Corpus
TatarTokenizers — trained tokeniser models and BPE/WordPiece configurations based on this corpus:
https://huggingface.co/arabovs-ai-lab/TatarTokenizers

Tatar2Vec — trained word and document embedding models based on this corpus:
https://huggingface.co/arabovs-ai-lab/Tatar2Vec
🔬 Potential Use Cases
The TatCorp-222M corpus can be used for the following research tasks:
- Language modelling — fine-tuning or training large language models for Tatar from scratch
- Text classification — multi-class categorisation by topic, genre, or source
- Topic modelling — discovering latent topics and their temporal dynamics
- Text generation — training models for news, article, or creative text generation
- Information retrieval — building semantic search systems for Tatar
- Linguistic research — analysing morphology, syntax, and lexicon of modern Tatar
▶️ Quick Start — Examples (Python)
Important: before running the examples, set your HF token as an environment variable.

Windows (PowerShell):

```shell
setx HF_TOKEN "hf_...your_token..."
```

Linux / macOS:

```shell
export HF_TOKEN=hf_...your_token...
```
1) Loading Private Dataset (Standard Mode)
```python
import os

from huggingface_hub import login
from datasets import load_dataset

# 1) Authentication (do this once per environment)
token = os.getenv("HF_TOKEN") or os.getenv("HUGGINGFACE_HUB_TOKEN")
login(token=token)

# 2) File paths
REPO = "arabovs-ai-lab/TatCorp_222M"

train_files = [
    f"hf://datasets/{REPO}@main/data/train-{i:05d}-of-00030.parquet"
    for i in range(30)
]
validation_files = [
    f"hf://datasets/{REPO}@main/data/validation-00000-of-00001.parquet"
]
test_files = [
    f"hf://datasets/{REPO}@main/data/test-00000-of-00001.parquet"
]

data_files = {
    "train": train_files,
    "validation": validation_files,
    "test": test_files,
}

# 3) Load dataset (auth handled automatically after login)
ds_dict = load_dataset("parquet", data_files=data_files)

# 4) Verification
for split, ds in ds_dict.items():
    print(f"{split}: {len(ds)} examples")

print("Columns:", ds_dict["train"].column_names)
print("First example:", ds_dict["train"][0])
```
2) Loading Private Dataset (Streaming)
```python
import os

from huggingface_hub import login
from datasets import load_dataset

# --- Auth
token = os.getenv("HF_TOKEN") or os.getenv("HUGGINGFACE_HUB_TOKEN")
login(token=token)

# --- Streaming
REPO = "arabovs-ai-lab/TatCorp_222M"

train_files = [
    f"hf://datasets/{REPO}@main/data/train-{i:05d}-of-00030.parquet"
    for i in range(30)
]

ds_stream = load_dataset(
    "parquet",
    data_files={"train": train_files},
    split="train",
    streaming=True,
)

# --- Test
it = iter(ds_stream)
print(next(it))
```
3) Tokenisation for MLM (Example with XLM-R)
```python
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
)

# assume ds_dict is already loaded (see example 1)
ds = ds_dict["train"]

# Drop the handful of records with empty content; the tokenizer
# would otherwise fail on None values.
ds = ds.filter(lambda x: bool(x["content"]))

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def tokenize_function(examples):
    return tokenizer(
        examples["content"],
        truncation=True,
        max_length=512,
    )

tokenized = ds.map(
    tokenize_function,
    batched=True,
    remove_columns=ds.column_names,
)

model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=True,
    mlm_probability=0.15,
)
```
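With mlm_probability=0.15, the collator selects roughly 15% of tokens for prediction. A minimal pure-Python illustration of the masking idea (not the collator's exact algorithm, which additionally keeps or randomises some of the selected tokens):

```python
import random

def mask_tokens(token_ids, mask_id, mlm_probability=0.15, seed=0):
    """Replace each token with `mask_id` with probability
    `mlm_probability`; keep the original token as the MLM label,
    and -100 (ignored by the loss) elsewhere."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for tok in token_ids:
        if rng.random() < mlm_probability:
            inputs.append(mask_id)
            labels.append(tok)      # predict the original token
        else:
            inputs.append(tok)
            labels.append(-100)     # ignored by the loss
    return inputs, labels

ids = list(range(10, 30))
masked, labels = mask_tokens(ids, mask_id=250001)
print(sum(1 for t in masked if t == 250001), "tokens masked")
```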
4) Preparing a Classifier (by Categories)
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ds = ds_dict["train"]

# Drop the few records without a category or title; `sorted` would
# otherwise fail on None values when building the label set, and the
# tokenizer would fail on None titles.
ds = ds.filter(lambda x: x["category"] is not None and x["title"] is not None)

labels = sorted(set(ds["category"]))
label2id = {label: i for i, label in enumerate(labels)}
id2label = {i: label for label, i in label2id.items()}

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def preprocess(examples):
    tok = tokenizer(
        examples["title"],
        examples["content"],
        truncation=True,
        max_length=512,
    )
    tok["labels"] = [label2id[c] for c in examples["category"]]
    return tok

tokenized_clf = ds.map(
    preprocess,
    batched=True,
    remove_columns=ds.column_names,
)

model_clf = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=len(labels),
    label2id=label2id,
    id2label=id2label,
)
```
5) Local Dataset Saving
```python
from datasets import load_dataset

# The repository is private: authenticate first (see example 1).
REPO = "arabovs-ai-lab/TatCorp_222M"

train_files = [
    f"hf://datasets/{REPO}@main/data/train-{i:05d}-of-00030.parquet"
    for i in range(30)
]

ds_local = load_dataset(
    "parquet",
    data_files={"train": train_files},
    split="train",
)

ds_local.save_to_disk("tatcorp_train_dataset")
print("Saved to tatcorp_train_dataset")
# Reload later with: datasets.load_from_disk("tatcorp_train_dataset")
```
📚 References and Contacts
- Dataset page (private): https://huggingface.co/datasets/arabovs-ai-lab/TatCorp_222M
- Tokenisers: https://huggingface.co/arabovs-ai-lab/TatarTokenizers
- Embeddings: https://huggingface.co/arabovs-ai-lab/Tatar2Vec
Access requests / DUA / contact: cool.araby@gmail.com
🔬 About the Laboratory
Arabovs AI Research Lab — a private research laboratory of Associate Professor Mullosharaf Kurbonovich Arabov, specialising in Natural Language Processing, computational linguistics, and machine learning for low-resource languages.
Last updated: January 2025