# Dari EN↔KO Technical Translation Corpus (Sample)

🌉 Dari (다리, "bridge") — 67M+ EN↔KO parallel sentence pairs for technical translation.
## Dataset Description
This is a 10,000-pair sample from the full Dari corpus. The full corpus contains 67.2 million high-quality EN↔KO parallel segments across multiple technical domains.
## Domains
| Domain | Full Corpus Pairs |
|---|---|
| Patent (KIPRIS) | 15M+ |
| Medical (PubMed) | 12M+ |
| IT/Software | 10M+ |
| Legal | 8M+ |
| General | 22M+ |
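The sample is stratified by domain. As an illustration of what a proportional stratification would look like, the sketch below allocates the 10,000 sample pairs according to the full-corpus counts from the table above. This is a hypothetical calculation, not the published per-domain breakdown (see `data/stats.csv` for the actual distribution).

```python
# Hypothetical proportional allocation of the 10,000-pair sample across domains,
# assuming the sample mirrors the full-corpus distribution from the table above.
full_corpus = {"patent": 15, "medical": 12, "it": 10, "legal": 8, "general": 22}  # millions of pairs
total = sum(full_corpus.values())  # 67 (million)

sample = {domain: round(10_000 * n / total) for domain, n in full_corpus.items()}
print(sample)
```

Rounding can make the allocated counts sum to slightly more or less than 10,000; a real stratified sampler would adjust the largest stratum to absorb the difference.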
## Files

- `data/sample.csv` — 10,000 parallel EN↔KO sentence pairs (stratified by domain)
- `data/glossary.csv` — 5,000 bilingual term entries
- `data/stats.csv` — Domain distribution statistics
## Quality Metrics
- LaBSE semantic similarity scores
- COMET translation quality scores
- Human quality ratings (0-1 scale)
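These per-pair scores make it easy to subset the corpus by quality. A minimal stdlib-only sketch, using a toy two-row CSV in place of `data/sample.csv` (the column names `quality_score`, `labse_score`, and `comet_score` are assumed to match the card's metric list):

```python
import csv
import io

# Toy CSV standing in for data/sample.csv (column names assumed from this card).
sample_csv = """source_en,target_ko,domain,quality_score,labse_score,comet_score
The patent claims a novel compound.,이 특허는 신규 화합물을 청구한다.,patent,0.92,0.88,0.85
Install the package via pip.,pip으로 패키지를 설치하세요.,it,0.61,0.70,0.66
"""

# Keep only pairs where every quality signal clears a 0.8 cutoff.
reader = csv.DictReader(io.StringIO(sample_csv))
high_quality = [
    row for row in reader
    if min(float(row["quality_score"]),
           float(row["labse_score"]),
           float(row["comet_score"])) >= 0.8
]
print(len(high_quality))  # only the patent-domain pair survives
```

Taking the minimum of the three scores is a deliberately conservative gate: a pair passes only if the human rating and both automatic metrics agree it is high quality.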
## Usage

```python
from datasets import load_dataset

ds = load_dataset("dogdoh/dari-enko-corpus-sample")
```
## Full Corpus Access
| Tier | Price | Includes |
|---|---|---|
| Free | $0 | This 10K sample |
| API Basic | $49/mo | 50K chars/month, REST API |
| API Pro | $199/mo | 500K chars/month, batch export |
| Enterprise | Custom | Full corpus license, on-prem |
🔗 API: dari.is-a.dev 📧 Contact: dogdoh1338@icloud.com