---
license: other
license_name: un-corpus-license
license_link: https://conferences.unite.un.org/UNCorpus
language:
  - en
  - fr
  - es
  - zh
  - ru
  - ar
tags:
  - machine-translation
  - document-level
  - long-context
  - parallel-corpus
  - multilingual
  - benchmark
size_categories:
  - 10K<n<100K
task_categories:
  - translation
pretty_name: CoCoDoc-MT
---

# CoCoDoc-MT: A Controlled-Complexity Document-Level Machine Translation Benchmark

## Overview

CoCoDoc-MT is a document-level machine translation benchmark derived from the United Nations Parallel Corpus v1.0 (Ziemski et al., 2016). It is designed to support research on long-context neural machine translation and document-level understanding across six official UN languages: Arabic (ar), Chinese (zh), English (en), French (fr), Russian (ru), and Spanish (es).

The dataset is constructed to provide controlled translation complexity across three difficulty tiers defined by document length, enabling systematic evaluation of model capabilities as context length increases. It is associated with a NeurIPS submission under the NeurIPS Long-Attention collection.


## Source Corpus

The raw data originates from the United Nations Parallel Corpus v1.0, the first official parallel corpus published by the United Nations Department for General Assembly and Conference Management (DGACM). The corpus covers UN documents from 1990 to 2014 across all six official UN languages, with sentence-level alignments produced using a two-step pipeline combining Hunalign and the BLEU-Champ monolingual sentence aligner.

The fully aligned six-way subcorpus used in this work contains 86,307 documents comprising 11,365,709 sentence pairs and approximately 335 million English tokens. Raw data was downloaded directly from the official UN corpus portal at https://www.un.org/dgacm/en/content/uncorpus/Download.


## Dataset Construction

### Pass 1: Document Chaining and Accumulation

The six-way aligned plain-text files (UNv1.0.6way.{en,fr,es,zh,ru,ar} and UNv1.0.6way.ids) were processed in batches of 1,000,000 lines. For each batch, all seven files were verified to have identical line counts before processing, and all seven files (the six language files plus the ids file) were read in strict parallel using iterator-level synchronization to guarantee sentence-level alignment across all languages.

Sentences sharing the same document identifier were grouped into complete documents without truncation. Documents were then accumulated into multi-document chains according to a controlled-slot assignment scheme. Each chain is assigned to one of three difficulty slots before accumulation begins, with the slot determining the target word count range for that chain:

- easy: 2,000 to 4,000 English words
- medium: 4,000 to 8,000 English words
- hard: 8,000 to 15,000 English words

Slots are assigned in a deterministic round-robin cycle of 20 positions: 10 easy, 7 medium, 3 hard. This cycle directly encodes the target distribution of 50 percent easy, 35 percent medium, and 15 percent hard samples. Accumulation for a given chain continues until the accumulated English word count meets the floor of the assigned slot. A hard ceiling is applied to prevent excessively long chains. Semantic breaks, defined as a change in UN document domain code or publication year between consecutive documents, trigger an immediate flush of the current chain regardless of accumulated length, preserving thematic coherence within each sample.
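The round-robin slot cycle described above can be sketched in a few lines of Python. This is an illustrative sketch, not the pipeline's actual code; the names `SLOT_CYCLE`, `TIER_RANGES`, and `assign_slots` are assumptions.

```python
from itertools import cycle

# 20-position deterministic cycle: 10 easy, 7 medium, 3 hard,
# encoding the 50% / 35% / 15% target distribution.
SLOT_CYCLE = ["easy"] * 10 + ["medium"] * 7 + ["hard"] * 3

# Target English word-count ranges (floor, ceiling) per tier.
TIER_RANGES = {
    "easy": (2_000, 4_000),
    "medium": (4_000, 8_000),
    "hard": (8_000, 15_000),
}

def assign_slots(num_chains):
    """Deterministically assign a difficulty slot to each chain index."""
    slots = cycle(SLOT_CYCLE)
    return [next(slots) for _ in range(num_chains)]
```

Because the cycle is deterministic, the tier distribution is exact for any multiple of 20 chains and within rounding error otherwise.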

The domain is derived from the UN document symbol prefix following standard UN documentation conventions: A (General Assembly), S (Security Council), E (Economic and Social Council), ST (Secretariat), HRC (Human Rights Council), DP (UNDP), UNEP (UNEP), TD (UNCTAD).
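A minimal sketch of the prefix-to-domain lookup, assuming document symbols follow the standard `PREFIX/...` form and that longer prefixes are matched first (so `ST/...` is not misread as Security Council). The function name, ordering strategy, and `"Other"` fallback are assumptions, not the pipeline's actual identifiers.

```python
# Mapping from UN document symbol prefix to body name, per the
# conventions listed above.
DOMAIN_BY_PREFIX = {
    "UNEP": "UNEP",
    "HRC": "Human Rights Council",
    "ST": "Secretariat",
    "DP": "UNDP",
    "TD": "UNCTAD",
    "A": "General Assembly",
    "S": "Security Council",
    "E": "Economic and Social Council",
}

def domain_from_symbol(symbol):
    """Return the UN body for a document symbol, matching longer prefixes first."""
    for prefix in sorted(DOMAIN_BY_PREFIX, key=len, reverse=True):
        if symbol == prefix or symbol.startswith(prefix + "/"):
            return DOMAIN_BY_PREFIX[prefix]
    return "Other"
```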

### Pass 2: Resampling

After all batches were processed, the raw per-batch parquet files were resampled using reservoir sampling to select the final balanced set. Reservoir sampling was applied independently per difficulty tier, yielding a uniform random selection within each tier. The final dataset was shuffled with a fixed random seed of 42 for reproducibility.
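The per-tier selection can be sketched with Algorithm R-style reservoir sampling. This is an illustrative implementation under the stated fixed seed, not the pipeline's actual code.

```python
import random

def reservoir_sample(stream, k, seed=42):
    """Uniform random sample of up to k items from an iterable of unknown length."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Replace an existing item with decreasing probability k/(i+1).
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir
```

Running this once per difficulty tier keeps the selection uniform within each tier, so the 50/35/15 target distribution set in Pass 1 is preserved in the final set.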

### Alignment Verification

Three independent alignment guarantees are enforced throughout construction:

  1. Pre-batch line count verification: after each extraction, wc -l is run on all seven files and the pipeline halts if any file differs from the ids file by even one line.
  2. Parallel iterator synchronization: all seven file handles are consumed simultaneously via a single zip iterator, making it structurally impossible for any language stream to advance independently of the others.
  3. Per-sample completeness guard: immediately before each sample is written, all six language fields are checked to be non-empty. A failure of this check raises a hard error with full diagnostic context.

## Dataset Statistics

| Split | Easy | Medium | Hard | Total |
|-------|------|--------|------|-------|
| train | 39,610 (50.0%) | 27,727 (35.0%) | 11,883 (15.0%) | 79,220 |

Word count ranges are computed over the English field of each sample.

| Difficulty | Min wc (en) | Target Floor | Target Ceiling |
|------------|-------------|--------------|----------------|
| easy       | ~200        | 2,000        | 4,000          |
| medium     | ~200        | 4,000        | 8,000          |
| hard       | ~200        | 8,000        | 15,000         |

Note: Samples produced by a semantic break flush may fall below the slot floor. The `word_count_en` field in each sample records the actual English word count and should be used for precise filtering.
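For example, a strict length-based filter over `word_count_en` would exclude the short break-flush samples that the `difficulty` label alone would keep. The helper and sample values below are hypothetical.

```python
def in_tier_range(sample, floor, ceiling):
    """True if the sample's recorded English word count lies in [floor, ceiling]."""
    return floor <= sample["word_count_en"] <= ceiling

samples = [
    {"difficulty": "easy", "word_count_en": 1_200},  # below floor: semantic-break flush
    {"difficulty": "easy", "word_count_en": 3_100},  # within the easy range
]

# Keep only samples that actually fall in the nominal easy range (2,000-4,000).
strict_easy = [s for s in samples if in_tier_range(s, 2_000, 4_000)]
```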


## Schema

Each sample contains the following fields:

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique sample identifier (`UN-CHAIN-XXXXXXX`) |
| `organization` | string | Always `"United Nations"` |
| `year` | string | Publication year of the first document in the chain |
| `domain` | string | UN body or committee name derived from document symbol |
| `source_documents` | string | Concatenated unique document symbols in the chain |
| `word_count_en` | int | English word count of the sample |
| `difficulty` | string | Assigned difficulty tier: `easy`, `medium`, or `hard` |
| `en` | string | English text |
| `fr` | string | French text |
| `es` | string | Spanish text |
| `zh` | string | Chinese text |
| `ru` | string | Russian text |
| `ar` | string | Arabic text |
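A lightweight consumer-side check of this schema can be sketched as follows. `SCHEMA` and `validate_sample` are illustrative names; the field-to-type mapping mirrors the table above.

```python
# Expected fields and Python types per the schema table.
SCHEMA = {
    "id": str, "organization": str, "year": str, "domain": str,
    "source_documents": str, "word_count_en": int, "difficulty": str,
    "en": str, "fr": str, "es": str, "zh": str, "ru": str, "ar": str,
}

def validate_sample(sample):
    """Check that a sample has exactly the documented fields with the documented types."""
    if set(sample) != set(SCHEMA):
        return False
    return all(isinstance(sample[k], t) for k, t in SCHEMA.items())
```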

## Intended Use

CoCoDoc-MT is intended as a benchmark dataset for research on:

- Document-level and long-context machine translation
- Multi-source and pivot-based translation across six languages
- Evaluation of long-context language model attention mechanisms
- Cross-lingual transfer at the document level

The three-tier difficulty structure allows controlled ablation of model performance as a function of input length and document complexity.


## Limitations

The dataset inherits the domain characteristics of UN documents: formal diplomatic and parliamentary language, institutional subject matter, and a publication window of 1990 to 2014. Performance on this benchmark may not generalize to informal, literary, or conversational translation tasks.

Samples produced by semantic break flushing may be shorter than the nominal floor for their assigned difficulty tier. The `word_count_en` field should be used for any length-based analysis.

Chinese text in the source corpus was processed with Jieba segmentation prior to alignment. The zh field in this dataset reflects that tokenization convention.


## License

The source data is composed of official United Nations documents in the public domain. Use of this dataset is subject to the UN Corpus disclaimer, which requires acknowledgment of the United Nations as the source and carries no warranty of accuracy or completeness. The United Nations shall not be held liable for any use of this data. For the full disclaimer, see the official corpus page at https://conferences.unite.un.org/UNCorpus.

When using this dataset, please cite both the original corpus paper and this dataset.


## Citation

If you use CoCoDoc-MT in your research, please cite the original UN Parallel Corpus:

```bibtex
@inproceedings{ziemski2016united,
  title     = {The United Nations Parallel Corpus v1.0},
  author    = {Ziemski, Micha{\l} and Junczys-Dowmunt, Marcin and Pouliquen, Bruno},
  booktitle = {Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)},
  year      = {2016},
  publisher = {European Language Resources Association (ELRA)}
}
```

## Acknowledgments

This dataset was constructed from the United Nations Parallel Corpus v1.0, published by the United Nations Department for General Assembly and Conference Management (DGACM). We thank the authors of the original corpus for making this resource publicly available.