---
language:
- en
- zh
- es
- ur
license: apache-2.0
task_categories:
- text-generation
tags:
- code
- multilingual
- legesher
- transpilation
- tiny-aya-expedition
- language-decoded
pretty_name: Language Decoded Data
size_categories:
- 100K<n<1M
---

# Language Decoded Data

> **Note (2026-04-21):** Phase 2 configs have been renamed to `phase-2-the-stack-v1-*` and moved under `data/phase-2-the-stack-v1/`. Existing `load_dataset(...)` calls using the short `condition-*` names will no longer resolve. The short `condition-*` namespace is reserved for Phase 3 builds (streamed from The Stack v2, coming soon). See the _Usage_ section below for updated config names.

Multilingual Python code datasets for the **Language Decoded** project (part of [Cohere's Tiny Aya Expedition](https://aya.for.ai)), investigating whether code's reasoning benefit for language models is **language-dependent** or **structure-dependent**.

## Research Question

> Does fine-tuning on non-English code (Python with translated keywords) improve multilingual reasoning as much as English code does?

Prior work ([Aryabumi et al., 2024 -- "To Code or Not to Code"](https://arxiv.org/abs/2408.10914)) demonstrated that including English code in pre-training data improves downstream reasoning performance by approximately 8%. However, that study tested only English code. This dataset enables the natural follow-up: does the reasoning benefit come from the _structure_ of code, or from the _language_ of its keywords?

## Dataset Description

This dataset provides filtered, quality-controlled Python source code in multiple configurations: the original English, three keyword-swapped variants (Chinese, Spanish, Urdu), a blended native + transpiled mix, and strictly native Chinese code.
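To make the keyword-swapping idea concrete, here is a toy token-level sketch of keyword translation. It is illustrative only: the `swap_keywords` helper and the four-entry Spanish mapping are hypothetical, not Legesher's actual implementation (which covers all keywords, builtins, and exceptions and preserves formatting).

```python
import io
import tokenize

# Hypothetical Spanish keyword mapping, for illustration only.
ES_KEYWORDS = {"def": "definir", "return": "devolver", "if": "si", "else": "sino"}

def swap_keywords(source, mapping):
    """Swap keyword tokens, leaving identifiers, strings, and comments intact."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    swapped = [
        (tok.type, mapping[tok.string])
        if tok.type == tokenize.NAME and tok.string in mapping
        else (tok.type, tok.string)
        for tok in tokens
    ]
    return tokenize.untokenize(swapped)

source = "def double(x):\n    return x * 2\n"
print(swap_keywords(source, ES_KEYWORDS))
```

Because the swap happens at the token level, identifiers such as `double` pass through unchanged, mirroring the "keywords only" behavior described in the Limitations section.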
The source data is drawn from [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup) (Python subset), filtered for quality using the following criteria:

- AST-valid Python only (must parse without errors)
- Permissive licenses only (MIT, Apache-2.0, BSD, etc.)
- 10--1000 lines of code
- Minimum 21 GitHub stars
- No autogenerated files
- SHA-256 deduplication

Keyword-swapped variants are produced using [Legesher](https://github.com/legesher/legesher) v0.7.3, which translates Python reserved words (37 keywords, 72 builtins, 66 exceptions) into the target language while preserving code structure and semantics.

## Available Configs

Each condition is available in two sizes: `-32k` (full filtered corpus, ~31.8k train + ~3.5k validation) and `-5k` (stratified subset, 4.5k train + 500 validation). The `-5k` subsets are used for QLoRA fine-tuning on consumer GPUs.

| Config | Condition | Language | Description | Train | Val |
| --- | --- | --- | --- | --- | --- |
| `condition-1-en-32k` | 1 (control) | English | Unmodified filtered Python from The Stack Dedup | 31,818 | 3,536 |
| `condition-1-en-5k` | 1 (control) | English | Stratified 5k subset of condition-1 | 4,500 | 500 |
| `condition-2-zh-32k` | 2 | Chinese | Keyword-swapped Python via Legesher v0.7.3 | 31,818 | 3,536 |
| `condition-2-zh-5k` | 2 | Chinese | Stratified 5k subset of condition-2-zh | 4,500 | 500 |
| `condition-2-es-32k` | 2 | Spanish | Keyword-swapped Python via Legesher v0.7.3 | 31,818 | 3,536 |
| `condition-2-es-5k` | 2 | Spanish | Stratified 5k subset of condition-2-es | 4,500 | 500 |
| `condition-2-ur-32k` | 2 | Urdu | Keyword-swapped Python via Legesher v0.7.3 | 31,818 | 3,536 |
| `condition-2-ur-5k` | 2 | Urdu | Stratified 5k subset of condition-2-ur | 4,500 | 500 |
| `condition-3-zh-5k` | 3 | Chinese | Blended: 3,486 native Chinese code + 1,514 transpiled Python | 4,500 | 500 |
| `condition-4-zh-5k` | 4 | Chinese | Strictly native Chinese code (no transpiled code) | 6,553 | 729 |

## Schema

### Conditions 1--2

Used by: `condition-1-en-*`, `condition-2-zh-*`, `condition-2-es-*`, `condition-2-ur-*`

| Column | Type | Description |
| --- | --- | --- |
| `code` | string | Python source code. For condition-2 configs, this is the transpiled (keyword-swapped) version. For condition-1, this is the original English source. |
| `code_en` | string | Original English Python source code. Identical to `code` for condition-1-en. |
| `language` | string | ISO 639-1 language code: `en`, `ur`, `zh`, or `es`. |
| `file_path` | string | Original file path in The Stack Dedup. |
| `license` | string | SPDX license identifier for the source file. |
| `token_count` | int64 | Token count computed using the CohereLabs/tiny-aya-base tokenizer. |

### Condition 3

Used by: `condition-3-zh-5k`

Condition 3 blends native Chinese code with transpiled code and adds a `source_type` column to distinguish them. `code_en` is populated for transpiled rows (keeping them in sync with conditions 1--2) but null for native code rows, which have no English equivalent.
| Column | Type | Description |
| --- | --- | --- |
| `file_path` | string | File identifier (native filename or transpiled file path) |
| `code` | string | The code content (native or transpiled) |
| `code_en` | string/null | English original -- populated for transpiled rows, null for native code rows |
| `language` | string | ISO 639-1 language code (`zh`) |
| `license` | string | Source license (SPDX identifier, `UNKNOWN`, or `varies`) |
| `token_count` | int64 | Token count computed using the CohereLabs/tiny-aya-base tokenizer |
| `source_type` | string | `"native"` (natively Chinese-authored) or `"transpiled"` (keyword-swapped English) |

### Condition 4

Used by: `condition-4-zh-5k`

Condition 4 contains strictly native Chinese code -- code written by developers who think and code in Chinese. This uses the same schema as the [language-decoded-community](https://huggingface.co/datasets/legesher/language-decoded-community) dataset rather than the transpilation schema, since there is no English original to reference.
| Column | Type | Description |
| --- | --- | --- |
| `filename` | string | Original filename |
| `content` | string | The code content |
| `extension` | string | File extension (e.g., `.py`, `.c`, `.wenyan`) |
| `source` | string | Data source (e.g., `thestack`, `wenyan`, `program_in_chinese`) |
| `quality_tier` | string | Quality rating: `A` (highest) through `D` (lowest) |
| `sha256` | string | SHA-256 hash for deduplication |
| `byte_size` | int64 | File size in bytes |
| `total_lines` | int64 | Total line count |
| `cjk_ratio` | float64 | Ratio of CJK characters in the file |
| `has_cjk` | bool | Whether the file contains CJK characters |

## Experimental Conditions

The Language Decoded experiment uses a ladder of conditions to isolate the mechanism behind code's reasoning benefit:

| Condition | Name | Purpose |
| --- | --- | --- |
| Baseline | No fine-tuning | Establishes the performance floor |
| Condition 1 | English code | Tests whether code fine-tuning helps at all (replicates Aryabumi et al.) |
| Condition 2 | Keyword-swapped code | Tests whether the _language_ of keywords matters for the reasoning benefit |
| Condition 3 | Mixed native sources | Tests whether diverse native-language code adds value beyond keyword swapping |
| Condition 4 | Strictly native code | Tests whether code authored by native speakers carries unique signal beyond transpilation |

### The Experimental Ladder

- **Baseline --> 1**: Does code help at all?
- **1 --> 2**: Does the language of keywords matter?
- **2 --> 3**: Does diversity of native-language sources add value beyond keyword swap?
- **3 --> 4**: Does code written in the cultural context of a language carry something that transpiled+mixed can't?
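For reference, the quality filters listed under Dataset Description can be sketched as a single predicate. This is a minimal illustration under stated assumptions (the `passes_quality_filters` helper and the exact license set are hypothetical, and autogenerated-file detection is omitted), not the production pipeline:

```python
import ast
import hashlib

# Illustrative subset of permissive SPDX identifiers (assumption, not the full list).
PERMISSIVE = {"MIT", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause"}

def passes_quality_filters(code, license_id, stars, seen_hashes):
    """Toy version of the filtering criteria; autogenerated-file checks omitted."""
    # AST-valid Python only (must parse without errors)
    try:
        ast.parse(code)
    except SyntaxError:
        return False
    # 10--1000 lines of code
    if not 10 <= len(code.splitlines()) <= 1000:
        return False
    # Permissive licenses only
    if license_id not in PERMISSIVE:
        return False
    # Minimum 21 GitHub stars
    if stars < 21:
        return False
    # SHA-256 deduplication
    digest = hashlib.sha256(code.encode("utf-8")).hexdigest()
    if digest in seen_hashes:
        return False
    seen_hashes.add(digest)
    return True
```

Passing a shared `seen_hashes` set across files makes the dedup step stateful, mirroring corpus-level SHA-256 deduplication.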
## Usage

```python
from datasets import load_dataset

# Load full-size English code (control)
ds = load_dataset("legesher/language-decoded-data", "condition-1-en-32k")

# Load 5k subset (for QLoRA fine-tuning)
ds = load_dataset("legesher/language-decoded-data", "condition-1-en-5k")

# Load keyword-swapped variants
ds = load_dataset("legesher/language-decoded-data", "condition-2-zh-5k")
ds = load_dataset("legesher/language-decoded-data", "condition-2-es-5k")
ds = load_dataset("legesher/language-decoded-data", "condition-2-ur-5k")

# Load blended native + transpiled (condition 3)
ds = load_dataset("legesher/language-decoded-data", "condition-3-zh-5k")

# Load strictly native code (condition 4)
ds = load_dataset("legesher/language-decoded-data", "condition-4-zh-5k")

# Access splits
train = ds["train"]
val = ds["validation"]

# Filter condition-3 by source type
native_only = train.filter(lambda x: x["source_type"] == "native")
```

## Technical Details

| Parameter | Value |
| --- | --- |
| Source dataset | [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup) (Python subset) |
| Transpilation tool | [Legesher](https://github.com/legesher/legesher) v0.7.3 (legesher-core, legesher-i18n) |
| Tokenizer | CohereLabs/tiny-aya-base |
| Base model | [CohereLabs/tiny-aya-base](https://huggingface.co/CohereLabs/tiny-aya-base) (3.35B params) |
| Train/validation split | 90% / 10% (seed 42) |
| File format | Parquet (snappy compression) |
| Filtering criteria | AST-valid, permissive licenses, 10--1000 lines, min 21 GitHub stars, no autogenerated files, SHA-256 deduplication |

## Limitations

- **Source bias**: The Stack Dedup skews toward popular, well-starred GitHub repositories, which may not represent the full diversity of Python code in the wild.
- **Keyword-only transpilation**: Legesher translates Python reserved words (keywords, builtins, exceptions) but leaves comments, docstrings, string literals, and variable/function names in their original language (typically English). This means condition-2 code is a hybrid of translated keywords and English identifiers.
- **Token count variation**: Transpiled code may have different token counts than the English original due to multi-byte characters (especially for Chinese and Urdu), even though the code structure is identical.
- **Single programming language**: Currently limited to Python. Results may not generalize to other programming languages.
- **Condition 4 scope**: Native Chinese code is limited to publicly available sources (The Stack, Wenyan, Program-in-Chinese, Qi, Mulan) and may not represent the full spectrum of Chinese-language programming.

## Citation

```bibtex
@misc{language-decoded-2026,
  title={Language Decoded: Investigating Language-Dependent vs. Structure-Dependent Reasoning Benefits of Code},
  author={Madison Edgar and Saad Ahmed Bazaz and Tom Sherborne and Rashik Shahjahan and Khojasteh Mirza and Sarah Jawaid and Rafay Mustafa and Sohaib Ahmed Bazaz},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/legesher/language-decoded-data}
}
```

## Links

- [Legesher on GitHub](https://github.com/legesher/legesher)
- [Tiny Aya Expedition](https://aya.for.ai)
- [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup)
- [Language Decoded Community (native code)](https://huggingface.co/datasets/legesher/language-decoded-community)
- [Language Decoded Experiments (tracking)](https://huggingface.co/datasets/legesher/language-decoded-experiments)
- [Language Decoded LoRA (model hub)](https://huggingface.co/legesher/language-decoded-lora)

## License

Apache 2.0