madiedgar committed on
Commit 76f9d73 · 1 Parent(s): cbac750

docs: fix condition-4-zh-5k glob pattern to use train-* (files renamed) (#9)

- docs: fix condition-4-zh-5k glob pattern to use train-* (files renamed) (09081e23a1b0b5a18656d1e6817cba9213a10e54)

Files changed (1)
  1. README.md +389 -266

README.md CHANGED
@@ -1,255 +1,324 @@
  ---
  language:
- - en
- - zh
- - es
- - ur
  license: apache-2.0
  task_categories:
- - text-generation
  tags:
- - code
- - multilingual
- - legesher
- - transpilation
- - tiny-aya-expedition
- - language-decoded
  pretty_name: Language Decoded Data
  size_categories:
- - 10K<n<100K
  configs:
- - config_name: condition-1-en
- data_files:
- - split: train
- path: data/condition-1-en/train-*
- - split: validation
- path: data/condition-1-en/validation-*
- - config_name: condition-1-en-5k
- data_files:
- - split: train
- path: data/condition-1-en-5k/train-*
- - split: validation
- path: data/condition-1-en-5k/validation-*
- - config_name: condition-2-es
- data_files:
- - split: train
- path: data/condition-2-es/train-*.parquet
- - split: validation
- path: data/condition-2-es/validation-*.parquet
- - config_name: condition-2-es-5k
- data_files:
- - split: train
- path: data/condition-2-es-5k/train-*
- - split: validation
- path: data/condition-2-es-5k/validation-*
- - config_name: condition-2-ur
- data_files:
- - split: train
- path: data/condition-2-ur/train-*.parquet
- - split: validation
- path: data/condition-2-ur/validation-*.parquet
- - config_name: condition-2-ur-5k
- data_files:
- - split: train
- path: data/condition-2-ur-5k/train-*
- - split: validation
- path: data/condition-2-ur-5k/validation-*
- - config_name: condition-2-zh
- data_files:
- - split: train
- path: data/condition-2-zh/train-*
- - split: validation
- path: data/condition-2-zh/validation-*
- - config_name: condition-2-zh-5k
- data_files:
- - split: train
- path: data/condition-2-zh-5k/train-*
- - split: validation
- path: data/condition-2-zh-5k/validation-*
- - config_name: condition-3-zh-5k
- data_files:
- - split: train
- path: data/condition-3-zh-5k/train-*
- - split: validation
- path: data/condition-3-zh-5k/validation-*
  dataset_info:
- - config_name: condition-1-en
- features:
- - name: file_path
- dtype: string
- - name: code
- dtype: string
- - name: code_en
- dtype: string
- - name: language
- dtype: string
- - name: license
- dtype: string
- - name: token_count
- dtype: int32
- splits:
- - name: train
- num_bytes: 403718262
- num_examples: 31818
- - name: validation
- num_bytes: 42626910
- num_examples: 3536
- download_size: 164619518
- dataset_size: 446345172
- - config_name: condition-1-en-5k
- features:
- - name: file_path
- dtype: string
- - name: code
- dtype: string
- - name: code_en
- dtype: string
- - name: language
- dtype: string
- - name: license
- dtype: string
- - name: token_count
- dtype: int32
- splits:
- - name: train
- num_bytes: 55261555
- num_examples: 4500
- - name: validation
- num_bytes: 6365959
- num_examples: 500
- download_size: 22897728
- dataset_size: 61627514
- - config_name: condition-2-es-5k
- features:
- - name: file_path
- dtype: string
- - name: code
- dtype: string
- - name: code_en
- dtype: string
- - name: language
- dtype: string
- - name: license
- dtype: string
- - name: token_count
- dtype: int32
- splits:
- - name: train
- num_bytes: 55864731
- num_examples: 4500
- - name: validation
- num_bytes: 6432095
- num_examples: 500
- download_size: 23031674
- dataset_size: 62296826
- - config_name: condition-2-ur-5k
- features:
- - name: file_path
- dtype: string
- - name: code
- dtype: string
- - name: code_en
- dtype: string
- - name: language
- dtype: string
- - name: license
- dtype: string
- - name: token_count
- dtype: int32
- splits:
- - name: train
- num_bytes: 56906247
- num_examples: 4500
- - name: validation
- num_bytes: 6545730
- num_examples: 500
- download_size: 23158039
- dataset_size: 63451977
- - config_name: condition-2-zh
- features:
- - name: file_path
- dtype: string
- - name: code
- dtype: string
- - name: code_en
- dtype: string
- - name: language
- dtype: string
- - name: license
- dtype: string
- - name: token_count
- dtype: int32
- splits:
- - name: train
- num_bytes: 405515831
- num_examples: 31818
- - name: validation
- num_bytes: 45065811
- num_examples: 3536
- download_size: 165387142
- dataset_size: 450581642
- - config_name: condition-2-zh-5k
- features:
- - name: file_path
- dtype: string
- - name: code
- dtype: string
- - name: code_en
- dtype: string
- - name: language
- dtype: string
- - name: license
- dtype: string
- - name: token_count
- dtype: int32
- splits:
- - name: train
- num_bytes: 55793642
- num_examples: 4500
- - name: validation
- num_bytes: 6422792
- num_examples: 500
- download_size: 22978834
- dataset_size: 62216434
- - config_name: condition-3-zh-5k
- features:
- - name: file_path
- dtype: large_string
- - name: code
- dtype: large_string
- - name: code_en
- dtype: string
- - name: language
- dtype: large_string
- - name: license
- dtype: large_string
- - name: token_count
- dtype: int64
- - name: source_type
- dtype: large_string
- splits:
- - name: train
- num_bytes: 40782466
- num_examples: 4500
- - name: validation
- num_bytes: 4531385
- num_examples: 500
- download_size: 17299185
- dataset_size: 45313851
- - config_name: default
- features:
- - name: code
- dtype: string
- - name: code_en
- dtype: string
- - name: language
- dtype: string
- - name: file_path
- dtype: string
- - name: license
- dtype: string
- - name: token_count
- dtype: int64
  ---

  # Language Decoded | Multilingual Code Dataset
@@ -264,7 +333,7 @@ Prior work ([Aryabumi et al., 2024 -- "To Code or Not to Code"](https://arxiv.or

  ## Dataset Description

- This dataset provides filtered, quality-controlled Python source code in four configurations: the original English and three keyword-swapped variants (Chinese, Spanish, Urdu). The source data is drawn from [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup) (Python subset), filtered for quality using the following criteria:

  - AST-valid Python only (must parse without errors)
  - Permissive licenses only (MIT, Apache-2.0, BSD, etc.)
@@ -277,18 +346,27 @@ Keyword-swapped variants are produced using [Legesher](https://github.com/legesh

  ## Available Configs

- | Config                | Condition             | Language | Description                                                                                        |
- | --------------------- | --------------------- | -------- | ------------------------------------------------------------------------------------------------ |
- | `condition-1-en`      | Condition 1 (control) | English  | Unmodified filtered Python from The Stack Dedup                                                    |
- | `condition-2-ur`      | Condition 2           | Urdu     | Keyword-swapped Python -- 37 keywords, 72 builtins, 66 exceptions translated via Legesher v0.7.3   |
- | `condition-2-zh`      | Condition 2           | Chinese  | Keyword-swapped Python -- same transpilation method                                                |
- | `condition-2-es`      | Condition 2           | Spanish  | Keyword-swapped Python -- same transpilation method                                                |
- | `condition-3-zh-5k`   | Condition 3           | Chinese  | Blended: 3,486 native Chinese code + 1,514 transpiled Python (see Condition 3 section below)       |

  ## Schema

  ### Conditions 1--2

  | Column        | Type   | Description |
  | ------------- | ------ | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
  | `code`        | string | Python source code. For condition-2 configs, this is the transpiled (keyword-swapped) version. For condition-1, this is the original English source.  |
@@ -300,6 +378,8 @@ Keyword-swapped variants are produced using [Legesher](https://github.com/legesh

  ### Condition 3

  Condition 3 blends native Chinese code with transpiled code and adds a `source_type` column to distinguish them. `code_en` is populated for transpiled rows (keeping them in sync with conditions 1--2) but null for native code rows, which have no English equivalent.

  | Column        | Type   | Description |
@@ -312,34 +392,66 @@ Condition 3 blends native Chinese code with transpiled code and adds a `source_t
  | `token_count` | int64  | Token count computed using the CohereLabs/tiny-aya-base tokenizer                  |
  | `source_type` | string | `"native"` (natively Chinese-authored) or `"transpiled"` (keyword-swapped English) |

  ## Experimental Conditions

- The Language Decoded experiment uses a ladder of six conditions to isolate the mechanism behind code's reasoning benefit. This dataset currently provides data for conditions 1 and 2:

- | Condition       | Name                 | Purpose                                                                                    |
- | --------------- | -------------------- | ----------------------------------------------------------------------------------------- |
- | Baseline        | No fine-tuning       | Establishes the performance floor                                                          |
- | Condition 1     | English code         | Tests whether code fine-tuning helps at all (replicates Aryabumi et al.)                   |
- | Condition 2     | Keyword-swapped code | Tests whether the _language_ of keywords matters for the reasoning benefit                 |
- | Condition 3     | Mixed native sources | Tests whether diverse native-language code adds value beyond keyword swapping              |
- | Conditions 4--6 | (planned)            | Additional controls not yet included in this dataset                                       |

  ## Usage

  ```python
  from datasets import load_dataset

- # Load English code (control)
- ds = load_dataset("legesher/language-decoded-data", "condition-1-en")

- # Load a keyword-swapped variant
- ds = load_dataset("legesher/language-decoded-data", "condition-2-ur")
- ds = load_dataset("legesher/language-decoded-data", "condition-2-zh")
- ds = load_dataset("legesher/language-decoded-data", "condition-2-es")

  # Load blended native + transpiled (condition 3)
  ds = load_dataset("legesher/language-decoded-data", "condition-3-zh-5k")

  # Access splits
  train = ds["train"]
  val = ds["validation"]
@@ -360,12 +472,20 @@ native_only = train.filter(lambda x: x["source_type"] == "native")
  | File format        | Parquet (snappy compression)                                                                                       |
  | Filtering criteria | AST-valid, permissive licenses, 10--1000 lines, min 21 GitHub stars, no autogenerated files, SHA-256 deduplication |

  ## Citation

  ```bibtex
  @misc{language-decoded-2026,
  title={Language Decoded: Investigating Language-Dependent vs. Structure-Dependent Reasoning Benefits of Code},
- author={Madison Edgar and Saad Bazaz and Rafay Mustafa and Sarah Jawaid and Rashik Shahjahan and Khojasteh Mirza and Sohaib Bazaz},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/legesher/language-decoded-data}
@@ -377,6 +497,9 @@ native_only = train.filter(lambda x: x["source_type"] == "native")
  - [Legesher on GitHub](https://github.com/legesher/legesher)
  - [Tiny Aya Expedition](https://aya.for.ai)
  - [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup)

  ## License

  ---
  language:
+ - en
+ - zh
+ - es
+ - ur
  license: apache-2.0
  task_categories:
+ - text-generation
  tags:
+ - code
+ - multilingual
+ - legesher
+ - transpilation
+ - tiny-aya-expedition
+ - language-decoded
  pretty_name: Language Decoded Data
  size_categories:
+ - 100K<n<1M
  configs:
+ - config_name: condition-1-en-32k
+ data_files:
+ - split: train
+ path: data/condition-1-en-32k/train-*
+ - split: validation
+ path: data/condition-1-en-32k/validation-*
+ - config_name: condition-1-en-5k
+ data_files:
+ - split: train
+ path: data/condition-1-en-5k/train-*
+ - split: validation
+ path: data/condition-1-en-5k/validation-*
+ - config_name: condition-2-es-32k
+ data_files:
+ - split: train
+ path: data/condition-2-es-32k/train-*
+ - split: validation
+ path: data/condition-2-es-32k/validation-*
+ - config_name: condition-2-es-5k
+ data_files:
+ - split: train
+ path: data/condition-2-es-5k/train-*
+ - split: validation
+ path: data/condition-2-es-5k/validation-*
+ - config_name: condition-2-ur-32k
+ data_files:
+ - split: train
+ path: data/condition-2-ur-32k/train-*
+ - split: validation
+ path: data/condition-2-ur-32k/validation-*
+ - config_name: condition-2-ur-5k
+ data_files:
+ - split: train
+ path: data/condition-2-ur-5k/train-*
+ - split: validation
+ path: data/condition-2-ur-5k/validation-*
+ - config_name: condition-2-zh-32k
+ data_files:
+ - split: train
+ path: data/condition-2-zh-32k/train-*
+ - split: validation
+ path: data/condition-2-zh-32k/validation-*
+ - config_name: condition-2-zh-5k
+ data_files:
+ - split: train
+ path: data/condition-2-zh-5k/train-*
+ - split: validation
+ path: data/condition-2-zh-5k/validation-*
+ - config_name: condition-3-zh-5k
+ data_files:
+ - split: train
+ path: data/condition-3-zh-5k/train-*
+ - split: validation
+ path: data/condition-3-zh-5k/validation-*
+ - config_name: condition-4-zh-5k
+ data_files:
+ - split: train
+ path: data/condition-4-zh-5k/train-*
+ - split: validation
+ path: data/condition-4-zh-5k/validation-*
  dataset_info:
+ - config_name: condition-1-en-32k
+ features:
+ - name: file_path
+ dtype: string
+ - name: code
+ dtype: string
+ - name: code_en
+ dtype: string
+ - name: language
+ dtype: string
+ - name: license
+ dtype: string
+ - name: token_count
+ dtype: int32
+ splits:
+ - name: train
+ num_bytes: 403718262
+ num_examples: 31818
+ - name: validation
+ num_bytes: 42626910
+ num_examples: 3536
+ download_size: 164619518
+ dataset_size: 446345172
+ - config_name: condition-1-en-5k
+ features:
+ - name: file_path
+ dtype: string
+ - name: code
+ dtype: string
+ - name: code_en
+ dtype: string
+ - name: language
+ dtype: string
+ - name: license
+ dtype: string
+ - name: token_count
+ dtype: int32
+ splits:
+ - name: train
+ num_bytes: 55261555
+ num_examples: 4500
+ - name: validation
+ num_bytes: 6365959
+ num_examples: 500
+ download_size: 22897728
+ dataset_size: 61627514
+ - config_name: condition-2-es-32k
+ features:
+ - name: file_path
+ dtype: string
+ - name: code
+ dtype: string
+ - name: code_en
+ dtype: string
+ - name: language
+ dtype: string
+ - name: license
+ dtype: string
+ - name: token_count
+ dtype: int32
+ splits:
+ - name: train
+ num_bytes: 408041994
+ num_examples: 31818
+ - name: validation
+ num_bytes: 43090956
+ num_examples: 3536
+ download_size: 166000000
+ dataset_size: 451132950
+ - config_name: condition-2-es-5k
+ features:
+ - name: file_path
+ dtype: string
+ - name: code
+ dtype: string
+ - name: code_en
+ dtype: string
+ - name: language
+ dtype: string
+ - name: license
+ dtype: string
+ - name: token_count
+ dtype: int32
+ splits:
+ - name: train
+ num_bytes: 55864731
+ num_examples: 4500
+ - name: validation
+ num_bytes: 6432095
+ num_examples: 500
+ download_size: 23031674
+ dataset_size: 62296826
+ - config_name: condition-2-ur-32k
+ features:
+ - name: file_path
+ dtype: string
+ - name: code
+ dtype: string
+ - name: code_en
+ dtype: string
+ - name: language
+ dtype: string
+ - name: license
+ dtype: string
+ - name: token_count
+ dtype: int32
+ splits:
+ - name: train
+ num_bytes: 415552907
+ num_examples: 31818
+ - name: validation
+ num_bytes: 43879443
+ num_examples: 3536
+ download_size: 166000000
+ dataset_size: 459432350
+ - config_name: condition-2-ur-5k
+ features:
+ - name: file_path
+ dtype: string
+ - name: code
+ dtype: string
+ - name: code_en
+ dtype: string
+ - name: language
+ dtype: string
+ - name: license
+ dtype: string
+ - name: token_count
+ dtype: int32
+ splits:
+ - name: train
+ num_bytes: 56906247
+ num_examples: 4500
+ - name: validation
+ num_bytes: 6545730
+ num_examples: 500
+ download_size: 23158039
+ dataset_size: 63451977
+ - config_name: condition-2-zh-32k
+ features:
+ - name: file_path
+ dtype: string
+ - name: code
+ dtype: string
+ - name: code_en
+ dtype: string
+ - name: language
+ dtype: string
+ - name: license
+ dtype: string
+ - name: token_count
+ dtype: int32
+ splits:
+ - name: train
+ num_bytes: 405515831
+ num_examples: 31818
+ - name: validation
+ num_bytes: 45065811
+ num_examples: 3536
+ download_size: 165387142
+ dataset_size: 450581642
+ - config_name: condition-2-zh-5k
+ features:
+ - name: file_path
+ dtype: string
+ - name: code
+ dtype: string
+ - name: code_en
+ dtype: string
+ - name: language
+ dtype: string
+ - name: license
+ dtype: string
+ - name: token_count
+ dtype: int32
+ splits:
+ - name: train
+ num_bytes: 55793642
+ num_examples: 4500
+ - name: validation
+ num_bytes: 6422792
+ num_examples: 500
+ download_size: 22978834
+ dataset_size: 62216434
+ - config_name: condition-3-zh-5k
+ features:
+ - name: file_path
+ dtype: large_string
+ - name: code
+ dtype: large_string
+ - name: code_en
+ dtype: string
+ - name: language
+ dtype: large_string
+ - name: license
+ dtype: large_string
+ - name: token_count
+ dtype: int64
+ - name: source_type
+ dtype: large_string
+ splits:
+ - name: train
+ num_bytes: 40782466
+ num_examples: 4500
+ - name: validation
+ num_bytes: 4531385
+ num_examples: 500
+ download_size: 17299185
+ dataset_size: 45313851
+ - config_name: condition-4-zh-5k
+ features:
+ - name: filename
+ dtype: string
+ - name: content
+ dtype: string
+ - name: extension
+ dtype: string
+ - name: source
+ dtype: string
+ - name: quality_tier
+ dtype: string
+ - name: sha256
+ dtype: string
+ - name: byte_size
+ dtype: int64
+ - name: total_lines
+ dtype: int64
+ - name: cjk_ratio
+ dtype: float64
+ - name: has_cjk
+ dtype: bool
+ splits:
+ - name: train
+ num_bytes: 44246508
+ num_examples: 6553
+ - name: validation
+ num_bytes: 7522476
+ num_examples: 729
+ download_size: 18300000
+ dataset_size: 51768984
  ---

  # Language Decoded | Multilingual Code Dataset

  ## Dataset Description

+ This dataset provides filtered, quality-controlled Python source code in multiple configurations: the original English, three keyword-swapped variants (Chinese, Spanish, Urdu), a blended native+transpiled mix, and strictly native Chinese code. The source data is drawn from [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup) (Python subset), filtered for quality using the following criteria:

  - AST-valid Python only (must parse without errors)
  - Permissive licenses only (MIT, Apache-2.0, BSD, etc.)

  ## Available Configs

+ Conditions 1 and 2 are available in two sizes: `-32k` (full filtered corpus, ~31.8k train + ~3.5k validation) and `-5k` (stratified subset, 4.5k train + 500 validation); conditions 3 and 4 ship only as `-5k`. The `-5k` subsets are used for QLoRA fine-tuning on consumer GPUs.
+
+ | Config               | Condition   | Language | Description                                                  | Train  | Val   |
+ | -------------------- | ----------- | -------- | ------------------------------------------------------------ | ------ | ----- |
+ | `condition-1-en-32k` | 1 (control) | English  | Unmodified filtered Python from The Stack Dedup              | 31,818 | 3,536 |
+ | `condition-1-en-5k`  | 1 (control) | English  | Stratified 5k subset of condition-1                          | 4,500  | 500   |
+ | `condition-2-zh-32k` | 2           | Chinese  | Keyword-swapped Python via Legesher v0.7.3                   | 31,818 | 3,536 |
+ | `condition-2-zh-5k`  | 2           | Chinese  | Stratified 5k subset of condition-2-zh                       | 4,500  | 500   |
+ | `condition-2-es-32k` | 2           | Spanish  | Keyword-swapped Python via Legesher v0.7.3                   | 31,818 | 3,536 |
+ | `condition-2-es-5k`  | 2           | Spanish  | Stratified 5k subset of condition-2-es                       | 4,500  | 500   |
+ | `condition-2-ur-32k` | 2           | Urdu     | Keyword-swapped Python via Legesher v0.7.3                   | 31,818 | 3,536 |
+ | `condition-2-ur-5k`  | 2           | Urdu     | Stratified 5k subset of condition-2-ur                       | 4,500  | 500   |
+ | `condition-3-zh-5k`  | 3           | Chinese  | Blended: 3,486 native Chinese code + 1,514 transpiled Python | 4,500  | 500   |
+ | `condition-4-zh-5k`  | 4           | Chinese  | Strictly native Chinese code (no transpiled code)            | 6,553  | 729   |

  ## Schema

  ### Conditions 1--2

+ Used by: `condition-1-en-*`, `condition-2-zh-*`, `condition-2-es-*`, `condition-2-ur-*`
+
  | Column        | Type   | Description |
  | ------------- | ------ | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
  | `code`        | string | Python source code. For condition-2 configs, this is the transpiled (keyword-swapped) version. For condition-1, this is the original English source.  |
 
  ### Condition 3

+ Used by: `condition-3-zh-5k`
+
  Condition 3 blends native Chinese code with transpiled code and adds a `source_type` column to distinguish them. `code_en` is populated for transpiled rows (keeping them in sync with conditions 1--2) but null for native code rows, which have no English equivalent.

  | Column        | Type   | Description |
  | `token_count` | int64  | Token count computed using the CohereLabs/tiny-aya-base tokenizer                  |
  | `source_type` | string | `"native"` (natively Chinese-authored) or `"transpiled"` (keyword-swapped English) |

+ ### Condition 4
+
+ Used by: `condition-4-zh-5k`
+
+ Condition 4 contains strictly native Chinese code -- code written by developers who think and code in Chinese. This uses the same schema as the [language-decoded-community](https://huggingface.co/datasets/legesher/language-decoded-community) dataset rather than the transpilation schema, since there is no English original to reference.
+
+ | Column         | Type    | Description                                                    |
+ | -------------- | ------- | -------------------------------------------------------------- |
+ | `filename`     | string  | Original filename                                              |
+ | `content`      | string  | The code content                                               |
+ | `extension`    | string  | File extension (e.g., `.py`, `.c`, `.wenyan`)                  |
+ | `source`       | string  | Data source (e.g., `thestack`, `wenyan`, `program_in_chinese`) |
+ | `quality_tier` | string  | Quality rating: `A` (highest) through `D` (lowest)             |
+ | `sha256`       | string  | SHA-256 hash for deduplication                                 |
+ | `byte_size`    | int64   | File size in bytes                                             |
+ | `total_lines`  | int64   | Total line count                                               |
+ | `cjk_ratio`    | float64 | Ratio of CJK characters in the file                            |
+ | `has_cjk`      | bool    | Whether the file contains CJK characters                       |
+
  ## Experimental Conditions
415
 
416
+ The Language Decoded experiment uses a ladder of conditions to isolate the mechanism behind code's reasoning benefit:
417
 
418
+ | Condition | Name | Purpose |
419
+ | ----------- | -------------------- | ----------------------------------------------------------------------------------------- |
420
+ | Baseline | No fine-tuning | Establishes the performance floor |
421
+ | Condition 1 | English code | Tests whether code fine-tuning helps at all (replicates Aryabumi et al.) |
422
+ | Condition 2 | Keyword-swapped code | Tests whether the _language_ of keywords matters for the reasoning benefit |
423
+ | Condition 3 | Mixed native sources | Tests whether diverse native-language code adds value beyond keyword swapping |
424
+ | Condition 4 | Strictly native code | Tests whether code authored by native speakers carries unique signal beyond transpilation |
425
+
426
+ ### The Experimental Ladder
427
+
428
+ - **Baseline --> 1**: Does code help at all?
429
+ - **1 --> 2**: Does the language of keywords matter?
430
+ - **2 --> 3**: Does diversity of native-language sources add value beyond keyword swap?
431
+ - **3 --> 4**: Does code written in the cultural context of a language carry something that transpiled+mixed can't?
432
 
  ## Usage

  ```python
  from datasets import load_dataset

+ # Load full-size English code (control)
+ ds = load_dataset("legesher/language-decoded-data", "condition-1-en-32k")
+
+ # Load 5k subset (for QLoRA fine-tuning)
+ ds = load_dataset("legesher/language-decoded-data", "condition-1-en-5k")

+ # Load keyword-swapped variants
+ ds = load_dataset("legesher/language-decoded-data", "condition-2-zh-5k")
+ ds = load_dataset("legesher/language-decoded-data", "condition-2-es-5k")
+ ds = load_dataset("legesher/language-decoded-data", "condition-2-ur-5k")

  # Load blended native + transpiled (condition 3)
  ds = load_dataset("legesher/language-decoded-data", "condition-3-zh-5k")

+ # Load strictly native code (condition 4)
+ ds = load_dataset("legesher/language-decoded-data", "condition-4-zh-5k")
+
  # Access splits
  train = ds["train"]
  val = ds["validation"]
  | File format        | Parquet (snappy compression)                                                                                       |
  | Filtering criteria | AST-valid, permissive licenses, 10--1000 lines, min 21 GitHub stars, no autogenerated files, SHA-256 deduplication |

+ ## Limitations
+
+ - **Source bias**: The Stack Dedup skews toward popular, well-starred GitHub repositories, which may not represent the full diversity of Python code in the wild.
+ - **Keyword-only transpilation**: Legesher translates Python reserved words (keywords, builtins, exceptions) but leaves comments, docstrings, string literals, and variable/function names in their original language (typically English). This means condition-2 code is a hybrid of translated keywords and English identifiers.
+ - **Token count variation**: Transpiled code may have different token counts than the English original due to multi-byte characters (especially for Chinese and Urdu), even though the code structure is identical.
+ - **Single programming language**: Currently limited to Python. Results may not generalize to other programming languages.
+ - **Condition 4 scope**: Native Chinese code is limited to publicly available sources (The Stack, Wenyan, Program-in-Chinese, Qi, Mulan) and may not represent the full spectrum of Chinese-language programming.
+
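The AST-validity criterion from the filtering table can be reproduced with the standard library alone; a minimal sketch (the project's actual filter may apply additional checks):

```python
import ast

def is_valid_python(source: str) -> bool:
    """Return True if the source parses to a valid Python AST (no execution)."""
    try:
        ast.parse(source)
    except SyntaxError:
        return False
    return True

# A well-formed snippet passes; a truncated one is rejected.
ok = is_valid_python("def f(n):\n    return n + 1\n")
bad = is_valid_python("def f(n:\n")
```

Because `ast.parse` only compiles to an AST, this check never runs untrusted code.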
  ## Citation

  ```bibtex
  @misc{language-decoded-2026,
  title={Language Decoded: Investigating Language-Dependent vs. Structure-Dependent Reasoning Benefits of Code},
+ author={Madison Edgar and Saad Ahmed Bazaz and Tom Sherborne and Rashik Shahjahan and Khojasteh Mirza and Sarah Jawaid and Rafay Mustafa and Sohaib Ahmed Bazaz},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/legesher/language-decoded-data}
 
  - [Legesher on GitHub](https://github.com/legesher/legesher)
  - [Tiny Aya Expedition](https://aya.for.ai)
  - [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup)
+ - [Language Decoded Community (native code)](https://huggingface.co/datasets/legesher/language-decoded-community)
+ - [Language Decoded Experiments (tracking)](https://huggingface.co/datasets/legesher/language-decoded-experiments)
+ - [Language Decoded LoRA (model hub)](https://huggingface.co/legesher/language-decoded-lora)

  ## License