Dataset Preview

The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    CastError
Message:      Couldn't cast
input_dir: string
selection_method: string
seed: int64
total_tokens: int64
total_documents: int64
train_tokens_target: int64
test_tokens_target: int64
required_tokens: int64
discarded_tokens_actual: int64
train: struct<output_prefix: string, manifest_path: string, tokens_actual: int64, documents_written: int64, (... 68 chars omitted)
  child 0, output_prefix: string
  child 1, manifest_path: string
  child 2, tokens_actual: int64
  child 3, documents_written: int64
  child 4, unique_source_documents: int64
  child 5, touched_shards: list<item: string>
      child 0, item: string
test: struct<output_prefix: string, manifest_path: string, tokens_actual: int64, documents_written: int64, (... 68 chars omitted)
  child 0, output_prefix: string
  child 1, manifest_path: string
  child 2, tokens_actual: int64
  child 3, documents_written: int64
  child 4, unique_source_documents: int64
  child 5, touched_shards: list<item: string>
      child 0, item: string
selection_summary: struct<seed: int64, selection_unit: string, permutation_documents_examined: int64, test_documents_se (... 336 chars omitted)
  child 0, seed: int64
  child 1, selection_unit: string
  child 2, permutation_documents_examined: int64
  child 3, test_documents_selected: int64
  child 4, train_documents_selected: int64
  child 5, test_tokens_actual: int64
  child 6, train_tokens_actual: int64
  child 7, test_tokens_overshoot: int64
  child 8, train_tokens_overshoot: int64
  child 9, train_tokens_shortfall: int64
  child 10, document_overlap_count: int64
  child 11, remaining_tokens_after_test: int64
  child 12, train_fill_target: int64
  child 13, consumed_tokens: int64
  child 14, discarded_tokens: int64
source_shards: list<item: struct<name: string, documents: int64, tokens: int64>>
  child 0, item: struct<name: string, documents: int64, tokens: int64>
      child 0, name: string
      child 1, documents: int64
      child 2, tokens: int64
source_global_doc_idx: int64
source_local_doc_idx: int64
source_shard_name: string
piece_tokens: int64
output_doc_idx: int64
token_end: int64
token_start: int64
split: string
source_doc_tokens: int64
to
{'split': Value('string'), 'output_doc_idx': Value('int64'), 'source_global_doc_idx': Value('int64'), 'source_shard_name': Value('string'), 'source_local_doc_idx': Value('int64'), 'source_doc_tokens': Value('int64'), 'token_start': Value('int64'), 'token_end': Value('int64'), 'piece_tokens': Value('int64')}
because column names don't match
Traceback:    Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
                  for key, table in generator:
                                    ^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 609, in wrapped
                  for item in generator(*args, **kwargs):
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/packaged_modules/json/json.py", line 260, in _generate_tables
                  self._cast_table(pa_table, json_field_paths=json_field_paths),
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/packaged_modules/json/json.py", line 120, in _cast_table
                  pa_table = table_cast(pa_table, self.info.features.arrow_schema)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
                  return cast_table_to_schema(table, schema)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast (same schema mismatch as listed above)
              because column names don't match
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1342, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 907, in stream_convert_to_parquet
                  builder._prepare_split(split_generator=splits_generators[split], file_format="parquet")
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1739, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1922, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
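The root cause is that the repo's JSON files do not all share one record shape: a run-summary/manifest record (`input_dir`, `selection_method`, `seed`, ...) is globbed together with per-document records (`split`, `output_doc_idx`, ...), and the loader cannot cast both to a single Arrow schema. A minimal sketch of detecting such a mix before pointing the viewer (or `load_dataset`) at the files — the helper name and sample records are hypothetical, not from this repo:

```python
import json

def jsonl_key_sets(lines):
    """Return the distinct key sets found across JSONL records.

    The datasets JSON builder infers one schema for all files it reads
    together; if two records expose different key sets, casting raises
    a CastError like the one above ("column names don't match").
    """
    return {frozenset(json.loads(line)) for line in lines if line.strip()}

# Hypothetical mix: one run-summary record plus one per-document record.
mixed = [
    '{"input_dir": "/data", "selection_method": "random", "seed": 42}',
    '{"split": "train", "output_doc_idx": 0, "piece_tokens": 821}',
]
schemas = jsonl_key_sets(mixed)
assert len(schemas) == 2  # two incompatible record shapes -> cast fails
```

A fix on the repo side would be to keep the summary manifest out of the files the viewer globs (or declare explicit `data_files` in the card's YAML config).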


| split | output_doc_idx | source_global_doc_idx | source_shard_name | source_local_doc_idx | source_doc_tokens | token_start | token_end | piece_tokens |
|---|---|---|---|---|---|---|---|---|
| train | 0 | 215,596 | shard_00010_text_document | 15,596 | 821 | 0 | 821 | 821 |
| train | 1 | 804,945 | shard_00040_text_document | 4,945 | 1,885 | 0 | 1,885 | 1,885 |
| train | 2 | 1,119,650 | shard_00055_text_document | 19,650 | 1,334 | 0 | 1,334 | 1,334 |
| train | 3 | 775,485 | shard_00038_text_document | 15,485 | 409 | 0 | 409 | 409 |
| train | 4 | 1,525,906 | shard_00076_text_document | 5,906 | 3,762 | 0 | 3,762 | 3,762 |
| train | 5 | 1,004,449 | shard_00050_text_document | 4,449 | 414 | 0 | 414 | 414 |
| train | 6 | 1,660,492 | shard_00083_text_document | 492 | 576 | 0 | 576 | 576 |
| train | 7 | 1,661,965 | shard_00083_text_document | 1,965 | 1,215 | 0 | 1,215 | 1,215 |
| train | 8 | 1,839,940 | shard_00091_text_document | 19,940 | 1,275 | 0 | 1,275 | 1,275 |
| train | 9 | 1,901,394 | shard_00095_text_document | 1,394 | 365 | 0 | 365 | 365 |
| train | 10 | 2,235,576 | shard_00111_text_document | 15,576 | 648 | 0 | 648 | 648 |
| train | 11 | 429,671 | shard_00021_text_document | 9,671 | 11,987 | 0 | 11,987 | 11,987 |
| train | 12 | 1,452,960 | shard_00072_text_document | 12,960 | 1,883 | 0 | 1,883 | 1,883 |
| train | 13 | 828,934 | shard_00041_text_document | 8,934 | 813 | 0 | 813 | 813 |
| train | 14 | 96,426 | shard_00004_text_document | 16,426 | 271 | 0 | 271 | 271 |
| train | 15 | 1,995,831 | shard_00099_text_document | 15,831 | 863 | 0 | 863 | 863 |
| train | 16 | 1,438,113 | shard_00071_text_document | 18,113 | 511 | 0 | 511 | 511 |
| train | 17 | 2,040,238 | shard_00102_text_document | 238 | 1,712 | 0 | 1,712 | 1,712 |
| train | 18 | 2,532,191 | shard_00126_text_document | 12,191 | 417 | 0 | 417 | 417 |
| train | 19 | 809,567 | shard_00040_text_document | 9,567 | 1,177 | 0 | 1,177 | 1,177 |
| train | 20 | 2,347,941 | shard_00117_text_document | 7,941 | 6,602 | 0 | 6,602 | 6,602 |
| train | 21 | 1,635,059 | shard_00081_text_document | 15,059 | 487 | 0 | 487 | 487 |
| train | 22 | 2,321,827 | shard_00116_text_document | 1,827 | 2,804 | 0 | 2,804 | 2,804 |
| train | 23 | 2,234,825 | shard_00111_text_document | 14,825 | 2,671 | 0 | 2,671 | 2,671 |
| train | 24 | 1,354,625 | shard_00067_text_document | 14,625 | 7,253 | 0 | 7,253 | 7,253 |
| train | 25 | 374,554 | shard_00018_text_document | 14,554 | 2,862 | 0 | 2,862 | 2,862 |
| train | 26 | 2,094,481 | shard_00104_text_document | 14,481 | 1,026 | 0 | 1,026 | 1,026 |
| train | 27 | 802,271 | shard_00040_text_document | 2,271 | 3,232 | 0 | 3,232 | 3,232 |
| train | 28 | 1,629,578 | shard_00081_text_document | 9,578 | 416 | 0 | 416 | 416 |
| train | 29 | 46,587 | shard_00002_text_document | 6,587 | 604 | 0 | 604 | 604 |
| train | 30 | 2,531,738 | shard_00126_text_document | 11,738 | 1,791 | 0 | 1,791 | 1,791 |
| train | 31 | 661,533 | shard_00033_text_document | 1,533 | 1,414 | 0 | 1,414 | 1,414 |
| train | 32 | 2,344,380 | shard_00117_text_document | 4,380 | 289 | 0 | 289 | 289 |
| train | 33 | 1,640,819 | shard_00082_text_document | 819 | 2,270 | 0 | 2,270 | 2,270 |
| train | 34 | 1,127,636 | shard_00056_text_document | 7,636 | 587 | 0 | 587 | 587 |
| train | 35 | 1,685,957 | shard_00084_text_document | 5,957 | 1,305 | 0 | 1,305 | 1,305 |
| train | 36 | 706,663 | shard_00035_text_document | 6,663 | 1,118 | 0 | 1,118 | 1,118 |
| train | 37 | 1,579,174 | shard_00078_text_document | 19,174 | 35,743 | 0 | 35,743 | 35,743 |
| train | 38 | 2,626,478 | shard_00131_text_document | 6,478 | 1,731 | 0 | 1,731 | 1,731 |
| train | 39 | 1,548,068 | shard_00077_text_document | 8,068 | 471 | 0 | 471 | 471 |
| train | 40 | 982,083 | shard_00049_text_document | 2,083 | 3,465 | 0 | 3,465 | 3,465 |
| train | 41 | 140,288 | shard_00007_text_document | 288 | 1,490 | 0 | 1,490 | 1,490 |
| train | 42 | 644,230 | shard_00032_text_document | 4,230 | 4,270 | 0 | 4,270 | 4,270 |
| train | 43 | 1,431,239 | shard_00071_text_document | 11,239 | 9,010 | 0 | 9,010 | 9,010 |
| train | 44 | 600,383 | shard_00030_text_document | 383 | 8,569 | 0 | 8,569 | 8,569 |
| train | 45 | 878,066 | shard_00043_text_document | 18,066 | 601 | 0 | 601 | 601 |
| train | 46 | 1,094,043 | shard_00054_text_document | 14,043 | 723 | 0 | 723 | 723 |
| train | 47 | 1,218,083 | shard_00060_text_document | 18,083 | 1,468 | 0 | 1,468 | 1,468 |
| train | 48 | 1,931,911 | shard_00096_text_document | 11,911 | 2,407 | 0 | 2,407 | 2,407 |
| train | 49 | 1,137,869 | shard_00056_text_document | 17,869 | 648 | 0 | 648 | 648 |
| train | 50 | 422,009 | shard_00021_text_document | 2,009 | 638 | 0 | 638 | 638 |
| train | 51 | 333,009 | shard_00016_text_document | 13,009 | 2,841 | 0 | 2,841 | 2,841 |
| train | 52 | 1,556,511 | shard_00077_text_document | 16,511 | 358 | 0 | 358 | 358 |
| train | 53 | 797,417 | shard_00039_text_document | 17,417 | 651 | 0 | 651 | 651 |
| train | 54 | 9,450 | shard_00000_text_document | 9,450 | 553 | 0 | 553 | 553 |
| train | 55 | 1,954,596 | shard_00097_text_document | 14,596 | 12,914 | 0 | 12,914 | 12,914 |
| train | 56 | 1,082,894 | shard_00054_text_document | 2,894 | 481 | 0 | 481 | 481 |
| train | 57 | 177,029 | shard_00008_text_document | 17,029 | 3,494 | 0 | 3,494 | 3,494 |
| train | 58 | 708,824 | shard_00035_text_document | 8,824 | 340 | 0 | 340 | 340 |
| train | 59 | 1,101,734 | shard_00055_text_document | 1,734 | 452 | 0 | 452 | 452 |
| train | 60 | 1,635,714 | shard_00081_text_document | 15,714 | 1,568 | 0 | 1,568 | 1,568 |
| train | 61 | 454,776 | shard_00022_text_document | 14,776 | 3,588 | 0 | 3,588 | 3,588 |
| train | 62 | 1,803,916 | shard_00090_text_document | 3,916 | 2,431 | 0 | 2,431 | 2,431 |
| train | 63 | 2,272,817 | shard_00113_text_document | 12,817 | 2,941 | 0 | 2,941 | 2,941 |
| train | 64 | 1,253,554 | shard_00062_text_document | 13,554 | 1,593 | 0 | 1,593 | 1,593 |
| train | 65 | 2,368,158 | shard_00118_text_document | 8,158 | 4,483 | 0 | 4,483 | 4,483 |
| train | 66 | 1,076,873 | shard_00053_text_document | 16,873 | 830 | 0 | 830 | 830 |
| train | 67 | 280,819 | shard_00014_text_document | 819 | 742 | 0 | 742 | 742 |
| train | 68 | 475,463 | shard_00023_text_document | 15,463 | 630 | 0 | 630 | 630 |
| train | 69 | 1,613,863 | shard_00080_text_document | 13,863 | 995 | 0 | 995 | 995 |
| train | 70 | 1,951,918 | shard_00097_text_document | 11,918 | 1,493 | 0 | 1,493 | 1,493 |
| train | 71 | 2,373,402 | shard_00118_text_document | 13,402 | 2,227 | 0 | 2,227 | 2,227 |
| train | 72 | 414,369 | shard_00020_text_document | 14,369 | 1,000 | 0 | 1,000 | 1,000 |
| train | 73 | 970,441 | shard_00048_text_document | 10,441 | 321 | 0 | 321 | 321 |
| train | 74 | 406,062 | shard_00020_text_document | 6,062 | 1,112 | 0 | 1,112 | 1,112 |
| train | 75 | 1,670,125 | shard_00083_text_document | 10,125 | 28,681 | 0 | 28,681 | 28,681 |
| train | 76 | 1,956,336 | shard_00097_text_document | 16,336 | 2,808 | 0 | 2,808 | 2,808 |
| train | 77 | 1,711,881 | shard_00085_text_document | 11,881 | 3,240 | 0 | 3,240 | 3,240 |
| train | 78 | 187,144 | shard_00009_text_document | 7,144 | 865 | 0 | 865 | 865 |
| train | 79 | 1,120,636 | shard_00056_text_document | 636 | 2,322 | 0 | 2,322 | 2,322 |
| train | 80 | 1,106,043 | shard_00055_text_document | 6,043 | 2,864 | 0 | 2,864 | 2,864 |
| train | 81 | 2,402,031 | shard_00120_text_document | 2,031 | 1,746 | 0 | 1,746 | 1,746 |
| train | 82 | 2,460,918 | shard_00123_text_document | 918 | 629 | 0 | 629 | 629 |
| train | 83 | 55,384 | shard_00002_text_document | 15,384 | 373 | 0 | 373 | 373 |
| train | 84 | 1,131,372 | shard_00056_text_document | 11,372 | 7,269 | 0 | 7,269 | 7,269 |
| train | 85 | 1,677,464 | shard_00083_text_document | 17,464 | 929 | 0 | 929 | 929 |
| train | 86 | 1,970,722 | shard_00098_text_document | 10,722 | 3,330 | 0 | 3,330 | 3,330 |
| train | 87 | 858,204 | shard_00042_text_document | 18,204 | 276 | 0 | 276 | 276 |
| train | 88 | 453,907 | shard_00022_text_document | 13,907 | 1,193 | 0 | 1,193 | 1,193 |
| train | 89 | 2,423,259 | shard_00121_text_document | 3,259 | 550 | 0 | 550 | 550 |
| train | 90 | 1,190,597 | shard_00059_text_document | 10,597 | 1,492 | 0 | 1,492 | 1,492 |
| train | 91 | 431,032 | shard_00021_text_document | 11,032 | 11,076 | 0 | 11,076 | 11,076 |
| train | 92 | 546,396 | shard_00027_text_document | 6,396 | 6,870 | 0 | 6,870 | 6,870 |
| train | 93 | 417,478 | shard_00020_text_document | 17,478 | 6,225 | 0 | 6,225 | 6,225 |
| train | 94 | 1,495,939 | shard_00074_text_document | 15,939 | 15,389 | 0 | 15,389 | 15,389 |
| train | 95 | 718,110 | shard_00035_text_document | 18,110 | 1,673 | 0 | 1,673 | 1,673 |
| train | 96 | 920,260 | shard_00046_text_document | 260 | 1,221 | 0 | 1,221 | 1,221 |
| train | 97 | 2,633,131 | shard_00131_text_document | 13,131 | 746 | 0 | 746 | 746 |
| train | 98 | 296,465 | shard_00014_text_document | 16,465 | 582 | 0 | 582 | 582 |
| train | 99 | 1,153,271 | shard_00057_text_document | 13,271 | 499 | 0 | 499 | 499 |

End of preview.
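Every preview row has `token_start` = 0 and `token_end` = `piece_tokens` = `source_doc_tokens`, consistent with documents being assigned whole rather than split into pieces. A hedged sketch of checking that invariant over per-document manifest records (the sample records below are hypothetical, with field names taken from the preview schema):

```python
import json

# Hypothetical per-document manifest lines matching the preview columns.
records = [
    '{"split": "train", "output_doc_idx": 0, "source_doc_tokens": 821, '
    '"token_start": 0, "token_end": 821, "piece_tokens": 821}',
    '{"split": "train", "output_doc_idx": 1, "source_doc_tokens": 1885, '
    '"token_start": 0, "token_end": 1885, "piece_tokens": 1885}',
]

for line in records:
    r = json.loads(line)
    # Whole-document assignment: each piece spans the full source document.
    assert r["token_start"] == 0
    assert r["token_end"] == r["source_doc_tokens"]
    assert r["piece_tokens"] == r["token_end"] - r["token_start"]
```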

YAML Metadata Warning: empty or missing YAML metadata in the repo card.


YeMoKoo/flamedata2

Random document-sampled Megatron indexed-dataset splits for FLAME-MoE.

Repository layout:

  • wiki/train
  • wiki/test
  • code/train
  • code/test

Notes:

  • Seed is fixed to 42.
  • Test documents are sampled first.
  • Train documents are sampled from the remaining documents only.
  • Documents are never split across train/test.
  • These are Megatron indexed-dataset artifacts (.bin / .idx), not parquet exports.
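The selection procedure described in the notes can be sketched as a seeded, document-level split: shuffle document indices with a fixed seed, fill the test split first up to its token target, then fill train from whatever remains. The function name, targets, and token counts below are illustrative, not this repo's actual pipeline code:

```python
import random

def split_documents(doc_tokens, test_tokens_target, train_tokens_target, seed=42):
    """Shuffle document indices with a fixed seed, fill test first,
    then fill train from the remaining documents only. Documents are
    assigned whole, never split across train/test."""
    rng = random.Random(seed)
    order = list(range(len(doc_tokens)))
    rng.shuffle(order)

    test, train = [], []
    test_total = train_total = 0
    for idx in order:
        if test_total < test_tokens_target:
            test.append(idx)
            test_total += doc_tokens[idx]
        elif train_total < train_tokens_target:
            train.append(idx)
            train_total += doc_tokens[idx]
    return test, train

# Illustrative per-document token counts and targets.
docs = [821, 1885, 1334, 409, 3762, 414, 576]
test_ids, train_ids = split_documents(docs, test_tokens_target=2000,
                                      train_tokens_target=4000)
assert not set(test_ids) & set(train_ids)  # no document overlap
```

Because whole documents are taken until the target is reached, each split may overshoot its token target by up to one document (matching the `*_tokens_overshoot` fields in the summary schema above).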