Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code: DatasetGenerationError
Exception: IndexError
Message: list index out of range
Traceback:
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1901, in _prepare_split_single
    original_shard_lengths[original_shard_id] += len(table)
    ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
IndexError: list index out of range

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
    builder.download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1739, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1922, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
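The failing frame increments a per-shard length list by shard id. A minimal sketch of that pattern (a hypothetical helper, not the actual `datasets` internals) shows how a bookkeeping list shorter than the highest shard id reproduces this exact IndexError:

```python
# Hypothetical sketch of the shard-length bookkeeping that fails above.
# If a table arrives with a shard id beyond the length of the bookkeeping
# list, the in-place "+=" raises IndexError: list index out of range.
def accumulate_shard_lengths(shard_lengths, tables):
    # tables: list of (shard_id, num_rows) pairs
    for shard_id, num_rows in tables:
        shard_lengths[shard_id] += num_rows  # IndexError if shard_id >= len(shard_lengths)
    return shard_lengths

lengths = [0, 0]  # bookkeeping sized for only two shards
try:
    accumulate_shard_lengths(lengths, [(0, 10), (1, 5), (2, 7)])  # shard id 2 is out of range
except IndexError as e:
    print(e)  # prints: list index out of range
```

In the real builder this typically means the number of produced shards disagrees with the shard count the builder recorded up front, so the fix is on the dataset-definition side rather than in this loop.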
text (string)
---
GpuFreq=control_disabled
GpuFreq=control_disabled
[2026-03-27 13:49:23,609] torch.distributed.run: [WARNING] master_addr is only used for static rdzv_backend and when rdzv_endpoint is not specified.
[2026-03-27 13:49:23,618] torch.distributed.run: [WARNING] master_addr is only used for static rdzv_backend and when rdzv_endpoint is not specified.
Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards:  33%|███▎      | 1/3 [00:01<00:03, 1.68s/it]
Loading checkpoint shards:  33%|███▎      | 1/3 [00:01<00:03, 1.62s/it]
Loading checkpoint shards:  33%|███▎      | 1/3 [00:01<00:03, 1.63s/it]
Loading checkpoint shards:  33%|███▎      | 1/3 [00:08<00:17, 8.87s/it]
Loading checkpoint shards:  67%|██████▋   | 2/3 [00:09<00:05, 5.35s/it]
Loading checkpoint shards:  67%|██████▋   | 2/3 [00:09<00:05, 5.32s/it]
Loading checkpoint shards:  67%|██████▋   | 2/3 [00:09<00:05, 5.33s/it]
Loading checkpoint shards:  67%|██████▋   | 2/3 [00:16<00:07, 7.92s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:21<00:00, 8.36s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:21<00:00, 7.18s/it]
Some weights of the model checkpoint at ./checkpoints/llava-v1.5-7b_01_7-11_14_16t2v_pt_ft were not used when initializing LlavaLlamaForCausalLM: ['model.vision_tower.vision_tower.vision_model.embeddings.class_embedding', 'model.vision_tower.vision_tower.vision_model.embeddings.patch_embedding.weight', 'model.vision_to...
- This IS expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Loading checkpoint shards: 100%|██████████| 3/3 [00:21<00:00, 8.33s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:21<00:00, 7.15s/it]
Some weights of the model checkpoint at ./checkpoints/llava-v1.5-7b_01_7-11_14_16t2v_pt_ft were not used when initializing LlavaLlamaForCausalLM: ['model.vision_tower.vision_tower.vision_model.embeddings.class_embedding', 'model.vision_tower.vision_tower.vision_model.embeddings.patch_embedding.weight', 'model.vision_to...
- This IS expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Loading checkpoint shards: 100%|██████████| 3/3 [00:21<00:00, 8.34s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:21<00:00, 7.15s/it]
Some weights of the model checkpoint at ./checkpoints/llava-v1.5-7b_01_7-11_14_16t2v_pt_ft were not used when initializing LlavaLlamaForCausalLM: ['model.vision_tower.vision_tower.vision_model.embeddings.class_embedding', 'model.vision_tower.vision_tower.vision_model.embeddings.patch_embedding.weight', 'model.vision_to...
- This IS expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards:  33%|███▎      | 1/3 [00:01<00:03, 1.60s/it]
Loading checkpoint shards:  33%|███▎      | 1/3 [00:01<00:03, 1.60s/it]
Loading checkpoint shards:  33%|███▎      | 1/3 [00:01<00:03, 1.61s/it]
Loading checkpoint shards:  33%|███▎      | 1/3 [00:01<00:03, 1.61s/it]
Loading checkpoint shards:  67%|██████▋   | 2/3 [00:09<00:05, 5.32s/it]
Loading checkpoint shards:  67%|██████▋   | 2/3 [00:09<00:05, 5.33s/it]
Loading checkpoint shards:  67%|██████▋   | 2/3 [00:09<00:05, 5.32s/it]
Loading checkpoint shards:  67%|██████▋   | 2/3 [00:09<00:05, 5.34s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:21<00:00, 8.34s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:21<00:00, 8.34s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:21<00:00, 7.15s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:21<00:00, 7.15s/it]
Some weights of the model checkpoint at ./checkpoints/llava-v1.5-7b_01_7-11_14_16t2v_pt_ft were not used when initializing LlavaLlamaForCausalLM: ['model.vision_tower.vision_tower.vision_model.embeddings.class_embedding', 'model.vision_tower.vision_tower.vision_model.embeddings.patch_embedding.weight', 'model.vision_to...
- This IS expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of the model checkpoint at ./checkpoints/llava-v1.5-7b_01_7-11_14_16t2v_pt_ft were not used when initializing LlavaLlamaForCausalLM: ['model.vision_tower.vision_tower.vision_model.embeddings.class_embedding', 'model.vision_tower.vision_tower.vision_model.embeddings.patch_embedding.weight', 'model.vision_to...
- This IS expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LlavaLlamaForCausalLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Loading checkpoint shards: 100%|██████████| 3/3 [00:21<00:00, 8.34s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:21<00:00, 7.15s/it]
End of preview.
No dataset card yet.
Downloads last month: 203