# LMD Simple-Pack Shards: Download and Unpack

This document explains how to download the simple-packed LMD training shards from Hugging Face and restore them into the raw grouped-sample directory tree expected by the local LMD loader.
The unpack step uses `lmd_training_shards_unpack.py`. It can be run either:

- from this repository: `pretrain/data/lmd_training_shards_unpack.py`
- from the downloaded Hugging Face dataset: `scripts/lmd_training_shards_unpack.py`
## What Is In The Hugging Face Dataset

The uploaded dataset is expected to follow this layout:

```
<download_root>/
  train/
    piano/piano_train_shard_00000.tar
    bass/bass_train_shard_00000.tar
    ...
  val/
    ...
  eval/
    ...
  manifests/
    train/piano.jsonl
    train/bass.jsonl
    val/...
    eval/...
  scripts/
    lmd_training_shards_unpack.py
  repack_summary.json
  SHA256SUMS            # optional
```
In simple tar mode, each tar shard stores the original `event_segment/*.pt` and `pianoroll_segment/*.pt.gz` files under their source-root-relative paths.

The matching `manifests/<split>/<instrument>.jsonl` files are required for unpacking. Do not download only the tar files without the `manifests/` directory.
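Before unpacking, it can be worth confirming that every downloaded shard directory has its manifest. Below is a minimal stdlib sketch; the helper name `check_manifests` is an assumption of mine (not part of the official tooling), based on the layout shown above:

```python
from pathlib import Path

def check_manifests(download_root):
    """For every <split>/<instrument>/ directory that contains tar shards,
    verify that manifests/<split>/<instrument>.jsonl was downloaded too.
    Returns the list of missing manifest paths (empty means all present).
    Note: split names ("train", "val", "eval") follow the documented layout."""
    root = Path(download_root)
    missing = []
    for split_dir in root.iterdir():
        if not split_dir.is_dir() or split_dir.name not in ("train", "val", "eval"):
            continue
        for instrument_dir in split_dir.iterdir():
            # only directories that actually hold tar shards need a manifest
            if not instrument_dir.is_dir() or not any(instrument_dir.glob("*.tar")):
                continue
            manifest = root / "manifests" / split_dir.name / f"{instrument_dir.name}.jsonl"
            if not manifest.exists():
                missing.append(manifest)
    return missing
```

An empty return value means every shard directory has a matching manifest; anything else lists what still needs to be downloaded.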
## 1. Download From Hugging Face

Replace `YOUR_HF_DATASET_REPO` with your dataset repo id on Hugging Face.

### Option A: Python (`snapshot_download`)

```bash
pip install huggingface_hub
```

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="YOUR_HF_DATASET_REPO",
    repo_type="dataset",
    local_dir="/path/to/lmd_simple_pack",
)
```

### Option B: CLI (`hf download`)

```bash
pip install "huggingface_hub[cli]"

hf download YOUR_HF_DATASET_REPO \
  --repo-type dataset \
  --local-dir /path/to/lmd_simple_pack
```
### Download Only A Subset

If you only need specific splits or instruments, make sure to download both:

- the matching tar shards under `train/`, `val/`, or `eval/`
- the matching manifest JSONL files under `manifests/`

Example with Python:

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="YOUR_HF_DATASET_REPO",
    repo_type="dataset",
    local_dir="/path/to/lmd_simple_pack",
    allow_patterns=[
        "train/piano/*.tar",
        "manifests/train/piano.jsonl",
        "scripts/lmd_training_shards_unpack.py",
        "repack_summary.json",
        "SHA256SUMS",
    ],
)
```
If the dataset is private or gated, log in first:

```bash
hf auth login
```
## 2. Optional: Verify Checksums

If the repo includes `SHA256SUMS`, you can verify the downloaded shards:

```bash
cd /path/to/lmd_simple_pack
sha256sum -c SHA256SUMS
```
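If `sha256sum` is not available (for example on Windows), the same check can be done from Python. This is a stdlib sketch; `verify_sha256sums` is a hypothetical helper that parses the conventional `<hex digest>  <relative path>` line format:

```python
import hashlib
from pathlib import Path

def verify_sha256sums(root):
    """Check every entry in <root>/SHA256SUMS against the local files.
    Returns the list of relative paths that are missing or mismatched."""
    root = Path(root)
    bad = []
    for line in (root / "SHA256SUMS").read_text().splitlines():
        if not line.strip():
            continue
        expected, name = line.split(maxsplit=1)
        name = name.strip().lstrip("*")  # "*" marks binary mode in some tools
        path = root / name
        if not path.exists():
            bad.append(name)
            continue
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # hash in 1 MiB chunks so large shards don't load into memory
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        if h.hexdigest() != expected:
            bad.append(name)
    return bad
```

An empty return value means all listed files verified cleanly.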
## 3. Unpack The Shards

You can use either the local repository script or the copy downloaded from Hugging Face.

### Option A: Use The Repository Script

```bash
cd /path/to/think-before-you-play

python pretrain/data/lmd_training_shards_unpack.py \
  --source-root /path/to/lmd_simple_pack \
  --output-root /path/to/lmd_unpacked
```

### Option B: Use The Downloaded Hugging Face Script

```bash
cd /path/to/lmd_simple_pack

python scripts/lmd_training_shards_unpack.py \
  --source-root /path/to/lmd_simple_pack \
  --output-root /path/to/lmd_unpacked
```
This script:

- finds all `*_shard_00000.tar`-style files under `source-root`
- extracts the original event and pianoroll tensors
- rebuilds `_sample_index/<split>/<instrument>.jsonl`
- writes `unpack_summary.json`
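Because simple-mode shards store members under their source-root-relative paths, the core extraction step amounts to a guarded `tarfile` extraction. The sketch below is illustrative only; `unpack_shard` is my own name, and the real script additionally handles filtering, rebuilds `_sample_index/`, and writes `unpack_summary.json`:

```python
import tarfile
from pathlib import Path

def unpack_shard(tar_path, output_root):
    """Extract one simple-mode shard: every member is stored under its
    source-root-relative path, so plain extraction recreates the tree."""
    output_root = Path(output_root)
    output_root.mkdir(parents=True, exist_ok=True)
    with tarfile.open(tar_path) as tf:
        for member in tf.getmembers():
            # refuse absolute paths / parent traversal before extracting
            p = Path(member.name)
            if p.is_absolute() or ".." in p.parts:
                raise ValueError(f"unsafe member path: {member.name}")
            tf.extract(member, output_root)
```

For real use, prefer the shipped script, which also performs the index-rebuilding steps listed above.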
The output directory must be empty. If you want to replace an existing output directory, add `--overwrite-output`:

```bash
python pretrain/data/lmd_training_shards_unpack.py \
  --source-root /path/to/lmd_simple_pack \
  --output-root /path/to/lmd_unpacked \
  --overwrite-output
```
## 4. Unpack Only Selected Splits Or Instruments

You can filter during unpack:

```bash
python pretrain/data/lmd_training_shards_unpack.py \
  --source-root /path/to/lmd_simple_pack \
  --output-root /path/to/lmd_unpacked \
  --splits train val \
  --target-instruments piano bass
```
## 5. Output Layout

After unpacking, the directory will look like:

```
<output_root>/
  train/<bucket>/<md5>/event_segment/*.pt
  train/<bucket>/<md5>/pianoroll_segment/*.pt.gz
  val/<bucket>/<md5>/...
  eval/<bucket>/<md5>/...
  _sample_index/
    train/piano.jsonl
    train/bass.jsonl
    val/...
    eval/...
  unpack_summary.json
```
This unpacked tree matches the grouped LMD layout used by `dataset.loaders.lmd_token_pr_load`, so the raw loader can read it immediately without rebuilding the manifest.
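As a quick post-unpack sanity check, you can count the extracted segment files per split and compare the totals against `unpack_summary.json`. `count_segments` here is a hypothetical helper, not part of the tooling:

```python
from pathlib import Path

def count_segments(output_root):
    """Count extracted event/pianoroll segment files per split, following
    the documented output layout (<split>/<bucket>/<md5>/...)."""
    counts = {}
    for split in ("train", "val", "eval"):
        split_dir = Path(output_root) / split
        if not split_dir.is_dir():
            continue
        counts[split] = {
            "event": sum(1 for _ in split_dir.rglob("event_segment/*.pt")),
            "pianoroll": sum(1 for _ in split_dir.rglob("pianoroll_segment/*.pt.gz")),
        }
    return counts
```

If you unpacked with `--splits` or `--target-instruments` filters, only the kept subset should show up in the counts.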
## Notes

- Run the unpack script in an environment that has `torch` installed.
- You can run either `python pretrain/data/lmd_training_shards_unpack.py ...` from this repo or `python scripts/lmd_training_shards_unpack.py ...` from the downloaded Hugging Face dataset.
- If you downloaded only part of the dataset, the unpack filters should match the shards and manifest files you kept locally.
- For `simple` tar mode, missing manifest JSONL files will cause unpacking to fail.
## References

- Hugging Face download guide: https://huggingface.co/docs/huggingface_hub/guides/download
- Hugging Face CLI reference for `hf download`: https://huggingface.co/docs/huggingface_hub/package_reference/cli