
huper-clean100-proxyphones

LibriSpeech train-clean-100 audio paired with HuPER-style proxy ARPAbet phone labels (machine-generated proxy labels, not human-verified).
This corresponds to the 100h train-clean-100 split (28,539 utterances).
Note: the Hugging Face Dataset Viewer is not supported because the data is provided as tar+zstd shards. Follow the instructions below to download and extract the data locally.

What’s inside

The data is stored as 5 shards under blobs/:

  • blobs/train.tar.zst.part-000 through blobs/train.tar.zst.part-004

After extraction, you will obtain a train/ directory (same structure as the original repo on disk), containing:

  • audio files (LibriSpeech train-clean-100 audio)
  • phone labels / metadata file(s) with fields such as:
    • id: utterance id
    • file_name: audio path
    • phones: list of ARPAbet phones

(If you rename the metadata file, update the loading example below accordingly.)
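To make the expected record shape concrete, here is a hypothetical metadata.jsonl line parsed with the standard library. The field names come from the card above; the utterance id, path, and phone sequence are illustrative only, not taken from the actual dataset:

```python
import json

# Hypothetical example of one metadata.jsonl record; the real values
# for any given utterance will differ (only the field names are from the card).
sample_line = (
    '{"id": "103-1240-0000", '
    '"file_name": "train/103/1240/103-1240-0000.flac", '
    '"phones": ["CH", "AE1", "P", "T", "ER0"]}'
)

record = json.loads(sample_line)
print(record["id"])         # utterance id
print(record["file_name"])  # audio path relative to the dataset root
print(record["phones"])     # list of ARPAbet phones
```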

Quickstart: download + extract

Option A: Git LFS (recommended)

```bash
git lfs install
git clone https://huggingface.co/datasets/huper29/huper-clean100-proxyphones
cd huper-clean100-proxyphones
git lfs pull
```

Reconstruct and extract:

```bash
cat blobs/train.tar.zst.part-* > train.tar.zst

# If your tar supports zstd:
tar -I zstd -xf train.tar.zst

# If tar does NOT support zstd, use:
# unzstd -c train.tar.zst | tar -xf -
```

You should now see a train/ directory.
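If extraction fails, a quick way to check the reconstructed archive is to look at its first four bytes: every zstd file starts with the magic number 28 B5 2F FD. A minimal sketch (the helper name is ours, not part of any library):

```python
# A valid zstd file begins with the 4-byte magic number 28 B5 2F FD.
ZSTD_MAGIC = b"\x28\xb5\x2f\xfd"

def looks_like_zstd(path):
    """Return True if the file starts with the zstd magic number."""
    with open(path, "rb") as f:
        return f.read(4) == ZSTD_MAGIC

# After the `cat` step, looks_like_zstd("train.tar.zst") should be True;
# if it is False, a shard is likely missing or concatenated out of order.
```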

Option B: download shards from Python (no git)

```python
from huggingface_hub import hf_hub_download

repo_id = "huper29/huper-clean100-proxyphones"
parts = [f"blobs/train.tar.zst.part-{i:03d}" for i in range(5)]
local_paths = [hf_hub_download(repo_id=repo_id, filename=p, repo_type="dataset") for p in parts]
print(local_paths)
```

Then cat + extract as above.
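If `cat` is not available (e.g. on Windows), the concatenation step can be done in Python instead. A minimal sketch, assuming the shard paths are supplied in order (the helper name is ours):

```python
import shutil

# Python alternative to the shell `cat` step.
# `paths` must list the shards in order: part-000, part-001, ..., part-004.
def concat_parts(paths, out_path="train.tar.zst"):
    with open(out_path, "wb") as out:
        for part in paths:
            with open(part, "rb") as f:
                shutil.copyfileobj(f, out)  # stream bytes, no full read into memory
    return out_path
```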

Load in Python (HuggingFace Datasets)

If you have a JSONL metadata file like train/metadata.jsonl with columns
id, file_name, phones:

```python
from datasets import load_dataset, Audio

ds = load_dataset("json", data_files="train/metadata.jsonl", split="train")
ds = ds.cast_column("file_name", Audio(sampling_rate=16_000))
print(ds[0])
```
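After loading, a common first step is building a phone-to-index vocabulary from the phones column. A minimal sketch on a small in-memory example (with the loaded dataset you would iterate its phones column instead; the phone sequences here are illustrative):

```python
# Build a phone -> index vocabulary from the `phones` field.
# `examples` stands in for rows of the loaded dataset.
examples = [
    {"phones": ["HH", "AH0", "L", "OW1"]},
    {"phones": ["W", "ER1", "L", "D"]},
]

vocab = sorted({p for ex in examples for p in ex["phones"]})
phone2id = {p: i for i, p in enumerate(vocab)}
print(phone2id)
```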

If your metadata is TSV/CSV instead, replace "json" with "csv" and pass the correct delimiter (e.g. delimiter="\t" for TSV).
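For a quick look at TSV metadata without the datasets library, the standard csv module works as well. A sketch on hypothetical in-memory content, assuming the phones column is space-separated (check the actual file's encoding first):

```python
import csv
import io

# Hypothetical TSV content with the columns described above.
tsv_text = "id\tfile_name\tphones\nutt1\ttrain/utt1.flac\tCH AE1 P\n"

rows = list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))
phones = rows[0]["phones"].split()  # space-separated -> list of phones
print(phones)  # ['CH', 'AE1', 'P']
```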

Notes / Limitations

  • Labels are proxy phones produced by HuPER-style pipelines; they are not human-verified.

  • Please follow the original LibriSpeech licensing terms (CC BY 4.0).

Citation

If you use this dataset, please cite:

```bibtex
@article{guo2026huper,
  title   = {HuPER: A Human-Inspired Framework for Phonetic Perception},
  author  = {Guo, Chenxu and Lian, Jiachen and Liu, Yisi and Huang, Baihe and Narayanan, Shriyaa and Cho, Cheol Jun and Anumanchipalli, Gopala},
  journal = {arXiv preprint arXiv:2602.01634},
  year    = {2026}
}

@inproceedings{panayotov2015librispeech,
  title={Librispeech: an ASR corpus based on public domain audio books},
  author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
  booktitle={ICASSP},
  year={2015}
}
```