---
license: apache-2.0
task_categories:
  - text-to-speech
  - automatic-speech-recognition
configs:
  - config_name: small
    default: true
    data_files:
      - split: train
        path:
          - default/small/small-*.parquet
  - config_name: medium
    data_files:
      - split: train
        path:
          - default/medium/medium-*.parquet
  - config_name: large
    data_files:
      - split: train
        path:
          - default/large/shard-*/large-*.parquet
  - config_name: dev
    data_files:
      - split: train
        path:
          - default/dev/dev-*.parquet
  - config_name: test_clean
    data_files:
      - split: train
        path:
          - default/test_clean/test_clean-*.parquet
  - config_name: test_clean_large
    data_files:
      - split: train
        path:
          - default/test_clean_large/test_clean_large-*.parquet
  - config_name: test_other
    data_files:
      - split: train
        path:
          - default/test_other/test_other-*.parquet
  - config_name: test_other_large
    data_files:
      - split: train
        path:
          - default/test_other_large/test_other_large-*.parquet
language:
  - en
pretty_name: Libriheavy
size_categories:
  - 10M<n<100M
---

# Libriheavy

Libriheavy is a 50,000-hour ASR corpus with punctuation, casing, and context. It is a labeled version of Libri-Light.

This uploaded version replaces the default Libri-Light audio files with the highest-quality versions available from LibriVox. In most cases, this means upgrading the source audio from a 64 kbps MP3 to a 128 kbps MP3.

The audio files are then re-encoded with the Opus codec at 68 kbps to retain quality while reducing size.

## Configs

Each dataset config exposes a single split named `train`.

  • small (train): 509 hours of speech. 417 speakers averaging 1.22 hours per speaker.
  • medium (train): 5042 hours of speech. 1531 speakers averaging 3.29 hours per speaker.
  • large (train): 50794 hours of speech. 6736 speakers averaging 7.54 hours per speaker.
  • dev (train): 22.3 hours of speech. 141 speakers averaging 0.16 hours per speaker.
  • test_clean (train): 10.5 hours of speech. 70 speakers averaging 0.15 hours per speaker.
  • test_other (train): 11.5 hours of speech. 72 speakers averaging 0.16 hours per speaker.
  • test_clean_large (train): 107.5 hours of speech. 72 speakers averaging 1.49 hours per speaker.
  • test_other_large (train): 100.3 hours of speech. 73 speakers averaging 1.37 hours per speaker.
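The per-speaker averages above are simply total hours divided by speaker count; a quick sanity check of the three main configs (numbers taken from the list above):

```python
# Total hours and speaker counts as listed above for the three main configs.
stats = {
    "small": (509, 417),
    "medium": (5042, 1531),
    "large": (50794, 6736),
}

# Dividing hours by speakers reproduces the listed per-speaker averages.
for name, (hours, speakers) in stats.items():
    print(f"{name}: {hours / speakers:.2f} hours/speaker")
```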

## Usage

### Load a Single Config

```python
from datasets import load_dataset

small = load_dataset("mythicinfinity/libriheavy", "small", split="train")
```

Targeting a specific config only downloads files declared for that config, which is a good way to control disk usage.

### Load the Full Dataset (All Configs)

```python
from datasets import concatenate_datasets, load_dataset

ALL_CONFIGS = [
    "small",
    "medium",
    "large",
    "dev",
    "test_clean",
    "test_clean_large",
    "test_other",
    "test_other_large",
]


def load_libriheavy_all_train(configs: list[str] | None = None):
    cfgs = configs or ALL_CONFIGS
    parts = [load_dataset("mythicinfinity/libriheavy", cfg, split="train") for cfg in cfgs]
    return concatenate_datasets(parts)


full = load_libriheavy_all_train()
```

## Citation

```bibtex
@misc{kang2023libriheavy,
      title={Libriheavy: a 50,000 hours ASR corpus with punctuation casing and context},
      author={Wei Kang and Xiaoyu Yang and Zengwei Yao and Fangjun Kuang and Yifan Yang and Liyong Guo and Long Lin and Daniel Povey},
      year={2023},
      eprint={2309.08105},
      archivePrefix={arXiv},
      primaryClass={eess.AS}
}
```