Long-Form Test Sets
A collection of long-form (samples > 30 s) datasets used to evaluate the Distil-Whisper models.
Each row pairs an `audio` column (durations from 187 s to 1.3 k s) with `text` and `speaker_id` string columns (8 values each). The audio itself cannot be rendered as text, so only the transcript and speaker are shown below:

| text | speaker_id |
|---|---|
| last year i showed these two slides so that demonstrate that the arctic ice cap which for most of the last three million years has been the size of the lower forty eight states has shrunk by forty percent but this understates the seriousness of this particular problem because it doesn't show the thickness of the ice t... | Al_Gore |
| i'm going to talk to you about some stuff that's in this book of mine that i hope will resonate with other things you've already heard and i'll try to make some connections myself in case you miss them i want to start with what i call the official dogma the official dogma of what the official dogma of all western indu... | Barry_Schwartz |
| what i'm going to show you first as quickly as i can is is some some foundational work some some new technology that we brought to microsoft as part of an acquisition almost exactly a year ago this is seadragon and it's an environment in which you can either locally or remotely interact with vast amounts of of visual ... | Blaise_Agueray_Arcas |
| last year at ted i gave an introduction to the lhc and i promised to come back and give you an update on how that machine works so this is it and for those of you who weren't there the lhc is the largest scientific experiment ever attempted twenty seven kilometers in circumference its job is to recreate the conditions... | Brian_Cox |
| you know i've talked about some of these projects before about the human genome and what that might mean and discovering new sets of genes we're actually starting at a new point we've been digitizing biology and now we're trying to go from that digital code into a new phase of biology with designing and synthesizing l... | Craig_Venter |
| i want to start out by asking you to think back to when you were a kid playing with blocks as you figured out how to reach out and grasp pick them up and move them around you were actually learning how to think and solve problems by understanding and manipulating spatial relationships spatial reasoning is deeply conne... | David_Merrill |
| i am a writer writing books is my profession but it's more than that of course it is also my great lifelong love and fascination and i don't expect that that's ever going to change but that said something kind of peculiar has happened recently in my life and in my career which has caused me to have to sort of recalibr... | Elizabeth_Gilbert |
| you know one of the intense pleasures of travel and one of the delights of ethnographic research is the opportunity to live amongst those who have not forgotten the old ways who still feel their past in the wind touch it in stones polished by rain taste it in the bitter leaves of plants just to know that jaguar shaman... | Wade_Davis |
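Decoded with the `datasets` `Audio` feature, each row behaves like a nested dict. A minimal sketch of that row shape, with placeholder values (the array length, transcript snippet, and resulting duration below are illustrative only, not real dataset statistics):

```python
# Hypothetical row mirroring the schema above: an audio payload (path, decoded
# array, sampling rate) plus `text` and `speaker_id` strings. All values here
# are invented placeholders for illustration.
row = {
    "audio": {
        "path": "validation/Al_Gore-merged.wav",
        "array": [0.0] * 16,          # stand-in for the decoded waveform
        "sampling_rate": 16000,
    },
    "text": "last year i showed these two slides",
    "speaker_id": "Al_Gore",
}

# Duration in seconds = number of samples / sampling rate.
duration_s = len(row["audio"]["array"]) / row["audio"]["sampling_rate"]
```

The real merged talks are long-form, i.e. several minutes per row rather than the toy duration computed here.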
To create the dataset:
```python
import os

import numpy as np
import soundfile as sf
from datasets import Audio, Dataset, DatasetDict, load_dataset
from tqdm import tqdm

tedlium = load_dataset("LIUM/tedlium", "release3")

merged_dataset = DatasetDict()

validation_speaker_ids = [
    "Al_Gore",
    "Barry_Schwartz",
    "Blaise_Agueray_Arcas",
    "Brian_Cox",
    "Craig_Venter",
    "David_Merrill",
    "Elizabeth_Gilbert",
    "Wade_Davis",
]
validation_dataset_merged = {speaker_id: {"audio": [], "text": ""} for speaker_id in validation_speaker_ids}

test_speaker_ids = [
    "AimeeMullins",
    "BillGates",
    "DanBarber",
    "DanBarber_2010_S103",
    "DanielKahneman",
    "EricMead_2009P_EricMead",
    "GaryFlake",
    "JamesCameron",
    "JaneMcGonigal",
    "MichaelSpecter",
    "RobertGupta",
]
test_dataset_merged = {speaker_id: {"audio": [], "text": ""} for speaker_id in test_speaker_ids}

for split, dataset in zip(["validation", "test"], [validation_dataset_merged, test_dataset_merged]):
    sampling_rate = tedlium[split].features["audio"].sampling_rate

    # concatenate the audio and transcription of every utterance from each target speaker
    for sample in tqdm(tedlium[split]):
        if sample["speaker_id"] in dataset:
            dataset[sample["speaker_id"]]["audio"].extend(sample["audio"]["array"])
            dataset[sample["speaker_id"]]["text"] += " " + sample["text"]

    # write each merged waveform to disk, then build the new split from the file paths
    audio_paths = []
    os.makedirs(split, exist_ok=True)
    for speaker in dataset:
        path = os.path.join(split, f"{speaker}-merged.wav")
        audio_paths.append(path)
        sf.write(path, np.asarray(dataset[speaker]["audio"]), samplerate=sampling_rate)

    merged_dataset[split] = Dataset.from_dict({"audio": audio_paths}).cast_column("audio", Audio())
    # remove spaced apostrophes (e.g. it 's -> it's)
    merged_dataset[split] = merged_dataset[split].add_column(
        "text", [dataset[speaker]["text"].replace(" '", "'") for speaker in dataset]
    )
    merged_dataset[split] = merged_dataset[split].add_column("speaker_id", list(dataset.keys()))
```
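The per-speaker merging and apostrophe clean-up above can be sketched on toy data, which makes the behaviour easy to check without downloading TEDLIUM (the samples below are invented placeholders, not real dataset rows):

```python
# Invented toy samples standing in for TEDLIUM rows (real rows carry numpy
# audio arrays; plain lists are used here to keep the sketch dependency-free).
samples = [
    {"speaker_id": "Al_Gore", "audio": [0.1, 0.2], "text": "last year"},
    {"speaker_id": "Brian_Cox", "audio": [0.3], "text": "last year at ted"},
    {"speaker_id": "Al_Gore", "audio": [0.4, 0.5], "text": "it 's shrinking"},
]

# One accumulator per target speaker, as in the script above.
merged = {s: {"audio": [], "text": ""} for s in ["Al_Gore", "Brian_Cox"]}
for sample in samples:
    if sample["speaker_id"] in merged:
        merged[sample["speaker_id"]]["audio"].extend(sample["audio"])
        merged[sample["speaker_id"]]["text"] += " " + sample["text"]

# Collapse spaced apostrophes (it 's -> it's); the extra .strip() also drops
# the leading space left by the concatenation, which the script itself keeps.
for s in merged:
    merged[s]["text"] = merged[s]["text"].strip().replace(" '", "'")
```

Note that utterances are merged in dataset order, so a speaker's audio and text stay aligned as long as the source split is sorted by talk position, which is what the script relies on.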