# UN Transcription Benchmark Results
Evaluation results for speech-to-text systems on UN Security Council and General Assembly meeting recordings, assessed against official UN verbatim records.
See `united-nations/transcription-corpus` for the underlying audio and ground-truth data.
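The results can be loaded directly with pandas. A minimal sketch, assuming the data file is JSON Lines and named `results.jsonl` (a placeholder; substitute the actual data file in this repository):

```python
import pandas as pd

# Placeholder file name -- substitute the actual data file from this repository.
df = pd.read_json("results.jsonl", lines=True)

# One row per evaluated recording and provider.
print(df[["symbol", "provider", "language", "wer", "normalized_wer", "cer"]].head())
```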
## Metrics
- WER: Word Error Rate (reference = verbatim record, no normalization)
- normalized_wer: WER after lowercasing, punctuation removal, and filler word removal
- CER: Character Error Rate (same reference, no normalization)
- normalized_cer: CER after normalization
Note: WER of 20–40% is expected for high-quality transcription on these recordings due to the editing gap between live speech and published verbatim records. For Chinese, use CER as the primary metric (Chinese text has no word boundaries).
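For reference, a minimal sketch of how these metrics can be computed with the jiwer library. The normalization (lowercasing, punctuation removal, filler-word removal) mirrors the description above, but the filler-word list here is only an illustrative assumption, not the exact list used for the benchmark:

```python
import string
import jiwer

# Illustrative filler list (assumption); the benchmark's exact list is not documented here.
FILLERS = {"uh", "um", "er", "ah", "hmm"}

def normalize(text: str) -> str:
    # Lowercase, strip punctuation, drop filler words.
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(w for w in text.split() if w not in FILLERS)

reference = "The Security Council is called to order."      # verbatim record excerpt
hypothesis = "uh the security council is called to order"   # ASR hypothesis

print("wer           ", jiwer.wer(reference, hypothesis))
print("normalized_wer", jiwer.wer(normalize(reference), normalize(hypothesis)))
print("cer           ", jiwer.cer(reference, hypothesis))
print("normalized_cer", jiwer.cer(normalize(reference), normalize(hypothesis)))
```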
## Providers Evaluated
| Provider | Model | Pricing |
|---|---|---|
| assemblyai | AssemblyAI Universal-2 (diarization enabled) | ~$0.17/hr |
| azure-openai | Azure OpenAI gpt-4o-transcribe-diarize | ~$0.36/hr |
| elevenlabs | ElevenLabs Scribe v2 | ~$0.40/hr |
| azure-speech | Azure Cognitive Services Speech Batch Transcription | ~$0.36/hr |
| gemini | Gemini 3 Flash (structured diarization via prompt) | ~$0.01/hr |
| google-chirp | Google Cloud Chirp 3 (Speech V2 API) | ~$0.016/min |
| groq-whisper | Whisper large-v3 (via Groq) | ~$0.015/min |
## Schema
| Column | Type | Description |
|---|---|---|
| symbol | string | UN document symbol, e.g. S/PV.10100 |
| assetId | string | UN Web TV asset ID |
| language | string | ISO 639-1 code |
| provider | string | Transcription provider name |
| wer | float | Word Error Rate (0–1) |
| normalized_wer | float | Normalized WER (0–1) |
| cer | float | Character Error Rate (0–1) |
| normalized_cer | float | Normalized CER (0–1) |
| substitutions | int | Number of word substitutions |
| insertions | int | Number of word insertions |
| deletions | int | Number of word deletions |
| ref_length | int | Reference word count |
| hyp_length | int | Hypothesis word count |
| duration_ms | int | Audio duration in ms |
| timestamp | string | ISO 8601 evaluation timestamp |
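Since each row is a single (symbol, provider) evaluation, provider-level comparisons reduce to a group-by. A sketch, reusing the placeholder `results.jsonl` file name from above:

```python
import pandas as pd

df = pd.read_json("results.jsonl", lines=True)  # placeholder file name

# Mean error rates per provider and language, best normalized WER first.
summary = (
    df.groupby(["provider", "language"])[["wer", "normalized_wer", "cer", "normalized_cer"]]
      .mean()
      .sort_values("normalized_wer")
)
print(summary)

# For Chinese (ISO 639-1 "zh"), CER is the primary metric per the note above.
print(df[df["language"] == "zh"].groupby("provider")["normalized_cer"].mean().sort_values())
```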