---
license:
- cc-by-sa-4.0
- cc-by-nc-4.0
- cc-by-4.0
annotation_creators:
- human-annotated
- crowdsourced
language_creators:
- creator_1
tags:
- audio
- automatic-speech-recognition
- text-to-speech
language:
- ach
- aka
- dag
- dga
- ewe
- fat
- ful
- hau
- ibo
- kpo
- lin
- lug
- mas
- mlg
- nyn
- sna
- sog
- swa
- twi
- yor
multilinguality:
- multilingual
pretty_name: Waxal Datasets
task_categories:
- automatic-speech-recognition
- text-to-speech
source_datasets:
- UGSpeechData
- DigitalUmuganda/AfriVoice
- original
dataset_info:
- config_name: asr
  features:
  - name: id
    dtype: string
  - name: speaker_id
    dtype: string
  - name: transcription
    dtype: string
  - name: language
    dtype: string
  - name: gender
    dtype: string
  - name: audio
    dtype: audio
  splits:
  - name: train
  - name: validation
  - name: test
  - name: unlabeled
- config_name: tts
  features:
  - name: id
    dtype: string
  - name: speaker_id
    dtype: string
  - name: transcription
    dtype: string
  - name: locale
    dtype: string
  - name: gender
    dtype: string
  - name: creator
    dtype: string
  - name: file_name
    dtype: string
  - name: audio
    dtype: audio
  splits:
  - name: train
  - name: validation
  - name: test
  - name: unlabeled
---

# Waxal Datasets

## Table of Contents

- [Dataset Description](#dataset-description)
  - [ASR Dataset](#asr-dataset)
  - [TTS Dataset](#tts-dataset)
- [How to Use](#how-to-use)
- [Dataset Structure](#dataset-structure)
  - [ASR Data Fields](#asr-data-fields)
  - [TTS Data Fields](#tts-data-fields)
  - [Data Splits](#data-splits)
- [Dataset Curation](#dataset-curation)
- [Considerations for Using the Data](#considerations-for-using-the-data)

## Dataset Description

The Waxal project provides datasets for both Automatic Speech Recognition (ASR)
and Text-to-Speech (TTS) for African languages. The goal of this dataset's
creation and release is to facilitate research that improves the accuracy and
fluency of speech and language technology for these underserved languages, and
to serve as a repository for digital preservation.

The Waxal datasets are collections acquired through partnerships with Makerere
University, the University of Ghana, Digital Umuganda, and Media Trust.
Acquisition was funded by Google and the Gates Foundation under an agreement to
make the dataset openly accessible.

### ASR Dataset

The Waxal ASR dataset is a collection of data in 14 African languages. It
consists of approximately 1,250 hours of transcribed natural speech from a wide
variety of voices. The 14 languages in this dataset represent over 100 million
speakers across 40 Sub-Saharan African countries.

| Provider | Languages | License |
| :--- | :--- | :---: |
| University of Ghana | Akan, Ewe, Dagbani, Dagaare, Ikposo | `CC-BY-NC-4.0` |
| Digital Umuganda | Fula, Lingala, Shona, Malagasy | `CC-BY-4.0` |

### TTS Dataset

The Waxal TTS dataset is a collection of text-to-speech data in 10 African
languages. It consists of approximately 240 hours of scripted natural speech
from a wide variety of voices.

| Provider | Languages | License |
| :--- | :--- | :---: |
| Makerere University | Acholi, Luganda, Kiswahili, Nyankole | `CC-BY-4.0` |
| University of Ghana | Akan (Fante, Twi) | `CC-BY-NC-4.0` |
| Media Trust | Fula, Igbo, Hausa, Yoruba | `CC-BY-4.0` |

### How to Use

The `datasets` library allows you to load and pre-process your dataset in pure
Python, at scale.

**Loading ASR Data**

To load ASR data, point to the `data/ASR` directory.

```python
from datasets import load_dataset

# Load Shona (sna) ASR dataset
asr_data = load_dataset("google/WaxalNLP", "sna", data_dir="data/ASR")

# Access splits
train = asr_data['train']
```
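Once loaded, each row's `audio` column decodes to a dictionary with `path`, `array`, and `sampling_rate` keys (the standard `datasets` `Audio` feature). A minimal sketch of per-clip bookkeeping on a stand-in row — the values below are placeholders, not real data; with the dataset loaded, use `train[0]` instead:

```python
# Stand-in for one decoded ASR row; values are placeholders, not real data.
example = {
    'id': 'sna_0',
    'transcription': 'mhoro',        # placeholder transcription
    'audio': {
        'array': [0.0] * 32_000,     # 2 seconds of silence at 16 kHz
        'sampling_rate': 16_000,
    },
}

# Clip duration follows from the sample count and sampling rate.
audio = example['audio']
duration_s = len(audio['array']) / audio['sampling_rate']
print(f"Transcription: {example['transcription']}")
print(f"Duration: {duration_s:.1f} s")  # 2.0 s
```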

**Loading TTS Data**

To load TTS data, point to the `data/TTS` directory.

```python
from datasets import load_dataset

# Load Swahili (swa) TTS dataset
tts_data = load_dataset("google/WaxalNLP", "swa", data_dir="data/TTS")

# Access splits
train = tts_data['train']
```

## Dataset Structure

### ASR Data Fields

```python
{
    'id': 'sna_0',
    'speaker_id': '...',
    'audio': {
        'array': [...],
        'sampling_rate': 16_000
    },
    'transcription': '...',
    'language': 'sna',
    'gender': 'Female',
}
```

* **id**: Unique identifier.
* **speaker_id**: Unique identifier for the speaker.
* **audio**: Audio data.
* **transcription**: Transcription of the audio.
* **language**: ISO 639-2 language code.
* **gender**: Speaker gender ('Male', 'Female', or empty).
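An earlier revision of this card stated that rows without a label carry an empty `transcription` string; under that assumption, labeled and unlabeled rows can be separated with a simple filter. A sketch over hypothetical stand-in rows:

```python
# Hypothetical rows shaped like the ASR record above; real rows come from the
# loaded dataset.
rows = [
    {'id': 'sna_0', 'transcription': 'mhoro', 'language': 'sna'},
    {'id': 'sna_1', 'transcription': '', 'language': 'sna'},  # unlabeled
    {'id': 'sna_2', 'transcription': 'ndatenda', 'language': 'sna'},
]

# An empty transcription marks an untranscribed recording.
labeled = [r for r in rows if r['transcription']]
unlabeled = [r for r in rows if not r['transcription']]
print(f"{len(labeled)} labeled, {len(unlabeled)} unlabeled")
```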

### TTS Data Fields

```python
{
    'id': 'swa_0',
    'creator': 'media_trust',
    'speaker_id': '...',
    'audio': {
        'array': [...],
        'sampling_rate': 16_000
    },
    'transcription': '...',
    'file_name': 'audio_file.wav',
    'locale': 'swa',
    'gender': 'Female',
}
```

* **id**: Unique identifier.
* **creator**: Institution that provided the data.
* **speaker_id**: Unique identifier for the speaker.
* **audio**: Audio data.
* **transcription**: Transcription.
* **file_name**: Original filename.
* **locale**: ISO 639-2 language code.
* **gender**: Speaker gender.
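Because each TTS row records the institution that produced it, per-provider subsets can be built by grouping on `creator`. A sketch with hypothetical stand-in rows (the exact `creator` strings used in the release are an assumption):

```python
from collections import defaultdict

# Hypothetical TTS rows; creator names here are assumed, not taken from the data.
rows = [
    {'id': 'swa_0', 'creator': 'makerere', 'locale': 'swa'},
    {'id': 'swa_1', 'creator': 'makerere', 'locale': 'swa'},
    {'id': 'hau_0', 'creator': 'media_trust', 'locale': 'hau'},
]

# Collect row ids under the institution that contributed them.
by_creator = defaultdict(list)
for row in rows:
    by_creator[row['creator']].append(row['id'])

for creator, ids in sorted(by_creator.items()):
    print(creator, len(ids))
```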

### Data Splits

Each language configuration contains four data splits: `train`, `validation`,
`test`, and `unlabeled`.

## Dataset Curation

The data was gathered by multiple partners:

| Provider | Dataset | License |
| :--- | :--- | :--- |
| University of Ghana | [UGSpeechData](https://doi.org/10.57760/sciencedb.22298) | `CC BY-NC-ND 4.0` |
| Digital Umuganda | [AfriVoice](https://huggingface.co/datasets/DigitalUmuganda/AfriVoice) | `CC-BY 4.0` |
| Makerere University | [Yogera Dataset](https://doi.org/10.7910/DVN/BEROE0) | `CC-BY 4.0` |
| Media Trust | | `CC-BY 4.0` |

## Considerations for Using the Data

Please check the license for the specific languages you are using, as they may
differ between providers.

**Affiliation:** Google Research

- **Current Version:** 1.0.0
- **Last Updated:** 01/2026
|