---
license:
- cc-by-sa-4.0
- cc-by-nc-4.0
- cc-by-4.0
annotation_creators:
- human-annotated
- crowdsourced
language_creators:
- creator_1
tags:
- audio
language:
- ach
- aka
- dga
- dag
- ewe
- ful
- kpo
- lin
- lug
- mlg
- mas
- nyn
- sna
- sog
multilinguality:
- multilingual
pretty_name: Waxal Dataset
task_categories:
- automatic-speech-recognition
- text-to-speech
- text-to-audio
source_datasets:
- UGSpeechData
- DigitalUmuganda/AfriVoice
- original
dataset_info:
features:
- name: id
dtype: string
- name: speaker_id
dtype: string
- name: transcription
dtype: string
- name: language
dtype: string
- name: gender
dtype: string
- name: audio
dtype: audio
config_name: all
splits:
- name: train
- name: validation
- name: test
- name: unlabeled
---
# Waxal ASR Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [How to Use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Curation](#dataset-curation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Additional Information](#additional-information)
## Dataset Description
The Waxal dataset is a collection of automatic speech recognition (ASR) data in
14 African languages. It comprises approximately 1,250 hours of transcribed
natural speech from a wide variety of voices, suitable for ASR. The 14 languages
in this dataset represent over 100 million speakers across 40 Sub-Saharan
African countries. The dataset was created and released to facilitate research
that improves the accuracy and fluency of speech and language technology for
these underserved languages, and to serve as a repository for digital
preservation.
The data was acquired through partnerships with Makerere University, the
University of Ghana, and Digital Umuganda. Acquisition was funded by Google and
the Gates Foundation under an agreement to make the dataset openly accessible.
| Provider | Languages | License |
| :--- | :--- | :---: |
| Makerere University | Acholi, Luganda, Masaaba, Nyankole, Soga | `CC-BY-4.0` |
| University of Ghana | Akan, Ewe, Dagbani, Dagaare, Ikposo | `CC-BY-NC-4.0` |
| Digital Umuganda | Fula, Lingala, Shona, Malagasy | `CC-BY-4.0` |
### How to Use
The `datasets` library allows you to load and pre-process the dataset in pure
Python, at scale. A single call to the `load_dataset` function downloads and
prepares the data on your local drive.
The following language configurations may be used:
```python
all, ach, aka, dga, dag, ewe, ful, kpo, lin, lug, mlg, mas, nyn, sna, sog
```
To download a configuration, specify the language code (`all` for all
languages, or a specific code such as `sna` for a single language).
Downloading specific language data (e.g. Shona):
```python
from datasets import load_dataset
# Ensure you have the audio dependencies installed:
# pip install datasets[audio]
# Load Shona (sna) dataset
shona_data = load_dataset("google/WaxalNLP", "sna")
# Access splits
train = shona_data['train']
val = shona_data['validation']
test = shona_data['test']
unlabeled = shona_data['unlabeled']
# The 'audio' column is automatically decoded when accessed.
# It returns a dictionary containing 'path', 'array', and 'sampling_rate'.
example = train[0]
audio_data = example['audio']
print(f"Transcription: {example['transcription']}")
print(f"Audio array shape: {audio_data['array'].shape}")
print(f"Sampling rate: {audio_data['sampling_rate']}")
```
Downloading ALL data (large):
```python
from datasets import load_dataset
all_data = load_dataset("google/WaxalNLP", "all")
```
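Once decoded, each example's audio can be inspected directly. The following is a minimal sketch of computing a clip's duration from the decoded `audio` dictionary; it uses a synthetic example dict that mimics the schema above (with a plain Python list in place of the real NumPy array), so no download is required:

```python
def clip_duration_seconds(example):
    """Return the duration of a decoded example's audio clip in seconds."""
    audio = example["audio"]
    return len(audio["array"]) / audio["sampling_rate"]

# Synthetic example mimicking the dataset schema:
# one second of silence at 16 kHz.
example = {
    "id": "sna_0",
    "transcription": "",
    "audio": {"array": [0.0] * 16_000, "sampling_rate": 16_000},
}
print(f"Duration: {clip_duration_seconds(example):.2f} s")  # Duration: 1.00 s
```

With a real example loaded via `load_dataset`, the same function applies unchanged, since the decoded `audio` column exposes `array` and `sampling_rate`.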
## Dataset Structure
The Waxal ASR dataset is a collection of recordings of speakers in 14 African
languages, each with a human-written transcription. Each data point consists of
a recording of at least 15 seconds together with a transcript, and includes the
speaker ID, the language name (one of the 14 languages), and the speaker's age,
gender, and recording environment (Indoor, Outdoor, In a car, Office, Studio,
or Other).
### Data Instances
* **Size of ASR dataset:** 1.7 TB
* **Number of Instances:** 1,937,031
* **Number of Fields:** 6
* **Labeled Classes:** N/A (Each label is a manually written transcription of
the audio, no classes apply)
* **Number of Labels:** 224,767
* **Percentage labeled instances:** 11.62%
* **Algorithmic Labels:** 0
* **Human Labels:** 224,767
* **Hours:** 10,960
#### Language Codes
The data entries are grouped by ISO 639-2 language code, so that each language
has a single universal name according to international standards, removing
ambiguity for languages that have multiple names.
```
ach, aka, dga, dag, ewe, ful, kpo, lin, lug, mlg, mas, nyn, sna, sog
```
The dataset includes 14 African languages:
| ASR Language | ISO 639-2 | Audio Files | Transcribed Hours | Untranscribed Hours | Total Hours |
| :--- | :---: | ---: | ---: | ---: | ---: |
| Acholi | ach | 114,308 | 32.32 | 659.49 | 691.81 |
| Akan | aka | 195,285 | 101.93 | 941.37 | 1,043.30 |
| Dagaare | dga | 191,404 | 104.66 | 949.80 | 1,054.47 |
| Dagbani | dag | 188,808 | 98.54 | 962.28 | 1,060.82 |
| Ewe | ewe | 203,391 | 99.77 | 976.58 | 1,076.35 |
| Fulani | ful | 100,827 | 124.24 | 403.21 | 527.45 |
| Ikposo | kpo | 191,984 | 103.81 | 941.22 | 1,045.03 |
| Lingala | lin | 100,226 | 101.53 | 415.61 | 517.14 |
| Luganda | lug | 98,475 | 45.96 | 631.44 | 677.40 |
| Malagasy | mlg | 101,183 | 182.51 | 333.71 | 516.22 |
| Masaaba | mas | 116,102 | 48.82 | 645.09 | 693.90 |
| Nyankole | nyn | 131,743 | 50.87 | 754.51 | 805.38 |
| Shona | sna | 102,969 | 99.23 | 474.94 | 574.16 |
| Soga | sog | 120,172 | 50.34 | 736.45 | 786.79 |
| **Total** | **all** | **1,956,877** | **1,244.52** | **9,825.71** | **11,070.23** |
### Data Fields
The data is structured as follows:
```python
{
'id': 'sna_0',
'speaker_id': '2Eud8lyLlsMcciYhmlkwVRtBwi82',
  'audio': {
    'path': '...',
    'array': [...],
    'sampling_rate': 16_000
  },
'transcription': '<transcription | "">',
'language': 'sna',
'gender': 'Female',
}
```
Field descriptions:
- **id**: `(string)` unique identifier for the record.
- **speaker_id**: `(string)` unique identifier for every speaker.
- **audio**: `(Audio)` audio data for each example as a sound array.
- **transcription**: `(string)` the transcription of the audio file if
labeled, otherwise empty string.
- **language**: `(string)` language code for the language in ISO 639-2 format.
- **gender**: `(string)` represents the gender of the speaker if present,
('Male', 'Female' or empty if not present).
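Because unlabeled samples carry an empty `transcription` string, labeled and unlabeled records can be separated with a simple predicate. An illustrative sketch over synthetic records (in practice the unlabeled samples already live in their own split, so this is only needed when mixing data yourself):

```python
def split_by_label(records):
    """Partition records into (labeled, unlabeled) by transcription presence."""
    labeled = [r for r in records if r["transcription"]]
    unlabeled = [r for r in records if not r["transcription"]]
    return labeled, unlabeled

# Synthetic records mimicking the field layout above.
records = [
    {"id": "sna_0", "transcription": "some transcript", "language": "sna"},
    {"id": "sna_1", "transcription": "", "language": "sna"},
]
labeled, unlabeled = split_by_label(records)
print(len(labeled), len(unlabeled))  # 1 1
```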
### Data Splits
Each language configuration contains four data splits:
* **train**: Contains 80% of samples with transcriptions.
* **validation**: Contains 10% of samples with transcriptions.
* **test**: Contains 10% of samples with transcriptions.
* **unlabeled**: Contains all samples that do not have a transcription.
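The 80/10/10 proportions can be illustrated with a deterministic partition over example IDs. This is only a sketch of how such a split might be derived, not the procedure used to produce the official splits:

```python
import random

def split_80_10_10(ids, seed=0):
    """Shuffle ids deterministically, then partition 80/10/10."""
    ids = sorted(ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    return {
        "train": ids[:n_train],
        "validation": ids[n_train:n_train + n_val],
        "test": ids[n_train + n_val:],
    }

splits = split_80_10_10([f"sna_{i}" for i in range(100)])
print({k: len(v) for k, v in splits.items()})
# {'train': 80, 'validation': 10, 'test': 10}
```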
The `all` configuration will load data from all languages, while other
configurations (e.g., `sna`) will load data only for the specified language.
## Dataset Curation
The data is a curation of data that was gathered by multiple partners into one
collective data collection with a standard interface to make it more universally
accessible and useful for model training.
The original data sources and providers are listed below:
| Provider | Data Corpus | License |
| :--- | :--- | :--- |
| University of Ghana | [UGSpeechData](https://doi.org/10.57760/sciencedb.22298) | `CC BY-NC-ND 4.0` |
| Digital Umuganda | [AfriVoice](https://huggingface.co/datasets/DigitalUmuganda/AfriVoice) | `CC-BY 4.0` |
| Makerere University | [Yogera Dataset](https://doi.org/10.7910/DVN/BEROE0) | `CC-BY 4.0` |
## Considerations for Using the Data
When using this data corpus, keep in mind that data from different providers is
licensed differently. Check the license for the specific languages you are
using to make sure the data is fit for your purposes.
**Affiliation:** Google Research
## Version and Maintenance
- **Current Version:** 1.0.0
- **Last Updated:** 01/2026
- **Release Date:** 01/2026