Upload README.md with huggingface_hub
The `datasets` library allows you to load and pre-process your dataset in pure
Python, at scale.

First, ensure you have the necessary dependencies installed to handle audio data:

```bash
pip install datasets[audio]
```
**Loading ASR Data**

To load ASR data, point to the `data/ASR` directory.

```python
from datasets import load_dataset, Audio

# Load the Shona (sna) ASR dataset
asr_data = load_dataset("google/WaxalNLP", "sna", data_dir="data/ASR")

# Access splits
train = asr_data['train']
val = asr_data['validation']
test = asr_data['test']

# Example: accessing the audio and other fields
example = train[0]
print(f"Transcription: {example['transcription']}")
print(f"Sampling Rate: {example['audio']['sampling_rate']}")
# 'array' holds the decoded audio samples as a NumPy array
print(f"Audio Array Shape: {example['audio']['array'].shape}")
```

**Loading TTS Data**

### Data Splits

For the **ASR Dataset**, the data with transcriptions is split as follows:

* **train**: 80% of labeled data.
* **validation**: 10% of labeled data.
* **test**: 10% of labeled data.

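If you need to re-create a partition with these proportions for your own data, an 80/10/10 split can be sketched in plain Python (the ids and seed below are illustrative; the dataset ships with its split already materialized):

```python
import random

# Hypothetical utterance ids standing in for the labeled examples.
ids = list(range(1000))
random.Random(0).shuffle(ids)

n = len(ids)
train_ids = ids[: int(0.8 * n)]             # 80% -> train
val_ids = ids[int(0.8 * n) : int(0.9 * n)]  # 10% -> validation
test_ids = ids[int(0.9 * n) :]              # 10% -> test

print(len(train_ids), len(val_ids), len(test_ids))  # 800 100 100
```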
The **unlabeled** split contains all samples that do not have a corresponding
transcription.

The **TTS Dataset** follows a similar structure, with data split into `train`,
`validation`, and `test` sets.

## Dataset Curation