WAXAL: A Large-Scale Multilingual African Language Speech Corpus
Paper: 2602.02734
Fine-tuning-ready checkpoint for Ewe (ewe).
| Item | Value |
|---|---|
| WAXAL dataset config | `google/WaxalNLP`, config `ewe_tts` |
| Data provider | University of Ghana |
| WAXAL data license | CC-BY-4.0 |
| Base model | `facebook/mms-tts-ewe` |
| Model license | CC-BY-NC 4.0 (MMS base; governs the fine-tuned model) |
The `facebook/mms-tts-*` Hub checkpoints are inference-only releases, and `run_vits_finetuning.py` crashes when pointed at them directly. This repository applies three patches:
| File | Change |
|---|---|
| `config.json` | `pad_token_id` set to 0 (was null) |
| `tokenizer_config.json` | `pad_token` entry added |
| `preprocessor_config.json` | Added `VitsFeatureExtractor` config from `ylacombe/mms-tts-eng-train` |
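As a sketch of what the first two patches amount to, the JSON edits can be applied to a local copy of the checkpoint with the standard library. The directory layout and the `<pad>` token string are assumptions here, not values confirmed by this repository:

```python
import json
from pathlib import Path

def patch_checkpoint(repo_dir: str) -> None:
    """Apply the pad-token patches to a local copy of an MMS-TTS checkpoint."""
    repo = Path(repo_dir)

    # config.json: pad_token_id is null in the inference-only release.
    config_path = repo / "config.json"
    config = json.loads(config_path.read_text())
    config["pad_token_id"] = 0
    config_path.write_text(json.dumps(config, indent=2))

    # tokenizer_config.json: the pad_token entry is missing entirely.
    tok_path = repo / "tokenizer_config.json"
    tok_config = json.loads(tok_path.read_text())
    tok_config["pad_token"] = "<pad>"  # assumed token string; check the vocab
    tok_path.write_text(json.dumps(tok_config, indent=2))
```

The third patch (the `preprocessor_config.json`) is a straight file copy, so it needs no code.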
Model weights are not stored here.
`_name_or_path` in `config.json` points to `facebook/mms-tts-ewe`, so `run_vits_finetuning.py` loads weights from that checkpoint at training time.
The `preprocessor_config.json` was downloaded verbatim from `ylacombe/mms-tts-eng-train`; its values are VITS architecture constants shared by all MMS-TTS languages.
| Field | Value |
|---|---|
| `feature_extractor_type` | VitsFeatureExtractor |
| `feature_size` | 80 |
| `hop_length` | 256 |
| `max_wav_value` | 32768.0 |
| `n_fft` | 1024 |
| `padding_side` | right |
| `padding_value` | 0.0 |
| `return_attention_mask` | False |
| `sampling_rate` | 16000 |
| `spec_gain` | 1 |
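To make the constants concrete: a one-second clip at a 16000 Hz sampling rate with `hop_length` 256 produces roughly 16000 / 256 ≈ 62 spectrogram frames, each with `feature_size` 80 mel bins. A minimal sketch of that arithmetic, assuming the common center-padded STFT convention (one extra frame):

```python
def spectrogram_shape(n_samples: int, hop_length: int = 256, n_mels: int = 80):
    """Approximate (frames, mel_bins) for a center-padded STFT."""
    frames = 1 + n_samples // hop_length
    return frames, n_mels

# One second of audio at 16 kHz:
print(spectrogram_shape(16000))  # (63, 80)
```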
```json
{
  "model_name_or_path": "rnjema-unima/mms-tts-ewe-baseline",
  "feature_extractor_name": "rnjema-unima/mms-tts-ewe-baseline",
  "dataset_name": "google/WaxalNLP",
  "dataset_config_name": "ewe_tts",
  "audio_column_name": "audio",
  "text_column_name": "text",
  "train_split_name": "train",
  "eval_split_name": "validation"
}
```
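Hugging Face training scripts typically consume such arguments through `HfArgumentParser`, which can read a single JSON file. A quick sanity check of the file before launching a run might look like this; the required-key list is an assumption for illustration, not the script's actual schema:

```python
import json

# Keys the training-args JSON is assumed to need (illustrative, not the
# authoritative schema of run_vits_finetuning.py).
REQUIRED_KEYS = {
    "model_name_or_path",
    "dataset_name",
    "dataset_config_name",
    "audio_column_name",
    "text_column_name",
    "train_split_name",
}

def check_training_args(path: str) -> dict:
    """Load a training-args JSON and fail fast on missing keys."""
    with open(path) as f:
        args = json.load(f)
    missing = REQUIRED_KEYS - args.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return args
```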
```python
import torch
import scipy.io.wavfile  # "import scipy" alone does not expose scipy.io.wavfile

from transformers import VitsModel, VitsTokenizer

model = VitsModel.from_pretrained("your-org/your-finetuned-model")
tokenizer = VitsTokenizer.from_pretrained("your-org/your-finetuned-model")

inputs = tokenizer("Your text in Ewe.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# VITS generates the waveform directly; write it at the model's sampling rate.
scipy.io.wavfile.write(
    "output.wav",
    model.config.sampling_rate,
    out.waveform.squeeze().numpy(),
)
```
| Field | Value |
|---|---|
| Architecture | VITS (end-to-end, no separate vocoder) |
| MMS match type | direct |
| `pad_token_id` | 0 |
| `vocab_size` | 51 |
| `is_uroman` | false |
| `sampling_rate` | 16000 Hz |
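The `pad_token_id` of 0 matters at batch time: token-id sequences are right-padded to a common length with id 0, and the attention mask marks which positions are real tokens. A minimal sketch of that batching step, independent of the tokenizer itself:

```python
def pad_batch(batch, pad_token_id=0):
    """Right-pad token-id sequences and build the matching attention mask."""
    max_len = max(len(seq) for seq in batch)
    input_ids, attention_mask = [], []
    for seq in batch:
        n_pad = max_len - len(seq)
        input_ids.append(seq + [pad_token_id] * n_pad)
        attention_mask.append([1] * len(seq) + [0] * n_pad)
    return input_ids, attention_mask

ids, mask = pad_batch([[5, 9, 3], [7]])
# ids  -> [[5, 9, 3], [7, 0, 0]]
# mask -> [[1, 1, 1], [1, 0, 0]]
```

With `pad_token_id` left as null (the unpatched state), this step is exactly where fine-tuning data collation breaks.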
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```bibtex
@article{pratap2023mms,
  title   = {Scaling Speech Technology to 1,000+ Languages},
  author  = {Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
  journal = {arXiv},
  year    = {2023}
}
```