---
configs:
  - config_name: default
    data_files:
      - split: train
        path: task_1770352558371/*.parquet
dataset_info:
  features:
    - name: audio
      dtype: binary
    - name: transcription
      dtype: string
    - name: file_name
      dtype: string
  splits:
    - name: train
      num_examples: 2450
task_categories:
  - automatic-speech-recognition
  - text-to-speech
language:
  - vi
size_categories:
  - 1K<n<10K
license: bsd
---

# Voidces

Vietnamese audio dataset with text transcriptions, intended for automatic speech recognition and text-to-speech (voice) training.

## Latest Upload: `task_1770352558371`

- Samples: 2450
- Parquet files: 5
- ZIP file: `task_1770352558371/dataset_audio.zip`
- Metadata: `task_1770352558371/metadata.json`

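To see exactly which files the latest upload contains, the repository can be listed programmatically. A minimal sketch using `huggingface_hub` (which the `datasets` library already depends on):

```python
from huggingface_hub import list_repo_files

# List all files in the dataset repo and keep only the latest upload
files = list_repo_files("Translsis/Voidces", repo_type="dataset")
print("\n".join(f for f in files if f.startswith("task_1770352558371/")))
```
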
## Dataset Structure

Files are organized by task ID:

```
task_1770352558371/
├── train-00000-of-00005.parquet
├── train-00001-of-00005.parquet
├── ...
├── dataset_audio.zip
└── metadata.json
```
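If you prefer the raw audio files over the Parquet shards, the bundled ZIP can be fetched into the local Hugging Face cache. A sketch; `hf_hub_download` returns the cached path:

```python
from huggingface_hub import hf_hub_download

# Download the bundled audio ZIP from the dataset repository
zip_path = hf_hub_download(
    repo_id="Translsis/Voidces",
    filename="task_1770352558371/dataset_audio.zip",
    repo_type="dataset",
)
print(zip_path)
```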

Each Parquet file contains:

- `audio`: Binary audio data (WAV format)
- `transcription`: Text transcription
- `file_name`: Reference file name (format: `audio/name_00001.wav`)

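To confirm the columns without pulling down the full dataset, a single shard can be read directly over the `hf://` filesystem. A sketch assuming pandas and `huggingface_hub` are installed; the heavy `audio` column is skipped by selecting only the text columns:

```python
import pandas as pd

# Peek at one shard of the latest upload via the hf:// protocol
df = pd.read_parquet(
    "hf://datasets/Translsis/Voidces/task_1770352558371/train-00000-of-00005.parquet",
    columns=["file_name", "transcription"],
)
print(df.head(3))
```
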
The `metadata.json` file contains:

- Processing parameters
- Detailed segment information
- Summary statistics
- Timestamps and file sizes

## Usage

```python
import io

import requests
import soundfile as sf
from datasets import load_dataset

# Load the dataset from the latest upload's Parquet shards
ds = load_dataset("Translsis/Voidces", data_files="task_1770352558371/*.parquet")

# Load the accompanying metadata
metadata_url = "https://huggingface.co/datasets/Translsis/Voidces/resolve/main/task_1770352558371/metadata.json"
metadata = requests.get(metadata_url).json()

# Or download the ZIP directly:
# https://huggingface.co/datasets/Translsis/Voidces/resolve/main/task_1770352558371/dataset_audio.zip

# Access one sample: the `audio` column holds raw WAV bytes
sample = ds["train"][0]
audio_bytes = sample["audio"]
audio_array, sr = sf.read(io.BytesIO(audio_bytes))
```

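To write the embedded WAV bytes back out as standalone files, the `file_name` column can be reused as the output path. A sketch continuing from the snippet above; the `exported/` directory name is just an illustrative choice:

```python
from pathlib import Path

# Export every clip as a separate WAV file, mirroring the file_name layout
out_dir = Path("exported")
for sample in ds["train"]:
    target = out_dir / sample["file_name"]          # e.g. exported/audio/name_00001.wav
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(sample["audio"])             # bytes are already a complete WAV file
```
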
## Stats

- Total samples: 2450
- Parquet files: 5
- Format: WAV (binary bytes)

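The headline numbers can be cross-checked from the loaded dataset, for example by decoding each clip and summing its duration. A sketch continuing from the usage snippet above; decoding all 2450 clips takes a moment:

```python
# Recompute the sample count and total audio duration from the Parquet data
total_seconds = 0.0
for sample in ds["train"]:
    audio_array, sr = sf.read(io.BytesIO(sample["audio"]))
    total_seconds += len(audio_array) / sr

print(f"samples: {ds['train'].num_rows}")
print(f"total audio: {total_seconds / 3600:.2f} h")
```
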
Created with an automated pipeline: Whisper → YAMNet → BS-RoFormer.