# Urdu-Munch-Lina

Processed version of zuhri025/Urdu-Munch with LinaCodec encoding.

## Dataset Structure
This dataset contains 2 batches of audio data encoded with LinaCodec:
```
Urdu-Munch-Lina/
├── .meta/                  # Metadata (not data files)
│   ├── progress.json       # Processing progress
│   └── metadata.json       # Dataset metadata
├── batch_0/
│   ├── data-*.arrow        # Actual data files
│   └── dataset_info.json
├── batch_1/
└── ...
```
Available batches: [1, 2]
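To discover the batch directories programmatically, you can list the repository contents (a minimal sketch using the `huggingface_hub` client; it assumes batch directories follow the `batch_*` naming shown above):

```python
from huggingface_hub import list_repo_files

# List repo files and collect the top-level batch_* directories
files = list_repo_files("zuhri025/Urdu-Munch-Lina", repo_type="dataset")
batch_dirs = sorted({f.split("/")[0] for f in files if f.startswith("batch_")})
print(batch_dirs)
```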
## Features

Each sample contains:

- `id`: Unique identifier
- `transcript`: Urdu text transcript
- `voice`: Voice type
- `text`: Text content
- `timestamp`: Recording timestamp
- `duration`: Audio duration (seconds)
- `speech_tokens`: LinaCodec discrete tokens
- `global_embedding`: LinaCodec global embedding (128-dim)
- `speech_tokens_length`: Token sequence length
- `processing_success`: Encoding success flag
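Since not every row is guaranteed to have encoded cleanly, filtering on `processing_success` is a reasonable first step (a minimal, self-contained sketch using the standard `datasets` API):

```python
from datasets import load_dataset

# Load one batch and keep only successfully encoded samples
batch = load_dataset("zuhri025/Urdu-Munch-Lina", data_dir="batch_1", split="train")
ok = batch.filter(lambda s: s["processing_success"])
print(f"{len(ok)}/{len(batch)} samples encoded successfully")
```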
## Usage

Load a specific batch:

```python
from datasets import load_dataset

# Load batch 1
batch = load_dataset("zuhri025/Urdu-Munch-Lina", data_dir="batch_1", split="train")
print(f"Loaded {len(batch)} samples")

# Access features
sample = batch[0]
print(f"Tokens: {sample['speech_tokens'][:10]}")
print(f"Token length: {sample['speech_tokens_length']}")
```
Load all batches:

```python
# Load entire dataset
dataset = load_dataset("zuhri025/Urdu-Munch-Lina")
```
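If the repository-level load does not give you a single combined split, an alternative is to load each batch directory and concatenate them (a minimal sketch; the directory names are assumed from the layout above and may need adjusting):

```python
from datasets import concatenate_datasets, load_dataset

# Assumed batch directory names; adjust to the actual repo layout
batch_dirs = ["batch_0", "batch_1"]
batches = [
    load_dataset("zuhri025/Urdu-Munch-Lina", data_dir=d, split="train")
    for d in batch_dirs
]
full = concatenate_datasets(batches)
print(f"Total samples: {len(full)}")
```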
Decode audio with LinaCodec:

```python
import torch
import soundfile as sf
from linacodec.codec import LinaCodec

# Load model
lina = LinaCodec()

# Get tokens and embedding from a loaded sample (see above)
tokens = torch.tensor(sample['speech_tokens'])
embedding = torch.tensor(sample['global_embedding'])

# Decode to audio (48 kHz output)
audio = lina.decode(tokens, embedding)

# Save audio
sf.write("output.wav", audio.cpu().numpy(), 48000)
```
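If you want audio back at the original 22.05 kHz rate rather than the codec's 48 kHz output, resampling is straightforward (a sketch using `torchaudio`; it assumes `audio` is the float tensor produced above):

```python
import torchaudio.functional as AF

# Downsample the 48 kHz decoder output to the source rate
audio_22k = AF.resample(audio, orig_freq=48000, new_freq=22050)
sf.write("output_22k.wav", audio_22k.cpu().numpy(), 22050)
```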
## Statistics

- Total Batches: 2
- Token Rate: ~50 tokens/second
- Embedding Dimension: 128
- Output Sample Rate: 48 kHz (LinaCodec default)
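At roughly 50 tokens per second, the token count gives a quick duration estimate that can be cross-checked against the stored `duration` field (a sketch continuing from the Usage snippet; the rate is approximate):

```python
# Estimate duration from the token count at ~50 tokens/second
est_seconds = sample["speech_tokens_length"] / 50
print(f"estimated: {est_seconds:.2f}s, stored: {sample['duration']:.2f}s")
```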
## Processing

Original audio (22.05 kHz WAV) → LinaCodec → Discrete tokens + Global embedding
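The encoding side is not documented in this card; purely as an illustration of the pipeline shape, here is a sketch that assumes the `lina` model from the decoding example and a hypothetical `encode` method symmetric to the `decode` call shown earlier (the real LinaCodec encoding API may differ):

```python
import soundfile as sf
import torch

wav, sr = sf.read("input.wav")  # 22.05 kHz source audio

# Hypothetical call: the actual LinaCodec encoding API may differ
tokens, embedding = lina.encode(torch.tensor(wav, dtype=torch.float32), sr)
```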
## License

MIT License (same as source dataset)
## Citation

```bibtex
@misc{urdu-munch-lina,
  title={Urdu-Munch-Lina: LinaCodec Encoded Urdu Speech},
  author={zuhri025},
  year={2025},
  publisher={Hugging Face}
}
```