πŸ—£οΈ Samuel y Audrey: Bilingual YouTube Transcript Corpus (ES/EN)


πŸ“Œ Dataset Summary

This dataset contains a structured, bilingual parallel corpus of 643 creator-authored YouTube transcripts from the Samuel y Audrey Spanish-language travel channel.

It provides high-fidelity, conversational dialogue in both Spanish (Primary) and English (Secondary), making it an exceptional resource for training Large Language Models (LLMs) on cross-lingual alignment, natural translation, and regional Spanish dialects (specifically Argentine and Latin American variations).

What’s Inside

  • 643 Video Records: Full metadata extracted directly from the YouTube channel.
  • Paired Translations: 637 records feature perfectly paired .es.srt and .en.srt files.
  • Polished Master (2026-02-13): This version applies light, non-destructive normalization (fixing obvious phonetic translation errors like Mercadolibbre β†’ MercadoLibre) while preserving 100% of the underlying conversational flow.

πŸ›οΈ Linguistic Value & Use Cases

Unlike formal news corpora or academic translations, this dataset captures natural, spontaneous, on-camera dialogue regarding global travel, cultural immersion, and expat logistics.

  • Cross-Lingual LLM Alignment: Train models to translate natural, spoken idioms between English and Latin American Spanish.
  • Dialect Tuning: Ground AI models in the specific vocabulary of Argentine culture (e.g., bacha, covacha) and broader South American travel.
  • Conversational AI: Improve the natural cadence and "human feel" of voice agents and text-generation models.

πŸ“‚ Canonical Files & Architecture

Each JSONL/CSV row represents a single video, containing both the raw .srt text and the cleaned conversational payloads.

  • samuel-y-audrey-youtube-transcripts-es-en.jsonl (Recommended for LLMs/RAG)
  • samuel-y-audrey-youtube-transcripts-es-en.csv (Lite version; verbatim SRT payloads omitted)
  • DATA_DICTIONARY.md (Complete schema breakdown defining all fields)
  • llms.txt (Summary index for AI crawlers)
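The JSONL file can be streamed record-by-record with nothing beyond the standard library. A minimal sketch, assuming one JSON object per line as described above; the sample record written here is illustrative only and uses a few of the documented field names, not real corpus data:

```python
import json

def iter_records(path):
    """Yield one dict per video from a JSONL file (one JSON object per line)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Illustrative record with a handful of the documented fields.
sample = {
    "video_id": "abc123",
    "lang_primary": "es",
    "script_es": "Hola a todos.",
    "script_en": "Hello everyone.",
}

# Swap this placeholder path for samuel-y-audrey-youtube-transcripts-es-en.jsonl.
path = "sample.jsonl"
with open(path, "w", encoding="utf-8") as f:
    f.write(json.dumps(sample, ensure_ascii=False) + "\n")

records = list(iter_records(path))
print(records[0]["script_es"])  # → Hola a todos.
```

For the full file, simply point `path` at the JSONL download and drop the sample-writing step; consult DATA_DICTIONARY.md for the complete field list.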

Language Ordering Rule

For every record in this dataset, the Spanish payload is presented first, followed by the English payload:

  1. script_es / srt_es
  2. script_en / srt_en

The combined text field follows the same rule: Spanish first, then English.
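The ordering rule can be sketched as a small helper that assembles a combined payload from a record. The field names match the schema above; the strings below are placeholders, not corpus text:

```python
def combine(record):
    """Assemble the combined payload: Spanish script first, then English."""
    return record["script_es"] + "\n\n" + record["script_en"]

# Placeholder record using the documented field names.
row = {
    "script_es": "Bienvenidos al video.",
    "script_en": "Welcome to the video.",
}

combined = combine(row)
print(combined)  # Spanish block prints before the English block
```

The same Spanish-first ordering applies when pairing srt_es with srt_en.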

πŸ“œ License & Commercial Use

License: Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0)

Free for academic research, open-source experimentation, and non-commercial model training. For commercial LLM training (including translation engines or conversational AI products), enterprise Knowledge Graph deployment, or bulk data licensing inquiries, please contact: nomadicsamuel@gmail.com


πŸŽ“ Citation / Attribution

If you utilize this bilingual corpus for NLP research, translation training, or model alignment, please cite the definitive Zenodo record:

Samuel & Audrey Media Network. (2026). Samuel y Audrey: Bilingual YouTube Transcript Corpus (ES/EN). Zenodo. https://doi.org/10.5281/zenodo.18665315

@dataset{samuel_audrey_youtube_transcripts_esen_2026,
  title={Samuel y Audrey: Bilingual YouTube Transcript Corpus (ES/EN)},
  author={Jeffery, Samuel and Bergner, Audrey},
  year={2026},
  publisher={Zenodo},
  doi={10.5281/zenodo.18665315},
  url={https://github.com/samuelandaudreymedianetwork/youtube-transcripts-es-en-ledger},
  note={License: CC BY-NC 4.0}
}