Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 2 new columns ({'license_risk', 'content_category'})
This happened while the json dataset builder was generating data using
hf://datasets/thepowerfuldeez/massive-yt-edu-transcriptions/train-00001.jsonl (at revision 54ad78baed4e66b5a6a8b9f9def2653973df5e3e), [/tmp/hf-datasets-cache/medium/datasets/30933230930908-config-parquet-and-info-thepowerfuldeez-massive-y-77cde49d/hub/datasets--thepowerfuldeez--massive-yt-edu-transcriptions/snapshots/54ad78baed4e66b5a6a8b9f9def2653973df5e3e/train-00000.jsonl (origin=hf://datasets/thepowerfuldeez/massive-yt-edu-transcriptions@54ad78baed4e66b5a6a8b9f9def2653973df5e3e/train-00000.jsonl), /tmp/hf-datasets-cache/medium/datasets/30933230930908-config-parquet-and-info-thepowerfuldeez-massive-y-77cde49d/hub/datasets--thepowerfuldeez--massive-yt-edu-transcriptions/snapshots/54ad78baed4e66b5a6a8b9f9def2653973df5e3e/train-00001.jsonl (origin=hf://datasets/thepowerfuldeez/massive-yt-edu-transcriptions@54ad78baed4e66b5a6a8b9f9def2653973df5e3e/train-00001.jsonl)]
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:

Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1887, in _prepare_split_single
    writer.write_table(table)
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 675, in write_table
    pa_table = table_cast(pa_table, self._schema)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
    return cast_table_to_schema(table, schema)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
video_id: string
title: string
text: string
duration_seconds: int64
source: string
url: string
priority: int64
speed_ratio: double
content_category: string
license_risk: string
to
{'video_id': Value('string'), 'title': Value('string'), 'text': Value('string'), 'duration_seconds': Value('float64'), 'source': Value('string'), 'url': Value('string'), 'priority': Value('int64'), 'speed_ratio': Value('float64')}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
    builder.download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1736, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1889, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
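One way to act on the "edit the data files to have matching columns" suggestion above is to pad every shard's records out to a shared superset of columns before regenerating the files. This is a hypothetical sketch, not the dataset author's actual fix; the `harmonize_records` helper and the toy shard contents are illustrative:

```python
def harmonize_records(shards):
    """Pad each record so every shard shares one column set.

    `shards` is a list of lists of dicts (one list per JSONL file).
    Missing columns are filled with None so a single schema fits all rows.
    """
    # Union of all column names across every shard.
    all_columns = set()
    for shard in shards:
        for record in shard:
            all_columns.update(record)
    # Fill missing keys; columns are sorted for a stable order.
    return [
        {col: record.get(col) for col in sorted(all_columns)}
        for shard in shards
        for record in shard
    ]

# Two toy shards mirroring the mismatch in the error above: only the
# second one carries the 'content_category' and 'license_risk' columns.
shard0 = [{"video_id": "a", "priority": 9}]
shard1 = [{"video_id": "b", "priority": 8,
           "content_category": "university_lecture", "license_risk": "green"}]

rows = harmonize_records([shard0, shard1])
print(rows[0]["license_risk"])  # None: padded in for the old shard
```

Alternatively, the Hub supports splitting mismatched files into separate configurations, as the error message's documentation link describes.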
| video_id (string) | title (string) | text (string) | duration_seconds (float64) | source (string) | url (string) | priority (int64) | speed_ratio (float64) |
|---|---|---|---|---|---|---|---|
| vidCX_dMCu0 | Lec 02. How to Train a Neural Net | Okay, so welcome to lecture two of deep learning. Today we're going to talk about how you actually train a neural network. And I imagine that some of this might be a review for some of you but hopefully kind of the perspective or the way we look talk about the process of training the neural network might be a slightly ... | 4,773.952 | | | 9 | 67.6 |
| 8zzfcYIELdo | Lec 15. Generative Models: Representation Learning Meets Generative Modeling | We're going to continue on today and this week with generative models. So today we'll be on variational auto-encoders. And we're going to give a perspective that generative modeling and representation learning are very tightly coupled. If you can solve the generative modeling problem, it should help you solve the repre... | 4,839.317333 | | | 9 | 63.7 |
| 7hbf4klU3ks | Lec 20. Scaling Laws | Okay, so today we're going to talk about scaling loss. And this lecture is, you know, we're at the final few weeks of the semester. So this is, we're moving into territory, which is much more kind of ongoing science. Like I'm gonna talk about recent papers and results that are empirical in nature for which we don't kno... | 2,302.037333 | | | 9 | 69.9 |
| IiHknRHA-Gk | Lec 10. Architectures: Memory | So today we're going to be doing our last of this mini series on machine learning architectures or ... | 4,407.616 | | | 9 | 70.7 |
| bxVkZ4M-hIE | Lec 04. Architectures: Grids | So today we're going to start off the first of a series of lectures on machine learning architectur... | 5,036.309333 | | | 9 | 65.9 |
| zaMcHuJwe1w | Lec 16. Generative Models: Conditional Models | OK, so welcome. Happy Halloween. Nice to see a few little costumes. OK, so today we're going to tal... | 4,891.285333 | | | 9 | 63.1 |
| -eC0-5mXHQg | Lec 13. Representation Learning: Theory | Okay, thank you everyone for coming to the lecture. So this was the example on the, do people remem... | 4,520.106667 | | | 9 | 73.7 |
| QxOzQRtd440 | Lec 11. Representation Learning: Reconstruction-Based | Good. So welcome to today's lecture. And today we're going to talk about representation learning. S... | 4,863.616 | | | 9 | 65.7 |
| RUdQMHV-7KM | Lec 19. Transfer Learning: Data | We're going to finish up our short series on transfer learning today. And the other sort of logisti... | 4,543.104 | | | 9 | 68.2 |
| hJlrAHqGOS8 | Lec 14. Generative Models: Basics | Okay, yeah, welcome everyone. So now we're going to get coming to the, um, another section of the c... | 4,877.909333 | | | 9 | 67.3 |
End of preview.
# Massive YouTube Educational Transcriptions
Large-scale educational content transcribed from YouTube using distil-whisper/distil-large-v3.5.
## Stats
- Videos: 59,355
- Characters: 1,539,022,925 (~384M tokens)
- Audio hours: 35,890
- Model: faster-whisper (CTranslate2) with distil-large-v3.5
- Hardware: 2x RTX 5090 + 2x RTX 4090 at 165-185x realtime
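As a sanity check on the numbers above (assuming, as my reading of the stats, that the 165-185x realtime figure is aggregate throughput across the four GPUs), 35,890 audio hours works out to roughly 194-218 wall-clock hours, i.e. around 8-9 days:

```python
audio_hours = 35_890
# Quoted realtime multipliers; assumed here to be aggregate throughput.
for speed in (165, 185):
    wall_hours = audio_hours / speed
    print(f"{speed}x realtime -> {wall_hours:.0f} wall-clock hours "
          f"(~{wall_hours / 24:.1f} days)")
```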
## Fields

| Field | Description |
|---|---|
| `video_id` | YouTube video ID |
| `title` | Video title |
| `text` | Full transcript |
| `duration_seconds` | Original audio duration (seconds) |
| `source` | Discovery source (channel/course/playlist) |
| `url` | YouTube URL |
| `priority` | Educational priority (9=university, 8=lecture, 7=documentary, 5=general) |
| `speed_ratio` | Transcription speed (realtime multiplier) |
| `content_category` | Content type (university_lecture, conference, individual_educator, etc.) |
| `license_risk` | License risk level (green/yellow/orange/red) |
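A minimal sketch of validating one row against this schema. The field names and types come from the table above; the `validate_row` helper and the example row's `source`/`url` values are my own illustrative placeholders, not part of the dataset tooling:

```python
# Field -> accepted Python type(s), per the schema table above.
REQUIRED_FIELDS = {
    "video_id": str, "title": str, "text": str,
    "duration_seconds": (int, float), "source": str, "url": str,
    "priority": int, "speed_ratio": (int, float),
    "content_category": str, "license_risk": str,
}

def validate_row(row):
    """Return a list of problems; an empty list means the row looks well-formed."""
    problems = [f"missing: {k}" for k in REQUIRED_FIELDS if k not in row]
    problems += [
        f"bad type for {k}"
        for k, t in REQUIRED_FIELDS.items()
        if k in row and not isinstance(row[k], t)
    ]
    return problems

# Example row (source/url values are placeholders).
row = {"video_id": "vidCX_dMCu0", "title": "Lec 02. How to Train a Neural Net",
       "text": "Okay, so welcome...", "duration_seconds": 4773.952,
       "source": "example-channel", "url": "https://example.com/watch",
       "priority": 9, "speed_ratio": 67.6,
       "content_category": "university_lecture", "license_risk": "yellow"}
print(validate_row(row))  # [] when every field is present and typed as expected
```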
## License Risk Levels

The `license_risk` field takes four values:

- green: CC-licensed or public domain (NPTEL, Khan Academy, MIT OCW, Yale OYC)
- yellow: Standard YouTube license, fair use for research
- orange: Commercial content, needs review
- red: Non-educational, excluded from transcription
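For example, to keep only the lowest-risk rows, one might filter on `license_risk`. This is a sketch over plain dicts; the field values are as documented above, but the helper itself is illustrative:

```python
# Sketch: keep CC/public-domain ('green') rows, with 'yellow' optionally
# included for research use.
ALLOWED = {"green", "yellow"}

def low_risk_only(records, allowed=ALLOWED):
    return [r for r in records if r.get("license_risk") in allowed]

records = [
    {"video_id": "a", "license_risk": "green"},
    {"video_id": "b", "license_risk": "orange"},
    {"video_id": "c", "license_risk": "yellow"},
]
print([r["video_id"] for r in low_risk_only(records)])  # ['a', 'c']
```

With the dataset loaded via the `datasets` library, the same predicate could be applied with `Dataset.filter`.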
## Methodology
- Discovery: Channel crawling, related video walking, CC-focused search
- Quality filter: 40+ reject categories, duration >= 15min, 3-tier priority scoring
- Transcription: faster-whisper CTranslate2, 1.2x audio speedup, beam=1, no VAD
- Classification: Channel name -> title regex -> priority fallback
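The "Channel name -> title regex -> priority fallback" chain might look like the following. This is purely illustrative: the lookup table, regexes, and fallback map here are made up, not the repo's actual rules:

```python
import re

# Hypothetical tables; the real scraper's rules live in its repository.
CHANNEL_MAP = {"MIT OpenCourseWare": "university_lecture"}
TITLE_PATTERNS = [
    (re.compile(r"\blec(ture)?\.?\s*\d+", re.I), "university_lecture"),
    (re.compile(r"\b(keynote|talk)\b", re.I), "conference"),
]
PRIORITY_FALLBACK = {9: "university_lecture", 8: "individual_educator"}

def classify(channel, title, priority):
    # 1) Exact channel-name lookup.
    if channel in CHANNEL_MAP:
        return CHANNEL_MAP[channel]
    # 2) Title regexes.
    for pattern, category in TITLE_PATTERNS:
        if pattern.search(title):
            return category
    # 3) Fall back to the priority score.
    return PRIORITY_FALLBACK.get(priority, "general")

print(classify("Some Channel", "Lec 02. How to Train a Neural Net", 5))
# -> 'university_lecture', via the title regex
```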
## Code
github.com/thepowerfuldeez/massive_yt_edu_scraper
## License
MIT