Dataset Summary

This is the Fineweb-Edu dataset (10BT subset) tokenized with the GPT-2 tokenizer (via tiktoken) for pre-training a GPT-2 model. The data is divided into shards (.npy files) so it can be loaded easily with a PyTorch IterableDataset.

Each .npy file can be loaded with `numpy.load('file_name.npy')`.
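For large shards (each train shard here holds roughly 100M uint16 tokens, ~200 MB on disk), `numpy.load` also supports memory mapping, which reads slices on demand instead of pulling the whole file into RAM. A minimal sketch, using a small synthetic shard as a stand-in for the real files:

```python
import os
import tempfile

import numpy as np

# Write a small synthetic shard; a real shard from this dataset
# would be used in the same way.
tmp_dir = tempfile.mkdtemp()
shard_path = os.path.join(tmp_dir, "synthetic_shard.npy")
np.save(shard_path, np.arange(1_000, dtype=np.uint16))

# mmap_mode='r' maps the file read-only; only the slices you
# index are actually read from disk.
tokens = np.load(shard_path, mmap_mode="r")

print(tokens.dtype)  # uint16
print(tokens[:5])    # [0 1 2 3 4]
```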

| Split | # Documents | # Shards | # Tokens |
|-------|------------:|---------:|---------:|
| train | 9,575,380 | 99 | 9,853,755,380 (9.85B) |
| val | 96,721 | 1 | 100,233,964 (0.1B) |
| **Total** | 9,672,101 | 100 | 9,953,989,344 (9.95B) |

Example usage

Download the dataset:

```shell
uvx hf download minhnguyent546/fineweb-edu-10BT-for-gpt2 \
  --repo-type dataset \
  --local-dir fineweb_edu_10bt
```
Load a shard:

```python
import numpy as np

tokens = np.load("fineweb_edu_10bt/train/fineweb_edu_10bt_train_000000-of-000099.npy")

print(tokens.shape)
# (100000000,)

print(tokens.dtype)
# uint16

print(tokens[:20])
# [50256    12  3363    11   428   318   257   922   640   284  4618  6868
#  8701  9403   287   262  2323    13   921   743]
```
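For pre-training, the flat token stream is typically sliced into fixed-length input/target pairs, where the target is the input shifted by one position. A minimal numpy sketch (the `get_batch` helper and the synthetic token array are illustrative, not part of this dataset):

```python
import numpy as np

def get_batch(tokens: np.ndarray, batch_size: int, block_size: int,
              rng: np.random.Generator):
    """Sample (input, target) pairs of length block_size from a flat token stream."""
    starts = rng.integers(0, len(tokens) - block_size - 1, size=batch_size)
    x = np.stack([tokens[s : s + block_size] for s in starts])
    # Targets are the same windows shifted right by one token.
    y = np.stack([tokens[s + 1 : s + 1 + block_size] for s in starts])
    return x, y

# Synthetic stand-in for a loaded shard.
tokens = np.arange(10_000, dtype=np.uint16)
x, y = get_batch(tokens, batch_size=4, block_size=8, rng=np.random.default_rng(0))

print(x.shape, y.shape)  # (4, 8) (4, 8)
```

In practice these arrays would be converted to tensors (e.g. `torch.from_numpy(x.astype(np.int64))`) before being fed to the model, since GPT-2 embedding layers expect int64 indices.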