~~Heavy amount of duplicates~~ I didn't read...
I noticed that this dataset has a lot of duplicates (up to 6-7x repetitions). For example, allenai/dolma3_mix-6T/data/common_crawl-crime_and_law-0019/shard_00000079.jsonl.zst contains every row 6 times, so 83% of the file is duplicates. The same is true for many other files. I gave it to Claude and it estimated the following:
Method: Randomly sampled 200 files (out of 63,911 total) from allenai/dolma3_mix-6T, downloaded each, counted total vs unique IDs per file.
Results:
- 45% of files (90/200) contain duplicates
- 2.80x overall inflation — only 35.7% of rows are unique (4.28M unique out of 11.99M total rows sampled)
- Worst offenders: common_crawl-*-0019 sources (5-7x), common_crawl-*-0018 (2-4x), stack_edu-* (2-3x)
- olmocr_science_pdfs-* and older common_crawl epochs (-0016, -0017) are mostly clean
Deduplicating all 63,911 shards would leave ~480B unique tokens.
The allenai/dolma3_mix-6T-1025-7B variant has the same problem, possibly worse (sources that were clean in 6T show dupes in the 7B variant).
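The headline numbers above (share of files with duplicates, inflation factor, unique-row fraction) follow directly from per-file (total, unique) counts. A minimal aggregation helper, with toy counts that are purely illustrative (not real shard data):

```python
def summarize_dupes(file_counts: list[tuple[int, int]]) -> dict[str, float]:
    """Aggregate per-file (total_rows, unique_rows) pairs into headline stats."""
    n_files = len(file_counts)
    n_with_dupes = sum(1 for total, unique in file_counts if unique < total)
    total_rows = sum(total for total, _ in file_counts)
    unique_rows = sum(unique for _, unique in file_counts)
    return {
        "files_with_dupes_pct": 100 * n_with_dupes / n_files,
        "inflation": total_rows / unique_rows,
        "unique_pct": 100 * unique_rows / total_rows,
    }

# Toy example: one clean file, one file where every row appears 6 times.
stats = summarize_dupes([(1000, 1000), (6000, 1000)])
print(stats)  # 50% of files with dupes, 3.5x inflation
```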
Verification: Independently confirmed with raw shell tools (zstd -d + jq + sort -u, no Python) that the duplicates exist in the upstream JSONL.zst files themselves — consecutive identical rows baked into the compressed files.
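The shell check looked roughly like this (exact flags are an assumption; the thread only names the tools). Demonstrated here on a tiny synthetic shard rather than the real one, so it runs without a download:

```shell
# Build a tiny synthetic .jsonl.zst with one duplicated row, standing in
# for a real shard like shard_00000079.jsonl.zst.
printf '%s\n' '{"id":"a","text":"x"}' '{"id":"a","text":"x"}' '{"id":"b","text":"y"}' \
  | zstd -q -f -o /tmp/toy_shard.jsonl.zst

# Total rows vs unique IDs, using only zstd + jq + sort -u (no Python).
total=$(zstd -dc /tmp/toy_shard.jsonl.zst | wc -l | tr -d ' ')
unique=$(zstd -dc /tmp/toy_shard.jsonl.zst | jq -r '.id' | sort -u | wc -l | tr -d ' ')
echo "total=$total unique=$unique"
```

On the real shard, pointing the same pipeline at the downloaded file would show 189,780 total rows vs 31,630 unique IDs.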
Run the following script to reproduce:
"""
uv run python preprocessing/report_upstream_dupes.py \
--dataset allenai/dolma3_mix-6T \
--file data/common_crawl-crime_and_law-0019/shard_00000079.jsonl.zst
"""
import argparse
import io
import json
from collections import Counter
import zstandard
from huggingface_hub import hf_hub_download
def main() -> None:
p = argparse.ArgumentParser(description="Report upstream duplicates in a HuggingFace dataset shard.")
p.add_argument("--dataset", required=True, help="HuggingFace dataset ID")
p.add_argument("--file", required=True, help="Path to file within the dataset repo")
p.add_argument("--id-column", default="id", help="ID column name (default: id)")
args = p.parse_args()
print(f"Downloading {args.dataset}/{args.file} ...")
local_path = hf_hub_download(args.dataset, args.file, repo_type="dataset")
print(f"Reading {local_path} ...")
dctx = zstandard.ZstdDecompressor()
rows_by_id: dict[str, list[int]] = {}
n_rows = 0
with open(local_path, "rb") as fh:
with dctx.stream_reader(fh) as reader:
for i, line in enumerate(io.TextIOWrapper(reader, encoding="utf-8")):
record = json.loads(line)
rid = str(record[args.id_column])
rows_by_id.setdefault(rid, []).append(i)
n_rows += 1
n_unique = len(rows_by_id)
occurrence_counts = Counter(len(lines) for lines in rows_by_id.values())
print(f"\nFile: {args.file}")
print(f"Total rows: {n_rows:,}")
print(f"Unique IDs: {n_unique:,}")
print(f"Duplicates: {n_rows - n_unique:,} ({100 * (1 - n_unique / n_rows):.1f}% of rows)")
print(f"\nOccurrence distribution:")
for count, n_ids in sorted(occurrence_counts.items()):
label = "unique" if count == 1 else "duplicated"
print(f" {count}x: {n_ids:,} IDs ({label})")
# Check whether duplicates are consecutive and identical
duped_ids = [(rid, lines) for rid, lines in rows_by_id.items() if len(lines) > 1]
if duped_ids:
sample_id, sample_lines = duped_ids[0]
consecutive = all(sample_lines[i + 1] - sample_lines[i] == 1 for i in range(len(sample_lines) - 1))
print(f"\nSample duplicated ID: {sample_id}")
print(f" Appears at lines: {sample_lines}")
print(f" Consecutive: {consecutive}")
if __name__ == "__main__":
main()
which will produce:

```
File: data/common_crawl-crime_and_law-0019/shard_00000079.jsonl.zst
Total rows: 189,780
Unique IDs: 31,630
Duplicates: 158,150 (83.3% of rows)

Occurrence distribution:
  6x: 31,630 IDs (duplicated)

Sample duplicated ID: 36c9ec53-cee5-41db-9fe3-2898c653d398
  Appears at lines: [0, 1, 2, 3, 4, 5]
  Consecutive: True
```
hi @jkminder, that's correct and by design! This is the dolma mix, which contains controlled upsampling of high-quality documents as described in the Olmo 3 paper. It's described in section 3.4.4, and the figure below gives a sketch of how quality-aware upsampling works:
If you want the pre-upsampling version of Dolma, allenai/dolma3_pool is the repository you are looking for.
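For intuition, quality-aware upsampling can be sketched as repeating each document in proportion to a quality score. The score-to-repeat-count mapping below is invented for illustration; the actual scheme is in section 3.4.4 of the Olmo 3 paper:

```python
def upsample_by_quality(docs: list[tuple[str, float]], max_repeats: int = 6) -> list[str]:
    """Repeat each (doc_id, quality_score) in proportion to its score in [0, 1].

    The linear score -> repeat-count mapping is a made-up placeholder, not
    the mapping used for Dolma; it only illustrates the mechanism.
    """
    out: list[str] = []
    for doc_id, score in docs:
        repeats = max(1, round(score * max_repeats))
        out.extend([doc_id] * repeats)  # repeats are consecutive, matching the shard layout observed above
    return out

print(upsample_by_quality([("low", 0.1), ("high", 1.0)]))
# ['low', 'high', 'high', 'high', 'high', 'high', 'high']
```

This also matches the observation in the report: a top-scoring document repeated 6x, with copies appearing on consecutive lines.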
Lol thanks for answering so quickly and sorry for my oversight... should have checked the paper again before using the data. Keep up the great work!
