There is too much duplication in the dataset. Not good work.
Thanks for your report. We used FineWeb v1.0, and we also found this issue. We are planning to use the latest version of FineWeb to re-process Ultra-FineWeb; I believe this will be finished soon.
Is this still present in v1.4?
Cross-lingual dedup is a harder subset of this problem that's worth flagging specifically.
Even after standard within-language deduplication, if the source corpus includes multilingual sites (common with European domains, government sites, etc.), the same content often appears in 2-3 languages. Standard hash-based dedup misses these entirely because the content hashes are different.
In our work building a Swiss web corpus (.ch domains, DE/FR/EN/IT), we ran dedup in two stages: first within-language using exact content hashing and URL normalization, then cross-language using fuzzy matching on translated content. The within-language pass handled obvious duplicates; the cross-lingual pass removed a significant additional fraction: content that was genuinely the same article in German vs. French, or the same product page localized.
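A minimal sketch of that two-stage pipeline, assuming a caller-supplied `translate(text, lang)` function (hypothetical; in practice this would be an MT system or pre-existing translations) and using shingle-set Jaccard similarity as the fuzzy matcher:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Doc:
    url: str
    lang: str
    text: str

def normalize(text: str) -> str:
    # Collapse whitespace and lowercase so trivial formatting
    # differences don't defeat exact hashing.
    return " ".join(text.lower().split())

def within_language_dedup(docs):
    """Stage 1: exact dedup per language via content hashing."""
    seen, kept = set(), []
    for d in docs:
        key = (d.lang, hashlib.sha256(normalize(d.text).encode()).hexdigest())
        if key not in seen:
            seen.add(key)
            kept.append(d)
    return kept

def shingles(text: str, n: int = 5) -> set:
    toks = normalize(text).split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def cross_lingual_dedup(docs, translate, threshold: float = 0.8):
    """Stage 2: translate everything to a pivot language, then drop
    any doc whose shingle-set Jaccard similarity with an already-kept
    doc exceeds the threshold."""
    kept, kept_shingles = [], []
    for d in docs:
        s = shingles(translate(d.text, d.lang))
        if any(len(s & k) / max(1, len(s | k)) >= threshold
               for k in kept_shingles):
            continue  # near-duplicate of something already kept
        kept.append(d)
        kept_shingles.append(s)
    return kept
```

The pairwise comparison in stage 2 is quadratic; at corpus scale you would replace it with MinHash/LSH over the same shingle sets, but the structure of the two passes is the same.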
On the v1.4 question: whether duplication persists depends on whether the underlying FineWeb source data was re-deduplicated before re-processing. If the same source documents are present in the new version (just re-scored), duplicates that weren't removed upstream will likely still be there.
One practical signal worth checking: documents sharing the same source URL or very similar URL patterns are often near-duplicates even when content hashes differ slightly (e.g., different nav elements rendered on different days). Per-document URL tracking catches a meaningful fraction of duplication that content-only dedup misses.
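A small sketch of the URL-canonicalization side of that signal, using only the standard library (the tracking-parameter list is an illustrative assumption, not exhaustive):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Common analytics/click-tracking parameters that vary between
# fetches of the same page (illustrative, not exhaustive).
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "fbclid", "gclid"}

def normalize_url(url: str) -> str:
    """Canonicalize a URL so near-duplicate fetches of the same page
    collapse to one key: lowercase scheme/host, drop 'www.' and the
    fragment, strip tracking parameters, sort the rest, and remove a
    trailing slash."""
    parts = urlsplit(url)
    host = parts.netloc.lower().removeprefix("www.")
    query = urlencode(sorted(
        (k, v) for k, v in parse_qsl(parts.query)
        if k not in TRACKING_PARAMS
    ))
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme.lower(), host, path, query, ""))
```

Grouping documents by `normalize_url` and comparing within each group is cheap, and it flags exactly the case described above: the same page re-crawled with different nav chrome, where content hashes disagree but the URL key does not.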