Inquiry regarding intra-document quality filtering

#55
by AshleyLL - opened

Hi, thank you for the incredible contribution!

I am specifically interested in how your pipeline handles intra-document noise in very long contexts. For example, if a very long scraped document contains 90% high-quality text but 10% auto-generated boilerplate or SEO spam in the middle:

Does your pipeline actively slice/mask out/clean the specific flagged chunks (intra-document removal) and stitch the remaining benign tokens back together?

Or is the primary philosophy still strictly document-level dropping (if the ratio of flagged spans exceeds a threshold, the entire document is discarded)?

Thanks!

Good question, and honestly the binary framing undersells how tricky this actually is in practice.

Slice-and-stitch sounds appealing but it breaks more than it fixes in most cases. When you surgically remove a 200-token boilerplate block from the middle of a 4000-token document and restitch, you're left with a coherence discontinuity that's invisible to downstream quality scorers but very visible to a model during training. The surrounding text often references or builds on what got cut -- so you've created a non-obvious confound where the text passes quality filters but is structurally broken.

What we found more useful is treating this as two separate problems. For document-level corpora, strict document-level dropping is the right call -- if a document has meaningful boilerplate contamination, the signal-to-noise tradeoff rarely justifies keeping it. The corpus is big enough that you're not losing information, just instances.
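As a minimal sketch of that dropping rule (assuming flagged spans arrive as `(start, end)` token offsets; the function names and the 10% threshold are illustrative, not the actual pipeline's):

```python
def flagged_ratio(doc_len, flagged_spans):
    """Fraction of tokens covered by flagged (start, end) spans."""
    covered = sum(end - start for start, end in flagged_spans)
    return covered / doc_len

def keep_document(doc_len, flagged_spans, max_ratio=0.10):
    """Document-level dropping: keep the document whole or discard it
    whole -- never excise spans and restitch."""
    return flagged_ratio(doc_len, flagged_spans) <= max_ratio

# A 4000-token document with one 200-token boilerplate block is only
# 5% contaminated, so it survives intact:
print(keep_document(4000, [(1900, 2100)]))  # True
```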

The more interesting solution is building a parallel chunk-level representation where chunks are scored and filtered independently, completely separate from the document corpus. That way you recover the high-quality 90% of your 90/10 example without stitching artifacts -- chunks stand alone and don't need contextual continuity.
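A rough sketch of that chunk-level pass (the chunk size, score threshold, and `score_fn` interface are all placeholders, not the real pipeline's components):

```python
def chunk_and_filter(tokens, score_fn, chunk_size=512, min_score=0.5):
    """Split a document into fixed-size chunks and keep each chunk that
    passes the quality score on its own. Nothing is restitched, so there
    are no coherence discontinuities -- each surviving chunk is an
    independent training instance."""
    kept = []
    for i in range(0, len(tokens), chunk_size):
        chunk = tokens[i:i + chunk_size]
        if score_fn(chunk) >= min_score:
            kept.append(chunk)
    return kept
```

With a toy scorer that measures the fraction of non-spam tokens, a document that is half boilerplate yields only its clean chunks, matching the 90/10 recovery described above.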

The other variable your example glosses over is where in the document the bad 10% sits. Footer boilerplate in the tail is basically harmless. Mid-document injection is genuinely corrosive to coherence. A span-position-weighted score can be more informative than a flat contamination ratio.
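One way to sketch such a position weighting (a triangular weight peaking at the document midpoint; the specific weighting function is an assumption for illustration, not the scheme we ship):

```python
def position_weighted_contamination(doc_len, flagged_spans):
    """Weight each flagged token by how central it sits: a token at the
    document midpoint gets weight 1.0, tokens at either edge approach 0.
    Footer boilerplate in the tail therefore contributes far less than a
    mid-document injection of the same size."""
    mid = doc_len / 2
    total = 0.0
    for start, end in flagged_spans:
        for pos in range(start, end):
            total += 1.0 - abs(pos - mid) / mid
    return total / doc_len
```

Under a flat contamination ratio, a 200-token block scores the same wherever it sits; here, the same block mid-document scores far higher than in the footer.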

What's the downstream task you're building toward? That shapes whether document-level or chunk-level is the right unit of analysis.
