LLM training

#26
by IvanTheTerriblest - opened

Can I use the data that I find in a specific language for training an LLM? Is it clean enough to fine-tune an already existing model to learn a specific language?

CulturaX is generally usable for language-specific LLM training, but "clean enough" varies considerably by language. The dataset is built on OSCAR and mC4, which apply language identification and basic deduplication but relatively light quality filtering overall. For high-resource languages with strong web presence — French, German, Spanish — you get reasonable volume and acceptable baseline quality. For lower-resource languages, coverage and quality can be substantially more variable, so it's worth sampling and inspecting before committing.
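For that "sample and inspect" step, a one-pass reservoir sample is handy when you're iterating over a streamed split of unknown length and want a uniform sample without downloading everything. This is a generic sketch, not CulturaX-specific; the function name and seed handling are my own choices:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Uniformly sample k items from an iterable of unknown length
    (e.g. a streamed dataset split) in a single pass."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            sample.append(item)
        else:
            # Replace an existing item with probability k / (i + 1),
            # which keeps the sample uniform over everything seen so far.
            j = rng.randint(0, i)
            if j < k:
                sample[j] = item
    return sample
```

You'd feed this the document iterator from a streamed load of your language split, then eyeball the sampled texts directly.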

One important distinction: the quality bar for fine-tuning is higher than for pretraining. In our work building a multilingual Swiss web corpus, noise that's tolerable at pretraining scale — boilerplate, template text, low-information-density content — can meaningfully affect a fine-tuned model, particularly at smaller scales where there's less signal to absorb the noise. A filtered subset, even a small one, often outperforms using the full unfiltered corpus for fine-tuning tasks.

Practically: sample a few thousand documents from your target language, do a quick manual review, and consider filtering on basic signals (sentence length, repetition ratio, rough perplexity estimate) before using it for fine-tuning. What language are you working with?
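The first two signals can be sketched as plain-Python heuristics. The thresholds below are illustrative assumptions, not tuned values, and the perplexity signal is omitted here because it needs a trained language model (KenLM is a common choice for that):

```python
def repetition_ratio(text: str) -> float:
    """Fraction of whitespace tokens that repeat an earlier token."""
    tokens = text.split()
    if not tokens:
        return 1.0
    return 1.0 - len(set(tokens)) / len(tokens)

def mean_sentence_length(text: str) -> float:
    """Average words per sentence, with a crude end-of-sentence split."""
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]
    if not sentences:
        return 0.0
    return sum(len(s.split()) for s in sentences) / len(sentences)

def keep_document(text: str,
                  min_sent_len: float = 5.0,
                  max_rep_ratio: float = 0.5) -> bool:
    """Toy quality gate: drop docs with very short sentences (often
    boilerplate or menus) or heavy token repetition (often spam)."""
    return (mean_sentence_length(text) >= min_sent_len
            and repetition_ratio(text) <= max_rep_ratio)
```

Running a gate like this over your sampled documents also tells you roughly what fraction of the language split would survive filtering, which is useful before committing to a fine-tuning run.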
