SozKZ GEC: Kazakh Grammar Error Correction
Part of the SozKZ GEC collection: grammar error correction models and datasets for Kazakh (Llama GEC 300M/600M, mT5 GEC, morphology models).
A grammar error correction (GEC) model for the Kazakh language. 600M parameters.
| Parameter | Value |
|---|---|
| Architecture | `LlamaForCausalLM` (decoder-only) |
| Parameters | 587M |
| Base model | `stukenov/sozkz-core-llama-600m-kk-base-v1` |
| Training data | `sozkz-corpus-synthetic-kk-gec-v1` |
| Training | 3 epochs, LR 1.5e-5, batch size 128, cosine schedule |
| Clean ratio | 80% |
| Data filter | word edit distance ≤ 2 |
| Tag | single `<грамматика>` tag (unified) |
| Format | thinking (💭 diff → corrected text) |
| Hardware | 2× RTX 4090 (vast.ai), ~1.7 h |
| License | MIT (gated access) |
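The "word edit distance ≤ 2" data filter in the table can be sketched as a word-level Levenshtein distance over the source/target pair. This is an illustrative implementation under that assumption, not the project's actual filtering code; the helper names `word_edit_distance` and `keep_pair` are hypothetical:

```python
def word_edit_distance(source: str, target: str) -> int:
    """Levenshtein distance over word tokens (insert/delete/substitute)."""
    a, b = source.split(), target.split()
    # dp[j] holds the distance between a[:i] and b[:j]
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                          # delete a[i-1]
                dp[j - 1] + 1,                      # insert b[j-1]
                prev + (a[i - 1] != b[j - 1]),      # substitute (free if equal)
            )
            prev = cur
    return dp[-1]

def keep_pair(src: str, tgt: str, max_dist: int = 2) -> bool:
    """Keep only pairs whose correction changes at most `max_dist` words."""
    return word_edit_distance(src, tgt) <= max_dist
```

A filter like this keeps the synthetic corpus focused on local grammatical edits rather than full rewrites.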
Training/inference format:

```
<грамматика> erroneous text
💭 word1→word2 (description)
→ corrected text
```
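A training example in the format above can be assembled with plain string formatting. The helper below and its signature are illustrative, not the project's actual data-generation code, and the edit description is an example annotation:

```python
def make_training_example(source, corrected, edits):
    """Build one training string: tagged source, 💭 diff line, corrected text.

    `edits` is a list of (wrong_word, fixed_word, description) triples.
    """
    diff = ", ".join(f"{wrong}→{fixed} ({desc})" for wrong, fixed, desc in edits)
    return f"<грамматика> {source}\n💭 {diff}\n→ {corrected}"

example = make_training_example(
    "Студенттар университетте оқиды.",
    "Студенттер университетте оқиды.",
    [("Студенттар", "Студенттер", "vowel harmony in the plural suffix")],
)
```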
Usage:

```python
from transformers import AutoModelForCausalLM, GPT2TokenizerFast
from huggingface_hub import hf_hub_download
import torch

model_id = "stukenov/sozkz-core-llama-600m-kk-gec-v1"

# The repo ships a raw tokenizer.json, so load it via GPT2TokenizerFast
tok_file = hf_hub_download(repo_id=model_id, filename="tokenizer.json")
tokenizer = GPT2TokenizerFast(tokenizer_file=tok_file)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

def correct(text):
    prompt = f"<грамматика> {text}\n"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=480)
    with torch.no_grad():
        out = model.generate(
            **inputs,
            # leave room for the 💭 diff line plus the corrected text
            max_new_tokens=len(inputs["input_ids"][0]) + 60,
            temperature=0.3,
            top_p=0.9,
            do_sample=True,
            repetition_penalty=1.1,
            pad_token_id=tokenizer.eos_token_id,
        )
    result = tokenizer.decode(out[0], skip_special_tokens=True)
    # keep only the text after the "→ " marker
    if "→ " in result:
        result = result.split("→ ", 1)[1]
    # truncate at the next tag or blank line
    for stop in ["\n<", "\n\n"]:
        if stop in result:
            result = result[:result.index(stop)]
    return result.strip() or text

print(correct("Студенттар университетте оқиды."))
```
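If you also want the model's word-level edits rather than just the corrected sentence, the raw completion can be parsed into structured pairs. This sketch assumes the output follows the thinking format shown earlier; `parse_output` is an illustrative helper, not part of the released code:

```python
import re

def parse_output(raw):
    """Split a raw completion into (word-pair edits, corrected text).

    Expects a 💭 line containing `wrong→fixed (description)` entries,
    followed by a `→ corrected` line.
    """
    edits, corrected = [], None
    for line in raw.splitlines():
        line = line.strip()
        if line.startswith("💭"):
            # capture every wrong→fixed pair on the diff line
            edits = re.findall(r"(\S+)→(\S+)", line)
        elif line.startswith("→ "):
            corrected = line[2:].strip()
    return edits, corrected

raw = (
    "<грамматика> Студенттар университетте оқиды.\n"
    "💭 Студенттар→Студенттер (plural suffix)\n"
    "→ Студенттер университетте оқиды."
)
```

This keeps the generation call untouched and pushes all format handling into one place, which is convenient if the output format changes between model versions.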
This model is part of the SozKZ project for Kazakh language AI.