Model Card for tartuNLP/Llammas-base-p1-GPT-4o-human-error-mix-paragraph-GEC

The user’s input text (a paragraph) is passed as a whole to the first model, M1 (this model), which outputs the corrected text.

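As a minimal usage sketch, the whole paragraph can be fed to the model in one pass with the `transformers` library. Note that the instruction template below is an assumption for illustration, not the format documented by the authors; consult the released code/data for the exact prompt.

```python
MODEL_ID = "tartuNLP/Llammas-base-p1-GPT-4o-human-error-mix-paragraph-GEC"


def build_prompt(paragraph: str) -> str:
    """Wrap the full paragraph in an instruction-style prompt.

    The template here is hypothetical; the model card does not specify one.
    """
    return (
        "### Instruction:\n"
        "Correct the errors in the following Estonian text.\n\n"
        f"### Input:\n{paragraph}\n\n"
        "### Response:\n"
    )


def correct_paragraph(paragraph: str, max_new_tokens: int = 512) -> str:
    """Generate a corrected version of the paragraph with greedy decoding."""
    # Imported lazily so the prompt helper above works without the model installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    inputs = tokenizer(build_prompt(paragraph), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Because the model is a 7B-parameter Llama derivative stored in F32, loading it in full precision requires roughly 28 GB of memory; passing `torch_dtype=torch.float16` to `from_pretrained` is a common way to halve that.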

Citation

https://aclanthology.org/2025.bea-1.72/

BibTeX:

@inproceedings{vainikko-etal-2025-paragraph,
    title = "Paragraph-level Error Correction and Explanation Generation: Case Study for {E}stonian",
    author = "Vainikko, Martin  and
      Kamarik, Taavi  and
      Kert, Karina  and
      Liin, Krista  and
      Maine, Silvia  and
      Allkivi, Kais  and
      Kaivapalu, Annekatrin  and
      Fishel, Mark",
    editor = {Kochmar, Ekaterina  and
      Alhafni, Bashar  and
      Bexte, Marie  and
      Burstein, Jill  and
      Horbach, Andrea  and
      Laarmann-Quante, Ronja  and
      Tack, Ana{\"i}s  and
      Yaneva, Victoria  and
      Yuan, Zheng},
    booktitle = "Proceedings of the 20th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2025)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.bea-1.72/",
    doi = "10.18653/v1/2025.bea-1.72",
    pages = "953--967",
    ISBN = "979-8-89176-270-1",
    abstract = "We present a case study on building task-specific models for grammatical error correction and explanation generation tailored to learners of Estonian. Our approach handles whole paragraphs instead of sentences and leverages prompting proprietary large language models for generating synthetic training data, addressing the limited availability of error correction data and the complete absence of correction justification/explanation data in Estonian. We describe the chosen approach and pipeline and provide technical details for the experimental part. The final outcome is a set of open-weight models, which are released with a permissive license along with the generated synthetic error correction and explanation data."
}
Model size: 7B parameters, tensor type F32 (safetensors)