COMBO-NLP Model for Korean

Model Description

This is a Korean-language model based on COMBO-NLP, an open-source natural language preprocessing system. It performs:

  • sentence segmentation (via LAMBO)
  • tokenisation (via LAMBO)
  • part-of-speech tagging
  • morphological analysis
  • lemmatisation
  • dependency parsing

The Korean model uses FacebookAI/xlm-roberta-base as its base encoder and is trained on UD_Korean-Kaist (UD v2.17).

Evaluation

Evaluation was performed on the UD_Korean-Kaist test split using the standard CoNLL 2018 eval script.

Two evaluation rows are reported:

  • Full-text (F1): raw text is segmented by LAMBO, then parsed and compared against gold — measures end-to-end pipeline performance including segmentation quality.
  • Aligned accuracy: accuracy on correctly segmented (aligned) tokens — measures parsing quality on tokens that were correctly identified by the segmenter.
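The difference between the two rows can be illustrated with a small sketch. This is not the official CoNLL 2018 eval script (which uses a more involved alignment procedure); the token spans and tags below are made up purely to show the idea: span F1 penalises segmentation errors, while aligned accuracy scores an attribute (here UPOS) only on tokens whose spans match the gold segmentation.

```python
# Illustrative sketch of the two evaluation modes (not the official
# CoNLL 2018 eval script; spans and tags are invented for the example).

def span_f1(gold, pred):
    """F1 over character spans: rewards/penalises segmentation quality."""
    matched = len(set(gold) & set(pred))
    precision = matched / len(pred)
    recall = matched / len(gold)
    return 2 * precision * recall / (precision + recall)

def aligned_accuracy(gold_tags, pred_tags, gold, pred):
    """Accuracy of a tag (e.g. UPOS) computed only on correctly
    segmented (aligned) tokens."""
    pred_index = {span: j for j, span in enumerate(pred)}
    aligned = [i for i, span in enumerate(gold) if span in pred_index]
    correct = sum(
        gold_tags[i] == pred_tags[pred_index[gold[i]]] for i in aligned
    )
    return correct / len(aligned)

# Gold: three tokens; the prediction merges the last two into one span.
gold = [(0, 2), (3, 5), (6, 9)]
pred = [(0, 2), (3, 9)]
gold_upos = ["NOUN", "VERB", "PUNCT"]
pred_upos = ["NOUN", "VERB"]

print(span_f1(gold, pred))                                  # hurt by the merge
print(aligned_accuracy(gold_upos, pred_upos, gold, pred))   # perfect on aligned tokens
```

Here the merge error lowers span F1, yet aligned accuracy is perfect, because the only correctly segmented token also carries the correct UPOS tag.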

Morphosyntactic Tagging

Metric            Tokens  Sentences  Words  UPOS   XPOS   UFeats  AllTags  Lemmas
Full-text (F1)     99.91      99.93  99.91  96.58  89.25   99.91    89.06   93.16
Aligned accuracy    0.00       0.00   0.00  96.67  89.33  100.00    89.14   93.24

Dependency Parsing

Metric            UAS    LAS    CLAS   MLAS   BLEX
Full-text (F1)    90.16  88.22  86.96  84.47  80.66
Aligned accuracy  90.24  88.30  87.04  84.55  80.73
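As a reminder of what these parsing metrics measure, here is a toy sketch (an invented three-token tree, not the eval script): UAS is the fraction of tokens whose predicted head is correct, while LAS additionally requires the correct dependency label.

```python
# Toy illustration of UAS vs. LAS (not the official eval script).
# Each token is (head_index, deprel); head 0 means the root.

gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "iobj")]  # wrong label on token 3

def uas(gold, pred):
    """Unlabelled attachment score: correct heads only."""
    return sum(g[0] == p[0] for g, p in zip(gold, pred)) / len(gold)

def las(gold, pred):
    """Labelled attachment score: correct head AND deprel."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

print(f"UAS = {uas(gold, pred):.2f}")  # every head is correct
print(f"LAS = {las(gold, pred):.2f}")  # one label is wrong
```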

Usage

Install the library from PyPI (ideally inside a virtual environment):

pip install combo-nlp

Install the LAMBO segmenter (only needed when passing raw text strings to COMBO):

pip install --index-url https://pypi.clarin-pl.eu/ lambo
Then load the model and parse text in Python:

from combo import COMBO

# Load a pre-trained model with corresponding Lambo segmenter
nlp = COMBO("Korean")

# Parse raw text (handles sentence splitting + tokenization)
result = nlp("빠른 갈색 여우가 게으른 개를 뛰어넘는다.")

# Inspect results
for sentence in result:
    for token in sentence:
        print(f"{token.form:<15} {token.lemma:<15} {token.upos:<8} head={token.head}  {token.deprel}")
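If you need the parse in CoNLL-U form, the token attributes printed above are enough to serialise it by hand. The sketch below uses a stand-in namedtuple rather than COMBO's own token class, so any attribute beyond form, lemma, upos, head and deprel (the ones shown in the example) is deliberately left as "_".

```python
from collections import namedtuple

# Stand-in for COMBO's token objects; only the attributes used in the
# usage example (form, lemma, upos, head, deprel) are assumed to exist.
Token = namedtuple("Token", "form lemma upos head deprel")

def to_conllu(sentence):
    """Render one parsed sentence as minimal CoNLL-U lines,
    filling the columns we do not track with '_'."""
    lines = []
    for i, tok in enumerate(sentence, start=1):
        lines.append("\t".join([
            str(i), tok.form, tok.lemma, tok.upos, "_", "_",
            str(tok.head), tok.deprel, "_", "_",
        ]))
    return "\n".join(lines)

# Hypothetical two-token sentence for demonstration.
sent = [
    Token("여우가", "여우", "NOUN", 2, "nsubj"),
    Token("뛰어넘는다", "뛰어넘다", "VERB", 0, "root"),
]
print(to_conllu(sent))
```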

For full installation and usage instructions, refer to the COMBO-NLP documentation.

License

The training data is licensed under CC BY-SA 4.0, as it is derived from a Universal Dependencies treebank. For the full license terms, refer to the LICENSE.txt file in the corresponding treebank repository.

Citation

If you use this model, please cite:

Ulewicz, M., Jabłońska, M., Klimaszewski, M., Przybyła, P., Pszenny, Ł., Rybak, P., Wiącek, M., & Wróblewska, A. (2026). COMBO-NLP Models Trained on UD v2.17. Zenodo. https://doi.org/10.5281/zenodo.19650523

@software{combo_nlp_2026,
  author    = {Ulewicz, Michał and Jabłońska, Maja and Klimaszewski, Mateusz and Przybyła, Piotr and Pszenny, Łukasz and Rybak, Piotr and Wiącek, Martyna and Wróblewska, Alina},
  title     = {{COMBO-NLP} Models Trained on {UD} v2.17},
  year      = {2026},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.19650523},
  url       = {https://doi.org/10.5281/zenodo.19650523}
}
