
latte-mc-bert-base-chinese-ws (ONNX)

This is an ONNX version of yacht/latte-mc-bert-base-chinese-ws. It was automatically converted and uploaded using this Hugging Face Space.

Usage with Transformers.js

See the pipeline documentation for token-classification: https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TokenClassificationPipeline


Multi-criteria BERT base Chinese with Lattice for Word Segmentation

This is a variant of the pre-trained BERT model. It was pre-trained on Chinese-language texts and fine-tuned for word segmentation on top of bert-base-chinese. This version of the model processes input texts at the character level, with word-level information incorporated through a lattice structure.

The scripts for the pre-training are available at tchayintr/latte-ptm-ws.

The LATTE scripts are available at tchayintr/latte-ws.

Model architecture

The model architecture is described in this paper.

Training Data

The model is trained on multiple Chinese word segmented datasets, including ctb6, sighan2005 (as, cityu, msra, pku), sighan2008 (sxu), and cnc. The datasets can be accessed from here.

Licenses

The pre-trained model is distributed under the terms of the Creative Commons Attribution-ShareAlike 4.0 license.

Acknowledgments

This model was trained with GPU servers provided by Okumura-Funakoshi NLP Group.
