Tags: Token Classification · Transformers · PyTorch · TensorBoard · bert · Generated from Trainer · Eval Results (legacy)
Instructions to use muibk/bert-finetuned-ner_TEST_HFCOURSE with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use muibk/bert-finetuned-ner_TEST_HFCOURSE with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="muibk/bert-finetuned-ner_TEST_HFCOURSE")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("muibk/bert-finetuned-ner_TEST_HFCOURSE")
model = AutoModelForTokenClassification.from_pretrained("muibk/bert-finetuned-ner_TEST_HFCOURSE")
```

- Notebooks
- Google Colab
- Kaggle
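By default, the token-classification pipeline returns one prediction per token; NER fine-tunes like this one typically emit BIO-style labels (e.g. `B-ORG`, `I-ORG`) that are merged into entity spans. The sketch below illustrates that grouping step on a hypothetical pipeline output — the sample predictions are made up for illustration, since the model's actual label set lives in its config:

```python
# Hypothetical token-level output, shaped like what
# pipe("...") returns without aggregation. Real labels
# depend on the model's id2label mapping.
sample = [
    {"word": "Hugging", "entity": "B-ORG", "start": 0, "end": 7},
    {"word": "Face", "entity": "I-ORG", "start": 8, "end": 12},
    {"word": "New", "entity": "B-LOC", "start": 25, "end": 28},
    {"word": "York", "entity": "I-LOC", "start": 29, "end": 33},
]

def group_entities(tokens):
    """Merge consecutive B-/I- tokens of the same type into spans."""
    spans = []
    for tok in tokens:
        prefix, _, etype = tok["entity"].partition("-")
        if prefix == "I" and spans and spans[-1]["type"] == etype:
            # Continuation tag: extend the span opened by the last token
            spans[-1]["end"] = tok["end"]
        else:
            # Beginning tag (or orphan continuation): open a new span
            spans.append({"type": etype, "start": tok["start"], "end": tok["end"]})
    return spans

print(group_entities(sample))
# → [{'type': 'ORG', 'start': 0, 'end': 12}, {'type': 'LOC', 'start': 25, 'end': 33}]
```

In practice the pipeline can do this merging itself by passing `aggregation_strategy="simple"` when constructing it; the sketch only shows what that aggregation amounts to.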