---
language:
- tr
library_name: transformers
license: mit
metrics:
- f1
- accuracy
- recall
tags:
- ner
- token-classification
- turkish
---

# Model Card for Turkish Named Entity Recognition Model

This model performs Named Entity Recognition (NER) for Turkish text, identifying and classifying entities such as person names, locations, and organizations. It achieved an F1 score of 0.9599 on the validation set.

## Model Details

### Model Description

This is a fine-tuned BERT model for Turkish Named Entity Recognition (NER). It is based on the `dbmdz/bert-base-turkish-uncased` model and has been trained on a custom Turkish NER dataset.

- **Developed by:** Ezel Bayraktar (ai@bayraktarlar.dev)
- **Model type:** Token Classification (Named Entity Recognition)
- **Language(s) (NLP):** Turkish
- **License:** MIT
- **Finetuned from model:** dbmdz/bert-base-turkish-uncased

## Uses

### Direct Use

This model can be used directly for Named Entity Recognition tasks in Turkish text. It identifies and labels entities such as person names (PER), locations (LOC), and organizations (ORG).
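The pipeline returns one prediction per token. Assuming the model's labels follow the common BIO tagging scheme (`B-PER`, `I-PER`, `O`, and so on — check the model's `config.json` to confirm), word-level tags can be merged into entity spans with a small helper. This is an illustrative sketch, not part of the released model code:

```python
def merge_bio(tokens):
    """Merge per-word BIO predictions into (entity_type, text) spans.

    `tokens` is a list of (word, tag) pairs, e.g. [("Ankara", "B-LOC")].
    Tags are assumed to follow the BIO scheme: B-X starts a span,
    I-X continues the current span of type X, O is outside any entity.
    """
    spans, current = [], None
    for word, tag in tokens:
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], word)
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current = (current[0], current[1] + " " + word)
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return spans

# Hypothetical word-level tags for a short Turkish sentence:
sample = [("Mustafa", "B-PER"), ("Kemal", "I-PER"), ("Ankara", "B-LOC"), ("gitti", "O")]
print(merge_bio(sample))  # [('PER', 'Mustafa Kemal'), ('LOC', 'Ankara')]
```

Note that the `transformers` pipeline can do this merging for you via `aggregation_strategy="simple"`; the helper above only illustrates what that step does.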

### Downstream Use

The model can be integrated into larger natural language processing pipelines for Turkish, such as information extraction systems, question answering, or text summarization.
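As a sketch of such integration: when the pipeline is run with `aggregation_strategy="simple"`, it returns a list of dicts with `entity_group` and `word` keys, which a downstream system might bucket by entity type. The sample output below is hypothetical, shown only to make the function self-contained:

```python
from collections import defaultdict

def bucket_entities(ner_output):
    """Group aggregated NER pipeline output by entity type.

    Expects the list-of-dicts format produced by a transformers
    pipeline("ner", ..., aggregation_strategy="simple") call, where
    each dict carries "entity_group" and "word" keys.
    """
    buckets = defaultdict(list)
    for ent in ner_output:
        buckets[ent["entity_group"]].append(ent["word"])
    return dict(buckets)

# Hypothetical aggregated output for the sentence in the usage example:
sample = [
    {"entity_group": "PER", "word": "Mustafa Kemal Atatürk", "score": 0.99},
    {"entity_group": "LOC", "word": "Samsun", "score": 0.98},
]
print(bucket_entities(sample))  # {'PER': ['Mustafa Kemal Atatürk'], 'LOC': ['Samsun']}
```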
|
| ### Out-of-Scope Use |
|
|
| <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> |
|
|
| This model should not be used for languages other than Turkish or for tasks beyond Named Entity Recognition. It may not perform well on domain-specific text or newly emerging named entities not present in the training data. |
|

## Bias, Risks, and Limitations

The model may inherit biases present in the training data or in the pre-trained BERT model it was fine-tuned from. It may not perform consistently across different domains or types of Turkish text.

### Recommendations

Users should evaluate the model's performance on their specific domain and use case. For critical applications, human review of the model's outputs is recommended.
|
| ## How to Get Started with the Model |
|
|
| Use the code below to get started with the model. |
|
|
```python
from transformers import pipeline

# Load the fine-tuned Turkish NER model and its tokenizer from the Hub.
nert = pipeline("ner", model="TerminatorPower/nerT", tokenizer="TerminatorPower/nerT")
answer = nert("Mustafa Kemal Atatürk, 19 Mayıs 1919'da Samsun'a çıktı.")
print(answer)
```