Dataset Viewer
Auto-converted to Parquet
| column | dtype | range |
|---|---|---|
| id | string | lengths 6 to 113 |
| author | string | lengths 2 to 36 |
| task_category | string | 39 classes |
| tags | sequence | lengths 1 to 4.05k |
| created_time | timestamp[s] | 2022-03-02 23:29:04 to 2025-04-07 20:40:27 |
| last_modified | timestamp[s] | 2020-05-14 13:13:12 to 2025-04-19 04:15:39 |
| downloads | int64 | 0 to 118M |
| likes | int64 | 0 to 4.86k |
| README | string | lengths 30 to 1.01M |
| matched_task | sequence | lengths 1 to 10 |
| is_bionlp | string | 3 classes |
| model_cards | string | lengths 0 to 1M |
| metadata | string | lengths 2 to 698k |
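The flattened column list above can be read as a typed schema. As a minimal sketch, the string columns and their length bounds might be encoded and checked like this in Python (the bounds are copied from the preview; the `check_row` helper is illustrative, not part of the dataset):

```python
# Length bounds for a few of the string columns, copied from the schema above.
# 1.01M is approximated as 1_010_000.
BOUNDS = {
    "id": (6, 113),
    "author": (2, 36),
    "README": (30, 1_010_000),
}

def check_row(row: dict) -> bool:
    """Check the string columns of a (partial) row against the length bounds."""
    for col, (lo, hi) in BOUNDS.items():
        value = row[col]
        if not isinstance(value, str):
            raise TypeError(f"{col}: expected str, got {type(value).__name__}")
        if not lo <= len(value) <= hi:
            raise ValueError(f"{col}: length {len(value)} outside [{lo}, {hi}]")
    return True

# One row assembled from the preview below.
row = {
    "id": "sndsabin/fake-news-classifier",
    "author": "sndsabin",
    "README": "--- license: gpl-3.0 --- **Fake News Classifier**: Text classification model to detect fake news articles!",
}
assert check_row(row)
```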
**fathyshalab/massive_play-roberta-large-v1-2-0.64**
- author: fathyshalab
- task_category: text-classification
- tags: [ "sentence-transformers", "pytorch", "roberta", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
- created_time: 2023-02-08T16:17:52
- last_modified: 2023-02-08T16:18:14
- downloads: 8
- likes: 0
- README: --- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # fathyshalab/massive_play-roberta-large-v1-2-0.64 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an ef...
- matched_task: [ "TEXT_CLASSIFICATION" ]
- is_bionlp: Non_BioNLP
- model_cards: # fathyshalab/massive_play-roberta-large-v1-2-0.64 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with con...
- metadata: {"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
**LoneStriker/gemma-7b-4.0bpw-h6-exl2**
- author: LoneStriker
- task_category: text-generation
- tags: [ "transformers", "safetensors", "gemma", "text-generation", "arxiv:2305.14314", "arxiv:2312.11805", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:...
- created_time: 2024-02-22T15:55:08
- last_modified: 2024-02-22T15:57:48
- downloads: 6
- likes: 0
- README: --- library_name: transformers license: other license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms tags: [] extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, plea...
- matched_task: [ "QUESTION_ANSWERING", "SUMMARIZATION" ]
- is_bionlp: Non_BioNLP
- model_cards: # Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the 7B base version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B...
- metadata: {"library_name": "transformers", "license": "other", "license_name": "gemma-terms-of-use", "license_link": "https://ai.google.dev/gemma/terms", "tags": [], "extra_gated_heading": "Access Gemma on Hugging Face", "extra_gated_prompt": "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage...
**ravimehta/Test**
- author: ravimehta
- task_category: summarization
- tags: [ "asteroid", "summarization", "en", "dataset:togethercomputer/RedPajama-Data-1T", "region:us" ]
- created_time: 2023-06-22T17:34:38
- last_modified: 2023-06-22T17:35:55
- downloads: 0
- likes: 0
- README: --- datasets: - togethercomputer/RedPajama-Data-1T language: - en library_name: asteroid metrics: - bleurt pipeline_tag: summarization ---
- matched_task: [ "SUMMARIZATION" ]
- is_bionlp: Non_BioNLP
- metadata: {"datasets": ["togethercomputer/RedPajama-Data-1T"], "language": ["en"], "library_name": "asteroid", "metrics": ["bleurt"], "pipeline_tag": "summarization"}
**Ahmed107/nllb200-ar-en_v11.1**
- author: Ahmed107
- task_category: translation
- tags: [ "transformers", "tensorboard", "safetensors", "m2m_100", "text2text-generation", "translation", "generated_from_trainer", "base_model:Ahmed107/nllb200-ar-en_v8", "base_model:finetune:Ahmed107/nllb200-ar-en_v8", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:u...
- created_time: 2023-12-07T06:57:33
- last_modified: 2023-12-07T08:02:05
- downloads: 7
- likes: 1
- README: --- base_model: Ahmed107/nllb200-ar-en_v8 license: cc-by-nc-4.0 metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: nllb200-ar-en_v11.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proof...
- matched_task: [ "TRANSLATION" ]
- is_bionlp: Non_BioNLP
- model_cards: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nllb200-ar-en_v11.1 This model is a fine-tuned version of [Ahmed107/nllb200-ar-en_v8](https://huggingface.co/Ahmed107/nllb200-ar...
- metadata: {"base_model": "Ahmed107/nllb200-ar-en_v8", "license": "cc-by-nc-4.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "nllb200-ar-en_v11.1", "results": []}]}
**satish860/distilbert-base-uncased-finetuned-emotion**
- author: satish860
- task_category: text-classification
- tags: [ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- created_time: 2022-04-12T09:35:34
- last_modified: 2022-08-11T12:44:06
- downloads: 47
- likes: 0
- README: --- datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion type: emotion args: default...
- matched_task: [ "TEXT_CLASSIFICATION" ]
- is_bionlp: Non_BioNLP
- model_cards: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co...
- metadata: {"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion...
**muhtasham/medium-mlm-imdb-target-tweet**
- author: muhtasham
- task_category: text-classification
- tags: [ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- created_time: 2022-12-11T07:07:40
- last_modified: 2022-12-11T07:10:48
- downloads: 114
- likes: 0
- README: --- datasets: - tweet_eval license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: medium-mlm-imdb-target-tweet results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: tweet_eval config: emotion ...
- matched_task: [ "TEXT_CLASSIFICATION" ]
- is_bionlp: Non_BioNLP
- model_cards: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medium-mlm-imdb-target-tweet This model is a fine-tuned version of [muhtasham/medium-mlm-imdb](https://huggingface.co/muhtasham/...
- metadata: {"datasets": ["tweet_eval"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "medium-mlm-imdb-target-tweet", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "...
**ericzzz/falcon-rw-1b-instruct-openorca**
- author: ericzzz
- task_category: text-generation
- tags: [ "transformers", "safetensors", "falcon", "text-generation", "text-generation-inference", "en", "dataset:Open-Orca/SlimOrca", "license:apache-2.0", "model-index", "autotrain_compatible", "region:us" ]
- created_time: 2023-11-24T20:50:32
- last_modified: 2024-03-05T00:49:13
- downloads: 2,405
- likes: 11
- README: --- datasets: - Open-Orca/SlimOrca language: - en license: apache-2.0 pipeline_tag: text-generation tags: - text-generation-inference inference: false model-index: - name: falcon-rw-1b-instruct-openorca results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning C...
- matched_task: [ "TRANSLATION" ]
- is_bionlp: Non_BioNLP
- model_cards: # 🌟 Falcon-RW-1B-Instruct-OpenOrca Falcon-RW-1B-Instruct-OpenOrca is a 1B parameter, causal decoder-only model based on [Falcon-RW-1B](https://huggingface.co/tiiuae/falcon-rw-1b) and finetuned on the [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset. **✨Check out our new conversationa...
- metadata: {"datasets": ["Open-Orca/SlimOrca"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["text-generation-inference"], "inference": false, "model-index": [{"name": "falcon-rw-1b-instruct-openorca", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset...
**fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742**
- author: fine-tuned
- task_category: feature-extraction
- tags: [ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", ...
- created_time: 2024-05-23T10:26:10
- last_modified: 2024-05-23T10:26:22
- downloads: 9
- likes: 0
- README: --- datasets: - fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742 - allenai/c4 language: - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-base-en-v1.5**](https://huggingface.c...
- matched_task: [ "TEXT_CLASSIFICATION" ]
- is_bionlp: Non_BioNLP
- model_cards: This model is a fine-tuned version of [**BAAI/bge-base-en-v1.5**](https://huggingface.co/BAAI/bge-base-en-v1.5) designed for the following use case: custom ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. ...
- metadata: {"datasets": ["fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]}
**PragmaticPete/tinyqwen**
- author: PragmaticPete
- task_category: text-generation
- tags: [ "transformers", "safetensors", "qwen2", "text-generation", "pretrained", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
- created_time: 2024-06-17T19:15:42
- last_modified: 2024-06-17T19:19:41
- downloads: 14
- likes: 0
- README: --- language: - en license: apache-2.0 pipeline_tag: text-generation tags: - pretrained --- # Qwen2-0.5B ## Introduction Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, inclu...
- matched_task: [ "QUESTION_ANSWERING", "TRANSLATION" ]
- is_bionlp: Non_BioNLP
- model_cards: # Qwen2-0.5B ## Introduction Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 0.5B Qwen2 base language model. Com...
- metadata: {"language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["pretrained"]}
**Pclanglais/Larth-Mistral**
- author: Pclanglais
- task_category: text-generation
- tags: [ "transformers", "pytorch", "mistral", "text-generation", "fr", "license:cc-by-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
- created_time: 2023-10-10T12:36:53
- last_modified: 2023-10-21T21:16:07
- downloads: 20
- likes: 5
- README: --- language: - fr library_name: transformers license: cc-by-4.0 pipeline_tag: text-generation widget: - text: 'Answer in Etruscan: Who is the father of Lars?' example_title: Lars inference: parameters: temperature: 0.7 repetition_penalty: 1.2 --- Larth-Mistral is the first LLM based on the Etruscan langua...
- matched_task: [ "TRANSLATION" ]
- is_bionlp: Non_BioNLP
- model_cards: Larth-Mistral is the first LLM based on the Etruscan language, fine-tuned on 1087 original inscriptions. Larth-Mistral supports cross-linguistic instructions (question in English, answer in Etruscan) and automated translations. The formulas to use are: * *Answer in Etruscan: [Instruction in English]* * *Translate in E...
- metadata: {"language": ["fr"], "library_name": "transformers", "license": "cc-by-4.0", "pipeline_tag": "text-generation", "widget": [{"text": "Answer in Etruscan: Who is the father of Lars?", "example_title": "Lars"}], "inference": {"parameters": {"temperature": 0.7, "repetition_penalty": 1.2}}}
**fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241**
- author: fine-tuned
- task_category: feature-extraction
- tags: [ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible",...
- created_time: 2024-05-28T18:54:18
- last_modified: 2024-05-28T18:54:49
- downloads: 6
- likes: 0
- README: --- datasets: - fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241 - allenai/c4 language: - en - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggi...
- matched_task: [ "TEXT_CLASSIFICATION" ]
- is_bionlp: Non_BioNLP
- model_cards: This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: None ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. ...
- metadata: {"datasets": ["fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241", "allenai/c4"], "language": ["en", "en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]}
**pEpOo/catastrophy8**
- author: pEpOo
- task_category: text-classification
- tags: [ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "model-index", "region:us" ]
- created_time: 2023-12-18T14:14:04
- last_modified: 2023-12-18T14:14:25
- downloads: 50
- likes: 0
- README: --- base_model: sentence-transformers/all-mpnet-base-v2 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: "Rly tragedy in MP: Some live to recount horror: “When I saw coaches...
- matched_task: [ "TEXT_CLASSIFICATION" ]
- is_bionlp: Non_BioNLP
- model_cards: # SetFit with sentence-transformers/all-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer e...
- metadata: {"base_model": "sentence-transformers/all-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "Rly tragedy in MP: Some live to recount horror: “...
**Anjaan-Khadka/Nepali-Summarization**
- author: Anjaan-Khadka
- task_category: summarization
- tags: [ "transformers", "pytorch", "mt5", "text2text-generation", "summarization", "mT5", "ne", "dataset:csebuetnlp/xlsum", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- created_time: 2023-02-23T11:44:58
- last_modified: 2023-03-17T08:45:04
- downloads: 21
- likes: 0
- README: --- datasets: - csebuetnlp/xlsum language: - ne tags: - summarization - mT5 widget: - text: [Nepali news passage: "Work on a modern model city on the banks of the Bheri river, spanning the three municipalities of Bheriganga, Gurbhakot, and Lekbesi, is progressing rapidly..."]
- matched_task: [ "SUMMARIZATION" ]
- is_bionlp: Non_BioNLP
- model_cards: # Adaptation of mT5-multilingual-XLSum for the Nepali language This repository contains an adapted version of mT5-multilingual-XLSum for a single language (Nepali). View the original [mT5-multilingual-XLSum model](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum) ## Using this model in `transformers` (tested on 4.11.0.de...
- metadata: {"datasets": ["csebuetnlp/xlsum"], "language": ["ne"], "tags": ["summarization", "mT5"], "widget": [{"text": [Nepali news passage, as above]...
**sndsabin/fake-news-classifier**
- author: sndsabin
- task_category: null
- tags: [ "license:gpl-3.0", "region:us" ]
- created_time: 2022-03-31T08:53:49
- last_modified: 2022-04-07T08:58:17
- downloads: 0
- likes: 0
- README: --- license: gpl-3.0 --- **Fake News Classifier**: Text classification model to detect fake news articles! **Dataset**: [Kaggle Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
- matched_task: [ "TEXT_CLASSIFICATION" ]
- is_bionlp: Non_BioNLP
- model_cards: **Fake News Classifier**: Text classification model to detect fake news articles! **Dataset**: [Kaggle Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
- metadata: {"license": "gpl-3.0"}
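Since the `metadata` column stores each card's YAML front matter re-serialized as a JSON string, the records can be queried after a round of `json.loads`. A small sketch using two values copied from the preview (the license summary is illustrative, not part of the dataset):

```python
import json

# Two `metadata` values copied verbatim from the rows above.
metadata_rows = [
    '{"license": "gpl-3.0"}',
    '{"datasets": ["togethercomputer/RedPajama-Data-1T"], "language": ["en"], '
    '"library_name": "asteroid", "metrics": ["bleurt"], "pipeline_tag": "summarization"}',
]

# Parse each JSON string back into a dict, then pull out the license field,
# falling back to "unknown" where the card declares none.
parsed = [json.loads(m) for m in metadata_rows]
licenses = [m.get("license", "unknown") for m in parsed]
print(licenses)  # → ['gpl-3.0', 'unknown']
```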