Dataset schema (column name, dtype, and value/length range as shown by the dataset viewer):

| Column | Dtype | Values / lengths |
|---|---|---|
| `pipeline_tag` | stringclasses | 48 values |
| `library_name` | stringclasses | 198 values |
| `text` | stringlengths | 1 to 900k |
| `metadata` | stringlengths | 2 to 438k |
| `id` | stringlengths | 5 to 122 |
| `last_modified` | null | null |
| `tags` | listlengths | 1 to 1.84k |
| `sha` | null | null |
| `created_at` | stringlengths | 25 to 25 |
| `arxiv` | listlengths | 0 to 201 |
| `languages` | listlengths | 0 to 1.83k |
| `tags_str` | stringlengths | 17 to 9.34k |
| `text_str` | stringlengths | 0 to 389k |
| `text_lists` | listlengths | 0 to 722 |
| `processed_texts` | listlengths | 1 to 723 |

Each record below lists these fields in this order; `last_modified` and `sha` are null throughout.
token-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-conll2003_pos` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [pos/conll2003](https://adapterhub.ml/explore/pos/conll2003/) dataset and includes a prediction head for tagging. This adapter was created for usage with the ...
{"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:pos/conll2003", "adapter-transformers", "token-classification"], "datasets": ["conll2003"]}
AdapterHub/roberta-base-pf-conll2003_pos
null
[ "adapter-transformers", "roberta", "token-classification", "adapterhub:pos/conll2003", "en", "dataset:conll2003", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #token-classification #adapterhub-pos/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-conll2003_pos' for roberta-base An adapter for the 'roberta-base' model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transfor...
[ "# Adapter 'AdapterHub/roberta-base-pf-conll2003_pos' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'ad...
[ "TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-pos/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-conll2003_pos' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the pos/conll2003 dataset and include...
null
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-copa` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/copa](https://adapterhub.ml/explore/comsense/copa/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the *...
{"language": ["en"], "tags": ["roberta", "adapterhub:comsense/copa", "adapter-transformers"]}
AdapterHub/roberta-base-pf-copa
null
[ "adapter-transformers", "roberta", "adapterhub:comsense/copa", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #adapterhub-comsense/copa #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-copa' for roberta-base An adapter for the 'roberta-base' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transform...
[ "# Adapter 'AdapterHub/roberta-base-pf-copa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'ada...
[ "TAGS\n#adapter-transformers #roberta #adapterhub-comsense/copa #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-copa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice.\n\nThis a...
null
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-cosmos_qa` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/cosmosqa](https://adapterhub.ml/explore/comsense/cosmosqa/) dataset and includes a prediction head for multiple choice. This adapter was created for usa...
{"language": ["en"], "tags": ["roberta", "adapterhub:comsense/cosmosqa", "adapter-transformers"], "datasets": ["cosmos_qa"]}
AdapterHub/roberta-base-pf-cosmos_qa
null
[ "adapter-transformers", "roberta", "adapterhub:comsense/cosmosqa", "en", "dataset:cosmos_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #adapterhub-comsense/cosmosqa #en #dataset-cosmos_qa #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-cosmos_qa' for roberta-base An adapter for the 'roberta-base' model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-...
[ "# Adapter 'AdapterHub/roberta-base-pf-cosmos_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, ins...
[ "TAGS\n#adapter-transformers #roberta #adapterhub-comsense/cosmosqa #en #dataset-cosmos_qa #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-cosmos_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/cosmosqa dataset and includes a prediction hea...
question-answering
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-cq` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/cq](https://adapterhub.ml/explore/qa/cq/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapter-trans...
{"language": ["en"], "tags": ["question-answering", "roberta", "adapterhub:qa/cq", "adapter-transformers"]}
AdapterHub/roberta-base-pf-cq
null
[ "adapter-transformers", "roberta", "question-answering", "adapterhub:qa/cq", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #question-answering #adapterhub-qa/cq #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-cq' for roberta-base An adapter for the 'roberta-base' model that was trained on the qa/cq dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': ...
[ "# Adapter 'AdapterHub/roberta-base-pf-cq' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/cq dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-tr...
[ "TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/cq #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-cq' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/cq dataset and includes a prediction head for question answering.\n\nT...
question-answering
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-drop` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [drop](https://huggingface.co/datasets/drop/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapter-tra...
{"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["drop"]}
AdapterHub/roberta-base-pf-drop
null
[ "adapter-transformers", "roberta", "question-answering", "en", "dataset:drop", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #question-answering #en #dataset-drop #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-drop' for roberta-base An adapter for the 'roberta-base' model that was trained on the drop dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': ...
[ "# Adapter 'AdapterHub/roberta-base-pf-drop' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the drop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-t...
[ "TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-drop #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-drop' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the drop dataset and includes a prediction head for question answering.\n\nThis...
question-answering
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-duorc_p` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapte...
{"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["duorc"]}
AdapterHub/roberta-base-pf-duorc_p
null
[ "adapter-transformers", "roberta", "question-answering", "en", "dataset:duorc", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-duorc_p' for roberta-base An adapter for the 'roberta-base' model that was trained on the duorc dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformer...
[ "# Adapter 'AdapterHub/roberta-base-pf-duorc_p' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapt...
[ "TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-duorc_p' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\...
question-answering
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-duorc_s` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapte...
{"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["duorc"]}
AdapterHub/roberta-base-pf-duorc_s
null
[ "adapter-transformers", "roberta", "question-answering", "en", "dataset:duorc", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-duorc_s' for roberta-base An adapter for the 'roberta-base' model that was trained on the duorc dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformer...
[ "# Adapter 'AdapterHub/roberta-base-pf-duorc_s' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapt...
[ "TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-duorc_s' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-emo` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [emo](https://huggingface.co/datasets/emo/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transforme...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["emo"]}
AdapterHub/roberta-base-pf-emo
null
[ "adapter-transformers", "roberta", "text-classification", "en", "dataset:emo", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #en #dataset-emo #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-emo' for roberta-base An adapter for the 'roberta-base' model that was trained on the emo dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _Not...
[ "# Adapter 'AdapterHub/roberta-base-pf-emo' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the emo dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transfo...
[ "TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-emo #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-emo' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the emo dataset and includes a prediction head for classification.\n\nThis adapt...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-emotion` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [emotion](https://huggingface.co/datasets/emotion/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapte...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["emotion"]}
AdapterHub/roberta-base-pf-emotion
null
[ "adapter-transformers", "roberta", "text-classification", "en", "dataset:emotion", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #en #dataset-emotion #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-emotion' for roberta-base An adapter for the 'roberta-base' model that was trained on the emotion dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers'...
[ "# Adapter 'AdapterHub/roberta-base-pf-emotion' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the emotion dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter...
[ "TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-emotion #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-emotion' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the emotion dataset and includes a prediction head for classification.\n...
token-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-fce_error_detection` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [ged/fce](https://adapterhub.ml/explore/ged/fce/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[ada...
{"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:ged/fce", "adapter-transformers"], "datasets": ["fce_error_detection"]}
AdapterHub/roberta-base-pf-fce_error_detection
null
[ "adapter-transformers", "roberta", "token-classification", "adapterhub:ged/fce", "en", "dataset:fce_error_detection", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #token-classification #adapterhub-ged/fce #en #dataset-fce_error_detection #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-fce_error_detection' for roberta-base An adapter for the 'roberta-base' model that was trained on the ged/fce dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transfor...
[ "# Adapter 'AdapterHub/roberta-base-pf-fce_error_detection' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ged/fce dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'ad...
[ "TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-ged/fce #en #dataset-fce_error_detection #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-fce_error_detection' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ged/fce dataset and inc...
null
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-hellaswag` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/hellaswag](https://adapterhub.ml/explore/comsense/hellaswag/) dataset and includes a prediction head for multiple choice. This adapter was created for u...
{"language": ["en"], "tags": ["roberta", "adapterhub:comsense/hellaswag", "adapter-transformers"], "datasets": ["hellaswag"]}
AdapterHub/roberta-base-pf-hellaswag
null
[ "adapter-transformers", "roberta", "adapterhub:comsense/hellaswag", "en", "dataset:hellaswag", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #adapterhub-comsense/hellaswag #en #dataset-hellaswag #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-hellaswag' for roberta-base An adapter for the 'roberta-base' model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter...
[ "# Adapter 'AdapterHub/roberta-base-pf-hellaswag' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, in...
[ "TAGS\n#adapter-transformers #roberta #adapterhub-comsense/hellaswag #en #dataset-hellaswag #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-hellaswag' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/hellaswag dataset and includes a prediction h...
question-answering
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-hotpotqa` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [hotpot_qa](https://huggingface.co/datasets/hotpot_qa/) dataset and includes a prediction head for question answering. This adapter was created for usage with the ...
{"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["hotpot_qa"]}
AdapterHub/roberta-base-pf-hotpotqa
null
[ "adapter-transformers", "roberta", "question-answering", "en", "dataset:hotpot_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #question-answering #en #dataset-hotpot_qa #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-hotpotqa' for roberta-base An adapter for the 'roberta-base' model that was trained on the hotpot_qa dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transf...
[ "# Adapter 'AdapterHub/roberta-base-pf-hotpotqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the hotpot_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install '...
[ "TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-hotpot_qa #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-hotpotqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the hotpot_qa dataset and includes a prediction head for question answ...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-imdb` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/imdb](https://adapterhub.ml/explore/sentiment/imdb/) dataset and includes a prediction head for classification. This adapter was created for usage with the ...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:sentiment/imdb", "adapter-transformers"], "datasets": ["imdb"]}
AdapterHub/roberta-base-pf-imdb
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:sentiment/imdb", "en", "dataset:imdb", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-sentiment/imdb #en #dataset-imdb #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-imdb' for roberta-base An adapter for the 'roberta-base' model that was trained on the sentiment/imdb dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transform...
[ "# Adapter 'AdapterHub/roberta-base-pf-imdb' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/imdb dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'ada...
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sentiment/imdb #en #dataset-imdb #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-imdb' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/imdb dataset and includes a predictio...
token-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-mit_movie_trivia` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [ner/mit_movie_trivia](https://adapterhub.ml/explore/ner/mit_movie_trivia/) dataset and includes a prediction head for tagging. This adapter was created fo...
{"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:ner/mit_movie_trivia", "adapter-transformers"]}
AdapterHub/roberta-base-pf-mit_movie_trivia
null
[ "adapter-transformers", "roberta", "token-classification", "adapterhub:ner/mit_movie_trivia", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #token-classification #adapterhub-ner/mit_movie_trivia #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-mit_movie_trivia' for roberta-base An adapter for the 'roberta-base' model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapte...
[ "# Adapter 'AdapterHub/roberta-base-pf-mit_movie_trivia' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, i...
[ "TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-ner/mit_movie_trivia #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-mit_movie_trivia' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ner/mit_movie_trivia dataset and includes ...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-mnli` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/multinli](https://adapterhub.ml/explore/nli/multinli/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[a...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:nli/multinli", "adapter-transformers"], "datasets": ["multi_nli"]}
AdapterHub/roberta-base-pf-mnli
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:nli/multinli", "en", "dataset:multi_nli", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-nli/multinli #en #dataset-multi_nli #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-mnli' for roberta-base An adapter for the 'roberta-base' model that was trained on the nli/multinli dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformer...
[ "# Adapter 'AdapterHub/roberta-base-pf-mnli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/multinli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapt...
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/multinli #en #dataset-multi_nli #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-mnli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/multinli dataset and includes a predicti...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-mrpc` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/mrpc](https://adapterhub.ml/explore/sts/mrpc/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-t...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:sts/mrpc", "adapter-transformers"]}
AdapterHub/roberta-base-pf-mrpc
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:sts/mrpc", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-sts/mrpc #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-mrpc' for roberta-base An adapter for the 'roberta-base' model that was trained on the sts/mrpc dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': ...
[ "# Adapter 'AdapterHub/roberta-base-pf-mrpc' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/mrpc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-t...
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sts/mrpc #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-mrpc' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/mrpc dataset and includes a prediction head for classification....
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-multirc` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rc/multirc](https://adapterhub.ml/explore/rc/multirc/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[ad...
{"language": ["en"], "tags": ["text-classification", "adapterhub:rc/multirc", "roberta", "adapter-transformers"]}
AdapterHub/roberta-base-pf-multirc
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:rc/multirc", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-rc/multirc #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-multirc' for roberta-base An adapter for the 'roberta-base' model that was trained on the rc/multirc dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transforme...
[ "# Adapter 'AdapterHub/roberta-base-pf-multirc' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/multirc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adap...
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-rc/multirc #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-multirc' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/multirc dataset and includes a prediction head for classifi...
question-answering
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-newsqa` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [newsqa](https://huggingface.co/datasets/newsqa/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapt...
{"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["newsqa"]}
AdapterHub/roberta-base-pf-newsqa
null
[ "adapter-transformers", "roberta", "question-answering", "en", "dataset:newsqa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #question-answering #en #dataset-newsqa #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-newsqa' for roberta-base An adapter for the 'roberta-base' model that was trained on the newsqa dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformer...
[ "# Adapter 'AdapterHub/roberta-base-pf-newsqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the newsqa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapt...
[ "TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-newsqa #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-newsqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the newsqa dataset and includes a prediction head for question answering.\n...
token-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-pmb_sem_tagging` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [semtag/pmb](https://adapterhub.ml/explore/semtag/pmb/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[a...
{"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:semtag/pmb", "adapter-transformers"]}
AdapterHub/roberta-base-pf-pmb_sem_tagging
null
[ "adapter-transformers", "roberta", "token-classification", "adapterhub:semtag/pmb", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #token-classification #adapterhub-semtag/pmb #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-pmb_sem_tagging' for roberta-base An adapter for the 'roberta-base' model that was trained on the semtag/pmb dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transform...
[ "# Adapter 'AdapterHub/roberta-base-pf-pmb_sem_tagging' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the semtag/pmb dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'ada...
[ "TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-semtag/pmb #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-pmb_sem_tagging' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the semtag/pmb dataset and includes a prediction head for...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-qnli` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/qnli](https://adapterhub.ml/explore/nli/qnli/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-t...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:nli/qnli", "adapter-transformers"]}
AdapterHub/roberta-base-pf-qnli
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:nli/qnli", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-nli/qnli #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-qnli' for roberta-base An adapter for the 'roberta-base' model that was trained on the nli/qnli dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': ...
[ "# Adapter 'AdapterHub/roberta-base-pf-qnli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/qnli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-t...
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/qnli #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-qnli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/qnli dataset and includes a prediction head for classification....
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-qqp` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/qqp](https://adapterhub.ml/explore/sts/qqp/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-tran...
{"language": ["en"], "tags": ["text-classification", "adapter-transformers", "adapterhub:sts/qqp", "roberta"]}
AdapterHub/roberta-base-pf-qqp
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:sts/qqp", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-sts/qqp #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-qqp' for roberta-base An adapter for the 'roberta-base' model that was trained on the sts/qqp dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': ...
[ "# Adapter 'AdapterHub/roberta-base-pf-qqp' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/qqp dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-tra...
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sts/qqp #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-qqp' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/qqp dataset and includes a prediction head for classification.\n\...
null
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-quail` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [quail](https://huggingface.co/datasets/quail/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-tra...
{"language": ["en"], "tags": ["roberta", "adapter-transformers"], "datasets": ["quail"]}
AdapterHub/roberta-base-pf-quail
null
[ "adapter-transformers", "roberta", "en", "dataset:quail", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #en #dataset-quail #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-quail' for roberta-base An adapter for the 'roberta-base' model that was trained on the quail dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': ...
[ "# Adapter 'AdapterHub/roberta-base-pf-quail' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quail dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-tr...
[ "TAGS\n#adapter-transformers #roberta #en #dataset-quail #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-quail' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quail dataset and includes a prediction head for multiple choice.\n\nThis adapter was created...
null
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-quartz` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [quartz](https://huggingface.co/datasets/quartz/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-...
{"language": ["en"], "tags": ["roberta", "adapter-transformers"], "datasets": ["quartz"]}
AdapterHub/roberta-base-pf-quartz
null
[ "adapter-transformers", "roberta", "en", "dataset:quartz", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #en #dataset-quartz #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-quartz' for roberta-base An adapter for the 'roberta-base' model that was trained on the quartz dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers':...
[ "# Adapter 'AdapterHub/roberta-base-pf-quartz' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quartz dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-...
[ "TAGS\n#adapter-transformers #roberta #en #dataset-quartz #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-quartz' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quartz dataset and includes a prediction head for multiple choice.\n\nThis adapter was crea...
question-answering
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-quoref` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [quoref](https://huggingface.co/datasets/quoref/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[adapt...
{"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["quoref"]}
AdapterHub/roberta-base-pf-quoref
null
[ "adapter-transformers", "roberta", "question-answering", "en", "dataset:quoref", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-quoref' for roberta-base An adapter for the 'roberta-base' model that was trained on the quoref dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformer...
[ "# Adapter 'AdapterHub/roberta-base-pf-quoref' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quoref dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapt...
[ "TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-quoref' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quoref dataset and includes a prediction head for question answering.\n...
null
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-race` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rc/race](https://adapterhub.ml/explore/rc/race/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-tr...
{"language": ["en"], "tags": ["adapterhub:rc/race", "roberta", "adapter-transformers"], "datasets": ["race"]}
AdapterHub/roberta-base-pf-race
null
[ "adapter-transformers", "roberta", "adapterhub:rc/race", "en", "dataset:race", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-race' for roberta-base An adapter for the 'roberta-base' model that was trained on the rc/race dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': ...
[ "# Adapter 'AdapterHub/roberta-base-pf-race' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-t...
[ "TAGS\n#adapter-transformers #roberta #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-race' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.\n\nThis...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-record` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rc/record](https://adapterhub.ml/explore/rc/record/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapt...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:rc/record", "adapter-transformers"]}
AdapterHub/roberta-base-pf-record
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:rc/record", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-record' for roberta-base An adapter for the 'roberta-base' model that was trained on the rc/record dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers...
[ "# Adapter 'AdapterHub/roberta-base-pf-record' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/record dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapte...
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-record' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/record dataset and includes a prediction head for classificat...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-rotten_tomatoes` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/rotten_tomatoes](https://adapterhub.ml/explore/sentiment/rotten_tomatoes/) dataset and includes a prediction head for classification. This adapte...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:sentiment/rotten_tomatoes", "adapter-transformers"], "datasets": ["rotten_tomatoes"]}
AdapterHub/roberta-base-pf-rotten_tomatoes
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:sentiment/rotten_tomatoes", "en", "dataset:rotten_tomatoes", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-rotten_tomatoes' for roberta-base An adapter for the 'roberta-base' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, inst...
[ "# Adapter 'AdapterHub/roberta-base-pf-rotten_tomatoes' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\...
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-rotten_tomatoes' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/rott...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-rte` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/rte](https://adapterhub.ml/explore/nli/rte/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-tran...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:nli/rte", "adapter-transformers"]}
AdapterHub/roberta-base-pf-rte
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:nli/rte", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-rte' for roberta-base An adapter for the 'roberta-base' model that was trained on the nli/rte dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': ...
[ "# Adapter 'AdapterHub/roberta-base-pf-rte' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/rte dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-tra...
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-rte' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/rte dataset and includes a prediction head for classification.\n\...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-scicite` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [scicite](https://huggingface.co/datasets/scicite/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapte...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["scicite"]}
AdapterHub/roberta-base-pf-scicite
null
[ "adapter-transformers", "roberta", "text-classification", "en", "dataset:scicite", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-scicite' for roberta-base An adapter for the 'roberta-base' model that was trained on the scicite dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers'...
[ "# Adapter 'AdapterHub/roberta-base-pf-scicite' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the scicite dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter...
[ "TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-scicite' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the scicite dataset and includes a prediction head for classification.\n...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-scitail` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/scitail](https://adapterhub.ml/explore/nli/scitail/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:nli/scitail", "adapter-transformers"], "datasets": ["scitail"]}
AdapterHub/roberta-base-pf-scitail
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:nli/scitail", "en", "dataset:scitail", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-scitail' for roberta-base An adapter for the 'roberta-base' model that was trained on the nli/scitail dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transform...
[ "# Adapter 'AdapterHub/roberta-base-pf-scitail' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/scitail dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'ada...
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-scitail' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/scitail dataset and includes a predictio...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-sick` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/sick](https://adapterhub.ml/explore/nli/sick/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-t...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers", "adapterhub:nli/sick", "text-classification"], "datasets": ["sick"]}
AdapterHub/roberta-base-pf-sick
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:nli/sick", "en", "dataset:sick", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-sick' for roberta-base An adapter for the 'roberta-base' model that was trained on the nli/sick dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': ...
[ "# Adapter 'AdapterHub/roberta-base-pf-sick' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/sick dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-t...
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-sick' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/sick dataset and includes a prediction head for c...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-snli` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [snli](https://huggingface.co/datasets/snli/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transfo...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["snli"]}
AdapterHub/roberta-base-pf-snli
null
[ "adapter-transformers", "roberta", "text-classification", "en", "dataset:snli", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-snli' for roberta-base An adapter for the 'roberta-base' model that was trained on the snli dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _N...
[ "# Adapter 'AdapterHub/roberta-base-pf-snli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the snli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-trans...
[ "TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-snli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the snli dataset and includes a prediction head for classification.\n\nThis ad...
null
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-social_i_qa` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [social_i_qa](https://huggingface.co/datasets/social_i_qa/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with ...
{"language": ["en"], "tags": ["roberta", "adapter-transformers"], "datasets": ["social_i_qa"]}
AdapterHub/roberta-base-pf-social_i_qa
null
[ "adapter-transformers", "roberta", "en", "dataset:social_i_qa", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #en #dataset-social_i_qa #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-social_i_qa' for roberta-base An adapter for the 'roberta-base' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-tran...
[ "# Adapter 'AdapterHub/roberta-base-pf-social_i_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install...
[ "TAGS\n#adapter-transformers #roberta #en #dataset-social_i_qa #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-social_i_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.\n\nThis a...
question-answering
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-squad` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/squad1](https://adapterhub.ml/explore/qa/squad1/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **[ad...
{"language": ["en"], "tags": ["question-answering", "roberta", "adapterhub:qa/squad1", "adapter-transformers"], "datasets": ["squad"]}
AdapterHub/roberta-base-pf-squad
null
[ "adapter-transformers", "roberta", "question-answering", "adapterhub:qa/squad1", "en", "dataset:squad", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-squad' for roberta-base An adapter for the 'roberta-base' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transform...
[ "# Adapter 'AdapterHub/roberta-base-pf-squad' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'ada...
[ "TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-squad' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad1 dataset and includes a prediction head fo...
question-answering
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-squad_v2` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/squad2](https://adapterhub.ml/explore/qa/squad2/) dataset and includes a prediction head for question answering. This adapter was created for usage with the **...
{"language": ["en"], "tags": ["question-answering", "roberta", "adapterhub:qa/squad2", "adapter-transformers"], "datasets": ["squad_v2"]}
AdapterHub/roberta-base-pf-squad_v2
null
[ "adapter-transformers", "roberta", "question-answering", "adapterhub:qa/squad2", "en", "dataset:squad_v2", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-squad_v2' for roberta-base An adapter for the 'roberta-base' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transf...
[ "# Adapter 'AdapterHub/roberta-base-pf-squad_v2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install '...
[ "TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-squad_v2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad2 dataset and includes a prediction h...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-sst2` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/sst-2](https://adapterhub.ml/explore/sentiment/sst-2/) dataset and includes a prediction head for classification. This adapter was created for usage with th...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:sentiment/sst-2", "adapter-transformers"]}
AdapterHub/roberta-base-pf-sst2
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:sentiment/sst-2", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-sst2' for roberta-base An adapter for the 'roberta-base' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transfor...
[ "# Adapter 'AdapterHub/roberta-base-pf-sst2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'ad...
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-sst2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for c...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-stsb` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/sts-b](https://adapterhub.ml/explore/sts/sts-b/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:sts/sts-b", "adapter-transformers"]}
AdapterHub/roberta-base-pf-stsb
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:sts/sts-b", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-stsb' for roberta-base An adapter for the 'roberta-base' model that was trained on the sts/sts-b dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers':...
[ "# Adapter 'AdapterHub/roberta-base-pf-stsb' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/sts-b dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-...
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-stsb' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/sts-b dataset and includes a prediction head for classificatio...
null
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-swag` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [swag](https://huggingface.co/datasets/swag/) dataset and includes a prediction head for multiple choice. This adapter was created for usage with the **[adapter-transf...
{"language": ["en"], "tags": ["roberta", "adapter-transformers"], "datasets": ["swag"]}
AdapterHub/roberta-base-pf-swag
null
[ "adapter-transformers", "roberta", "en", "dataset:swag", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #en #dataset-swag #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-swag' for roberta-base An adapter for the 'roberta-base' model that was trained on the swag dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _...
[ "# Adapter 'AdapterHub/roberta-base-pf-swag' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-tran...
[ "TAGS\n#adapter-transformers #roberta #en #dataset-swag #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-swag' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created fo...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-trec` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [trec](https://huggingface.co/datasets/trec/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transfo...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["trec"]}
AdapterHub/roberta-base-pf-trec
null
[ "adapter-transformers", "roberta", "text-classification", "en", "dataset:trec", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-trec' for roberta-base An adapter for the 'roberta-base' model that was trained on the trec dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _N...
[ "# Adapter 'AdapterHub/roberta-base-pf-trec' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the trec dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-trans...
[ "TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-trec' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the trec dataset and includes a prediction head for classification.\n\nThis ad...
token-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-ud_deprel` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [deprel/ud_ewt](https://adapterhub.ml/explore/deprel/ud_ewt/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[a...
{"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:deprel/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]}
AdapterHub/roberta-base-pf-ud_deprel
null
[ "adapter-transformers", "roberta", "token-classification", "adapterhub:deprel/ud_ewt", "en", "dataset:universal_dependencies", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-ud_deprel' for roberta-base An adapter for the 'roberta-base' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers...
[ "# Adapter 'AdapterHub/roberta-base-pf-ud_deprel' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapte...
[ "TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-ud_deprel' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the deprel/ud_ewt dataset an...
null
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-ud_en_ewt` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [dp/ud_ewt](https://adapterhub.ml/explore/dp/ud_ewt/) dataset and includes a prediction head for dependency parsing. This adapter was created for usage with the *...
{"language": ["en"], "tags": ["roberta", "adapterhub:dp/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]}
AdapterHub/roberta-base-pf-ud_en_ewt
null
[ "adapter-transformers", "roberta", "adapterhub:dp/ud_ewt", "en", "dataset:universal_dependencies", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #adapter-transformers #roberta #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us
Adapter 'AdapterHub/roberta-base-pf-ud\_en\_ewt' for roberta-base ================================================================= An adapter for the 'roberta-base' model that was trained on the dp/ud\_ewt dataset and includes a prediction head for dependency parsing. This adapter was created for usage with the ad...
[]
[ "TAGS\n#adapter-transformers #roberta #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us \n" ]
token-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-ud_pos` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [pos/ud_ewt](https://adapterhub.ml/explore/pos/ud_ewt/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-tr...
{"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:pos/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]}
AdapterHub/roberta-base-pf-ud_pos
null
[ "adapter-transformers", "roberta", "token-classification", "adapterhub:pos/ud_ewt", "en", "dataset:universal_dependencies", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-ud_pos' for roberta-base An adapter for the 'roberta-base' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _...
[ "# Adapter 'AdapterHub/roberta-base-pf-ud_pos' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-tran...
[ "TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-ud_pos' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the pos/ud_ewt dataset and include...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-wic` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [wordsence/wic](https://adapterhub.ml/explore/wordsence/wic/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:wordsence/wic", "adapter-transformers"]}
AdapterHub/roberta-base-pf-wic
null
[ "adapter-transformers", "roberta", "text-classification", "adapterhub:wordsence/wic", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-wic' for roberta-base An adapter for the 'roberta-base' model that was trained on the wordsence/wic dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformer...
[ "# Adapter 'AdapterHub/roberta-base-pf-wic' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wordsence/wic dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapt...
[ "TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-wic' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wordsence/wic dataset and includes a prediction head for classi...
question-answering
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-wikihop` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/wikihop](https://adapterhub.ml/explore/qa/wikihop/) dataset and includes a prediction head for question answering. This adapter was created for usage with the *...
{"language": ["en"], "tags": ["question-answering", "roberta", "adapterhub:qa/wikihop", "adapter-transformers"]}
AdapterHub/roberta-base-pf-wikihop
null
[ "adapter-transformers", "roberta", "question-answering", "adapterhub:qa/wikihop", "en", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-wikihop' for roberta-base An adapter for the 'roberta-base' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transf...
[ "# Adapter 'AdapterHub/roberta-base-pf-wikihop' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install '...
[ "TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-wikihop' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/wikihop dataset and includes a prediction head for question ...
null
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-winogrande` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/winogrande](https://adapterhub.ml/explore/comsense/winogrande/) dataset and includes a prediction head for multiple choice. This adapter was created fo...
{"language": ["en"], "tags": ["roberta", "adapterhub:comsense/winogrande", "adapter-transformers"], "datasets": ["winogrande"]}
AdapterHub/roberta-base-pf-winogrande
null
[ "adapter-transformers", "roberta", "adapterhub:comsense/winogrande", "en", "dataset:winogrande", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-winogrande' for roberta-base An adapter for the 'roberta-base' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapt...
[ "# Adapter 'AdapterHub/roberta-base-pf-winogrande' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, ...
[ "TAGS\n#adapter-transformers #roberta #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-winogrande' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/winogrande dataset and includes a predicti...
token-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-wnut_17` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [wnut_17](https://huggingface.co/datasets/wnut_17/) dataset and includes a prediction head for tagging. This adapter was created for usage with the **[adapter-trans...
{"language": ["en"], "tags": ["token-classification", "roberta", "adapter-transformers"], "datasets": ["wnut_17"]}
AdapterHub/roberta-base-pf-wnut_17
null
[ "adapter-transformers", "roberta", "token-classification", "en", "dataset:wnut_17", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-wnut_17' for roberta-base An adapter for the 'roberta-base' model that was trained on the wnut_17 dataset and includes a prediction head for tagging. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-transformers': _No...
[ "# Adapter 'AdapterHub/roberta-base-pf-wnut_17' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, install 'adapter-transf...
[ "TAGS\n#adapter-transformers #roberta #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-wnut_17' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.\n\nThis...
text-classification
adapter-transformers
# Adapter `AdapterHub/roberta-base-pf-yelp_polarity` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [yelp_polarity](https://huggingface.co/datasets/yelp_polarity/) dataset and includes a prediction head for classification. This adapter was created for usage ...
{"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["yelp_polarity"]}
AdapterHub/roberta-base-pf-yelp_polarity
null
[ "adapter-transformers", "roberta", "text-classification", "en", "dataset:yelp_polarity", "arxiv:2104.08247", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.08247" ]
[ "en" ]
TAGS #adapter-transformers #roberta #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us
# Adapter 'AdapterHub/roberta-base-pf-yelp_polarity' for roberta-base An adapter for the 'roberta-base' model that was trained on the yelp_polarity dataset and includes a prediction head for classification. This adapter was created for usage with the adapter-transformers library. ## Usage First, install 'adapter-t...
[ "# Adapter 'AdapterHub/roberta-base-pf-yelp_polarity' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the yelp_polarity dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.", "## Usage\n\nFirst, inst...
[ "TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us \n", "# Adapter 'AdapterHub/roberta-base-pf-yelp_polarity' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the yelp_polarity dataset and includes a prediction head for...
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
AdharshJolly/HarryPotterBot-Model
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
text-classification
transformers
# Model - Problem type: Binary Classification - Model ID: 12592372 ## Validation Metrics - Loss: 0.23033875226974487 - Accuracy: 0.9138655462184874 - Precision: 0.9087136929460581 - Recall: 0.9201680672268907 - AUC: 0.9690346726926065 - F1: 0.9144050104384133 ## Usage You can use cURL to access this model: ``` $...
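The cURL snippet above is truncated; below is an equivalent minimal sketch in Python, assuming the standard Hugging Face Inference API endpoint. `YOUR_API_KEY` is a placeholder, and the input text is taken from the widget example in this record.

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/Adi2K/Priv-Consent"
headers = {"Authorization": "Bearer YOUR_API_KEY"}  # YOUR_API_KEY is a placeholder

payload = {"inputs": "You can control cookies and tracking tools."}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())  # e.g. a list of {label, score} predictions
```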
{"language": "eng", "datasets": ["Adi2K/autonlp-data-Priv-Consent"], "widget": [{"text": "You can control cookies and tracking tools. To learn how to manage how we - and our vendors - use cookies and other tracking tools, please click here."}]}
Adi2K/Priv-Consent
null
[ "transformers", "pytorch", "bert", "text-classification", "eng", "dataset:Adi2K/autonlp-data-Priv-Consent", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "eng" ]
TAGS #transformers #pytorch #bert #text-classification #eng #dataset-Adi2K/autonlp-data-Priv-Consent #autotrain_compatible #endpoints_compatible #region-us
# Model - Problem type: Binary Classification - Model ID: 12592372 ## Validation Metrics - Loss: 0.23033875226974487 - Accuracy: 0.9138655462184874 - Precision: 0.9087136929460581 - Recall: 0.9201680672268907 - AUC: 0.9690346726926065 - F1: 0.9144050104384133 ## Usage You can use cURL to access this model: Or ...
[ "# Model\n\n- Problem type: Binary Classification\n- Model ID: 12592372", "## Validation Metrics\n\n- Loss: 0.23033875226974487\n- Accuracy: 0.9138655462184874\n- Precision: 0.9087136929460581\n- Recall: 0.9201680672268907\n- AUC: 0.9690346726926065\n- F1: 0.9144050104384133", "## Usage\n\nYou can use cURL to a...
[ "TAGS\n#transformers #pytorch #bert #text-classification #eng #dataset-Adi2K/autonlp-data-Priv-Consent #autotrain_compatible #endpoints_compatible #region-us \n", "# Model\n\n- Problem type: Binary Classification\n- Model ID: 12592372", "## Validation Metrics\n\n- Loss: 0.23033875226974487\n- Accuracy: 0.913865...
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wa...
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
Adil617/wav2vec2-base-timit-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
wav2vec2-base-timit-demo-colab ============================== This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 2.9314 * Wer: 1.0 Model description ----------------- More information needed Intended uses & limitat...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps...
[ "TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 3...
text-generation
transformers
# Harry Potter DialoGPT model
{"tags": ["conversational"]}
AdrianGzz/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT model
[ "# Harry Potter DialoGPT model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT model" ]
text-generation
transformers
# DialoGPT Trained on the Speech of a Game Character ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") # Let's chat for 4 lines f...
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"}
Aero/Tsubomi-Haruno
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# DialoGPT Trained on the Speech of a Game Character
[ "# DialoGPT Trained on the Speech of a Game Character" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# DialoGPT Trained on the Speech of a Game Character" ]
text-generation
null
#HAL
{"tags": ["conversational"]}
AetherIT/DialoGPT-small-Hal
null
[ "conversational", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #conversational #region-us
#HAL
[]
[ "TAGS\n#conversational #region-us \n" ]
image-classification
transformers
# Tomato_Leaf_Classifier Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nater...
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
Aftabhussain/Tomato_Leaf_Classifier
null
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
# Tomato_Leaf_Classifier Autogenerated by HuggingPics Create your own image classifier for anything by running the demo on Google Colab. Report any issues with the demo at the GitHub repo. ## Example Images #### Bacterial_spot !Bacterial_spot #### Healthy !Healthy
[ "# Tomato_Leaf_Classifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.", "## Example Images", "#### Bacterial_spot\n\n!Bacterial_spot", "#### Healthy\n\n!Healthy" ]
[ "TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "# Tomato_Leaf_Classifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport a...
text2text-generation
transformers
A monolingual T5 model for Persian trained on the OSCAR 21.09 (https://oscar-corpus.com/) corpus with a self-supervised method. A 35 GB deduplicated version of the Persian data was used for pre-training the model. It is similar to the English T5 model but just for Persian. You may need to fine-tune it on your specific task. Exa...
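The example code is cut off above; as a stand-in, here is a minimal loading sketch, assuming the standard transformers T5 classes (the checkpoint is pre-trained only, so it should be fine-tuned before downstream use, and the Persian input below is purely illustrative).

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("Ahmad/parsT5-base")
model = T5ForConditionalGeneration.from_pretrained("Ahmad/parsT5-base")

# Smoke test with a T5-style span-denoising input; the sentinel token
# <extra_id_0> marks the span the model should fill in.
inputs = tokenizer("این یک <extra_id_0> است", return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```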
{}
Ahmad/parsT5-base
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
A monolingual T5 model for Persian trained on the OSCAR 21.09 (URL) corpus with a self-supervised method. A 35 GB deduplicated version of the Persian data was used for pre-training the model. It is similar to the English T5 model but just for Persian. You may need to fine-tune it on your specific task. Example code: Steps:...
[]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text2text-generation
transformers
A checkpoint for training a Persian T5 model. This repository can be cloned and pre-training can be resumed. This model uses Flax and is intended for training. For more information and to obtain the training code, please refer to: https://github.com/puraminy/parsT5
{}
Ahmad/parsT5
null
[ "transformers", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
A checkpoint for training a Persian T5 model. This repository can be cloned and pre-training can be resumed. This model uses Flax and is intended for training. For more information and to obtain the training code, please refer to: URL
[]
[ "TAGS\n#transformers #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-classification
transformers
This is a fine-tuned BERT model on Tunisian dialect text (dataset used: AhmedBou/Tunisian-Dialect-Corpus), ready for sentiment analysis and classification tasks. LABEL_1: Positive, LABEL_2: Negative, LABEL_0: Neutral. This work is an integral component of my Master's degree thesis and represents the culmination of exte...
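To make the label mapping above concrete, here is a minimal classification sketch, assuming the standard transformers pipeline API; the input string is an illustrative placeholder for Tunisian dialect text.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="AhmedBou/TuniBert")

# Placeholder input; replace with real Tunisian dialect text
result = classifier("3jebni barcha el film hedha")[0]

# Map the raw labels per the card: LABEL_1 = Positive, LABEL_2 = Negative, LABEL_0 = Neutral
label_names = {"LABEL_1": "Positive", "LABEL_2": "Negative", "LABEL_0": "Neutral"}
print(label_names[result["label"]], result["score"])
```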
{"language": ["ar"], "license": "apache-2.0", "tags": ["sentiment analysis", "classification", "arabic dialect", "tunisian dialect"]}
AhmedBou/TuniBert
null
[ "transformers", "pytorch", "bert", "text-classification", "sentiment analysis", "classification", "arabic dialect", "tunisian dialect", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "ar" ]
TAGS #transformers #pytorch #bert #text-classification #sentiment analysis #classification #arabic dialect #tunisian dialect #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
This is a fine-tuned BERT model on Tunisian dialect text (dataset used: AhmedBou/Tunisian-Dialect-Corpus), ready for sentiment analysis and classification tasks. LABEL_1: Positive, LABEL_2: Negative, LABEL_0: Neutral. This work is an integral component of my Master's degree thesis and represents the culmination of exte...
[]
[ "TAGS\n#transformers #pytorch #bert #text-classification #sentiment analysis #classification #arabic dialect #tunisian dialect #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
text2text-generation
transformers
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/mariancg-a-code-generation-transformer-model/code-generation-on-conala)](https://paperswithcode.com/sota/code-generation-on-conala?p=mariancg-a-code-generation-transformer-model) # MarianCG: a code generation transformer...
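Given the widget prompts in this record, here is a minimal generation sketch, assuming the model loads through the standard Marian seq2seq classes in transformers:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("AhmedSSoliman/MarianCG-CoNaLa")
model = AutoModelForSeq2SeqLM.from_pretrained("AhmedSSoliman/MarianCG-CoNaLa")

# One of the widget prompts from this record: a natural-language intent to turn into code
nl_intent = "check if all elements in list `mylist` are identical"
inputs = tokenizer(nl_intent, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```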
{"widget": [{"text": "create array containing the maximum value of respective elements of array `[2, 3, 4]` and array `[1, 5, 2]"}, {"text": "check if all elements in list `mylist` are identical"}, {"text": "enable debug mode on flask application `app`"}, {"text": "getting the length of `my_tuple`"}, {"text": "find all...
AhmedSSoliman/MarianCG-CoNaLa
null
[ "transformers", "pytorch", "marian", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us
![PWC](URL) # MarianCG: a code generation transformer model inspired by machine translation This model aims to improve the solving of the code generation problem by implementing a transformer model that produces highly accurate results. We implemented the MarianCG transformer model, a code generation model that c...
[ "# MarianCG: a code generation transformer model inspired by machine translation\nThis model aims to improve the solving of the code generation problem by implementing a transformer model that produces highly accurate results. We implemented the MarianCG transformer model, a code generation model that can be abl...
[ "TAGS\n#transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# MarianCG: a code generation transformer model inspired by machine translation\nThis model aims to improve the solving of the code generation problem by implementing a transformer model...
text-generation
transformers
# Back to the Future DialoGPT Model
{"tags": ["conversational"]}
AiPorter/DialoGPT-small-Back_to_the_future
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Back to the Future DialoGPT Model
[ "# Back to the Future DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Back to the Future DialoGPT Model" ]
text-generation
transformers
# Rick DialoGPT Model
{"tags": ["conversational"]}
Aibox/DialoGPT-small-rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Rick DialoGPT Model
[ "# Rick DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Rick DialoGPT Model" ]
null
null
Trained on Stephen King's top 50 books as .txt files.
{}
Aidan8756/stephenKingModel
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #region-us
Trained on Stephen King's top 50 books as .txt files.
[]
[ "TAGS\n#region-us \n" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-bashkir-cv7_opt This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingfa...
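To try the model on audio, here is a minimal transcription sketch with the standard transformers ASR pipeline; `sample.wav` is a placeholder path to a Bashkir speech recording.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_opt",
)

# "sample.wav" is a placeholder; wav2vec2 models expect 16 kHz mono audio
print(asr("sample.wav")["text"])
```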
{"language": ["ba"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_7_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-bashkir-cv7_opt",...
AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_opt
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "ba", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "model-index", "end...
null
2022-03-02T23:29:04+00:00
[]
[ "ba" ]
TAGS #transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #ba #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
# wav2vec2-large-xls-r-300m-bashkir-cv7_opt This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BA dataset. It achieves the following results on the evaluation set: - Training Loss: 0.268400 - Validation Loss: 0.088252 - WER without LM: 0.085588 - WER with...
[ "# wav2vec2-large-xls-r-300m-bashkir-cv7_opt\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BA dataset.\nIt achieves the following results on the evaluation set:\n- Training Loss: 0.268400\n- Validation Loss: 0.088252\n- WER without LM: 0.085588\n-...
[ "TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_7_0 #robust-speech-event #ba #dataset-mozilla-foundation/common_voice_7_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n", ...
text2text-generation
transformers
You can use this model with simpletransformers. ``` !pip install simpletransformers from simpletransformers.t5 import T5Model model = T5Model("mt5", "AimB/mT5-en-kr-natural") print(model.predict(["I feel good today"])) print(model.predict(["우리집 고양이는 세상에서 제일 귀엽습니다"])) ```
{}
AimB/mT5-en-kr-natural
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
You can use this model with simpletransformers.
[]
[ "TAGS\n#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 35248482 - CO2 Emissions (in grams): 7.989144645413398 ## Validation Metrics - Loss: 0.13783401250839233 - Accuracy: 0.9728654124457308 - Macro F1: 0.949537871674076 - Micro F1: 0.9728654124457308 - Weighted F1: 0.9732422812610365 ...
{"language": "en", "tags": "autonlp", "datasets": ["Aimendo/autonlp-data-triage"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 7.989144645413398}
Aimendo/autonlp-triage-35248482
null
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:Aimendo/autonlp-data-triage", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #text-classification #autonlp #en #dataset-Aimendo/autonlp-data-triage #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 35248482 - CO2 Emissions (in grams): 7.989144645413398 ## Validation Metrics - Loss: 0.13783401250839233 - Accuracy: 0.9728654124457308 - Macro F1: 0.949537871674076 - Micro F1: 0.9728654124457308 - Weighted F1: 0.9732422812610365 ...
[ "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 35248482\n- CO2 Emissions (in grams): 7.989144645413398", "## Validation Metrics\n\n- Loss: 0.13783401250839233\n- Accuracy: 0.9728654124457308\n- Macro F1: 0.949537871674076\n- Micro F1: 0.9728654124457308\n- Weighted F1: 0...
[ "TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Aimendo/autonlp-data-triage #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 35248482\n- CO2 Emissions (in grams): 7...
text-classification
transformers
# Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 530014983 - CO2 Emissions (in grams): 55.10196329868386 ## Validation Metrics - Loss: 0.23171618580818176 - Accuracy: 0.9298837645294338 - Precision: 0.9314414866901055 - Recall: 0.9279459594696022 - AUC: 0.979447403984557 - F1: 0.92969...
{"language": "en", "tags": "autonlp", "datasets": ["Ajay191191/autonlp-data-Test"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 55.10196329868386}
Ajay191191/autonlp-Test-530014983
null
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:Ajay191191/autonlp-data-Test", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #bert #text-classification #autonlp #en #dataset-Ajay191191/autonlp-data-Test #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 530014983 - CO2 Emissions (in grams): 55.10196329868386 ## Validation Metrics - Loss: 0.23171618580818176 - Accuracy: 0.9298837645294338 - Precision: 0.9314414866901055 - Recall: 0.9279459594696022 - AUC: 0.979447403984557 - F1: 0.92969...
[ "# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 530014983\n- CO2 Emissions (in grams): 55.10196329868386", "## Validation Metrics\n\n- Loss: 0.23171618580818176\n- Accuracy: 0.9298837645294338\n- Precision: 0.9314414866901055\n- Recall: 0.9279459594696022\n- AUC: 0.97944740398...
[ "TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Ajay191191/autonlp-data-Test #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 530014983\n- CO2 Emissions (in grams): 55.1...
text2text-generation
transformers
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 16122692 ## Validation Metrics - Loss: 1.1877621412277222 - Rouge1: 42.0713 - Rouge2: 23.3043 - RougeL: 37.3755 - RougeLsum: 37.8961 - Gen Len: 60.7117 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bea...
{"language": "unk", "tags": "autonlp", "datasets": ["Ajaykannan6/autonlp-data-manthan"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
Ajaykannan6/autonlp-manthan-16122692
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autonlp", "unk", "dataset:Ajaykannan6/autonlp-data-manthan", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "unk" ]
TAGS #transformers #pytorch #bart #text2text-generation #autonlp #unk #dataset-Ajaykannan6/autonlp-data-manthan #autotrain_compatible #endpoints_compatible #region-us
# Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 16122692 ## Validation Metrics - Loss: 1.1877621412277222 - Rouge1: 42.0713 - Rouge2: 23.3043 - RougeL: 37.3755 - RougeLsum: 37.8961 - Gen Len: 60.7117 ## Usage You can use cURL to access this model:
[ "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 16122692", "## Validation Metrics\n\n- Loss: 1.1877621412277222\n- Rouge1: 42.0713\n- Rouge2: 23.3043\n- RougeL: 37.3755\n- RougeLsum: 37.8961\n- Gen Len: 60.7117", "## Usage\n\nYou can use cURL to access this model:" ]
[ "TAGS\n#transformers #pytorch #bart #text2text-generation #autonlp #unk #dataset-Ajaykannan6/autonlp-data-manthan #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 16122692", "## Validation Metrics\n\n- Loss: 1.18776214122772...
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-base-v2-finetuned-squad This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on ...
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2"], "model-index": [{"name": "albert-base-v2-finetuned-squad", "results": []}]}
Akari/albert-base-v2-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "albert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #albert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us
albert-base-v2-finetuned-squad ============================== This model is a fine-tuned version of albert-base-v2 on the squad\_v2 dataset. It achieves the following results on the evaluation set: * Loss: 0.9492 Model description ----------------- More information needed Intended uses & limitations ---------...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #albert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_si...
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-wikitext2 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the...
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-cased-wikitext2", "results": []}]}
Akash7897/bert-base-cased-wikitext2
null
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert-base-cased-wikitext2 ========================= This model is a fine-tuned version of bert-base-cased on the None dataset. It achieves the following results on the evaluation set: * Loss: 6.8544 Model description ----------------- More information needed Intended uses & limitations -----------------------...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n...
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di...
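The hyperparameters listed later in this record can be reconstructed as a transformers TrainingArguments sketch; this is illustrative rather than the author's exact script, and `output_dir` is a placeholder (Adam betas=(0.9, 0.999) and epsilon=1e-08 match the library defaults, so they need no override).

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```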
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "ar...
Akash7897/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-cola ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 1.0789 * Matthews Correlation: 0.5222 Model description ----------------- More informa...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning...
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di...
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-sst2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "sst2"}...
Akash7897/distilbert-base-uncased-finetuned-sst2
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-sst2 ====================================== This model is a fine-tuned version of distilbert-base-uncased on the glue dataset. It achieves the following results on the evaluation set: * Loss: 0.3010 * Accuracy: 0.9037 Model description ----------------- More information needed ...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning...
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the fo...
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-wikitext2", "results": []}]}
Akash7897/gpt2-wikitext2
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
gpt2-wikitext2 ============== This model is a fine-tuned version of gpt2 on the None dataset. It achieves the following results on the evaluation set: * Loss: 6.1079 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed ...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n*...
automatic-speech-recognition
transformers
# Akashpb13/Central_kurdish_xlsr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - hu dataset. It achieves the following results on the evaluation set (which is 10 percent of the train set merged with inv...
{"language": ["ckb"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ckb", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Central...
Akashpb13/Central_kurdish_xlsr
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ckb", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", ...
null
2022-03-02T23:29:04+00:00
[]
[ "ckb" ]
TAGS #transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ckb #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #regi...
Akashpb13/Central\_kurdish\_xlsr ================================ This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - hu dataset. It achieves the following results on the evaluation set (which is 10 percent of the train set merged with invalidated data, repo...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000095637994662983496\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_...
[ "TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ckb #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space...
automatic-speech-recognition
transformers
# Akashpb13/Galician_xlsr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - hu dataset. It achieves the following results on the evaluation set (which is 10 percent of the train set merged with invali...
{"language": ["gl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "gl", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Galician_...
Akashpb13/Galician_xlsr
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "gl", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "...
null
2022-03-02T23:29:04+00:00
[]
[ "gl" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #gl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
Akashpb13/Galician\_xlsr ======================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - hu dataset. It achieves the following results on the evaluation set (which is 10 percent of the train set merged with invalidated data, reported, other,...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps:...
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #gl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### T...
automatic-speech-recognition
transformers
# Akashpb13/Hausa_xlsr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m). It achieves the following results on the evaluation set (which is 10 percent of the train set merged with invalidated data, reported, other, and dev datasets): - Loss: 0.2...
{"language": ["ha"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "ha", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Hausa_xls...
Akashpb13/Hausa_xlsr
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "ha", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "...
null
2022-03-02T23:29:04+00:00
[]
[ "ha" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #ha #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
Akashpb13/Hausa\_xlsr ===================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m. It achieves the following results on the evaluation set (which is 10 percent of the train set merged with invalidated data, reported, other, and dev datasets): * Loss: 0.275118 * Wer: 0.329955 Model des...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps:...
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #ha #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n...
automatic-speech-recognition
transformers
# Akashpb13/Kabyle_xlsr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - hu dataset. It achieves the following results on the evaluation set (which is 10 percent of the train set merged with dev data...
{"language": ["kab"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "sw", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Kabyle_x...
Akashpb13/Kabyle_xlsr
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "sw", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "kab", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-...
null
2022-03-02T23:29:04+00:00
[]
[ "kab" ]
TAGS #transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #sw #robust-speech-event #model_for_talk #hf-asr-leaderboard #kab #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
Akashpb13/Kabyle\_xlsr ====================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - hu dataset. It achieves the following results on the evaluation set (which is 10 percent of the train set merged with dev datasets): * Loss: 0.159032 * We...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 8\n* seed: 13\n* gradient\\_accumulation\\_steps: 4\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* ...
[ "TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #sw #robust-speech-event #model_for_talk #hf-asr-leaderboard #kab #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #regio...
automatic-speech-recognition
transformers
# Akashpb13/Swahili_xlsr This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - hu dataset. It achieves the following results on the evaluation set (which is 10 percent of the train set merged with dev dat...
{"language": ["sw"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "sw"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Swahili_x...
Akashpb13/Swahili_xlsr
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "sw", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "...
null
2022-03-02T23:29:04+00:00
[]
[ "sw" ]
TAGS #transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sw #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #regio...
Akashpb13/Swahili\_xlsr ======================= This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - hu dataset. It achieves the following results on the evaluation set (which is 10 percent of the train set merged with dev datasets): * Loss: 0.159032 * ...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps:...
[ "TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sw #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space ...
automatic-speech-recognition
transformers
# Akashpb13/xlsr_hungarian_new This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - hu dataset. It achieves the following results on the evaluation set (which is 10 percent of the train set merged with inval...
{"language": ["hu"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "hu", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/xlsr_hung...
Akashpb13/xlsr_hungarian_new
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "hu", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-2.0", "model-index", "...
null
2022-03-02T23:29:04+00:00
[]
[ "hu" ]
TAGS #transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #hu #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
Akashpb13/xlsr\_hungarian\_new ============================== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - hu dataset. It achieves the following results on the evaluation set (which is 10 percent of the train set merged with invalidated data, reported...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000095637994662983496\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 16\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\...
[ "TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #hu #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### T...
automatic-speech-recognition
transformers
# Akashpb13/xlsr_kurmanji_kurdish This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - hu dataset. It achieves the following results on the evaluation set (which is 10 percent of the train set merged wit...
{"language": ["kmr", "ku"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "kmr", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/x...
Akashpb13/xlsr_kurmanji_kurdish
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "kmr", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "ku", "dataset:mozilla-foundation/common_voice_8_0", "license:apache-...
null
2022-03-02T23:29:04+00:00
[]
[ "kmr", "ku" ]
TAGS #transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #kmr #robust-speech-event #model_for_talk #hf-asr-leaderboard #ku #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #...
Akashpb13/xlsr\_kurmanji\_kurdish ================================= This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - hu dataset. It achieves the following results on the evaluation set (which is 10 percent of the train set merged with invalidated data...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 16\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps...
[ "TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #kmr #robust-speech-event #model_for_talk #hf-asr-leaderboard #ku #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_s...
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Maltese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Maltese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be u...
{"language": "mt", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Maltese by Akash PB", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "data...
Akashpb13/xlsr_maltese_wav2vec2
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "mt", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "mt" ]
TAGS #transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #mt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
# Wav2Vec2-Large-XLSR-53-Maltese Fine-tuned facebook/wav2vec2-large-xlsr-53 on Maltese using the Common Voice dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: Test Result: 29.42 %
[ "# Wav2Vec2-Large-XLSR-53-Maltese\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Maltese using the Common Voice dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.", "## Usage\nThe model can be used directly (without a language model) as follows:\n\nTest Result: 29.42 %" ]
[ "TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #mt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# Wav2Vec2-Large-XLSR-53-Maltese\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Maltese using the Commo...
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Akjder/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Harry Potter DialoGPT Model
[ "# Harry Potter DialoGPT Model" ]
[ "TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Harry Potter DialoGPT Model" ]
image-classification
transformers
# BEiT for Face Mask Detection BEiT model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei. ## Model description The BEiT mo...
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]}
AkshatSurolia/BEiT-FaceMask-Finetuned
null
[ "transformers", "pytorch", "beit", "image-classification", "dataset:Face-Mask18K", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #beit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# BEiT for Face Mask Detection BEiT model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei. ## Model description The BEiT mo...
[ "# BEiT for Face Mask Detection\r\n\r\nBEiT model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei.", "## Model description\r\n\r\n...
[ "TAGS\n#transformers #pytorch #beit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# BEiT for Face Mask Detection\r\n\r\nBEiT model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resol...
image-classification
transformers
# ConvNeXt for Face Mask Detection ConvNeXt model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao et al. ## Training Metrics epoch = ...
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]}
AkshatSurolia/ConvNeXt-FaceMask-Finetuned
null
[ "transformers", "pytorch", "safetensors", "convnext", "image-classification", "dataset:Face-Mask18K", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #convnext #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# ConvNeXt for Face Mask Detection ConvNeXt model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao et al. ## Training Metrics epoch = ...
[ "# ConvNeXt for Face Mask Detection\r\n\r\nConvNeXt model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao et al.", "## Training Metrics\r\n epoch ...
[ "TAGS\n#transformers #pytorch #safetensors #convnext #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# ConvNeXt for Face Mask Detection\r\n\r\nConvNeXt model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Data...
image-classification
transformers
# Distilled Data-efficient Image Transformer for Face Mask Detection Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image tran...
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]}
AkshatSurolia/DeiT-FaceMask-Finetuned
null
[ "transformers", "pytorch", "deit", "image-classification", "dataset:Face-Mask18K", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #deit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Distilled Data-efficient Image Transformer for Face Mask Detection Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image tran...
[ "# Distilled Data-efficient Image Transformer for Face Mask Detection\r\n\r\nDistilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient ima...
[ "TAGS\n#transformers #pytorch #deit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Distilled Data-efficient Image Transformer for Face Mask Detection\r\n\r\nDistilled data-efficient Image Transformer (DeiT) model pre-traine...
text-classification
transformers
# Clinical BERT for ICD-10 Prediction The Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries. --- ## ...
{"license": "apache-2.0", "tags": ["text-classification"]}
AkshatSurolia/ICD-10-Code-Prediction
null
[ "transformers", "pytorch", "bert", "text-classification", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #text-classification #license-apache-2.0 #endpoints_compatible #has_space #region-us
# Clinical BERT for ICD-10 Prediction The Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries. --- ## ...
[ "# Clinical BERT for ICD-10 Prediction\n\nThe Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries. \n \n...
[ "TAGS\n#transformers #pytorch #bert #text-classification #license-apache-2.0 #endpoints_compatible #has_space #region-us \n", "# Clinical BERT for ICD-10 Prediction\n\nThe Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12)...
image-classification
transformers
# Vision Transformer (ViT) for Face Mask Detection Vision Transformer (ViT) model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale b...
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]}
AkshatSurolia/ViT-FaceMask-Finetuned
null
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "dataset:Face-Mask18K", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #vit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Vision Transformer (ViT) for Face Mask Detection Vision Transformer (ViT) model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale b...
[ "# Vision Transformer (ViT) for Face Mask Detection\r\n\r\nVision Transformer (ViT) model pre-trained and fine-tuned on Self Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale...
[ "TAGS\n#transformers #pytorch #safetensors #vit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Vision Transformer (ViT) for Face Mask Detection\r\n\r\nVision Transformer (ViT) model pre-trained and fine-tuned on Self Curated Custom F...
null
null
# Spoken Language Identification Model ## Model description The model can classify a speech utterance according to the language spoken. It covers the following languages (English, Indonesian, Japanese, Korean, Thai, Vietnamese, Mandarin Chinese).
{"language": "multilingual", "license": "apache-2.0", "tags": ["LID", "spoken language recognition"], "datasets": ["VoxLingua107"], "metrics": ["ER"], "inference": false}
AkshaySg/LanguageIdentification
null
[ "LID", "spoken language recognition", "multilingual", "dataset:VoxLingua107", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "multilingual" ]
TAGS #LID #spoken language recognition #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us
# Spoken Language Identification Model ## Model description The model can classify a speech utterance according to the language spoken. It covers the following languages (English, Indonesian, Japanese, Korean, Thai, Vietnamese, Mandarin Chinese).
[ "# Spoken Language Identification Model", "## Model description\r\n\r\nThe model can classify a speech utterance according to the language spoken.\r\nIt covers the following languages (\r\nEnglish, \r\nIndonesian, \r\nJapanese, \r\nKorean, \r\nThai, \r\nVietnamese, \r\nMandarin Chinese)." ]
[ "TAGS\n#LID #spoken language recognition #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us \n", "# Spoken Language Identification Model", "## Model description\r\n\r\nThe model can classify a speech utterance according to the language spoken.\r\nIt covers the following languages (\r\nEngl...
audio-classification
speechbrain
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model ## Model description This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain. The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition. The model can classify a speech utt...
{"language": "multilingual", "license": "apache-2.0", "tags": ["audio-classification", "speechbrain", "embeddings", "Language", "Identification", "pytorch", "ECAPA-TDNN", "TDNN", "VoxLingua107"], "datasets": ["VoxLingua107"], "metrics": ["Accuracy"], "widget": [{"example_title": "English Sample", "src": "https://cdn-me...
AkshaySg/langid
null
[ "speechbrain", "audio-classification", "embeddings", "Language", "Identification", "pytorch", "ECAPA-TDNN", "TDNN", "VoxLingua107", "multilingual", "dataset:VoxLingua107", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "multilingual" ]
TAGS #speechbrain #audio-classification #embeddings #Language #Identification #pytorch #ECAPA-TDNN #TDNN #VoxLingua107 #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model ## Model description This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain. The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition. The model can classify a speech utt...
[ "# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model", "## Model description\n\nThis is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.\nThe model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.\n\nThe model can classify...
[ "TAGS\n#speechbrain #audio-classification #embeddings #Language #Identification #pytorch #ECAPA-TDNN #TDNN #VoxLingua107 #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us \n", "# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model", "## Model description\n\nThis is a spoken language re...
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-srb-base-cased-oscar This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. ## Model descr...
{"tags": ["generated_from_trainer"], "model_index": [{"name": "bert-srb-base-cased-oscar", "results": [{"task": {"name": "Masked Language Modeling", "type": "fill-mask"}}]}]}
Aleksandar/bert-srb-base-cased-oscar
null
[ "transformers", "pytorch", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
# bert-srb-base-cased-oscar This model is a fine-tuned version of [](URL on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The fo...
[ "# bert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Tr...
[ "TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "# bert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitati...
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-srb-ner-setimes This model was trained from scratch on the None dataset. It achieves the following results on the evaluatio...
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-srb-ner-setimes", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9645112274185379}}]}]}
Aleksandar/bert-srb-ner-setimes
null
[ "transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
bert-srb-ner-setimes ==================== This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1955 * Precision: 0.8229 * Recall: 0.8465 * F1: 0.8345 * Accuracy: 0.9645 Model description ----------------- More information needed Intended u...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Traini...
[ "TAGS\n#transformers #pytorch #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size...
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-srb-ner This model was trained from scratch on the wikiann dataset. It achieves the following results on the evaluation set...
{"tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-srb-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "sr"}, "metric": {...
Aleksandar/bert-srb-ner
null
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:wikiann", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #bert #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us
bert-srb-ner ============ This model was trained from scratch on the wikiann dataset. It achieves the following results on the evaluation set: * Loss: 0.3561 * Precision: 0.8909 * Recall: 0.9082 * F1: 0.8995 * Accuracy: 0.9547 Model description ----------------- More information needed Intended uses & limitat...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Traini...
[ "TAGS\n#transformers #pytorch #safetensors #bert #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_s...
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-srb-base-cased-oscar This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. ## Model...
{"tags": ["generated_from_trainer"], "model_index": [{"name": "distilbert-srb-base-cased-oscar", "results": [{"task": {"name": "Masked Language Modeling", "type": "fill-mask"}}]}]}
Aleksandar/distilbert-srb-base-cased-oscar
null
[ "transformers", "pytorch", "distilbert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
# distilbert-srb-base-cased-oscar This model is a fine-tuned version of [](URL on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters ...
[ "# distilbert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "...
[ "TAGS\n#transformers #pytorch #distilbert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "# distilbert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.", "## Model description\n\nMore information needed", "## Intended use...
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-srb-ner-setimes This model was trained from scratch on the None dataset. It achieves the following results on the eva...
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-srb-ner-setimes", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9665376552169005}}]}]}
Aleksandar/distilbert-srb-ner-setimes
null
[ "transformers", "pytorch", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #distilbert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
distilbert-srb-ner-setimes ========================== This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1838 * Precision: 0.8370 * Recall: 0.8617 * F1: 0.8492 * Accuracy: 0.9665 Model description ----------------- More information needed ...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Traini...
[ "TAGS\n#transformers #pytorch #safetensors #distilbert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* ...
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-srb-ner This model was trained from scratch on the wikiann dataset. It achieves the following results on the evaluati...
{"language": ["sr"], "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-srb-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "wikiann", "type": "wikiann", ...
Aleksandar/distilbert-srb-ner
null
[ "transformers", "pytorch", "distilbert", "token-classification", "generated_from_trainer", "sr", "dataset:wikiann", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "sr" ]
TAGS #transformers #pytorch #distilbert #token-classification #generated_from_trainer #sr #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us
distilbert-srb-ner ================== This model was trained from scratch on the wikiann dataset. It achieves the following results on the evaluation set: * Loss: 0.2972 * Precision: 0.8871 * Recall: 0.9100 * F1: 0.8984 * Accuracy: 0.9577 Model description ----------------- More information needed Intended us...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Traini...
[ "TAGS\n#transformers #pytorch #distilbert #token-classification #generated_from_trainer #sr #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size...
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-srb-ner-setimes This model was trained from scratch on the None dataset. It achieves the following results on the evalua...
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "electra-srb-ner-setimes", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9546789604788638}}]}]}
Aleksandar/electra-srb-ner-setimes
null
[ "transformers", "pytorch", "safetensors", "electra", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
electra-srb-ner-setimes ======================= This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.2804 * Precision: 0.8286 * Recall: 0.8081 * F1: 0.8182 * Accuracy: 0.9547 Model description ----------------- More information needed Inte...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Traini...
[ "TAGS\n#transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eva...
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-srb-ner This model was trained from scratch on the wikiann dataset. It achieves the following results on the evaluation ...
{"tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "electra-srb-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "sr"}, "metric"...
Aleksandar/electra-srb-ner
null
[ "transformers", "pytorch", "safetensors", "electra", "token-classification", "generated_from_trainer", "dataset:wikiann", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us
electra-srb-ner =============== This model was trained from scratch on the wikiann dataset. It achieves the following results on the evaluation set: * Loss: 0.3406 * Precision: 0.8934 * Recall: 0.9087 * F1: 0.9010 * Accuracy: 0.9568 Model description ----------------- More information needed Intended uses & l...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Traini...
[ "TAGS\n#transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\...
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-srb-oscar This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. ## Model description ...
{"tags": ["generated_from_trainer"], "model_index": [{"name": "electra-srb-oscar", "results": [{"task": {"name": "Masked Language Modeling", "type": "fill-mask"}}]}]}
Aleksandar/electra-srb-oscar
null
[ "transformers", "pytorch", "electra", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #electra #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
# electra-srb-oscar This model is a fine-tuned version of [](URL on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following ...
[ "# electra-srb-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training h...
[ "TAGS\n#transformers #pytorch #electra #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "# electra-srb-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n...
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # herbert-base-cased-finetuned-squad This model is a fine-tuned version of [allegro/herbert-base-cased](https://huggingface.co/all...
{"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "herbert-base-cased-finetuned-squad", "results": []}]}
Aleksandra/herbert-base-cased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-cc-by-4.0 #endpoints_compatible #region-us
herbert-base-cased-finetuned-squad ================================== This model is a fine-tuned version of allegro/herbert-base-cased on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.2071 Model description ----------------- More information needed Intended uses & limita...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Traini...
[ "TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-cc-by-4.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batc...
text-classification
transformers
# xlm-roberta-en-ru-emoji - Problem type: Multi-class Classification
{"language": ["en", "ru"], "datasets": ["tweet_eval"], "model_index": [{"name": "xlm-roberta-en-ru-emoji", "results": [{"task": {"name": "Sentiment Analysis", "type": "sentiment-analysis"}, "dataset": {"name": "Tweet Eval", "type": "tweet_eval", "args": "emoji"}}]}], "widget": [{"text": "\u041e\u0442\u043b\u0438\u0447\...
adorkin/xlm-roberta-en-ru-emoji
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "text-classification", "en", "ru", "dataset:tweet_eval", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "en", "ru" ]
TAGS #transformers #pytorch #safetensors #xlm-roberta #text-classification #en #ru #dataset-tweet_eval #autotrain_compatible #endpoints_compatible #region-us
# xlm-roberta-en-ru-emoji - Problem type: Multi-class Classification
[ "# xlm-roberta-en-ru-emoji \n- Problem type: Multi-class Classification" ]
[ "TAGS\n#transformers #pytorch #safetensors #xlm-roberta #text-classification #en #ru #dataset-tweet_eval #autotrain_compatible #endpoints_compatible #region-us \n", "# xlm-roberta-en-ru-emoji \n- Problem type: Multi-class Classification" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unkno...
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "bert", "results": []}]}
AlekseyKorshuk/bert
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert ==== This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.5316 * Accuracy: 0.2936 Model description ----------------- More information needed Intended uses & limitations --------------------------- More i...
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 64\n* ...
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: ...
text2text-generation
transformers
**Usage: HuggingFace Transformers for the header generation task** ``` from transformers import PegasusTokenizer, AutoModelForSeq2SeqLM model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-HeaderGeneration") tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large') input_text # your text input_ ...
{}
AlekseyKulnevich/Pegasus-HeaderGeneration
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
Usage HuggingFace Transformers for header generation task Decoder configuration examples: Input text you can see here output: 1. *the impact of climate change on tropical cyclones* 2. *the impact of human induced climate change on tropical cyclones* 3. *the impact of climate change on tropical cyclone formation ...
[]
[ "TAGS\n#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n" ]