| sha (null) | last_modified (null) | library_name (string, 154 classes) | text (string, 1–900k chars) | metadata (string, 2–348k chars) | pipeline_tag (string, 45 classes) | id (string, 5–122 chars) | tags (list, 1–1.84k items) | created_at (string, 25 chars) | arxiv (list, 0–201 items) | languages (list, 0–1.83k items) | tags_str (string, 17–9.34k chars) | text_str (string, 0–389k chars) | text_lists (list, 0–722 items) | processed_texts (list, 1–723 items) | tokens_length (list, 1–723 items) | input_texts (list, 1–61 items) | embeddings (list, 768 items) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
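Each row that follows instantiates this schema for one AdapterHub model card. As a minimal sketch of how such a dump could be inspected with the `datasets` library (assuming it is stored as Parquet; `cards.parquet` is a placeholder filename, not a real file):

```python
from datasets import load_dataset

# Placeholder filename: point this at wherever the dump actually lives.
ds = load_dataset("parquet", data_files="cards.parquet", split="train")

for row in ds.select(range(3)):
    # One row per model card: id and pipeline_tag identify the adapter and
    # its task; embeddings is a fixed 768-dimensional vector per card.
    print(row["id"], row["pipeline_tag"], len(row["embeddings"]))
```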
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-conll2003_pos` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [pos/conll2003](https://adapterhub.ml/explore/pos/conll2003/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the ... | {"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:pos/conll2003", "adapter-transformers", "token-classification"], "datasets": ["conll2003"]} | token-classification | AdapterHub/roberta-base-pf-conll2003_pos | [
"adapter-transformers",
"roberta",
"token-classification",
"adapterhub:pos/conll2003",
"en",
"dataset:conll2003",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-copa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/copa](https://adapterhub.ml/explore/comsense/copa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the *... | {"language": ["en"], "tags": ["roberta", "adapterhub:comsense/copa", "adapter-transformers"]} | null | AdapterHub/roberta-base-pf-copa | [
"adapter-transformers",
"roberta",
"adapterhub:comsense/copa",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
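For the multiple-choice adapters (COPA here, and likewise cosmos_qa, hellaswag, quail, quartz, and race below), the prediction head scores each candidate continuation. A sketch, assuming the head collapses a flat batch of (premise, choice) pairs into one logit per choice; the premise and choices are illustrative:

```python
from transformers import AutoModelWithHeads, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
model.active_adapters = model.load_adapter("AdapterHub/roberta-base-pf-copa", source="hf")

premise = "The man broke his toe. What was the cause?"
choices = ["He got a hole in his sock.", "He dropped a hammer on his foot."]

# Encode each (premise, choice) pair; the head emits one score per choice.
enc = tokenizer([premise] * len(choices), choices, return_tensors="pt", padding=True)
best = model(**enc).logits.argmax(dim=-1).item()
print(choices[best])
```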
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-cosmos_qa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/cosmosqa](https://adapterhub.ml/explore/comsense/cosmosqa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usa... | {"language": ["en"], "tags": ["roberta", "adapterhub:comsense/cosmosqa", "adapter-transformers"], "datasets": ["cosmos_qa"]} | null | AdapterHub/roberta-base-pf-cosmos_qa | [
"adapter-transformers",
"roberta",
"adapterhub:comsense/cosmosqa",
"en",
"dataset:cosmos_qa",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-cq` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/cq](https://adapterhub.ml/explore/qa/cq/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-trans... | {"language": ["en"], "tags": ["question-answering", "roberta", "adapterhub:qa/cq", "adapter-transformers"]} | question-answering | AdapterHub/roberta-base-pf-cq | [
"adapter-transformers",
"roberta",
"question-answering",
"adapterhub:qa/cq",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
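The question-answering adapters (cq here, and drop, duorc, hotpotqa, newsqa, and quoref below) add an extractive span head. A sketch of span extraction, with an arbitrary question/context pair:

```python
import torch
from transformers import AutoModelWithHeads, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
model.active_adapters = model.load_adapter("AdapterHub/roberta-base-pf-cq", source="hf")

question = "Who developed the theory of relativity?"
context = "The theory of relativity was developed by Albert Einstein."

inputs = tokenizer(question, context, return_tensors="pt")
outputs = model(**inputs)

# The head predicts start/end positions of the answer span in the context.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits)
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```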
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-drop` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [drop](https://huggingface.co/datasets/drop/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapter-tra... | {"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["drop"]} | question-answering | AdapterHub/roberta-base-pf-drop | [
"adapter-transformers",
"roberta",
"question-answering",
"en",
"dataset:drop",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-duorc_p` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapte... | {"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["duorc"]} | question-answering | AdapterHub/roberta-base-pf-duorc_p | [
"adapter-transformers",
"roberta",
"question-answering",
"en",
"dataset:duorc",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-duorc_s` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapte... | {"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["duorc"]} | question-answering | AdapterHub/roberta-base-pf-duorc_s | [
"adapter-transformers",
"roberta",
"question-answering",
"en",
"dataset:duorc",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-emo` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [emo](https://huggingface.co/datasets/emo/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transforme... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["emo"]} | text-classification | AdapterHub/roberta-base-pf-emo | [
"adapter-transformers",
"roberta",
"text-classification",
"en",
"dataset:emo",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-emotion` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [emotion](https://huggingface.co/datasets/emotion/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapte... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["emotion"]} | text-classification | AdapterHub/roberta-base-pf-emotion | [
"adapter-transformers",
"roberta",
"text-classification",
"en",
"dataset:emotion",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-fce_error_detection` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [ged/fce](https://adapterhub.ml/explore/ged/fce/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[ada... | {"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:ged/fce", "adapter-transformers"], "datasets": ["fce_error_detection"]} | token-classification | AdapterHub/roberta-base-pf-fce_error_detection | [
"adapter-transformers",
"roberta",
"token-classification",
"adapterhub:ged/fce",
"en",
"dataset:fce_error_detection",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-hellaswag` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/hellaswag](https://adapterhub.ml/explore/comsense/hellaswag/) dataset and includes a prediction head for multiple choice.
This adapter was created for u... | {"language": ["en"], "tags": ["roberta", "adapterhub:comsense/hellaswag", "adapter-transformers"], "datasets": ["hellaswag"]} | null | AdapterHub/roberta-base-pf-hellaswag | [
"adapter-transformers",
"roberta",
"adapterhub:comsense/hellaswag",
"en",
"dataset:hellaswag",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-hotpotqa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [hotpot_qa](https://huggingface.co/datasets/hotpot_qa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the ... | {"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["hotpot_qa"]} | question-answering | AdapterHub/roberta-base-pf-hotpotqa | [
"adapter-transformers",
"roberta",
"question-answering",
"en",
"dataset:hotpot_qa",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-imdb` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/imdb](https://adapterhub.ml/explore/sentiment/imdb/) dataset and includes a prediction head for classification.
This adapter was created for usage with the ... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:sentiment/imdb", "adapter-transformers"], "datasets": ["imdb"]} | text-classification | AdapterHub/roberta-base-pf-imdb | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:sentiment/imdb",
"en",
"dataset:imdb",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-mit_movie_trivia` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [ner/mit_movie_trivia](https://adapterhub.ml/explore/ner/mit_movie_trivia/) dataset and includes a prediction head for tagging.
This adapter was created fo... | {"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:ner/mit_movie_trivia", "adapter-transformers"]} | token-classification | AdapterHub/roberta-base-pf-mit_movie_trivia | [
"adapter-transformers",
"roberta",
"token-classification",
"adapterhub:ner/mit_movie_trivia",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-mnli` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/multinli](https://adapterhub.ml/explore/nli/multinli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[a... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:nli/multinli", "adapter-transformers"], "datasets": ["multi_nli"]} | text-classification | AdapterHub/roberta-base-pf-mnli | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:nli/multinli",
"en",
"dataset:multi_nli",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
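Pair-classification adapters such as this MNLI one take two text segments. The only change from single-sentence classification is passing premise and hypothesis as a sequence pair; a sketch with an illustrative example:

```python
from transformers import AutoModelWithHeads, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
model.active_adapters = model.load_adapter("AdapterHub/roberta-base-pf-mnli", source="hf")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Premise and hypothesis are encoded together as one sequence pair.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred)  # entailment / neutral / contradiction index per the head config
```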
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-mrpc` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/mrpc](https://adapterhub.ml/explore/sts/mrpc/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-t... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:sts/mrpc", "adapter-transformers"]} | text-classification | AdapterHub/roberta-base-pf-mrpc | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:sts/mrpc",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-multirc` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rc/multirc](https://adapterhub.ml/explore/rc/multirc/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[ad... | {"language": ["en"], "tags": ["text-classification", "adapterhub:rc/multirc", "roberta", "adapter-transformers"]} | text-classification | AdapterHub/roberta-base-pf-multirc | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:rc/multirc",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-newsqa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [newsqa](https://huggingface.co/datasets/newsqa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapt... | {"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["newsqa"]} | question-answering | AdapterHub/roberta-base-pf-newsqa | [
"adapter-transformers",
"roberta",
"question-answering",
"en",
"dataset:newsqa",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-pmb_sem_tagging` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [semtag/pmb](https://adapterhub.ml/explore/semtag/pmb/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[a... | {"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:semtag/pmb", "adapter-transformers"]} | token-classification | AdapterHub/roberta-base-pf-pmb_sem_tagging | [
"adapter-transformers",
"roberta",
"token-classification",
"adapterhub:semtag/pmb",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-qnli` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/qnli](https://adapterhub.ml/explore/nli/qnli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-t... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:nli/qnli", "adapter-transformers"]} | text-classification | AdapterHub/roberta-base-pf-qnli | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:nli/qnli",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-qqp` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/qqp](https://adapterhub.ml/explore/sts/qqp/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-tran... | {"language": ["en"], "tags": ["text-classification", "adapter-transformers", "adapterhub:sts/qqp", "roberta"]} | text-classification | AdapterHub/roberta-base-pf-qqp | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:sts/qqp",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-quail` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [quail](https://huggingface.co/datasets/quail/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-tra... | {"language": ["en"], "tags": ["roberta", "adapter-transformers"], "datasets": ["quail"]} | null | AdapterHub/roberta-base-pf-quail | [
"adapter-transformers",
"roberta",
"en",
"dataset:quail",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-quartz` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [quartz](https://huggingface.co/datasets/quartz/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-... | {"language": ["en"], "tags": ["roberta", "adapter-transformers"], "datasets": ["quartz"]} | null | AdapterHub/roberta-base-pf-quartz | [
"adapter-transformers",
"roberta",
"en",
"dataset:quartz",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | … |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-quoref` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [quoref](https://huggingface.co/datasets/quoref/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapt... | {"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["quoref"]} | question-answering | AdapterHub/roberta-base-pf-quoref | [
"adapter-transformers",
"roberta",
"question-answering",
"en",
"dataset:quoref",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-quoref' for roberta-base
An adapter for the 'roberta-base' model that was trained on the quoref dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformer... | [
"# Adapter 'AdapterHub/roberta-base-pf-quoref' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quoref dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapt... | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-quoref' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quoref dataset and includes a prediction head for question answering.\n... | [
36,
71,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-quoref' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the quoref dataset and includes a prediction head for question answering... | [
-0.01568557508289814,
-0.022191304713487625,
-0.0018247166881337762,
0.018843114376068115,
0.18081703782081604,
0.02769619971513748,
0.11665190756320953,
0.07581041753292084,
-0.012428362853825092,
0.04453860595822334,
0.05519481748342514,
0.07376810163259506,
0.05496501550078392,
0.026445... |
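The quoref card above adds a question-answering head, so the activated model can plausibly be wrapped in a standard transformers pipeline. A hedged sketch: the pipeline wiring and the example question/context are assumptions, not part of the card:

```
from transformers import AutoModelWithHeads, AutoTokenizer, QuestionAnsweringPipeline

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
model.active_adapters = model.load_adapter("AdapterHub/roberta-base-pf-quoref", source="hf")

# Hypothetical inputs; quoref-style questions resolve references in a passage.
qa = QuestionAnsweringPipeline(model=model, tokenizer=tokenizer)
print(qa(question="Who held the concert?",
         context="Beethoven premiered the symphony in Vienna; he held the concert in 1824."))
```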
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-race` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rc/race](https://adapterhub.ml/explore/rc/race/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-tr... | {"language": ["en"], "tags": ["adapterhub:rc/race", "roberta", "adapter-transformers"], "datasets": ["race"]} | null | AdapterHub/roberta-base-pf-race | [
"adapter-transformers",
"roberta",
"adapterhub:rc/race",
"en",
"dataset:race",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-race' for roberta-base
An adapter for the 'roberta-base' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
... | [
"# Adapter 'AdapterHub/roberta-base-pf-race' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-t... | [
"TAGS\n#adapter-transformers #roberta #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-race' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.\n\nThis... | [
36,
71,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-race' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.\n\nT... | [
-0.03921445086598396,
-0.017390388995409012,
-0.000847073330078274,
0.04298999533057213,
0.17760440707206726,
0.04642597213387489,
0.11572335660457611,
0.0723191648721695,
0.000521853391546756,
0.035350605845451355,
0.07187475264072418,
0.07188805192708969,
0.06934116780757904,
0.024004220... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-record` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [rc/record](https://adapterhub.ml/explore/rc/record/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapt... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:rc/record", "adapter-transformers"]} | text-classification | AdapterHub/roberta-base-pf-record | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:rc/record",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-record' for roberta-base
An adapter for the 'roberta-base' model that was trained on the rc/record dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers... | [
"# Adapter 'AdapterHub/roberta-base-pf-record' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/record dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapte... | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-record' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/record dataset and includes a prediction head for classificat... | [
37,
73,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-record' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the rc/record dataset and includes a prediction head for classifi... | [
0.005091841798275709,
-0.029833823442459106,
-0.002219181042164564,
0.011851341463625431,
0.17335113883018494,
0.036670390516519547,
0.1341918259859085,
0.06645317375659943,
0.010507189668715,
0.029626570641994476,
0.046669963747262955,
0.09486322849988937,
0.04837030917406082,
0.001581190... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-rotten_tomatoes` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/rotten_tomatoes](https://adapterhub.ml/explore/sentiment/rotten_tomatoes/) dataset and includes a prediction head for classification.
This adapte... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:sentiment/rotten_tomatoes", "adapter-transformers"], "datasets": ["rotten_tomatoes"]} | text-classification | AdapterHub/roberta-base-pf-rotten_tomatoes | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:sentiment/rotten_tomatoes",
"en",
"dataset:rotten_tomatoes",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-rotten_tomatoes' for roberta-base
An adapter for the 'roberta-base' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, inst... | [
"# Adapter 'AdapterHub/roberta-base-pf-rotten_tomatoes' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\... | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-rotten_tomatoes' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/rott... | [
52,
80,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-rotten_tomatoes' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/r... | [
-0.056776709854602814,
0.031251586973667145,
-0.0025419413577765226,
0.022768495604395866,
0.1758066713809967,
0.015929769724607468,
0.14690326154232025,
0.10061287134885788,
0.09501338750123978,
0.06240867078304291,
-0.03104848600924015,
0.1505468636751175,
0.02753474935889244,
0.05961076... |
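The rotten_tomatoes card above (like the sst2 and yelp_polarity rows further down) ships a sentence-classification head, so a plain forward pass yields sentiment logits. A minimal sketch under the same API assumptions; the example sentence is made up:

```
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
model.active_adapters = model.load_adapter(
    "AdapterHub/roberta-base-pf-rotten_tomatoes", source="hf"
)

inputs = tokenizer("A warm, funny, engaging film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # [1, num_labels]
print(logits.argmax(dim=-1).item())  # predicted class id
```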
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-rte` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/rte](https://adapterhub.ml/explore/nli/rte/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-tran... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:nli/rte", "adapter-transformers"]} | text-classification | AdapterHub/roberta-base-pf-rte | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:nli/rte",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-rte' for roberta-base
An adapter for the 'roberta-base' model that was trained on the nli/rte dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
... | [
"# Adapter 'AdapterHub/roberta-base-pf-rte' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/rte dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-tra... | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-rte' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/rte dataset and includes a prediction head for classification.\n\... | [
37,
71,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-rte' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/rte dataset and includes a prediction head for classification.... | [
0.013908701948821545,
-0.05661620944738388,
-0.0018603047356009483,
0.03499875217676163,
0.17349492013454437,
0.033344630151987076,
0.13143926858901978,
0.07611972838640213,
0.0024844000581651926,
0.024829911068081856,
0.031080175191164017,
0.09056609869003296,
0.05028456449508667,
0.01157... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-scicite` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [scicite](https://huggingface.co/datasets/scicite/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapte... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["scicite"]} | text-classification | AdapterHub/roberta-base-pf-scicite | [
"adapter-transformers",
"roberta",
"text-classification",
"en",
"dataset:scicite",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-scicite' for roberta-base
An adapter for the 'roberta-base' model that was trained on the scicite dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers'... | [
"# Adapter 'AdapterHub/roberta-base-pf-scicite' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the scicite dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter... | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-scicite' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the scicite dataset and includes a prediction head for classification.\n... | [
35,
70,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-scicite' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the scicite dataset and includes a prediction head for classification... | [
-0.011074619367718697,
-0.029579687863588333,
-0.0009928061626851559,
0.025598295032978058,
0.18326453864574432,
0.032930728048086166,
0.11480241268873215,
0.07402931898832321,
0.0263887420296669,
0.05243537575006485,
0.045686230063438416,
0.08588717132806778,
0.07342972606420517,
0.048378... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-scitail` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/scitail](https://adapterhub.ml/explore/nli/scitail/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:nli/scitail", "adapter-transformers"], "datasets": ["scitail"]} | text-classification | AdapterHub/roberta-base-pf-scitail | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:nli/scitail",
"en",
"dataset:scitail",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-scitail' for roberta-base
An adapter for the 'roberta-base' model that was trained on the nli/scitail dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transform... | [
"# Adapter 'AdapterHub/roberta-base-pf-scitail' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/scitail dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'ada... | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-scitail' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/scitail dataset and includes a predictio... | [
44,
73,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-scitail' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/scitail dataset and includes a predic... | [
-0.02855381742119789,
0.0011239417362958193,
-0.003356394823640585,
0.028986865654587746,
0.15810517966747284,
0.014207957312464714,
0.1636318564414978,
0.05612774193286896,
0.01769687794148922,
0.049658771604299545,
0.03458862751722336,
0.09233562648296356,
0.06012435257434845,
0.06957429... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-sick` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [nli/sick](https://adapterhub.ml/explore/nli/sick/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-t... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers", "adapterhub:nli/sick", "text-classification"], "datasets": ["sick"]} | text-classification | AdapterHub/roberta-base-pf-sick | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:nli/sick",
"en",
"dataset:sick",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-sick' for roberta-base
An adapter for the 'roberta-base' model that was trained on the nli/sick dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
... | [
"# Adapter 'AdapterHub/roberta-base-pf-sick' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/sick dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-t... | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-sick' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/sick dataset and includes a prediction head for c... | [
44,
73,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-sick' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the nli/sick dataset and includes a prediction head fo... | [
-0.03124866634607315,
0.005669876933097839,
-0.0032742698676884174,
0.03564939275383949,
0.1724139302968979,
0.03579516336321831,
0.14429491758346558,
0.07053933292627335,
0.027209991589188576,
0.04929657280445099,
0.03757406026124954,
0.10321053862571716,
0.04587479308247566,
0.0330190509... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-snli` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [snli](https://huggingface.co/datasets/snli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transfo... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["snli"]} | text-classification | AdapterHub/roberta-base-pf-snli | [
"adapter-transformers",
"roberta",
"text-classification",
"en",
"dataset:snli",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-snli' for roberta-base
An adapter for the 'roberta-base' model that was trained on the snli dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_N... | [
"# Adapter 'AdapterHub/roberta-base-pf-snli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the snli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-trans... | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-snli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the snli dataset and includes a prediction head for classification.\n\nThis ad... | [
36,
72,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-snli' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the snli dataset and includes a prediction head for classification.\n\nThis... | [
-0.009134767577052116,
-0.005725212395191193,
-0.0020180961582809687,
0.026427563279867172,
0.18287557363510132,
0.017449719831347466,
0.16953448951244354,
0.06117807328701019,
0.0014547620667144656,
0.03145623579621315,
0.03272230923175812,
0.10951774567365646,
0.06816897541284561,
0.0469... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-social_i_qa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [social_i_qa](https://huggingface.co/datasets/social_i_qa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with ... | {"language": ["en"], "tags": ["roberta", "adapter-transformers"], "datasets": ["social_i_qa"]} | null | AdapterHub/roberta-base-pf-social_i_qa | [
"adapter-transformers",
"roberta",
"en",
"dataset:social_i_qa",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #en #dataset-social_i_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-social_i_qa' for roberta-base
An adapter for the 'roberta-base' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-tran... | [
"# Adapter 'AdapterHub/roberta-base-pf-social_i_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install... | [
"TAGS\n#adapter-transformers #roberta #en #dataset-social_i_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-social_i_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.\n\nThis a... | [
33,
76,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #en #dataset-social_i_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-social_i_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.\n\nThi... | [
-0.03480982035398483,
-0.05014116317033768,
-0.0023512912448495626,
0.0055746641010046005,
0.17223918437957764,
0.024389762431383133,
0.11237190663814545,
0.05359305441379547,
0.04233111813664436,
0.028085561469197273,
0.041336026042699814,
0.07723869383335114,
0.07086986303329468,
0.04366... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-squad` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/squad1](https://adapterhub.ml/explore/qa/squad1/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[ad... | {"language": ["en"], "tags": ["question-answering", "roberta", "adapterhub:qa/squad1", "adapter-transformers"], "datasets": ["squad"]} | question-answering | AdapterHub/roberta-base-pf-squad | [
"adapter-transformers",
"roberta",
"question-answering",
"adapterhub:qa/squad1",
"en",
"dataset:squad",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-squad' for roberta-base
An adapter for the 'roberta-base' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transform... | [
"# Adapter 'AdapterHub/roberta-base-pf-squad' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'ada... | [
"TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-squad' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad1 dataset and includes a prediction head fo... | [
45,
74,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-squad' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad1 dataset and includes a prediction head... | [
-0.05116724595427513,
-0.06724002957344055,
-0.0032425527460873127,
0.012055721133947372,
0.16007451713085175,
0.01914631389081478,
0.13745667040348053,
0.06822013854980469,
0.013547330163419247,
0.02658088319003582,
0.02870596945285797,
0.06776178628206253,
0.06405726820230484,
0.02762253... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-squad_v2` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/squad2](https://adapterhub.ml/explore/qa/squad2/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **... | {"language": ["en"], "tags": ["question-answering", "roberta", "adapterhub:qa/squad2", "adapter-transformers"], "datasets": ["squad_v2"]} | question-answering | AdapterHub/roberta-base-pf-squad_v2 | [
"adapter-transformers",
"roberta",
"question-answering",
"adapterhub:qa/squad2",
"en",
"dataset:squad_v2",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-squad_v2' for roberta-base
An adapter for the 'roberta-base' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transf... | [
"# Adapter 'AdapterHub/roberta-base-pf-squad_v2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install '... | [
"TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-squad_v2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad2 dataset and includes a prediction h... | [
48,
77,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-squad_v2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/squad2 dataset and includes a predictio... | [
-0.05830870196223259,
-0.05113339051604271,
-0.003254043171182275,
0.004080718848854303,
0.17125891149044037,
0.0058030323125422,
0.13010689616203308,
0.07696202397346497,
0.007603692356497049,
0.04396606609225273,
0.023453157395124435,
0.08120989799499512,
0.06271430104970932,
0.024162996... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-sst2` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sentiment/sst-2](https://adapterhub.ml/explore/sentiment/sst-2/) dataset and includes a prediction head for classification.
This adapter was created for usage with th... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:sentiment/sst-2", "adapter-transformers"]} | text-classification | AdapterHub/roberta-base-pf-sst2 | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:sentiment/sst-2",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-sst2' for roberta-base
An adapter for the 'roberta-base' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transfor... | [
"# Adapter 'AdapterHub/roberta-base-pf-sst2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'ad... | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-sst2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for c... | [
39,
74,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-sst2' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sentiment/sst-2 dataset and includes a prediction head fo... | [
-0.017207402735948563,
-0.02884112298488617,
-0.00330716697499156,
0.01919851452112198,
0.16673244535923004,
0.02068001590669155,
0.15289463102817535,
0.06766863912343979,
0.016153953969478607,
0.0460086427628994,
0.050410568714141846,
0.07884678244590759,
0.06178365647792816,
0.0460000485... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-stsb` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [sts/sts-b](https://adapterhub.ml/explore/sts/sts-b/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:sts/sts-b", "adapter-transformers"]} | text-classification | AdapterHub/roberta-base-pf-stsb | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:sts/sts-b",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-stsb' for roberta-base
An adapter for the 'roberta-base' model that was trained on the sts/sts-b dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':... | [
"# Adapter 'AdapterHub/roberta-base-pf-stsb' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/sts-b dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-... | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-stsb' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/sts-b dataset and includes a prediction head for classificatio... | [
40,
76,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-stsb' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the sts/sts-b dataset and includes a prediction head for classifica... | [
-0.03957574814558029,
-0.029437730088829994,
-0.003169379895552993,
0.013927968218922615,
0.17239756882190704,
0.012020844034850597,
0.1666736751794815,
0.043570034205913544,
-0.005700479261577129,
0.046880193054676056,
0.06594444066286087,
0.058027271181344986,
0.05500643700361252,
0.0980... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-swag` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [swag](https://huggingface.co/datasets/swag/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transf... | {"language": ["en"], "tags": ["roberta", "adapter-transformers"], "datasets": ["swag"]} | null | AdapterHub/roberta-base-pf-swag | [
"adapter-transformers",
"roberta",
"en",
"dataset:swag",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #en #dataset-swag #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-swag' for roberta-base
An adapter for the 'roberta-base' model that was trained on the swag dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_... | [
"# Adapter 'AdapterHub/roberta-base-pf-swag' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-tran... | [
"TAGS\n#adapter-transformers #roberta #en #dataset-swag #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-swag' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created fo... | [
30,
70,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #en #dataset-swag #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-swag' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created... | [
-0.026348058134317398,
-0.06111007183790207,
-0.0005645398050546646,
0.031088069081306458,
0.18790292739868164,
0.03761287406086922,
0.11584214121103287,
0.05607065185904503,
-0.020646488294005394,
0.040788739919662476,
0.04944857582449913,
0.06519422680139542,
0.06015452370047569,
0.08464... |
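The swag card above carries a multiple-choice head. Transformers has no ready-made multiple-choice pipeline, so the sketch below batches the (context, choice) pairs by hand into the `[batch, num_choices, seq_len]` shape such heads conventionally expect; the input shaping and example sentences are assumptions, not from the card:

```
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
model.active_adapters = model.load_adapter("AdapterHub/roberta-base-pf-swag", source="hf")

context = "She poured the batter into the pan and"
choices = ["slid it into the oven.", "drove to the airport."]
enc = tokenizer([context] * len(choices), choices, return_tensors="pt", padding=True)

# Multiple-choice heads score each choice; add the choice dimension.
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**inputs).logits  # [1, num_choices]
print(choices[logits.argmax(dim=-1).item()])
```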
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-trec` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [trec](https://huggingface.co/datasets/trec/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-transfo... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["trec"]} | text-classification | AdapterHub/roberta-base-pf-trec | [
"adapter-transformers",
"roberta",
"text-classification",
"en",
"dataset:trec",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-trec' for roberta-base
An adapter for the 'roberta-base' model that was trained on the trec dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_N... | [
"# Adapter 'AdapterHub/roberta-base-pf-trec' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the trec dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-trans... | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-trec' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the trec dataset and includes a prediction head for classification.\n\nThis ad... | [
35,
69,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-trec' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the trec dataset and includes a prediction head for classification.\n\nThis... | [
-0.0038354170974344015,
-0.04706171900033951,
-0.0012548088561743498,
0.03387600556015968,
0.1846037209033966,
0.04057685285806656,
0.10734637826681137,
0.07244566082954407,
-0.025874190032482147,
0.03113313764333725,
0.04913603514432907,
0.08992499113082886,
0.05389329418540001,
0.0479241... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-ud_deprel` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [deprel/ud_ewt](https://adapterhub.ml/explore/deprel/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[a... | {"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:deprel/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]} | token-classification | AdapterHub/roberta-base-pf-ud_deprel | [
"adapter-transformers",
"roberta",
"token-classification",
"adapterhub:deprel/ud_ewt",
"en",
"dataset:universal_dependencies",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-ud_deprel' for roberta-base
An adapter for the 'roberta-base' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers... | [
"# Adapter 'AdapterHub/roberta-base-pf-ud_deprel' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapte... | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-ud_deprel' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the deprel/ud_ewt dataset an... | [
52,
79,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-ud_deprel' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the deprel/ud_ewt dataset... | [
-0.04732377454638481,
-0.04566820338368416,
-0.00374534516595304,
0.008135460317134857,
0.17801734805107117,
0.0369805246591568,
0.16535387933254242,
0.05858597904443741,
0.06963144987821579,
0.03633706644177437,
-0.004452820401638746,
0.11728827655315399,
0.03662888705730438,
0.0478251650... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-ud_en_ewt` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [dp/ud_ewt](https://adapterhub.ml/explore/dp/ud_ewt/) dataset and includes a prediction head for dependency parsing.
This adapter was created for usage with the *... | {"language": ["en"], "tags": ["roberta", "adapterhub:dp/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]} | null | AdapterHub/roberta-base-pf-ud_en_ewt | [
"adapter-transformers",
"roberta",
"adapterhub:dp/ud_ewt",
"en",
"dataset:universal_dependencies",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#adapter-transformers #roberta #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us
| Adapter 'AdapterHub/roberta-base-pf-ud\_en\_ewt' for roberta-base
=================================================================
An adapter for the 'roberta-base' model that was trained on the dp/ud\_ewt dataset and includes a prediction head for dependency parsing.
This adapter was created for usage with the ad... | [] | [
"TAGS\n#adapter-transformers #roberta #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us \n"
] | [
37
] | [
"passage: TAGS\n#adapter-transformers #roberta #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us \n"
] | [
-0.06725307554006577,
-0.011487703770399094,
-0.008721841499209404,
-0.032460909336805344,
0.10634306073188782,
0.06844311952590942,
0.13033300638198853,
0.02153526060283184,
0.13955111801624298,
-0.044246748089790344,
0.11320311576128006,
0.12457157671451569,
-0.027961114421486855,
0.0181... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-ud_pos` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [pos/ud_ewt](https://adapterhub.ml/explore/pos/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-tr... | {"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:pos/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]} | token-classification | AdapterHub/roberta-base-pf-ud_pos | [
"adapter-transformers",
"roberta",
"token-classification",
"adapterhub:pos/ud_ewt",
"en",
"dataset:universal_dependencies",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-ud_pos' for roberta-base
An adapter for the 'roberta-base' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_... | [
"# Adapter 'AdapterHub/roberta-base-pf-ud_pos' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-tran... | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-ud_pos' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the pos/ud_ewt dataset and include... | [
50,
75,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-ud_pos' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the pos/ud_ewt dataset and incl... | [
-0.05115986615419388,
-0.02189791575074196,
-0.0039767674170434475,
0.0033311734441667795,
0.17874211072921753,
0.016848664730787277,
0.15151801705360413,
0.054975226521492004,
0.018640393391251564,
0.02251451089978218,
0.025800835341215134,
0.10039979219436646,
0.05122502148151398,
0.0618... |
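The ud_pos card above (and ud_deprel before it, wnut_17 below) adds a token-level tagging head; per-token labels fall out of the argmax over the tag logits. A sketch under the same API assumptions, with a made-up input sentence:

```
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithHeads.from_pretrained("roberta-base")
model.active_adapters = model.load_adapter("AdapterHub/roberta-base-pf-ud_pos", source="hf")

inputs = tokenizer("The quick brown fox jumps.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # [1, seq_len, num_tags]

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
tag_ids = logits.argmax(dim=-1)[0].tolist()
print(list(zip(tokens, tag_ids)))  # subword tokens paired with predicted tag ids
```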
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-wic` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [wordsence/wic](https://adapterhub.ml/explore/wordsence/wic/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:wordsence/wic", "adapter-transformers"]} | text-classification | AdapterHub/roberta-base-pf-wic | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:wordsence/wic",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-wic' for roberta-base
An adapter for the 'roberta-base' model that was trained on the wordsence/wic dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformer... | [
"# Adapter 'AdapterHub/roberta-base-pf-wic' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wordsence/wic dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapt... | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-wic' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wordsence/wic dataset and includes a prediction head for classi... | [
38,
71,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-wic' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wordsence/wic dataset and includes a prediction head for cla... | [
-0.0056435358710587025,
-0.02434997819364071,
-0.0022243193816393614,
0.016545282676815987,
0.15332730114459991,
0.024709712713956833,
0.12727570533752441,
0.057436492294073105,
0.022030344232916832,
0.033210765570402145,
0.05228009074926376,
0.0720653086900711,
0.06256622076034546,
0.0087... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-wikihop` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/wikihop](https://adapterhub.ml/explore/qa/wikihop/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the *... | {"language": ["en"], "tags": ["question-answering", "roberta", "adapterhub:qa/wikihop", "adapter-transformers"]} | question-answering | AdapterHub/roberta-base-pf-wikihop | [
"adapter-transformers",
"roberta",
"question-answering",
"adapterhub:qa/wikihop",
"en",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-wikihop' for roberta-base
An adapter for the 'roberta-base' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transf... | [
"# Adapter 'AdapterHub/roberta-base-pf-wikihop' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install '... | [
"TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-wikihop' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/wikihop dataset and includes a prediction head for question ... | [
38,
73,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-wikihop' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/wikihop dataset and includes a prediction head for questi... | [
-0.005640930961817503,
-0.06655193120241165,
-0.0028061680495738983,
0.012421898543834686,
0.14608751237392426,
0.019804511219263077,
0.11091896891593933,
0.08620105683803558,
0.0494319312274456,
0.029829086735844612,
0.046664487570524216,
0.08647583425045013,
0.07303152978420258,
-0.00308... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-winogrande` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/winogrande](https://adapterhub.ml/explore/comsense/winogrande/) dataset and includes a prediction head for multiple choice.
This adapter was created fo... | {"language": ["en"], "tags": ["roberta", "adapterhub:comsense/winogrande", "adapter-transformers"], "datasets": ["winogrande"]} | null | AdapterHub/roberta-base-pf-winogrande | [
"adapter-transformers",
"roberta",
"adapterhub:comsense/winogrande",
"en",
"dataset:winogrande",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-winogrande' for roberta-base
An adapter for the 'roberta-base' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapt... | [
"# Adapter 'AdapterHub/roberta-base-pf-winogrande' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ... | [
"TAGS\n#adapter-transformers #roberta #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-winogrande' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/winogrande dataset and includes a predicti... | [
41,
75,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-winogrande' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/winogrande dataset and includes a predi... | [
-0.025608930736780167,
-0.018116923049092293,
-0.003572807414457202,
0.026478195562958717,
0.14685600996017456,
0.022515784949064255,
0.160408154129982,
0.04767243564128876,
-0.018841566517949104,
0.03512119501829147,
0.04424302652478218,
0.07633846253156662,
0.056353095918893814,
0.032860... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-wnut_17` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [wnut_17](https://huggingface.co/datasets/wnut_17/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[adapter-trans... | {"language": ["en"], "tags": ["token-classification", "roberta", "adapter-transformers"], "datasets": ["wnut_17"]} | token-classification | AdapterHub/roberta-base-pf-wnut_17 | [
"adapter-transformers",
"roberta",
"token-classification",
"en",
"dataset:wnut_17",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-wnut_17' for roberta-base
An adapter for the 'roberta-base' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_No... | [
"# Adapter 'AdapterHub/roberta-base-pf-wnut_17' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transf... | [
"TAGS\n#adapter-transformers #roberta #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-wnut_17' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.\n\nThis... | [
38,
74,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-wnut_17' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.\n\nT... | [
-0.031006742268800735,
-0.037914618849754333,
-0.0021539691369980574,
0.038745537400245667,
0.16743484139442444,
0.021787654608488083,
0.10719364136457443,
0.05572668835520744,
0.023881131783127785,
0.021393368020653725,
0.061668574810028076,
0.1092328131198883,
0.04987775534391403,
0.0406... |
null | null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-yelp_polarity` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [yelp_polarity](https://huggingface.co/datasets/yelp_polarity/) dataset and includes a prediction head for classification.
This adapter was created for usage ... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["yelp_polarity"]} | text-classification | AdapterHub/roberta-base-pf-yelp_polarity | [
"adapter-transformers",
"roberta",
"text-classification",
"en",
"dataset:yelp_polarity",
"arxiv:2104.08247",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-yelp_polarity' for roberta-base
An adapter for the 'roberta-base' model that was trained on the yelp_polarity dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-t... | [
"# Adapter 'AdapterHub/roberta-base-pf-yelp_polarity' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the yelp_polarity dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, inst... | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-yelp_polarity' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the yelp_polarity dataset and includes a prediction head for... | [
39,
78,
57,
30,
45
] | [
"passage: TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-yelp_polarity' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the yelp_polarity dataset and includes a prediction head ... | [
-0.028682369738817215,
-0.01789894327521324,
-0.003785238368436694,
0.019508330151438713,
0.17595501244068146,
0.026918645948171616,
0.17233337461948395,
0.035488829016685486,
-0.0007185092545114458,
0.03460479900240898,
0.06367865949869156,
0.08650841563940048,
0.06022535637021065,
0.0613... |
null | null | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | text-generation | AdharshJolly/HarryPotterBot-Model | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] | [
51,
8
] | [
"passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT Model"
] | [
-0.0009023238671943545,
0.07815738022327423,
-0.006546166725456715,
0.07792752981185913,
0.10655936598777771,
0.048972971737384796,
0.17639793455600739,
0.12185695022344589,
0.016568755730986595,
-0.04774167761206627,
0.11647630482912064,
0.2130284160375595,
-0.002118367003276944,
0.024608... |
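The DialoGPT records above ship without usage snippets; a hedged multi-turn chat sketch with the standard `transformers` causal-LM API (only the repo id comes from the record, the rest is the conventional DialoGPT pattern):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AdharshJolly/HarryPotterBot-Model")
model = AutoModelForCausalLM.from_pretrained("AdharshJolly/HarryPotterBot-Model")

history = None
for _ in range(4):  # chat for four turns
    user_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input = torch.cat([history, user_ids], dim=-1) if history is not None else user_ids
    history = model.generate(bot_input, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Bot:", tokenizer.decode(history[:, bot_input.shape[-1]:][0], skip_special_tokens=True))
```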
null | null | transformers |
# Model
- Problem type: Binary Classification
- Model ID: 12592372
## Validation Metrics
- Loss: 0.23033875226974487
- Accuracy: 0.9138655462184874
- Precision: 0.9087136929460581
- Recall: 0.9201680672268907
- AUC: 0.9690346726926065
- F1: 0.9144050104384133
## Usage
You can use cURL to access this model:
```
$... | {"language": "eng", "datasets": ["Adi2K/autonlp-data-Priv-Consent"], "widget": [{"text": "You can control cookies and tracking tools. To learn how to manage how we - and our vendors - use cookies and other tracking tools, please click here."}]} | text-classification | Adi2K/Priv-Consent | [
"transformers",
"pytorch",
"bert",
"text-classification",
"eng",
"dataset:Adi2K/autonlp-data-Priv-Consent",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"eng"
] | TAGS
#transformers #pytorch #bert #text-classification #eng #dataset-Adi2K/autonlp-data-Priv-Consent #autotrain_compatible #endpoints_compatible #region-us
|
# Model
- Problem type: Binary Classification
- Model ID: 12592372
## Validation Metrics
- Loss: 0.23033875226974487
- Accuracy: 0.9138655462184874
- Precision: 0.9087136929460581
- Recall: 0.9201680672268907
- AUC: 0.9690346726926065
- F1: 0.9144050104384133
## Usage
You can use cURL to access this model:
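A sketch of that request in Python rather than cURL, since the record's own command is cut off (the hosted Inference API endpoint and the bearer-token placeholder are assumptions; the sample input is the widget text from the record):
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/Adi2K/Priv-Consent"  # assumed endpoint
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}  # placeholder token
payload = {"inputs": "You can control cookies and tracking tools."}
print(requests.post(API_URL, headers=headers, json=payload).json())
```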
Or ... | [
"# Model\n\n- Problem type: Binary Classification\n- Model ID: 12592372",
"## Validation Metrics\n\n- Loss: 0.23033875226974487\n- Accuracy: 0.9138655462184874\n- Precision: 0.9087136929460581\n- Recall: 0.9201680672268907\n- AUC: 0.9690346726926065\n- F1: 0.9144050104384133",
"## Usage\n\nYou can use cURL to a... | [
"TAGS\n#transformers #pytorch #bert #text-classification #eng #dataset-Adi2K/autonlp-data-Priv-Consent #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model\n\n- Problem type: Binary Classification\n- Model ID: 12592372",
"## Validation Metrics\n\n- Loss: 0.23033875226974487\n- Accuracy: 0.913865... | [
58,
17,
79,
17
] | [
"passage: TAGS\n#transformers #pytorch #bert #text-classification #eng #dataset-Adi2K/autonlp-data-Priv-Consent #autotrain_compatible #endpoints_compatible #region-us \n# Model\n\n- Problem type: Binary Classification\n- Model ID: 12592372## Validation Metrics\n\n- Loss: 0.23033875226974487\n- Accuracy: 0.913865546... | [
-0.14357443153858185,
0.17749594151973724,
0.00027984727057628334,
0.07699862122535706,
0.12558935582637787,
0.034409280866384506,
0.05013192072510719,
0.08820591121912003,
0.05101976916193962,
0.06834909319877625,
0.16073763370513916,
0.17799730598926544,
0.04975612461566925,
0.1106514930... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wa... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]} | automatic-speech-recognition | Adil617/wav2vec2-base-timit-demo-colab | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2-base-timit-demo-colab
==============================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.9314
* Wer: 1.0
Model description
-----------------
More information needed
Intended uses & limitat... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps... | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 3... | [
56,
130,
4,
33
] | [
"passage: TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size... | [
-0.10582812875509262,
0.09784380346536636,
-0.003416720312088728,
0.06521322578191757,
0.10915904492139816,
-0.0191994346678257,
0.12969271838665009,
0.15058748424053192,
-0.09168218076229095,
0.07436150312423706,
0.12649132311344147,
0.1514039784669876,
0.04190778359770775,
0.145701751112... |
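For context, the hyperparameter block reported in the record above maps directly onto `transformers.TrainingArguments`; a sketch filling in only the values visible in the record (the output directory is an assumption, and the warmup-step and epoch values are truncated there, so they are deliberately left out):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="wav2vec2-base-timit-demo-colab",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
)
```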
null | null | transformers |
# Harry Potter DialoGPT model | {"tags": ["conversational"]} | text-generation | AdrianGzz/DialoGPT-small-harrypotter | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT model | [
"# Harry Potter DialoGPT model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT model"
] | [
51,
8
] | [
"passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT model"
] | [
-0.0007423716015182436,
0.07901087403297424,
-0.006403876468539238,
0.07870237529277802,
0.10491720587015152,
0.049147240817546844,
0.17843516170978546,
0.12238198518753052,
0.016599085181951523,
-0.04870329797267914,
0.11620716750621796,
0.21275456249713898,
-0.003188240109011531,
0.02853... |
null | null | transformers | # DialoGPT Trained on the Speech of a Game Character
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
# Let's chat for 4 lines
f... | {"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"} | text-generation | Aero/Tsubomi-Haruno | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # DialoGPT Trained on the Speech of a Game Character
| [
"# DialoGPT Trained on the Speech of a Game Character"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# DialoGPT Trained on the Speech of a Game Character"
] | [
56,
16
] | [
"passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# DialoGPT Trained on the Speech of a Game Character"
] | [
0.012904342263936996,
0.11703895032405853,
-0.005028963554650545,
0.08144676685333252,
0.1670924574136734,
-0.0212992113083601,
0.11028972268104553,
0.12470067292451859,
-0.01663978397846222,
-0.053953252732753754,
0.1154475063085556,
0.14575247466564178,
0.017180223017930984,
0.0941658541... |
null | null | null |
#HAL | {"tags": ["conversational"]} | text-generation | AetherIT/DialoGPT-small-Hal | [
"conversational",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#conversational #region-us
|
#HAL | [] | [
"TAGS\n#conversational #region-us \n"
] | [
10
] | [
"passage: TAGS\n#conversational #region-us \n"
] | [
0.04639962688088417,
0.010033470578491688,
-0.010517087765038013,
-0.09196841716766357,
0.07825888693332672,
0.025966141372919083,
0.0816626027226448,
0.03981694206595421,
0.1679982990026474,
-0.043665021657943726,
0.11948301643133163,
0.05959230661392212,
-0.03424782678484917,
-0.03417851... |
null | null | transformers |
# Tomato_Leaf_Classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nater... | {"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]} | image-classification | Aftabhussain/Tomato_Leaf_Classifier | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# Tomato_Leaf_Classifier
Autogenerated by HuggingPics
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
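A hedged inference sketch for this classifier (the repo id and the two labels come from the record; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Aftabhussain/Tomato_Leaf_Classifier")
print(classifier("tomato_leaf.jpg"))  # e.g. [{'label': 'Healthy', 'score': ...}, {'label': 'Bacterial_spot', ...}]
```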
## Example Images
#### Bacterial_spot
!Bacterial_spot
#### Healthy
!Healthy | [
"# Tomato_Leaf_Classifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### Bacterial_spot\n\n!Bacterial_spot",
"#### Healthy\n\n!Healthy"
] | [
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# Tomato_Leaf_Classifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport a... | [
49,
47,
4,
13,
7
] | [
"passage: TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n# Tomato_Leaf_Classifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nRepor... | [
-0.11861062794923782,
0.19037409126758575,
-0.0029194499365985394,
0.07021021097898483,
0.16489437222480774,
-0.0008586541516706347,
0.06718330085277557,
0.18514618277549744,
0.19943994283676147,
0.0902823731303215,
0.09235995262861252,
0.2049889862537384,
-0.019493239000439644,
0.22069668... |
null | null | transformers | A monolingual T5 model for Persian trained on the OSCAR 21.09 (https://oscar-corpus.com/) corpus with a self-supervised method. A 35 GB deduplicated version of the Persian data was used for pre-training the model.
It's similar to the English T5 model but just for Persian. You may need to fine-tune it on your specific task.
Exa... | {} | text2text-generation | Ahmad/parsT5-base | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| A monolingual T5 model for Persian trained on the OSCAR 21.09 (URL) corpus with a self-supervised method. A 35 GB deduplicated version of the Persian data was used for pre-training the model.
It's similar to the English T5 model but just for Persian. You may need to fine-tune it on your specific task.
Example code:
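(The record's own example is truncated; the following is a hedged reconstruction using the standard T5 classes, with a placeholder Persian input.)
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("Ahmad/parsT5-base")
model = T5ForConditionalGeneration.from_pretrained("Ahmad/parsT5-base")

inputs = tokenizer("متن ورودی فارسی", return_tensors="pt")  # placeholder input text
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```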
Steps:... | [] | [
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
48
] | [
"passage: TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
-0.01584368571639061,
0.001455417019315064,
-0.00658801756799221,
0.0177968367934227,
0.18000324070453644,
0.01899094320833683,
0.1102970764040947,
0.13923293352127075,
-0.029492201283574104,
-0.031411342322826385,
0.1258108913898468,
0.215000182390213,
-0.002026807749643922,
0.09281328320... |
null | null | transformers | A checkpoint for training Persian T5 model. This repository can be cloned and pre-training can be resumed. This model uses flax and is for training.
For more information and getting the training code please refer to:
https://github.com/puraminy/parsT5
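Since the record says the checkpoint is Flax-based and meant for resuming pre-training, loading it presumably follows the standard Flax T5 path; a sketch only, not the repo's documented procedure:
```python
from transformers import FlaxT5ForConditionalGeneration

# Assumption: the repo stores standard Flax T5 weights readable by from_pretrained.
model = FlaxT5ForConditionalGeneration.from_pretrained("Ahmad/parsT5")
```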
| {} | text2text-generation | Ahmad/parsT5 | [
"transformers",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| A checkpoint for training Persian T5 model. This repository can be cloned and pre-training can be resumed. This model uses flax and is for training.
For more information and getting the training code please refer to:
URL
| [] | [
"TAGS\n#transformers #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
47
] | [
"passage: TAGS\n#transformers #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
0.014135517179965973,
0.004881597124040127,
-0.005808737128973007,
0.00795044656842947,
0.16288866102695465,
0.025697359815239906,
0.1293317824602127,
0.14211171865463257,
-0.02072332613170147,
-0.031602196395397186,
0.1321851909160614,
0.18756985664367676,
-0.0028659093659371138,
0.096532... |
null | null | transformers |
This is a fine-tuned BERT model on Tunisian dialect text (dataset used: AhmedBou/Tunisian-Dialect-Corpus), ready for sentiment analysis and classification tasks.
LABEL_1: Positive
LABEL_2: Negative
LABEL_0: Neutral
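A sketch of how those label ids might be mapped back to readable classes at inference time (assumed usage, not taken from the card; the input string is a placeholder):
```python
from transformers import pipeline

LABELS = {"LABEL_0": "Neutral", "LABEL_1": "Positive", "LABEL_2": "Negative"}
clf = pipeline("text-classification", model="AhmedBou/TuniBert")
pred = clf("برشة باهي")[0]  # placeholder Tunisian-dialect input
print(LABELS.get(pred["label"], pred["label"]), pred["score"])
```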
This work is an integral component of my Master's degree thesis and represents the culmination of exte... | {"language": ["ar"], "license": "apache-2.0", "tags": ["sentiment analysis", "classification", "arabic dialect", "tunisian dialect"]} | text-classification | AhmedBou/TuniBert | [
"transformers",
"pytorch",
"bert",
"text-classification",
"sentiment analysis",
"classification",
"arabic dialect",
"tunisian dialect",
"ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #text-classification #sentiment analysis #classification #arabic dialect #tunisian dialect #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
This is a fine-tuned BERT model on Tunisian dialect text (dataset used: AhmedBou/Tunisian-Dialect-Corpus), ready for sentiment analysis and classification tasks.
LABEL_1: Positive
LABEL_2: Negative
LABEL_0: Neutral
This work is an integral component of my Master's degree thesis and represents the culmination of exte... | [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #sentiment analysis #classification #arabic dialect #tunisian dialect #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
62
] | [
"passage: TAGS\n#transformers #pytorch #bert #text-classification #sentiment analysis #classification #arabic dialect #tunisian dialect #ar #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
-0.01288282498717308,
0.12405360490083694,
-0.0062357098795473576,
0.018763892352581024,
0.10131507366895676,
0.008365754038095474,
0.10392386466264725,
0.11306393146514893,
0.08616697788238525,
-0.0773349478840828,
0.11928797513246536,
0.10722370445728302,
0.031146937981247902,
0.00479723... |
null | null | transformers |
[![PWC](https://img.shields.io/endpoint?url=https://paperswithcode.com/badge/mariancg-a-code-generation-transformer-model/code-generation-on-conala)](https://paperswithcode.com/sota/code-generation-on-conala?p=mariancg-a-code-generation-transformer-model)
# MarianCG: a code generation transformer... | {"widget": [{"text": "create array containing the maximum value of respective elements of array `[2, 3, 4]` and array `[1, 5, 2]"}, {"text": "check if all elements in list `mylist` are identical"}, {"text": "enable debug mode on flask application `app`"}, {"text": "getting the length of `my_tuple`"}, {"text": "find all... | text2text-generation | AhmedSSoliman/MarianCG-CoNaLa | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #marian #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #region-us
|

```python
from simpletransformers.t5 import T5Model  # assumed import; the snippet's setup lines are missing from this record
model = T5Model("mt5", "AimB/mT5-en-kr-natural")  # hypothetical instantiation, added so the snippet runs
print(model.predict(["I feel good today"]))
print(model.predict(["우리집 고양이는 세상에서 제일 귀엽습니다"]))
``` | {} | text2text-generation | AimB/mT5-en-kr-natural | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| you can use this model with simpletransformers.
| [] | [
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
49
] | [
"passage: TAGS\n#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
-0.03247205540537834,
0.008802899159491062,
-0.006354454439133406,
0.010667198337614536,
0.17799293994903564,
0.015382261015474796,
0.11927555501461029,
0.12881627678871155,
0.005648191086947918,
-0.017856856808066368,
0.15311330556869507,
0.2192111611366272,
-0.01395078282803297,
0.091042... |
null | null | transformers |
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 35248482
- CO2 Emissions (in grams): 7.989144645413398
## Validation Metrics
- Loss: 0.13783401250839233
- Accuracy: 0.9728654124457308
- Macro F1: 0.949537871674076
- Micro F1: 0.9728654124457308
- Weighted F1: 0.9732422812610365
... | {"language": "en", "tags": "autonlp", "datasets": ["Aimendo/autonlp-data-triage"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 7.989144645413398} | text-classification | Aimendo/autonlp-triage-35248482 | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:Aimendo/autonlp-data-triage",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Aimendo/autonlp-data-triage #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 35248482
- CO2 Emissions (in grams): 7.989144645413398
## Validation Metrics
- Loss: 0.13783401250839233
- Accuracy: 0.9728654124457308
- Macro F1: 0.949537871674076
- Micro F1: 0.9728654124457308
- Weighted F1: 0.9732422812610365
... | [
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 35248482\n- CO2 Emissions (in grams): 7.989144645413398",
"## Validation Metrics\n\n- Loss: 0.13783401250839233\n- Accuracy: 0.9728654124457308\n- Macro F1: 0.949537871674076\n- Micro F1: 0.9728654124457308\n- Weighted F1: 0... | [
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Aimendo/autonlp-data-triage #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 35248482\n- CO2 Emissions (in grams): 7... | [
67,
43,
156,
17
] | [
"passage: TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Aimendo/autonlp-data-triage #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 35248482\n- CO2 Emissions (in grams)... | [
-0.11002016067504883,
0.19207601249217987,
-0.004242255352437496,
0.09488816559314728,
0.12137606739997864,
0.05517380312085152,
0.05274789780378342,
0.1394151896238327,
0.01965229958295822,
0.16112123429775238,
0.08876322954893112,
0.19368289411067963,
0.06867331266403198,
0.1400571763515... |
null | null | transformers |
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 530014983
- CO2 Emissions (in grams): 55.10196329868386
## Validation Metrics
- Loss: 0.23171618580818176
- Accuracy: 0.9298837645294338
- Precision: 0.9314414866901055
- Recall: 0.9279459594696022
- AUC: 0.979447403984557
- F1: 0.92969... | {"language": "en", "tags": "autonlp", "datasets": ["Ajay191191/autonlp-data-Test"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 55.10196329868386} | text-classification | Ajay191191/autonlp-Test-530014983 | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:Ajay191191/autonlp-data-Test",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Ajay191191/autonlp-data-Test #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 530014983
- CO2 Emissions (in grams): 55.10196329868386
## Validation Metrics
- Loss: 0.23171618580818176
- Accuracy: 0.9298837645294338
- Precision: 0.9314414866901055
- Recall: 0.9279459594696022
- AUC: 0.979447403984557
- F1: 0.92969... | [
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 530014983\n- CO2 Emissions (in grams): 55.10196329868386",
"## Validation Metrics\n\n- Loss: 0.23171618580818176\n- Accuracy: 0.9298837645294338\n- Precision: 0.9314414866901055\n- Recall: 0.9279459594696022\n- AUC: 0.97944740398... | [
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Ajay191191/autonlp-data-Test #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 530014983\n- CO2 Emissions (in grams): 55.1... | [
68,
42,
79,
17
] | [
"passage: TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-Ajay191191/autonlp-data-Test #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 530014983\n- CO2 Emissions (in grams): 5... | [
-0.16912712156772614,
0.13294516503810883,
-0.0005381121882237494,
0.07494839280843735,
0.11645413935184479,
0.03112068772315979,
0.05018383264541626,
0.10168210417032242,
0.03718405216932297,
0.0599658228456974,
0.1648073047399521,
0.19051752984523773,
0.015336073003709316,
0.135453268885... |
null | null | transformers |
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 16122692
## Validation Metrics
- Loss: 1.1877621412277222
- Rouge1: 42.0713
- Rouge2: 23.3043
- RougeL: 37.3755
- RougeLsum: 37.8961
- Gen Len: 60.7117
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bea... | {"language": "unk", "tags": "autonlp", "datasets": ["Ajaykannan6/autonlp-data-manthan"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]} | text2text-generation | Ajaykannan6/autonlp-manthan-16122692 | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autonlp",
"unk",
"dataset:Ajaykannan6/autonlp-data-manthan",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"unk"
] | TAGS
#transformers #pytorch #bart #text2text-generation #autonlp #unk #dataset-Ajaykannan6/autonlp-data-manthan #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 16122692
## Validation Metrics
- Loss: 1.1877621412277222
- Rouge1: 42.0713
- Rouge2: 23.3043
- RougeL: 37.3755
- RougeLsum: 37.8961
- Gen Len: 60.7117
## Usage
You can use cURL to access this model:
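Equivalent local inference with the pipeline API, as a sketch (the record's cURL command is truncated; the input string is a placeholder, and `max_length=60` merely echoes the reported generation length):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Ajaykannan6/autonlp-manthan-16122692")
article = "..."  # long input document goes here
print(summarizer(article, max_length=60)[0]["summary_text"])
```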
| [
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 16122692",
"## Validation Metrics\n\n- Loss: 1.1877621412277222\n- Rouge1: 42.0713\n- Rouge2: 23.3043\n- RougeL: 37.3755\n- RougeLsum: 37.8961\n- Gen Len: 60.7117",
"## Usage\n\nYou can use cURL to access this model:"
] | [
"TAGS\n#transformers #pytorch #bart #text2text-generation #autonlp #unk #dataset-Ajaykannan6/autonlp-data-manthan #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 16122692",
"## Validation Metrics\n\n- Loss: 1.18776214122772... | [
62,
24,
56,
13
] | [
"passage: TAGS\n#transformers #pytorch #bart #text2text-generation #autonlp #unk #dataset-Ajaykannan6/autonlp-data-manthan #autotrain_compatible #endpoints_compatible #region-us \n# Model Trained Using AutoNLP\n\n- Problem type: Summarization\n- Model ID: 16122692## Validation Metrics\n\n- Loss: 1.1877621412277222\... | [
-0.16867810487747192,
0.1733558028936386,
-0.0011004398111253977,
0.0944652110338211,
0.1205955296754837,
0.009452027268707752,
0.08397797495126724,
0.06120523065328598,
0.027940578758716583,
0.03469341993331909,
0.1801198571920395,
0.1712404191493988,
0.039117470383644104,
0.1722399145364... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on ... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2"], "model-index": [{"name": "albert-base-v2-finetuned-squad", "results": []}]} | question-answering | Akari/albert-base-v2-finetuned-squad | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #albert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us
| albert-base-v2-finetuned-squad
==============================
This model is a fine-tuned version of albert-base-v2 on the squad\_v2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9492
Model description
-----------------
More information needed
Intended uses & limitations
---------... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #albert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_si... | [
58,
98,
4,
30
] | [
"passage: TAGS\n#transformers #pytorch #tensorboard #albert #question-answering #generated_from_trainer #dataset-squad_v2 #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\... | [
-0.10188356041908264,
0.07559505105018616,
-0.0016841115429997444,
0.12020843476057053,
0.15611854195594788,
0.02474975399672985,
0.11135903000831604,
0.12692929804325104,
-0.10180055350065231,
0.016955764964222908,
0.1357985883951187,
0.160633385181427,
0.0032500848174095154,
0.0724877044... |
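The SQuAD v2 fine-tune in the record above invites the usual question-answering pipeline; a hedged sketch (only the repo id comes from the record, the question/context pair is invented for illustration):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Akari/albert-base-v2-finetuned-squad")
result = qa(question="What dataset was the model tuned on?",
            context="The model was fine-tuned on the SQuAD v2 dataset.")
print(result["answer"], result["score"])
```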
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-cased-wikitext2", "results": []}]} | fill-mask | Akash7897/bert-base-cased-wikitext2 | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| bert-base-cased-wikitext2
=========================
This model is a fine-tuned version of bert-base-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 6.8544
Model description
-----------------
More information needed
Intended uses & limitations
-----------------------... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n... | [
55,
98,
4,
35
] | [
"passage: TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: ... | [
-0.1182388886809349,
0.04186995327472687,
-0.0019299992127344012,
0.12735961377620697,
0.16604167222976685,
0.028583558276295662,
0.1113106906414032,
0.11817040294408798,
-0.09425613284111023,
0.024642299860715866,
0.1394529789686203,
0.17348450422286987,
0.009888887405395508,
0.1298822164... |
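A hedged fill-mask sketch for the masked-LM fine-tune in the record above (repo id from the record; the prompt is an arbitrary example using BERT's `[MASK]` token):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Akash7897/bert-base-cased-wikitext2")
print(unmasker("Paris is the [MASK] of France."))
```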
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "ar... | text-classification | Akash7897/distilbert-base-uncased-finetuned-cola | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0789
* Matthews Correlation: 0.5222
Model description
-----------------
More informa... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning... | [
67,
98,
4,
35
] | [
"passage: TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learn... | [
-0.10394919663667679,
0.08985844999551773,
-0.002289725001901388,
0.12414448708295822,
0.16572530567646027,
0.03007124550640583,
0.11671684682369232,
0.12958166003227234,
-0.08714556694030762,
0.025326035916805267,
0.12572716176509857,
0.16294439136981964,
0.02047770842909813,
0.1208245605... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-sst2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "sst2"}... | text-classification | Akash7897/distilbert-base-uncased-finetuned-sst2 | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-sst2
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3010
* Accuracy: 0.9037
Model description
-----------------
More information needed
... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning... | [
67,
98,
4,
35
] | [
"passage: TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learn... | [
-0.10313968360424042,
0.09008915722370148,
-0.0023171266075223684,
0.1241375058889389,
0.16539828479290009,
0.03005148656666279,
0.11738370358943939,
0.12998001277446747,
-0.08612484484910965,
0.025518635287880898,
0.12624506652355194,
0.16279219090938568,
0.020160207524895668,
0.120295718... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the fo... | {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-wikitext2", "results": []}]} | text-generation | Akash7897/gpt2-wikitext2 | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| gpt2-wikitext2
==============
This model is a fine-tuned version of gpt2 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 6.1079
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n*... | [
63,
98,
4,
35
] | [
"passage: TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05... | [
-0.09162196516990662,
0.04601273313164711,
-0.0021800538524985313,
0.11196960508823395,
0.16906164586544037,
0.030266452580690384,
0.13115845620632172,
0.13041538000106812,
-0.11338057368993759,
0.035940200090408325,
0.1379547268152237,
0.17065271735191345,
0.014516468159854412,
0.10980288... |
null | null | transformers |
# Akashpb13/Central_kurdish_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - hu dataset.
It achieves the following results on evaluation set (which is 10 percent of train data set merged with inv... | {"language": ["ckb"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "ckb", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Central... | automatic-speech-recognition | Akashpb13/Central_kurdish_xlsr | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"ckb",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
... | 2022-03-02T23:29:04+00:00 | [] | [
"ckb"
] | TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ckb #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #regi... | Akashpb13/Central\_kurdish\_xlsr
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - hu dataset.
It achieves the following results on evaluation set (which is 10 percent of train data set merged with invalidated data, repo... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000095637994662983496\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_... | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ckb #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space... | [
127,
138,
4,
38,
36
] | [
"passage: TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #ckb #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #... | [
-0.12594576179981232,
0.06699497252702713,
-0.004699729382991791,
0.03198195621371269,
0.11835694313049316,
0.026486769318580627,
0.1562015861272812,
0.1364767700433731,
-0.06324998289346695,
0.1138169914484024,
0.03640428185462952,
0.0714731439948082,
0.08500310033559799,
0.12344451248645... |
null | null | transformers |
# Akashpb13/Galician_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - hu dataset.
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with invali... | {"language": ["gl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "gl", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Galician_... | automatic-speech-recognition | Akashpb13/Galician_xlsr | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"gl",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"... | 2022-03-02T23:29:04+00:00 | [] | [
"gl"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #gl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| Akashpb13/Galician\_xlsr
========================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - hu dataset.
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other,... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps:... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #gl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### T... | [
117,
132,
4,
40,
36
] | [
"passage: TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #gl #robust-speech-event #model_for_talk #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n##... | [
-0.12526337802410126,
0.1231512650847435,
-0.006688437890261412,
0.03145918995141983,
0.10164907574653625,
0.025004176422953606,
0.09319858253002167,
0.15855273604393005,
-0.044865068048238754,
0.14142951369285583,
0.058217696845531464,
0.08985313773155212,
0.10131131857633591,
0.151887789... |
null | null | transformers |
# Akashpb13/Hausa_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other, and dev datasets):
- Loss: 0.2... | {"language": ["ha"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "ha", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Hausa_xls... | automatic-speech-recognition | Akashpb13/Hausa_xlsr | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"ha",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"... | 2022-03-02T23:29:04+00:00 | [] | [
"ha"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #ha #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
| Akashpb13/Hausa\_xlsr
=====================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with invalidated data, reported, other, and dev datasets):
* Loss: 0.275118
* Wer: 0.329955
Model des... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps:... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #ha #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n... | [
121,
132,
4,
40,
36
] | [
"passage: TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #ha #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #reg... | [
-0.10782694816589355,
0.11373207718133926,
-0.0071241469122469425,
0.03746609762310982,
0.10427255183458328,
0.01256661955267191,
0.10017112642526627,
0.1585090011358261,
-0.06634099781513214,
0.12368293106555939,
0.05097830295562744,
0.08818885684013367,
0.10304611176252365,
0.14996236562... |
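None of the XLS-R records in this stretch show an inference snippet; a hedged CTC transcription sketch for the Hausa checkpoint above (the audio file name is a placeholder; 16 kHz mono input is assumed):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Akashpb13/Hausa_xlsr")
print(asr("hausa_sample.wav")["text"])
```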
null | null | transformers |
# Akashpb13/Kabyle_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - hu dataset.
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with dev data... | {"language": ["kab"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "sw", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Kabyle_x... | automatic-speech-recognition | Akashpb13/Kabyle_xlsr | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"sw",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"kab",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-... | 2022-03-02T23:29:04+00:00 | [] | [
"kab"
] | TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #sw #robust-speech-event #model_for_talk #hf-asr-leaderboard #kab #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| Akashpb13/Kabyle\_xlsr
======================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - hu dataset.
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with dev datasets):
* Loss: 0.159032
* We... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 8\n* seed: 13\n* gradient\\_accumulation\\_steps: 4\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* ... | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #sw #robust-speech-event #model_for_talk #hf-asr-leaderboard #kab #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #regio... | [
125,
120,
4,
40,
36
] | [
"passage: TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #sw #robust-speech-event #model_for_talk #hf-asr-leaderboard #kab #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatib... | [
-0.11144691705703735,
0.0925993099808693,
-0.004455330781638622,
0.004960178397595882,
0.12795405089855194,
0.03626727685332298,
0.1413661539554596,
0.12080994248390198,
-0.05064387246966362,
0.14110229909420013,
0.05179037153720856,
0.10072662681341171,
0.10652312636375427,
0.158846735954... |
null | null | transformers |
# Akashpb13/Swahili_xlsr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - hu dataset.
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with dev dat... | {"language": ["sw"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "sw"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/Swahili_x... | automatic-speech-recognition | Akashpb13/Swahili_xlsr | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"sw",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"... | 2022-03-02T23:29:04+00:00 | [] | [
"sw"
] | TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sw #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #regio... | Akashpb13/Swahili\_xlsr
=======================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_7\_0 - hu dataset.
It achieves the following results on the evaluation set (which is 10 percent of train data set merged with dev datasets):
* Loss: 0.159032
* ... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 2\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps:... | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sw #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space ... | [
127,
132,
4,
40,
36
] | [
"passage: TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #sw #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #h... | [
-0.10971909761428833,
0.059084463864564896,
-0.006249066907912493,
0.04750148206949234,
0.1292930543422699,
0.021808581426739693,
0.1456700563430786,
0.1386193186044693,
-0.08575477451086044,
0.07752733677625656,
0.02802065759897232,
0.09635860472917557,
0.07747520506381989,
0.097979143261... |
null | null | transformers |
# Akashpb13/xlsr_hungarian_new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - hu dataset.
It achieves the following results on the evaluation set (which is 10 percent of the train dataset merged with inval... | {"language": ["hu"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "hu", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/xlsr_hung... | automatic-speech-recognition | Akashpb13/xlsr_hungarian_new | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"hu",
"model_for_talk",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"... | 2022-03-02T23:29:04+00:00 | [] | [
"hu"
] | TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #hu #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
| Akashpb13/xlsr\_hungarian\_new
==============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - hu dataset.
It achieves the following results on the evaluation set (which is 10 percent of the train dataset merged with invalidated data, reported... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000095637994662983496\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 16\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\... | [
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #hu #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### T... | [
117,
138,
4,
40,
36
] | [
"passage: TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #hu #model_for_talk #mozilla-foundation/common_voice_8_0 #robust-speech-event #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n##... | [
-0.11402564495801926,
0.11704681813716888,
-0.006620184984058142,
0.03629700466990471,
0.09771525114774704,
0.019043780863285065,
0.1006350889801979,
0.15515558421611786,
-0.05340997129678726,
0.12364360690116882,
0.0459945909678936,
0.08444870263338089,
0.10503978282213211,
0.139665842056... |
null | null | transformers |
# Akashpb13/xlsr_kurmanji_kurdish
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - kmr dataset.
It achieves the following results on the evaluation set (which is 10 percent of the train dataset merged wit... | {"language": ["kmr", "ku"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "kmr", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "Akashpb13/x... | automatic-speech-recognition | Akashpb13/xlsr_kurmanji_kurdish | [
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"kmr",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"ku",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-... | 2022-03-02T23:29:04+00:00 | [] | [
"kmr",
"ku"
] | TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #kmr #robust-speech-event #model_for_talk #hf-asr-leaderboard #ku #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #... | Akashpb13/xlsr\_kurmanji\_kurdish
=================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - kmr dataset.
It achieves the following results on the evaluation set (which is 10 percent of the train dataset merged with invalidated data... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.000096\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 13\n* gradient\\_accumulation\\_steps: 16\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps... | [
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #kmr #robust-speech-event #model_for_talk #hf-asr-leaderboard #ku #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_s... | [
129,
132,
4,
38,
36
] | [
"passage: TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #kmr #robust-speech-event #model_for_talk #hf-asr-leaderboard #ku #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatib... | [
-0.09517058730125427,
0.05441342294216156,
-0.004841960966587067,
0.050700593739748,
0.14785130321979523,
0.024962984025478363,
0.14717212319374084,
0.13896390795707703,
-0.07616184651851654,
0.08396860957145691,
0.023989612236618996,
0.0898221805691719,
0.0808437392115593,
0.1033204793930... |
null | null | transformers | # Wav2Vec2-Large-XLSR-53-Maltese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Maltese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be u... | {"language": "mt", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Maltese by Akash PB", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "data... | automatic-speech-recognition | Akashpb13/xlsr_maltese_wav2vec2 | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"mt",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"mt"
] | TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #mt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
| # Wav2Vec2-Large-XLSR-53-Maltese
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Maltese using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
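(The code block that originally followed this sentence was stripped when the card was flattened into this dump; below is a minimal reconstruction of the usual XLSR inference snippet — the checkpoint id comes from this row, while the file name and resampling step are assumptions.)

```
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Checkpoint id taken from this row; any 16 kHz mono recording works.
processor = Wav2Vec2Processor.from_pretrained("Akashpb13/xlsr_maltese_wav2vec2")
model = Wav2Vec2ForCTC.from_pretrained("Akashpb13/xlsr_maltese_wav2vec2")

speech, rate = torchaudio.load("sample.wav")  # placeholder file
speech = torchaudio.functional.resample(speech, rate, 16_000).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```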
Test Result: 29.42 %
| [
"# Wav2Vec2-Large-XLSR-53-Maltese\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Maltese using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:\n\nTest Result: 29.42 %"
] | [
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #mt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Maltese\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Maltese using the Commo... | [
81,
62,
26
] | [
"passage: TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #mt #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n# Wav2Vec2-Large-XLSR-53-Maltese\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Maltese using the Co... | [
-0.18720687925815582,
0.029445722699165344,
-0.002672052476555109,
-0.05262922868132591,
0.058213673532009125,
-0.07868067175149918,
0.1509665995836258,
0.057426534593105316,
0.021495720371603966,
0.040460508316755295,
0.04075153172016144,
0.15563543140888214,
0.05497140437364578,
0.133466... |
null | null | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | text-generation | Akjder/DialoGPT-small-harrypotter | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model | [
"# Harry Potter DialoGPT Model"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] | [
51,
8
] | [
"passage: TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Harry Potter DialoGPT Model"
] | [
-0.0009023238671943545,
0.07815738022327423,
-0.006546166725456715,
0.07792752981185913,
0.10655936598777771,
0.048972971737384796,
0.17639793455600739,
0.12185695022344589,
0.016568755730986595,
-0.04774167761206627,
0.11647630482912064,
0.2130284160375595,
-0.002118367003276944,
0.024608... |
null | null | transformers |
# BEiT for Face Mask Detection
BEiT model pre-trained and fine-tuned on the Self-Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei.
## Model description
The BEiT mo... | {"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]} | image-classification | AkshatSurolia/BEiT-FaceMask-Finetuned | [
"transformers",
"pytorch",
"beit",
"image-classification",
"dataset:Face-Mask18K",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #beit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BEiT for Face Mask Detection
BEiT model pre-trained and fine-tuned on the Self-Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei.
## Model description
The BEiT mo... | [
"# BEiT for Face Mask Detection\r\n\r\nBEiT model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper BEIT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei.",
"## Model description\r\n\r\n... | [
"TAGS\n#transformers #pytorch #beit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BEiT for Face Mask Detection\r\n\r\nBEiT model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resol... | [
56,
83,
422,
62,
67
] | [
"passage: TAGS\n#transformers #pytorch #beit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# BEiT for Face Mask Detection\r\n\r\nBEiT model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at re... | [
-0.11645087599754333,
-0.08511195331811905,
-0.0005422681570053101,
0.08485542982816696,
0.10081285238265991,
-0.017099885269999504,
0.23824000358581543,
0.06861436367034912,
0.11060522496700287,
-0.039393890649080276,
0.08594918251037598,
-0.023969700559973717,
0.061397112905979156,
0.217... |
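All four AkshatSurolia face-mask cards in this stretch (BEiT above; ConvNeXt, DeiT and ViT below) describe 224x224 two-class image classifiers, so one inference sketch covers them; a hedged version using the generic Auto classes — the image path is a placeholder:

```
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Checkpoint id taken from the row above; swap in the sibling ids the same way.
processor = AutoImageProcessor.from_pretrained("AkshatSurolia/BEiT-FaceMask-Finetuned")
model = AutoModelForImageClassification.from_pretrained("AkshatSurolia/BEiT-FaceMask-Finetuned")

image = Image.open("face.jpg").convert("RGB")  # placeholder image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```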
null | null | transformers |
# ConvNeXt for Face Mask Detection
ConvNeXt model pre-trained and fine-tuned on the Self-Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao et al.
## Training Metrics
epoch = ... | {"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]} | image-classification | AkshatSurolia/ConvNeXt-FaceMask-Finetuned | [
"transformers",
"pytorch",
"safetensors",
"convnext",
"image-classification",
"dataset:Face-Mask18K",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #convnext #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# ConvNeXt for Face Mask Detection
ConvNeXt model pre-trained and fine-tuned on the Self-Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao et al.
## Training Metrics
epoch = ... | [
"# ConvNeXt for Face Mask Detection\r\n\r\nConvNeXt model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was introduced in the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao et al.",
"## Training Metrics\r\n epoch ... | [
"TAGS\n#transformers #pytorch #safetensors #convnext #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# ConvNeXt for Face Mask Detection\r\n\r\nConvNeXt model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Data... | [
66,
81,
63,
67
] | [
"passage: TAGS\n#transformers #pytorch #safetensors #convnext #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# ConvNeXt for Face Mask Detection\r\n\r\nConvNeXt model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K D... | [
-0.18450477719306946,
0.1081475242972374,
-0.0041149877943098545,
0.04943931847810745,
0.021060174331068993,
-0.017380382865667343,
0.18322314321994781,
0.09209630638360977,
-0.025648223236203194,
0.09855273365974426,
0.17709986865520477,
0.005610685795545578,
0.05751306563615799,
0.222044... |
null | null | transformers |
# Distilled Data-efficient Image Transformer for Face Mask Detection
Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on the Self-Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image tran... | {"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]} | image-classification | AkshatSurolia/DeiT-FaceMask-Finetuned | [
"transformers",
"pytorch",
"deit",
"image-classification",
"dataset:Face-Mask18K",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #deit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Distilled Data-efficient Image Transformer for Face Mask Detection
Distilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on the Self-Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image tran... | [
"# Distilled Data-efficient Image Transformer for Face Mask Detection\r\n\r\nDistilled data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient ima... | [
"TAGS\n#transformers #pytorch #deit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Distilled Data-efficient Image Transformer for Face Mask Detection\r\n\r\nDistilled data-efficient Image Transformer (DeiT) model pre-traine... | [
60,
100,
127,
60,
66
] | [
"passage: TAGS\n#transformers #pytorch #deit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# Distilled Data-efficient Image Transformer for Face Mask Detection\r\n\r\nDistilled data-efficient Image Transformer (DeiT) model pre-tra... | [
-0.10595948249101639,
0.08468689024448395,
-0.005720397457480431,
0.09097554534673691,
0.0765141099691391,
-0.008398712612688541,
0.10744410753250122,
0.11856885999441147,
-0.05729149281978607,
0.07745472341775894,
0.09389550238847733,
0.05503438413143158,
0.07157699763774872,
0.1219578683... |
null | null | transformers |
# Clinical BERT for ICD-10 Prediction
The Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries.
---
## ... | {"license": "apache-2.0", "tags": ["text-classification"]} | text-classification | AkshatSurolia/ICD-10-Code-Prediction | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #text-classification #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# Clinical BERT for ICD-10 Prediction
The Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries.
---
## ... | [
"# Clinical BERT for ICD-10 Prediction\n\nThe Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12) or BioBERT (BioBERT-Base v1.0 + PubMed 200K + PMC 270K) & trained on either all MIMIC notes or only discharge summaries. \n \n... | [
"TAGS\n#transformers #pytorch #bert #text-classification #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# Clinical BERT for ICD-10 Prediction\n\nThe Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-12)... | [
40,
98,
232
] | [
"passage: TAGS\n#transformers #pytorch #bert #text-classification #license-apache-2.0 #endpoints_compatible #has_space #region-us \n# Clinical BERT for ICD-10 Prediction\n\nThe Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base (cased_L-12_H-768_A-... | [
0.039867665618658066,
0.20785032212734222,
-0.008290743455290794,
-0.033989161252975464,
0.0214229803532362,
0.025569163262844086,
0.11671572923660278,
0.11676473170518875,
0.016804860904812813,
0.18284505605697632,
0.056599587202072144,
-0.0076972972601652145,
0.06361041218042374,
0.10605... |
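The ICD-10 card above is cut off before its usage section; under the assumption that it is a standard BERT sequence classifier whose `id2label` mapping holds the ICD-10 codes, a minimal sketch would be:

```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "AkshatSurolia/ICD-10-Code-Prediction"  # id taken from the row above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# The note text is an invented example; id2label is assumed to hold ICD-10 codes.
inputs = tokenizer("subarachnoid hemorrhage", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
top5 = torch.topk(logits, k=5).indices[0].tolist()
print([model.config.id2label[i] for i in top5])
```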
null | null | transformers |
# Vision Transformer (ViT) for Face Mask Detection
Vision Transformer (ViT) model pre-trained and fine-tuned on the Self-Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. ViT was first introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Alexey Dosovitskiy et al. | {"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["Face-Mask18K"]} | image-classification | AkshatSurolia/ViT-FaceMask-Finetuned | [
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"dataset:Face-Mask18K",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #vit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Vision Transformer (ViT) for Face Mask Detection
Vision Transformer (ViT) model pre-trained and fine-tuned on the Self-Curated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. ViT was first introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Alexey Dosovitskiy et al. | [
"# Vision Transformer (ViT) for Face Mask Detection\r\n\r\nVision Transformer (ViT) model pre-trained and fine-tuned on Self Currated Custom Face-Mask18K Dataset (18k images, 2 classes) at resolution 224x224. It was first introduced in the paper Training data-efficient image transformers & distillation through atte... | [
"TAGS\n#transformers #pytorch #safetensors #vit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Vision Transformer (ViT) for Face Mask Detection\r\n\r\nVision Transformer (ViT) model pre-trained and fine-tuned on Self Currated Custom F... | [
60,
169,
313,
63,
69
] | [
"passage: TAGS\n#transformers #pytorch #safetensors #vit #image-classification #dataset-Face-Mask18K #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# Vision Transformer (ViT) for Face Mask Detection\r\n\r\nVision Transformer (ViT) model pre-trained and fine-tuned on Self Currated Custo... | [
-0.09831704944372177,
-0.03793856129050255,
-0.0027231830172240734,
0.03533155843615532,
0.13414408266544342,
0.010018743574619293,
0.167586088180542,
0.05828621983528137,
-0.0026692228857427835,
0.004053276497870684,
0.09867370128631592,
0.005720819812268019,
0.06529121100902557,
0.164960... |
null | null | null |
# Spoken Language Identification Model
## Model description
The model can classify a speech utterance according to the language spoken.
It covers the following languages (
English,
Indonesian,
Japanese,
Korean,
Thai,
Vietnamese,
Mandarin Chinese).
| {"language": "multilingual", "license": "apache-2.0", "tags": ["LID", "spoken language recognition"], "datasets": ["VoxLingua107"], "metrics": ["ER"], "inference": false} | null | AkshaySg/LanguageIdentification | [
"LID",
"spoken language recognition",
"multilingual",
"dataset:VoxLingua107",
"license:apache-2.0",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#LID #spoken language recognition #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us
|
# Spoken Language Identification Model
## Model description
The model can classify a speech utterance according to the language spoken.
It covers the following languages (
English,
Indonesian,
Japanese,
Korean,
Thai,
Vietnamese,
Mandarin Chinese).
| [
"# Spoken Language Identification Model",
"## Model description\r\n\r\nThe model can classify a speech utterance according to the language spoken.\r\nIt covers following different languages (\r\nEnglish, \r\nIndonesian, \r\nJapanese, \r\nKorean, \r\nThai, \r\nVietnamese, \r\nMandarin Chinese)."
] | [
"TAGS\n#LID #spoken language recognition #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us \n",
"# Spoken Language Identification Model",
"## Model description\r\n\r\nThe model can classify a speech utterance according to the language spoken.\r\nIt covers following different languages (\r\nEngl... | [
36,
7,
46
] | [
"passage: TAGS\n#LID #spoken language recognition #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us \n# Spoken Language Identification Model## Model description\r\n\r\nThe model can classify a speech utterance according to the language spoken.\r\nIt covers following different languages (\r\nEnglish... | [
-0.10274317860603333,
0.10924999415874481,
-0.002139477524906397,
-0.010771607980132103,
0.08353158086538315,
-0.05123700574040413,
0.17427876591682434,
0.09189902245998383,
0.10429120808839798,
-0.07397846132516861,
-0.013636074028909206,
0.01338825561106205,
0.04283532872796059,
0.108198... |
null | null | speechbrain |
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.
The model can classify a speech utt... | {"language": "multilingual", "license": "apache-2.0", "tags": ["audio-classification", "speechbrain", "embeddings", "Language", "Identification", "pytorch", "ECAPA-TDNN", "TDNN", "VoxLingua107"], "datasets": ["VoxLingua107"], "metrics": ["Accuracy"], "widget": [{"example_title": "English Sample", "src": "https://cdn-me... | audio-classification | AkshaySg/langid | [
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"multilingual",
"dataset:VoxLingua107",
"license:apache-2.0",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"multilingual"
] | TAGS
#speechbrain #audio-classification #embeddings #Language #Identification #pytorch #ECAPA-TDNN #TDNN #VoxLingua107 #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us
|
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.
The model can classify a speech utt... | [
"# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model",
"## Model description\n\nThis is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.\nThe model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.\n\nThe model can classify... | [
"TAGS\n#speechbrain #audio-classification #embeddings #Language #Identification #pytorch #ECAPA-TDNN #TDNN #VoxLingua107 #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us \n",
"# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model",
"## Model description\n\nThis is a spoken language re... | [
71,
18,
369,
78,
5,
122,
166,
20,
13,
11
] | [
"passage: TAGS\n#speechbrain #audio-classification #embeddings #Language #Identification #pytorch #ECAPA-TDNN #TDNN #VoxLingua107 #multilingual #dataset-VoxLingua107 #license-apache-2.0 #region-us \n# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model## Model description\n\nThis is a spoken language recog... | [
-0.17394231259822845,
0.07825440913438797,
0.0004281003202777356,
0.0351618267595768,
0.11055560410022736,
-0.013047732412815094,
0.039839476346969604,
0.06746874749660492,
0.2179347723722458,
0.03232981637120247,
-0.03384881466627121,
-0.040282029658555984,
0.07837425917387009,
0.08666571... |
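The SpeechBrain card above describes the standard VoxLingua107 ECAPA-TDNN language-ID recipe; a hedged usage sketch with SpeechBrain's `EncoderClassifier` — the source id is taken from this row, while the save directory and audio path are placeholders:

```
from speechbrain.pretrained import EncoderClassifier

# Assumption: the checkpoint follows the usual SpeechBrain lang-id layout.
language_id = EncoderClassifier.from_hparams(source="AkshaySg/langid", savedir="tmp_langid")
signal = language_id.load_audio("speech.wav")  # placeholder file
out_prob, score, index, text_lab = language_id.classify_batch(signal)
print(text_lab)  # predicted language label(s)
```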
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-srb-base-cased-oscar
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model descr... | {"tags": ["generated_from_trainer"], "model_index": [{"name": "bert-srb-base-cased-oscar", "results": [{"task": {"name": "Masked Language Modeling", "type": "fill-mask"}}]}]} | fill-mask | Aleksandar/bert-srb-base-cased-oscar | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# bert-srb-base-cased-oscar
This model is a fine-tuned version of [](URL on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The fo... | [
"# bert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Tr... | [
"TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitati... | [
43,
35,
6,
12,
8,
3,
90,
4,
31
] | [
"passage: TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n# bert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMo... | [
-0.12536224722862244,
0.14647288620471954,
-0.0018954836996272206,
0.10421301424503326,
0.16008436679840088,
0.018732599914073944,
0.08408620953559875,
0.1458176225423813,
-0.10784420371055603,
0.05053891986608505,
0.11313962936401367,
0.058632854372262955,
0.03582911193370819,
0.180604979... |
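The OSCAR-trained Serbian card above (like its distilbert and electra siblings further down) gives no usage snippet; since these are plain masked-language models, a hedged sketch with the fill-mask pipeline — the test sentence is an assumption:

```
from transformers import pipeline

# Model id taken from the row above; assumes a standard BERT-style [MASK] token.
fill = pipeline("fill-mask", model="Aleksandar/bert-srb-base-cased-oscar")
for candidate in fill("Beograd je [MASK] Srbije."):
    print(candidate["token_str"], round(candidate["score"], 3))
```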
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-srb-ner-setimes
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluatio... | {"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-srb-ner-setimes", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9645112274185379}}]}]} | token-classification | Aleksandar/bert-srb-ner-setimes | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| bert-srb-ner-setimes
====================
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1955
* Precision: 0.8229
* Recall: 0.8465
* F1: 0.8345
* Accuracy: 0.9645
Model description
-----------------
More information needed
Intended u... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Traini... | [
"TAGS\n#transformers #pytorch #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size... | [
44,
98,
4,
31
] | [
"passage: TAGS\n#transformers #pytorch #bert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_s... | [
-0.08895184844732285,
0.03521723300218582,
-0.0020090483594685793,
0.1104963943362236,
0.21936267614364624,
0.03745502606034279,
0.09922479093074799,
0.08892221748828888,
-0.12529592216014862,
0.013918996788561344,
0.10982727259397507,
0.1829705834388733,
-0.013699759729206562,
0.099380023... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-srb-ner
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set... | {"tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "bert-srb-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "sr"}, "metric": {... | token-classification | Aleksandar/bert-srb-ner | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #bert #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us
| bert-srb-ner
============
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3561
* Precision: 0.8909
* Recall: 0.9082
* F1: 0.8995
* Accuracy: 0.9547
Model description
-----------------
More information needed
Intended uses & limitat... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Traini... | [
"TAGS\n#transformers #pytorch #safetensors #bert #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_s... | [
55,
98,
4,
31
] | [
"passage: TAGS\n#transformers #pytorch #safetensors #bert #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\... | [
-0.11998849362134933,
0.07349695265293121,
-0.001568026258610189,
0.11735563725233078,
0.19122302532196045,
0.023486193269491196,
0.09906211495399475,
0.09703238308429718,
-0.08852508664131165,
0.010771628469228745,
0.13091214001178741,
0.1861344575881958,
-0.012197519652545452,
0.13328440... |
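For the wikiann-trained Serbian NER rows in this stretch, the reported precision/recall/F1 numbers imply a standard token-classification head; a hedged inference sketch — the model id comes from the row above, the example sentence is an assumption:

```
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Aleksandar/bert-srb-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Novak Đoković je rođen u Beogradu."))
```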
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-srb-base-cased-oscar
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model... | {"tags": ["generated_from_trainer"], "model_index": [{"name": "distilbert-srb-base-cased-oscar", "results": [{"task": {"name": "Masked Language Modeling", "type": "fill-mask"}}]}]} | fill-mask | Aleksandar/distilbert-srb-base-cased-oscar | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# distilbert-srb-base-cased-oscar
This model is a fine-tuned version of [](URL on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
... | [
"# distilbert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"... | [
"TAGS\n#transformers #pytorch #distilbert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# distilbert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended use... | [
45,
36,
6,
12,
8,
3,
90,
4,
31
] | [
"passage: TAGS\n#transformers #pytorch #distilbert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n# distilbert-srb-base-cased-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.## Model description\n\nMore information needed## Intended uses & limit... | [
-0.13579519093036652,
0.12526506185531616,
-0.0022549473214894533,
0.11094828695058823,
0.16067780554294586,
0.035012442618608475,
0.09925741702318192,
0.1317533403635025,
-0.101173534989357,
0.03503095358610153,
0.10042141377925873,
0.06700993329286575,
0.032172273844480515,
0.14707620441... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-srb-ner-setimes
This model was trained from scratch on the None dataset.
It achieves the following results on the eva... | {"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-srb-ner-setimes", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9665376552169005}}]}]} | token-classification | Aleksandar/distilbert-srb-ner-setimes | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #distilbert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| distilbert-srb-ner-setimes
==========================
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1838
* Precision: 0.8370
* Recall: 0.8617
* F1: 0.8492
* Accuracy: 0.9665
Model description
-----------------
More information needed
... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Traini... | [
"TAGS\n#transformers #pytorch #safetensors #distilbert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* ... | [
51,
98,
4,
31
] | [
"passage: TAGS\n#transformers #pytorch #safetensors #distilbert #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\... | [
-0.1166515052318573,
0.0646442174911499,
-0.0017129264306277037,
0.11655726283788681,
0.19825993478298187,
0.018857207149267197,
0.10414183139801025,
0.09229511022567749,
-0.09952376782894135,
0.017805561423301697,
0.12720786035060883,
0.18190211057662964,
-0.012243755161762238,
0.13141031... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-srb-ner
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluati... | {"language": ["sr"], "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-srb-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "wikiann", "type": "wikiann", ... | token-classification | Aleksandar/distilbert-srb-ner | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"generated_from_trainer",
"sr",
"dataset:wikiann",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"sr"
] | TAGS
#transformers #pytorch #distilbert #token-classification #generated_from_trainer #sr #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us
| distilbert-srb-ner
==================
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2972
* Precision: 0.8871
* Recall: 0.9100
* F1: 0.8984
* Accuracy: 0.9577
Model description
-----------------
More information needed
Intended us... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Traini... | [
"TAGS\n#transformers #pytorch #distilbert #token-classification #generated_from_trainer #sr #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size... | [
55,
98,
4,
31
] | [
"passage: TAGS\n#transformers #pytorch #distilbert #token-classification #generated_from_trainer #sr #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_s... | [
-0.10514558851718903,
0.07407738268375397,
-0.0015962485922500491,
0.12090826779603958,
0.20048382878303528,
0.03810099884867668,
0.09460730105638504,
0.10983976721763611,
-0.09237676858901978,
0.006141399033367634,
0.13007159531116486,
0.19520176947116852,
-0.006557425484061241,
0.1150881... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-srb-ner-setimes
This model was trained from scratch on the None dataset.
It achieves the following results on the evalua... | {"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "electra-srb-ner-setimes", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9546789604788638}}]}]} | token-classification | Aleksandar/electra-srb-ner-setimes | [
"transformers",
"pytorch",
"safetensors",
"electra",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| electra-srb-ner-setimes
=======================
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2804
* Precision: 0.8286
* Recall: 0.8081
* F1: 0.8182
* Accuracy: 0.9547
Model description
-----------------
More information needed
Inte... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Traini... | [
"TAGS\n#transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eva... | [
50,
98,
4,
31
] | [
"passage: TAGS\n#transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* ... | [
-0.10811026394367218,
0.06570591032505035,
-0.0016648718155920506,
0.10982353985309601,
0.22198599576950073,
0.019794320687651634,
0.0867212787270546,
0.08342278748750687,
-0.10358785837888718,
0.02452504262328148,
0.12231972068548203,
0.19436128437519073,
-0.00423048809170723,
0.135427683... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-srb-ner
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation ... | {"tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "electra-srb-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "sr"}, "metric"... | token-classification | Aleksandar/electra-srb-ner | [
"transformers",
"pytorch",
"safetensors",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us
| electra-srb-ner
===============
This model was trained from scratch on the wikiann dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3406
* Precision: 0.8934
* Recall: 0.9087
* F1: 0.9010
* Accuracy: 0.9568
Model description
-----------------
More information needed
Intended uses & l... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20",
"### Traini... | [
"TAGS\n#transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\... | [
56,
98,
4,
31
] | [
"passage: TAGS\n#transformers #pytorch #safetensors #electra #token-classification #generated_from_trainer #dataset-wikiann #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_bat... | [
-0.1187610998749733,
0.09241653233766556,
-0.0018923794850707054,
0.11439786106348038,
0.20177459716796875,
0.01854664832353592,
0.08357545733451843,
0.09628311544656754,
-0.08458331227302551,
0.018844809383153915,
0.13125167787075043,
0.1962995082139969,
-0.005004273261874914,
0.141631662... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-srb-oscar
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
... | {"tags": ["generated_from_trainer"], "model_index": [{"name": "electra-srb-oscar", "results": [{"task": {"name": "Masked Language Modeling", "type": "fill-mask"}}]}]} | fill-mask | Aleksandar/electra-srb-oscar | [
"transformers",
"pytorch",
"electra",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #electra #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# electra-srb-oscar
This model is a fine-tuned version of [](URL on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following ... | [
"# electra-srb-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training h... | [
"TAGS\n#transformers #pytorch #electra #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# electra-srb-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n... | [
44,
30,
6,
12,
8,
3,
90,
4,
31
] | [
"passage: TAGS\n#transformers #pytorch #electra #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n# electra-srb-oscar\n\nThis model is a fine-tuned version of [](URL on the None dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore in... | [
-0.11812357604503632,
0.13820581138134003,
-0.0024661910720169544,
0.0974673181772232,
0.16599193215370178,
0.012541926465928555,
0.08842062950134277,
0.13467855751514435,
-0.12196382135152817,
0.059657420963048935,
0.1193217858672142,
0.10533806681632996,
0.033710185438394547,
0.207880452... |
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# herbert-base-cased-finetuned-squad
This model is a fine-tuned version of [allegro/herbert-base-cased](https://huggingface.co/all... | {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "herbert-base-cased-finetuned-squad", "results": []}]} | question-answering | Aleksandra/herbert-base-cased-finetuned-squad | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-cc-by-4.0 #endpoints_compatible #region-us
| herbert-base-cased-finetuned-squad
==================================
This model is a fine-tuned version of allegro/herbert-base-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2071
Model description
-----------------
More information needed
Intended uses & limita... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-cc-by-4.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batc... | [
49,
98,
4,
33
] | [
"passage: TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #license-cc-by-4.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_b... | [
-0.09660293161869049,
0.04550648853182793,
-0.0016026218654587865,
0.11438999325037003,
0.1798904538154602,
0.03308841958642006,
0.1085251122713089,
0.10300865769386292,
-0.09379427880048752,
0.04019957780838013,
0.12596173584461212,
0.1606403887271881,
-0.004404374863952398,
0.04455415531... |
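The HerBERT card above reports only an eval loss; assuming it was fine-tuned on a SQuAD-format Polish set (as its name suggests), a minimal question-answering sketch would be:

```
from transformers import pipeline

# Model id taken from the row above; the Polish example is an assumption.
qa = pipeline("question-answering", model="Aleksandra/herbert-base-cased-finetuned-squad")
result = qa(
    question="Gdzie urodził się Fryderyk Chopin?",
    context="Fryderyk Chopin urodził się w Żelazowej Woli.",
)
print(result["answer"], result["score"])
```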
null | null | transformers |
# xlm-roberta-en-ru-emoji
- Problem type: Multi-class Classification | {"language": ["en", "ru"], "datasets": ["tweet_eval"], "model_index": [{"name": "xlm-roberta-en-ru-emoji", "results": [{"task": {"name": "Sentiment Analysis", "type": "sentiment-analysis"}, "dataset": {"name": "Tweet Eval", "type": "tweet_eval", "args": "emoji"}}]}], "widget": [{"text": "\u041e\u0442\u043b\u0438\u0447\... | text-classification | adorkin/xlm-roberta-en-ru-emoji | [
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"en",
"ru",
"dataset:tweet_eval",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [
"en",
"ru"
] | TAGS
#transformers #pytorch #safetensors #xlm-roberta #text-classification #en #ru #dataset-tweet_eval #autotrain_compatible #endpoints_compatible #region-us
|
# xlm-roberta-en-ru-emoji
- Problem type: Multi-class Classification | [
"# xlm-roberta-en-ru-emoji \n- Problem type: Multi-class Classification"
] | [
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #text-classification #en #ru #dataset-tweet_eval #autotrain_compatible #endpoints_compatible #region-us \n",
"# xlm-roberta-en-ru-emoji \n- Problem type: Multi-class Classification"
] | [
57,
22
] | [
"passage: TAGS\n#transformers #pytorch #safetensors #xlm-roberta #text-classification #en #ru #dataset-tweet_eval #autotrain_compatible #endpoints_compatible #region-us \n# xlm-roberta-en-ru-emoji \n- Problem type: Multi-class Classification"
] | [
-0.009954913519322872,
0.028151899576187134,
-0.005189618561416864,
0.0841468796133995,
0.1990654021501541,
0.029547255486249924,
0.10141881555318832,
0.09639860689640045,
0.03299819305539131,
0.0243370421230793,
0.13880370557308197,
0.20219172537326813,
-0.0277547724545002,
0.141860648989... |
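The bilingual emoji card above names only the task (multi-class classification on tweet_eval-style data); a hedged sketch of plain pipeline inference — both example tweets are assumptions:

```
from transformers import pipeline

# Model id taken from the row above.
clf = pipeline("text-classification", model="adorkin/xlm-roberta-en-ru-emoji")
print(clf("This is the best day ever!"))
print(clf("Какой хороший день!"))
```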
null | null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unkno... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "bert", "results": []}]} | text-classification | AlekseyKorshuk/bert | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| bert
====
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5316
* Accuracy: 0.2936
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More i... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 64\n* ... | [
"TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: ... | [
53,
147,
4,
33
] | [
"passage: TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_siz... | [
-0.1031300351023674,
0.13475432991981506,
-0.0022868013475090265,
0.10324689745903015,
0.17663830518722534,
0.044504083693027496,
0.12276335060596466,
0.13461938500404358,
-0.11452135443687439,
0.07198300212621689,
0.10187046974897385,
0.11519768089056015,
0.049953874200582504,
0.162658289... |
null | null | transformers | **Usage of HuggingFace Transformers for the header generation task**
```
from transformers import AutoModelForSeq2SeqLM, PegasusTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("AlekseyKulnevich/Pegasus-HeaderGeneration")
tokenizer = PegasusTokenizer.from_pretrained('google/pegasus-large')

input_text = "..."  # your text
# The rest of the snippet is truncated in this dump; a standard generate/decode
# completion follows (settings are an assumption, not the author's exact config).
input_ids = tokenizer(input_text, return_tensors='pt', truncation=True).input_ids
output_ids = model.generate(input_ids, num_beams=5, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
``` | {} | text2text-generation |
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us
| Usage HuggingFace Transformers for header generation task
Decoder configuration examples:
Input text you can see here
output:
1. *the impact of climate change on tropical cyclones*
2. *the impact of human induced climate change on tropical cyclones*
3. *the impact of climate change on tropical cyclone formation ... | [] | [
"TAGS\n#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
40
] | [
"passage: TAGS\n#transformers #pytorch #pegasus #text2text-generation #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
-0.027585696429014206,
0.007313489448279142,
-0.007987958379089832,
0.028276974335312843,
0.1623227447271347,
0.029489101842045784,
0.14388062059879303,
0.1242440864443779,
0.009850045666098595,
-0.03792005777359009,
0.13300150632858276,
0.18938398361206055,
-0.010199892334640026,
0.117705... |