| pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (string, 1–900k chars) | metadata (string, 2–438k chars) | id (string, 5–122 chars) | last_modified (null) | tags (list, 1–1.84k items) | sha (null) | created_at (string, 25 chars) | arxiv (list, 0–201 items) | languages (list, 0–1.83k items) | tags_str (string, 17–9.34k chars) | text_str (string, 0–389k chars) | text_lists (list, 0–722 items) | processed_texts (list, 1–723 items) | tokens_length (list, 1–723 items) | input_texts (list, 1 item) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-epochs15
This model is a fine-tuned version of [AKulk/wav2vec2-base-timit-epochs10](https://huggingface.co/A... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-epochs15", "results": []}]} | AKulk/wav2vec2-base-timit-epochs15 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-base-timit-epochs15
This model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs10 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### T... | [
"# wav2vec2-base-timit-epochs15\n\nThis model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs10 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"##... | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-base-timit-epochs15\n\nThis model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs10 on the None dataset.",
"## Model descri... | [
47,
50,
7,
9,
9,
4,
133,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n# wav2vec2-base-timit-epochs15\n\nThis model is a fine-tuned version of AKulk/wav2vec2-base-timit-epochs10 on the None dataset.## Model description\n\nMor... |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-epochs5
This model is a fine-tuned version of [facebook/wav2vec2-lv-60-espeak-cv-ft](https://huggingface.co/... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-epochs5", "results": []}]} | AKulk/wav2vec2-base-timit-epochs5 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-base-timit-epochs5
This model is a fine-tuned version of facebook/wav2vec2-lv-60-espeak-cv-ft on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### ... | [
"# wav2vec2-base-timit-epochs5\n\nThis model is a fine-tuned version of facebook/wav2vec2-lv-60-espeak-cv-ft on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"#... | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-base-timit-epochs5\n\nThis model is a fine-tuned version of facebook/wav2vec2-lv-60-espeak-cv-ft on the None dataset.",
"## Model descr... | [
47,
52,
7,
9,
9,
4,
133,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n# wav2vec2-base-timit-epochs5\n\nThis model is a fine-tuned version of facebook/wav2vec2-lv-60-espeak-cv-ft on the None dataset.## Model description\n\nMo... |
summarization | transformers |
# summarization_fanpage128
This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on Fanpage dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 1.5348
- Rouge1: 34.1882
- Rouge2: 15.7866
- Rougel: 25.141
- Rougelsum: 28.4882
- Gen Len: 69.3041
... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/fanpage"], "metrics": ["rouge"], "base_model": "gsarti/it5-base", "model-index": [{"name": "summarization_fanpage128", "results": []}]} | ARTeLab/it5-summarization-fanpage | null | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/fanpage",
"base_model:gsarti/it5-base",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"it"
] | TAGS
#transformers #pytorch #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/fanpage #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# summarization_fanpage128
This model is a fine-tuned version of gsarti/it5-base on Fanpage dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 1.5348
- Rouge1: 34.1882
- Rouge2: 15.7866
- Rougel: 25.141
- Rougelsum: 28.4882
- Gen Len: 69.3041
## Usage
### Training hyperparameters
... | [
"# summarization_fanpage128\n\nThis model is a fine-tuned version of gsarti/it5-base on Fanpage dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 1.5348\n- Rouge1: 34.1882\n- Rouge2: 15.7866\n- Rougel: 25.141\n- Rougelsum: 28.4882\n- Gen Len: 69.3041",
"## Usage",
"### Traini... | [
"TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/fanpage #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# summarization_fanpage128\n\nThis model is a fine-tuned version of gsarti... | [
73,
92,
3,
95,
54
] | [
"TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/fanpage #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# summarization_fanpage128\n\nThis model is a fine-tuned version of gsarti/it5-b... |
summarization | transformers |
# summarization_ilpost
This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on IlPost dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 1.6020
- Rouge1: 33.7802
- Rouge2: 16.2953
- Rougel: 27.4797
- Rougelsum: 30.2273
- Gen Len: 45.3175
## U... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/ilpost"], "metrics": ["rouge"], "base_model": "gsarti/it5-base", "model-index": [{"name": "summarization_ilpost", "results": []}]} | ARTeLab/it5-summarization-ilpost | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/ilpost",
"base_model:gsarti/it5-base",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"it"
] | TAGS
#transformers #pytorch #tensorboard #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/ilpost #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# summarization_ilpost
This model is a fine-tuned version of gsarti/it5-base on IlPost dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 1.6020
- Rouge1: 33.7802
- Rouge2: 16.2953
- Rougel: 27.4797
- Rougelsum: 30.2273
- Gen Len: 45.3175
## Usage
### Training hyperparameters
The... | [
"# summarization_ilpost\n\nThis model is a fine-tuned version of gsarti/it5-base on IlPost dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 1.6020\n- Rouge1: 33.7802\n- Rouge2: 16.2953\n- Rougel: 27.4797\n- Rougelsum: 30.2273\n- Gen Len: 45.3175",
"## Usage",
"### Training h... | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/ilpost #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# summarization_ilpost\n\nThis model is a fine-tuned version o... | [
76,
92,
3,
95,
47
] | [
"TAGS\n#transformers #pytorch #tensorboard #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/ilpost #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# summarization_ilpost\n\nThis model is a fine-tuned version of gsar... |
summarization | transformers |
# summarization_mlsum
This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on MLSum-it for Abstractive Summarization.
It achieves the following results:
- Loss: 2.0190
- Rouge1: 19.3739
- Rouge2: 5.9753
- Rougel: 16.691
- Rougelsum: 16.7862
- Gen Len: 32.5268
## Usage
```... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/mlsum-it"], "metrics": ["rouge"], "base_model": "gsarti/it5-base", "model-index": [{"name": "summarization_mlsum", "results": []}]} | ARTeLab/it5-summarization-mlsum | null | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/mlsum-it",
"base_model:gsarti/it5-base",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"it"
] | TAGS
#transformers #pytorch #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/mlsum-it #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# summarization_mlsum
This model is a fine-tuned version of gsarti/it5-base on MLSum-it for Abstractive Summarization.
It achieves the following results:
- Loss: 2.0190
- Rouge1: 19.3739
- Rouge2: 5.9753
- Rougel: 16.691
- Rougelsum: 16.7862
- Gen Len: 32.5268
## Usage
### Training hyperparameters
The following... | [
"# summarization_mlsum\n\nThis model is a fine-tuned version of gsarti/it5-base on MLSum-it for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 2.0190\n- Rouge1: 19.3739\n- Rouge2: 5.9753\n- Rougel: 16.691\n- Rougelsum: 16.7862\n- Gen Len: 32.5268",
"## Usage",
"### Training hyperparam... | [
"TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/mlsum-it #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# summarization_mlsum\n\nThis model is a fine-tuned version of gsarti/it5... | [
75,
93,
3,
95,
54
] | [
"TAGS\n#transformers #pytorch #safetensors #t5 #text2text-generation #summarization #it #dataset-ARTeLab/mlsum-it #base_model-gsarti/it5-base #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n# summarization_mlsum\n\nThis model is a fine-tuned version of gsarti/it5-base ... |
summarization | transformers |
# mbart-summarization-fanpage
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on Fanpage dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 2.1833
- Rouge1: 36.5027
- Rouge2: 17.4428
- Rougel: 26.1734
- Rougelsum: 30.2... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/fanpage"], "metrics": ["rouge"], "base_model": "facebook/mbart-large-cc25", "model-index": [{"name": "summarization_mbart_fanpage4epoch", "results": []}]} | ARTeLab/mbart-summarization-fanpage | null | [
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/fanpage",
"base_model:facebook/mbart-large-cc25",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"it"
] | TAGS
#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/fanpage #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# mbart-summarization-fanpage
This model is a fine-tuned version of facebook/mbart-large-cc25 on Fanpage dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 2.1833
- Rouge1: 36.5027
- Rouge2: 17.4428
- Rougel: 26.1734
- Rougelsum: 30.2636
- Gen Len: 75.2413
## Usage
### Training hy... | [
"# mbart-summarization-fanpage\n\nThis model is a fine-tuned version of facebook/mbart-large-cc25 on Fanpage dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 2.1833\n- Rouge1: 36.5027\n- Rouge2: 17.4428\n- Rougel: 26.1734\n- Rougelsum: 30.2636\n- Gen Len: 75.2413",
"## Usage",... | [
"TAGS\n#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/fanpage #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# mbart-summarization-fanpage\n\nThis model is a fine-tuned version of facebook/mbart-la... | [
68,
93,
3,
95,
54
] | [
"TAGS\n#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/fanpage #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# mbart-summarization-fanpage\n\nThis model is a fine-tuned version of facebook/mbart-large-cc... |
summarization | transformers |
# mbart_summarization_ilpost
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on IlPost dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 2.3640
- Rouge1: 38.9101
- Rouge2: 21.384
- Rougel: 32.0517
- Rougelsum: 35.0743... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/ilpost"], "metrics": ["rouge"], "base_model": "facebook/mbart-large-cc25", "model-index": [{"name": "summarization_mbart_ilpost", "results": []}]} | ARTeLab/mbart-summarization-ilpost | null | [
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/ilpost",
"base_model:facebook/mbart-large-cc25",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"it"
] | TAGS
#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/ilpost #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# mbart_summarization_ilpost
This model is a fine-tuned version of facebook/mbart-large-cc25 on IlPost dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 2.3640
- Rouge1: 38.9101
- Rouge2: 21.384
- Rougel: 32.0517
- Rougelsum: 35.0743
- Gen Len: 39.8843
## Usage
### Training hyper... | [
"# mbart_summarization_ilpost\n\nThis model is a fine-tuned version of facebook/mbart-large-cc25 on IlPost dataset for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 2.3640\n- Rouge1: 38.9101\n- Rouge2: 21.384\n- Rougel: 32.0517\n- Rougelsum: 35.0743\n- Gen Len: 39.8843",
"## Usage",
... | [
"TAGS\n#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/ilpost #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# mbart_summarization_ilpost\n\nThis model is a fine-tuned version of facebook/mbart-larg... | [
68,
95,
3,
95,
54
] | [
"TAGS\n#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/ilpost #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# mbart_summarization_ilpost\n\nThis model is a fine-tuned version of facebook/mbart-large-cc25... |
summarization | transformers |
# mbart_summarization_mlsum
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on mlsum-it for Abstractive Summarization.
It achieves the following results:
- Loss: 3.3336
- Rouge1: 19.3489
- Rouge2: 6.4028
- Rougel: 16.3497
- Rougelsum: 16.5387
- Gen ... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/mlsum-it"], "metrics": ["rouge"], "base_model": "facebook/mbart-large-cc25", "model-index": [{"name": "summarization_mbart_mlsum", "results": []}]} | ARTeLab/mbart-summarization-mlsum | null | [
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/mlsum-it",
"base_model:facebook/mbart-large-cc25",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"it"
] | TAGS
#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/mlsum-it #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# mbart_summarization_mlsum
This model is a fine-tuned version of facebook/mbart-large-cc25 on mlsum-it for Abstractive Summarization.
It achieves the following results:
- Loss: 3.3336
- Rouge1: 19.3489
- Rouge2: 6.4028
- Rougel: 16.3497
- Rougelsum: 16.5387
- Gen Len: 33.5945
## Usage
### Training hyperparamet... | [
"# mbart_summarization_mlsum\n\nThis model is a fine-tuned version of facebook/mbart-large-cc25 on mlsum-it for Abstractive Summarization.\n\nIt achieves the following results:\n- Loss: 3.3336\n- Rouge1: 19.3489\n- Rouge2: 6.4028\n- Rougel: 16.3497\n- Rougelsum: 16.5387\n- Gen Len: 33.5945",
"## Usage",
"### Tr... | [
"TAGS\n#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/mlsum-it #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# mbart_summarization_mlsum\n\nThis model is a fine-tuned version of facebook/mbart-lar... | [
70,
98,
3,
95,
54
] | [
"TAGS\n#transformers #pytorch #safetensors #mbart #text2text-generation #summarization #it #dataset-ARTeLab/mlsum-it #base_model-facebook/mbart-large-cc25 #autotrain_compatible #endpoints_compatible #has_space #region-us \n# mbart_summarization_mlsum\n\nThis model is a fine-tuned version of facebook/mbart-large-cc2... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PENGMENGJIE-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model_index": [{"name": "PENGMENGJIE-finetuned-emotion", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}}]}]} | ASCCCCCCCC/PENGMENGJIE-finetuned-emotion | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# PENGMENGJIE-finetuned-emotion
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training h... | [
"# PENGMENGJIE-finetuned-emotion\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# PENGMENGJIE-finetuned-emotion\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unkown dataset.",
"## Model ... | [
47,
37,
7,
9,
9,
4,
93,
42
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# PENGMENGJIE-finetuned-emotion\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unkown dataset.## Model description\... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-amazon_zh_20000
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/ber... | {"tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "bert-base-chinese-finetuned-amazon_zh_20000", "results": []}]} | ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| bert-base-chinese-finetuned-amazon\_zh\_20000
=============================================
This model is a fine-tuned version of bert-base-chinese on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1683
* Accuracy: 0.5224
* F1: 0.5194
Model description
-----------------
M... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training... | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_b... | [
37,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-chinese-amazon_zh_20000
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-ba... | {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-chinese-amazon_zh_20000", "results": []}]} | ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-chinese-amazon\_zh\_20000
=========================================
This model is a fine-tuned version of bert-base-chinese on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1518
* Accuracy: 0.5092
Model description
-----------------
More information neede... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_... | [
37,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-amazon_zh_20000
This model is a fine-tuned version of [distilbert-base-multilingual-cased](ht... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-multilingual-cased-amazon_zh_20000", "results": []}]} | ASCCCCCCCC/distilbert-base-multilingual-cased-amazon_zh_20000 | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-multilingual-cased-amazon\_zh\_20000
====================================================
This model is a fine-tuned version of distilbert-base-multilingual-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3031
* Accuracy: 0.4406
Model description
---... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_b... | [
47,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-amazon_zh_20000
This model is a fine-tuned version of [distilbert-base-uncased](https://huggin... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-amazon_zh_20000", "results": []}]} | ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000 | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-amazon\_zh\_20000
===================================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3516
* Accuracy: 0.414
Model description
-----------------... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_b... | [
47,
101,
5,
40
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/d... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model_index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}}]}]} | ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### ... | [
"# distilbert-base-uncased-finetuned-clinc\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unkown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"#... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# distilbert-base-uncased-finetuned-clinc\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unkown dataset.",
... | [
47,
42,
7,
9,
9,
4,
93,
42
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# distilbert-base-uncased-finetuned-clinc\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unkown dataset.## Model de... |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilr... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilroberta-base-finetuned-wikitext2", "results": []}]} | AT/distilroberta-base-finetuned-wikitext2 | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of distilroberta-base on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Trainin... | [
"# distilroberta-base-finetuned-wikitext2\n\nThis model is a fine-tuned version of distilroberta-base on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Train... | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# distilroberta-base-finetuned-wikitext2\n\nThis model is a fine-tuned version of distilroberta-base on the None dataset.",
"## Model descriptio... | [
45,
40,
7,
9,
9,
4,
95,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #roberta #fill-mask #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# distilroberta-base-finetuned-wikitext2\n\nThis model is a fine-tuned version of distilroberta-base on the None dataset.## Model description\n\nMore in... |
text-generation | transformers |
#Harry Potter DialoGPT Model | {"tags": ["conversational"]} | ATGdev/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Harry Potter DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-lar... | {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "result", "results": []}]} | AVSilva/bertimbau-large-fine-tuned-md | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# result
This model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7458
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation dat... | [
"# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7458",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Train... | [
"TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:... | [
38,
46,
7,
9,
9,
4,
95,
5,
47
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Lo... |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-lar... | {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "result", "results": []}]} | AVSilva/bertimbau-large-fine-tuned-sd | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# result
This model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7570
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation dat... | [
"# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7570",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Train... | [
"TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:... | [
38,
46,
7,
9,
9,
4,
95,
5,
47
] | [
"TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #region-us \n# result\n\nThis model is a fine-tuned version of neuralmind/bert-large-portuguese-cased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Lo... |
text-generation | transformers |
#Tony Stark DialoGPT model | {"tags": ["conversational"]} | AVeryRealHuman/DialoGPT-small-TonyStark | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
#Tony Stark DialoGPT model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] | [
43
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tmp_znj9o4r
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## M... | {"tags": ["generated_from_keras_callback"], "model-index": [{"name": "tmp_znj9o4r", "results": []}]} | AWTStress/stress_classifier | null | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us
|
# tmp_znj9o4r
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
... | [
"# tmp_znj9o4r\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",... | [
"TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n",
"# tmp_znj9o4r\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nM... | [
38,
34,
7,
9,
9,
4,
32,
5,
38
] | [
"TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n# tmp_znj9o4r\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:## Model description\n\nMore informat... |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# stress_score
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## ... | {"tags": ["generated_from_keras_callback"], "model-index": [{"name": "stress_score", "results": []}]} | AWTStress/stress_score | null | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us
|
# stress_score
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure... | [
"# stress_score\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed"... | [
"TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n",
"# stress_score\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:",
"## Model description\n\n... | [
38,
27,
7,
9,
9,
4,
32,
5,
38
] | [
"TAGS\n#transformers #tf #distilbert #text-classification #generated_from_keras_callback #autotrain_compatible #endpoints_compatible #region-us \n# stress_score\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:## Model description\n\nMore informa... |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wa... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]} | Pinwheel/wav2vec2-base-timit-demo-colab | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
| wav2vec2-base-timit-demo-colab
==============================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4812
* Wer: 0.3557
Model description
-----------------
More information needed
Intended uses & limi... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps... | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 3... | [
47,
128,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* e... |
image-classification | null |
#FashionMNIST
PyTorch Quick Start | {"tags": ["image-classification", "pytorch", "huggingpics", "some_thing"], "metrics": ["accuracy"], "private": false} | Ab0/foo-model | null | [
"pytorch",
"image-classification",
"huggingpics",
"some_thing",
"model-index",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#pytorch #image-classification #huggingpics #some_thing #model-index #region-us
|
#FashionMNIST
PyTorch Quick Start | [] | [
"TAGS\n#pytorch #image-classification #huggingpics #some_thing #model-index #region-us \n"
] | [
26
] | [
"TAGS\n#pytorch #image-classification #huggingpics #some_thing #model-index #region-us \n"
] |
text-classification | transformers | # BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co/aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co/datasets/Abdou/dz-sentiment-yt-comme... | {"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]} | Abdou/arabert-base-algerian | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"ar",
"dataset:Abdou/dz-sentiment-yt-comments",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us
| BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
=============================================================
These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube... | [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
50
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers | # BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co/aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co/datasets/Abdou/dz-sentiment-yt-comme... | {"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]} | Abdou/arabert-large-algerian | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"ar",
"dataset:Abdou/dz-sentiment-yt-comments",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us
| BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
=============================================================
These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube... | [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
50
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers | # BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co/aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co/datasets/Abdou/dz-sentiment-yt-comme... | {"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]} | Abdou/arabert-medium-algerian | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"ar",
"dataset:Abdou/dz-sentiment-yt-comments",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us
| BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
=============================================================
These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube... | [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
50
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-classification | transformers | # BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
These are different BERT models (BERT Arabic models are initialized from [AraBERT](https://huggingface.co/aubmindlab/bert-large-arabertv02)) fine-tuned on the [Algerian Dialect Sentiment Analysis](https://huggingface.co/datasets/Abdou/dz-sentiment-yt-comme... | {"language": ["ar"], "license": "mit", "library_name": "transformers", "datasets": ["Abdou/dz-sentiment-yt-comments"], "metrics": ["f1", "accuracy"]} | Abdou/arabert-mini-algerian | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"ar",
"dataset:Abdou/dz-sentiment-yt-comments",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"ar"
] | TAGS
#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us
| BERT Models Fine-tuned on Algerian Dialect Sentiment Analysis
=============================================================
These are different BERT models (BERT Arabic models are initialized from AraBERT) fine-tuned on the Algerian Dialect Sentiment Analysis dataset. The dataset contains 50,016 comments from YouTube... | [] | [
"TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] | [
50
] | [
"TAGS\n#transformers #pytorch #bert #text-classification #ar #dataset-Abdou/dz-sentiment-yt-comments #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | null | Model details available [here](https://github.com/awasthiabhijeet/PIE) | {} | AbhijeetA/PIE | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#region-us
| Model details available here | [] | [
"TAGS\n#region-us \n"
] | [
5
] | [
"TAGS\n#region-us \n"
] |
text-generation | transformers |
#HarryPotter DialoGPT Model | {"tags": ["conversational"]} | AbhinavSaiTheGreat/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#HarryPotter DialoGPT Model | [] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] | [
39
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
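For the conversational DialoGPT checkpoints in these rows, a single-turn chat sketch following the usual DialoGPT conventions (EOS token separating turns) could look like the following; the generation settings are illustrative only and not taken from the card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "AbhinavSaiTheGreat/DialoGPT-small-harrypotter"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Encode one user turn, terminated by the EOS token as DialoGPT expects.
input_ids = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
# Decode only the newly generated tokens (the bot's reply).
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```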
text-classification | transformers |
## Pretrained Model BERT: base model (cased)
BERT base model (cased) is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/1810.04805) and first released in this [repository](https://github.com/google-research/bert). This mode... | {} | Abirate/bert_fine_tuned_cola | null | [
"transformers",
"tf",
"bert",
"text-classification",
"arxiv:1810.04805",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"1810.04805"
] | [] | TAGS
#transformers #tf #bert #text-classification #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
## Pretrained Model BERT: base model (cased)
BERT base model (cased) is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English.
## P... | [
"## Petrained Model BERT: base model (cased)\nBERT base model (cased) is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English.",
... | [
"TAGS\n#transformers #tf #bert #text-classification #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## Petrained Model BERT: base model (cased)\nBERT base model (cased) is a pretrained model on English language using a masked language modeling (MLM) objective. It was intr... | [
40,
69,
87,
112,
6,
18,
22
] | [
"TAGS\n#transformers #tf #bert #text-classification #arxiv-1810.04805 #autotrain_compatible #endpoints_compatible #has_space #region-us \n## Petrained Model BERT: base model (cased)\nBERT base model (cased) is a pretrained model on English language using a masked language modeling (MLM) objective. It was introduced... |
text-generation | transformers |
# jeff's 100% authorized brain scan | {"tags": ["conversational"]} | AccurateIsaiah/DialoGPT-small-jefftastic | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# jeff's 100% authorized brain scan | [
"# jeff's 100% authorized brain scan"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# jeff's 100% authorized brain scan"
] | [
39,
9
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# jeff's 100% authorized brain scan"
] |
text-generation | transformers |
# Mozark's Brain Uploaded to Hugging Face | {"tags": ["conversational"]} | AccurateIsaiah/DialoGPT-small-mozark | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mozark's Brain Uploaded to Hugging Face | [
"# Mozark's Brain Uploaded to Hugging Face"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mozark's Brain Uploaded to Hugging Face"
] | [
39,
11
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Mozark's Brain Uploaded to Hugging Face"
] |
text-generation | transformers |
# Mozark's Brain Uploaded to Hugging Face but v2 | {"tags": ["conversational"]} | AccurateIsaiah/DialoGPT-small-mozarkv2 | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Mozark's Brain Uploaded to Hugging Face but v2 | [
"# Mozark's Brain Uploaded to Hugging Face but v2"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Mozark's Brain Uploaded to Hugging Face but v2"
] | [
39,
14
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Mozark's Brain Uploaded to Hugging Face but v2"
] |
text-generation | transformers |
# Un Filtered brain upload of sinclair | {"tags": ["conversational"]} | AccurateIsaiah/DialoGPT-small-sinclair | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Un Filtered brain upload of sinclair | [
"# Un Filtered brain upload of sinclair"
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Un Filtered brain upload of sinclair"
] | [
39,
8
] | [
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Un Filtered brain upload of sinclair"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion... | ActivationAI/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| distilbert-base-uncased-finetuned-emotion
=========================================
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2128
* Accuracy: 0.928
* F1: 0.9280
Model description
-----------------
Mor... | [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Traini... | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learn... | [
56,
101,
5,
44
] | [
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-emotion #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-anli_r3` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [anli](https://huggingface.co/datasets/anli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the ... | {"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["anli"]} | AdapterHub/bert-base-uncased-pf-anli_r3 | null | [
"adapter-transformers",
"bert",
"text-classification",
"en",
"dataset:anli",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-anli_r3' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the anli dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-t... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-anli_r3' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the anli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, inst... | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-anli_r3' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the anli dataset and includes a prediction head for classificat... | [
35,
78,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-anli_r3' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the anli dataset and includes a prediction head for classification.\n... |
null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-art` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [art](https://huggingface.co/datasets/art/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[ad... | {"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["art"]} | AdapterHub/bert-base-uncased-pf-art | null | [
"adapter-transformers",
"bert",
"en",
"dataset:art",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #en #dataset-art #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-art' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the art dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-trans... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-art' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install ... | [
"TAGS\n#adapter-transformers #bert #en #dataset-art #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-art' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was c... | [
30,
74,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #en #dataset-art #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-art' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was created... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-boolq` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/boolq](https://adapterhub.ml/explore/qa/boolq/) dataset and includes a prediction head for classification.
This adapter was created for usage with ... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:qa/boolq", "adapter-transformers"], "datasets": ["boolq"]} | AdapterHub/bert-base-uncased-pf-boolq | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:qa/boolq",
"en",
"dataset:boolq",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-boolq' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the qa/boolq dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-boolq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/boolq dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, in... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-boolq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/boolq dataset and includes a predict... | [
48,
80,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-boolq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/boolq dataset and includes a prediction he... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-cola` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [lingaccept/cola](https://adapterhub.ml/explore/lingaccept/cola/) dataset and includes a prediction head for classification.
This adapter was created fo... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:lingaccept/cola", "adapter-transformers"]} | AdapterHub/bert-base-uncased-pf-cola | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:lingaccept/cola",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-cola' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'a... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-cola' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFir... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-cola' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the lingaccept/cola dataset and includes a predictio... | [
41,
78,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-cola' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the lingaccept/cola dataset and includes a prediction head... |
null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-commonsense_qa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/csqa](https://adapterhub.ml/explore/comsense/csqa/) dataset and includes a prediction head for multiple choice.
This adapter was cre... | {"language": ["en"], "tags": ["bert", "adapterhub:comsense/csqa", "adapter-transformers"], "datasets": ["commonsense_qa"]} | AdapterHub/bert-base-uncased-pf-commonsense_qa | null | [
"adapter-transformers",
"bert",
"adapterhub:comsense/csqa",
"en",
"dataset:commonsense_qa",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-commonsense_qa' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, i... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-commonsense_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usa... | [
"TAGS\n#adapter-transformers #bert #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-commonsense_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/csqa dataset and includes a ... | [
46,
83,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-commonsense_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/csqa dataset and includes a predic... |
question-answering | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-comqa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [com_qa](https://huggingface.co/datasets/com_qa/) dataset and includes a prediction head for question answering.
This adapter was created for usage wit... | {"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["com_qa"]} | AdapterHub/bert-base-uncased-pf-comqa | null | [
"adapter-transformers",
"bert",
"question-answering",
"en",
"dataset:com_qa",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-comqa' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the com_qa dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapt... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-comqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the com_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ... | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-comqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the com_qa dataset and includes a prediction head for question a... | [
37,
78,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-comqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the com_qa dataset and includes a prediction head for question answeri... |
token-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-conll2000` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [chunk/conll2000](https://adapterhub.ml/explore/chunk/conll2000/) dataset and includes a prediction head for tagging.
This adapter was created for ... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:chunk/conll2000", "adapter-transformers"], "datasets": ["conll2000"]} | AdapterHub/bert-base-uncased-pf-conll2000 | null | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:chunk/conll2000",
"en",
"dataset:conll2000",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-conll2000' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'ada... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-conll2000' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-conll2000' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the chunk/conll2000 dataset... | [
49,
82,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-conll2000' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the chunk/conll2000 dataset and i... |
token-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-conll2003` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ner/conll2003](https://adapterhub.ml/explore/ner/conll2003/) dataset and includes a prediction head for tagging.
This adapter was created for usag... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:ner/conll2003", "adapter-transformers"], "datasets": ["conll2003"]} | AdapterHub/bert-base-uncased-pf-conll2003 | null | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:ner/conll2003",
"en",
"dataset:conll2003",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapt... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/conll2003 dataset and... | [
50,
83,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/conll2003 dataset and inclu... |
token-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-conll2003_pos` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pos/conll2003](https://adapterhub.ml/explore/pos/conll2003/) dataset and includes a prediction head for tagging.
This adapter was created for ... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:pos/conll2003", "adapter-transformers"], "datasets": ["conll2003"]} | AdapterHub/bert-base-uncased-pf-conll2003_pos | null | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:pos/conll2003",
"en",
"dataset:conll2003",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-pos/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003_pos' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'a... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFir... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-pos/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/conll2003 dataset... | [
50,
86,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-pos/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-conll2003_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/conll2003 dataset and i... |
null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-copa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/copa](https://adapterhub.ml/explore/comsense/copa/) dataset and includes a prediction head for multiple choice.
This adapter was created for u... | {"language": ["en"], "tags": ["bert", "adapterhub:comsense/copa", "adapter-transformers"]} | AdapterHub/bert-base-uncased-pf-copa | null | [
"adapter-transformers",
"bert",
"adapterhub:comsense/copa",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #adapterhub-comsense/copa #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-copa' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'ad... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-copa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirs... | [
"TAGS\n#adapter-transformers #bert #adapterhub-comsense/copa #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-copa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choic... | [
36,
78,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #adapterhub-comsense/copa #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-copa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/copa dataset and includes a prediction head for multiple choice.\n\n... |
null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-cosmos_qa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/cosmosqa](https://adapterhub.ml/explore/comsense/cosmosqa/) dataset and includes a prediction head for multiple choice.
This adapter was ... | {"language": ["en"], "tags": ["bert", "adapterhub:comsense/cosmosqa", "adapter-transformers"], "datasets": ["cosmos_qa"]} | AdapterHub/bert-base-uncased-pf-cosmos_qa | null | [
"adapter-transformers",
"bert",
"adapterhub:comsense/cosmosqa",
"en",
"dataset:cosmos_qa",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #adapterhub-comsense/cosmosqa #en #dataset-cosmos_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-cosmos_qa' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, in... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-cosmos_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/cosmosqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usag... | [
"TAGS\n#adapter-transformers #bert #adapterhub-comsense/cosmosqa #en #dataset-cosmos_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-cosmos_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/cosmosqa dataset and includes a pr... | [
45,
82,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #adapterhub-comsense/cosmosqa #en #dataset-cosmos_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-cosmos_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/cosmosqa dataset and includes a predicti... |
question-answering | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-cq` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/cq](https://adapterhub.ml/explore/qa/cq/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the *... | {"language": ["en"], "tags": ["question-answering", "bert", "adapterhub:qa/cq", "adapter-transformers"]} | AdapterHub/bert-base-uncased-pf-cq | null | [
"adapter-transformers",
"bert",
"question-answering",
"adapterhub:qa/cq",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #adapterhub-qa/cq #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-cq' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the qa/cq dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-t... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-cq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/cq dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, inst... | [
"TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/cq #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-cq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/cq dataset and includes a prediction head for question ans... | [
40,
79,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/cq #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-cq' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/cq dataset and includes a prediction head for question answering... |
question-answering | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-drop` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [drop](https://huggingface.co/datasets/drop/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the... | {"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["drop"]} | AdapterHub/bert-base-uncased-pf-drop | null | [
"adapter-transformers",
"bert",
"question-answering",
"en",
"dataset:drop",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #en #dataset-drop #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-drop' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the drop dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-drop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the drop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ins... | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-drop #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-drop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the drop dataset and includes a prediction head for question answer... | [
34,
74,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-drop #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-drop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the drop dataset and includes a prediction head for question answering.\n... |
question-answering | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-duorc_p` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage wit... | {"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["duorc"]} | AdapterHub/bert-base-uncased-pf-duorc_p | null | [
"adapter-transformers",
"bert",
"question-answering",
"en",
"dataset:duorc",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_p' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adap... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_p' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst,... | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_p' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question a... | [
35,
78,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_p' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answeri... |
question-answering | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-duorc_s` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [duorc](https://huggingface.co/datasets/duorc/) dataset and includes a prediction head for question answering.
This adapter was created for usage wit... | {"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["duorc"]} | AdapterHub/bert-base-uncased-pf-duorc_s | null | [
"adapter-transformers",
"bert",
"question-answering",
"en",
"dataset:duorc",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_s' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adap... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_s' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst,... | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_s' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question a... | [
35,
78,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-duorc #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-duorc_s' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the duorc dataset and includes a prediction head for question answeri... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-emo` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [emo](https://huggingface.co/datasets/emo/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[ada... | {"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["emo"]} | AdapterHub/bert-base-uncased-pf-emo | null | [
"adapter-transformers",
"bert",
"text-classification",
"en",
"dataset:emo",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #en #dataset-emo #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-emo' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the emo dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transf... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-emo' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emo dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install '... | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-emo #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-emo' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emo dataset and includes a prediction head for classification.\n... | [
35,
75,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-emo #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-emo' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emo dataset and includes a prediction head for classification.\n\nThis... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-emotion` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [emotion](https://huggingface.co/datasets/emotion/) dataset and includes a prediction head for classification.
This adapter was created for usage wit... | {"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["emotion"]} | AdapterHub/bert-base-uncased-pf-emotion | null | [
"adapter-transformers",
"bert",
"text-classification",
"en",
"dataset:emotion",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #en #dataset-emotion #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-emotion' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the emotion dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapte... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-emotion' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emotion dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, i... | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-emotion #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-emotion' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emotion dataset and includes a prediction head for class... | [
34,
73,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-emotion #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-emotion' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the emotion dataset and includes a prediction head for classificat... |
token-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-fce_error_detection` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ged/fce](https://adapterhub.ml/explore/ged/fce/) dataset and includes a prediction head for tagging.
This adapter was created for usage ... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:ged/fce", "adapter-transformers"], "datasets": ["fce_error_detection"]} | AdapterHub/bert-base-uncased-pf-fce_error_detection | null | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:ged/fce",
"en",
"dataset:fce_error_detection",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-ged/fce #en #dataset-fce_error_detection #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-fce_error_detection' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the ged/fce dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'a... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-fce_error_detection' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ged/fce dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFir... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ged/fce #en #dataset-fce_error_detection #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-fce_error_detection' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ged/fce dat... | [
50,
83,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ged/fce #en #dataset-fce_error_detection #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-fce_error_detection' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ged/fce dataset a... |
null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-hellaswag` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/hellaswag](https://adapterhub.ml/explore/comsense/hellaswag/) dataset and includes a prediction head for multiple choice.
This adapter wa... | {"language": ["en"], "tags": ["bert", "adapterhub:comsense/hellaswag", "adapter-transformers"], "datasets": ["hellaswag"]} | AdapterHub/bert-base-uncased-pf-hellaswag | null | [
"adapter-transformers",
"bert",
"adapterhub:comsense/hellaswag",
"en",
"dataset:hellaswag",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #adapterhub-comsense/hellaswag #en #dataset-hellaswag #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-hellaswag' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, i... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-hellaswag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/hellaswag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usa... | [
"TAGS\n#adapter-transformers #bert #adapterhub-comsense/hellaswag #en #dataset-hellaswag #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-hellaswag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/hellaswag dataset and includes a ... | [
47,
84,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #adapterhub-comsense/hellaswag #en #dataset-hellaswag #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-hellaswag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/hellaswag dataset and includes a predic... |
question-answering | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-hotpotqa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [hotpot_qa](https://huggingface.co/datasets/hotpot_qa/) dataset and includes a prediction head for question answering.
This adapter was created for ... | {"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["hotpot_qa"]} | AdapterHub/bert-base-uncased-pf-hotpotqa | null | [
"adapter-transformers",
"bert",
"question-answering",
"en",
"dataset:hotpot_qa",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #en #dataset-hotpot_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-hotpotqa' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the hotpot_qa dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install ... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-hotpotqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the hotpot_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nF... | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-hotpot_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-hotpotqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the hotpot_qa dataset and includes a prediction head for q... | [
38,
80,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-hotpot_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-hotpotqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the hotpot_qa dataset and includes a prediction head for questio... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-imdb` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/imdb](https://adapterhub.ml/explore/sentiment/imdb/) dataset and includes a prediction head for classification.
This adapter was created for ... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sentiment/imdb", "adapter-transformers"], "datasets": ["imdb"]} | AdapterHub/bert-base-uncased-pf-imdb | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:sentiment/imdb",
"en",
"dataset:imdb",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-sentiment/imdb #en #dataset-imdb #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-imdb' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the sentiment/imdb dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'ad... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-imdb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/imdb dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirs... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/imdb #en #dataset-imdb #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-imdb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/imdb dataset and includes... | [
45,
77,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/imdb #en #dataset-imdb #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-imdb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/imdb dataset and includes a pre... |
token-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-mit_movie_trivia` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [ner/mit_movie_trivia](https://adapterhub.ml/explore/ner/mit_movie_trivia/) dataset and includes a prediction head for tagging.
This adapter... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:ner/mit_movie_trivia", "adapter-transformers"]} | AdapterHub/bert-base-uncased-pf-mit_movie_trivia | null | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:ner/mit_movie_trivia",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-ner/mit_movie_trivia #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-mit_movie_trivia' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, ... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-mit_movie_trivia' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/mit_movie_trivia dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Us... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ner/mit_movie_trivia #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-mit_movie_trivia' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/mit_movie_trivia dataset a... | [
44,
87,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-ner/mit_movie_trivia #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-mit_movie_trivia' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the ner/mit_movie_trivia dataset and inc... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-mnli` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/multinli](https://adapterhub.ml/explore/nli/multinli/) dataset and includes a prediction head for classification.
This adapter was created for usag... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/multinli", "adapter-transformers"], "datasets": ["multi_nli"]} | AdapterHub/bert-base-uncased-pf-mnli | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:nli/multinli",
"en",
"dataset:multi_nli",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-nli/multinli #en #dataset-multi_nli #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-mnli' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the nli/multinli dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adap... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-mnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/multinli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst,... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/multinli #en #dataset-multi_nli #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-mnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/multinli dataset and include... | [
49,
79,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/multinli #en #dataset-multi_nli #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-mnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/multinli dataset and includes a pr... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-mrpc` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/mrpc](https://adapterhub.ml/explore/sts/mrpc/) dataset and includes a prediction head for classification.
This adapter was created for usage with t... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sts/mrpc", "adapter-transformers"]} | AdapterHub/bert-base-uncased-pf-mrpc | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:sts/mrpc",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-sts/mrpc #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-mrpc' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the sts/mrpc dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-mrpc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/mrpc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ins... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/mrpc #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-mrpc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/mrpc dataset and includes a prediction head for cla... | [
39,
77,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/mrpc #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-mrpc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/mrpc dataset and includes a prediction head for classific... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-multirc` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/multirc](https://adapterhub.ml/explore/rc/multirc/) dataset and includes a prediction head for classification.
This adapter was created for usage... | {"language": ["en"], "tags": ["text-classification", "adapterhub:rc/multirc", "bert", "adapter-transformers"]} | AdapterHub/bert-base-uncased-pf-multirc | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:rc/multirc",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-rc/multirc #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-multirc' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the rc/multirc dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'ada... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-multirc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/multirc dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-rc/multirc #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-multirc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/multirc dataset and includes a prediction head ... | [
39,
77,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-rc/multirc #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-multirc' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/multirc dataset and includes a prediction head for cl... |
question-answering | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-newsqa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [newsqa](https://huggingface.co/datasets/newsqa/) dataset and includes a prediction head for question answering.
This adapter was created for usage wi... | {"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["newsqa"]} | AdapterHub/bert-base-uncased-pf-newsqa | null | [
"adapter-transformers",
"bert",
"question-answering",
"en",
"dataset:newsqa",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #en #dataset-newsqa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-newsqa' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the newsqa dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adap... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-newsqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the newsqa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst,... | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-newsqa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-newsqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the newsqa dataset and includes a prediction head for question ... | [
35,
76,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-newsqa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-newsqa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the newsqa dataset and includes a prediction head for question answer... |
token-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-pmb_sem_tagging` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [semtag/pmb](https://adapterhub.ml/explore/semtag/pmb/) dataset and includes a prediction head for tagging.
This adapter was created for usag... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:semtag/pmb", "adapter-transformers"]} | AdapterHub/bert-base-uncased-pf-pmb_sem_tagging | null | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:semtag/pmb",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-semtag/pmb #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-pmb_sem_tagging' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the semtag/pmb dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'ad... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-pmb_sem_tagging' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the semtag/pmb dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirs... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-semtag/pmb #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-pmb_sem_tagging' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the semtag/pmb dataset and includes a predict... | [
41,
86,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-semtag/pmb #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-pmb_sem_tagging' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the semtag/pmb dataset and includes a prediction he... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-qnli` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/qnli](https://adapterhub.ml/explore/nli/qnli/) dataset and includes a prediction head for classification.
This adapter was created for usage with t... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/qnli", "adapter-transformers"]} | AdapterHub/bert-base-uncased-pf-qnli | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:nli/qnli",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-nli/qnli #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-qnli' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the nli/qnli dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-qnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/qnli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ins... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/qnli #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-qnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/qnli dataset and includes a prediction head for cla... | [
41,
80,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/qnli #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-qnli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/qnli dataset and includes a prediction head for classific... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-qqp` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/qqp](https://adapterhub.ml/explore/sts/qqp/) dataset and includes a prediction head for classification.
This adapter was created for usage with the ... | {"language": ["en"], "tags": ["text-classification", "adapter-transformers", "adapterhub:sts/qqp", "bert"]} | AdapterHub/bert-base-uncased-pf-qqp | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:sts/qqp",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-sts/qqp #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-qqp' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the sts/qqp dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-tr... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-qqp' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/qqp dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, insta... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/qqp #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-qqp' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/qqp dataset and includes a prediction head for classi... | [
40,
79,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/qqp #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-qqp' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/qqp dataset and includes a prediction head for classificati... |
null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-quail` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quail](https://huggingface.co/datasets/quail/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the... | {"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["quail"]} | AdapterHub/bert-base-uncased-pf-quail | null | [
"adapter-transformers",
"bert",
"en",
"dataset:quail",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #en #dataset-quail #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-quail' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the quail dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-t... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-quail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quail dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, inst... | [
"TAGS\n#adapter-transformers #bert #en #dataset-quail #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-quail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quail dataset and includes a prediction head for multiple choice.\n\nThis adapter... | [
31,
76,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #en #dataset-quail #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-quail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quail dataset and includes a prediction head for multiple choice.\n\nThis adapter was c... |
null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-quartz` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quartz](https://huggingface.co/datasets/quartz/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with ... | {"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["quartz"]} | AdapterHub/bert-base-uncased-pf-quartz | null | [
"adapter-transformers",
"bert",
"en",
"dataset:quartz",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #en #dataset-quartz #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-quartz' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the quartz dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-quartz' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quartz dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, in... | [
"TAGS\n#adapter-transformers #bert #en #dataset-quartz #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-quartz' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quartz dataset and includes a prediction head for multiple choice.\n\nThis adap... | [
30,
74,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #en #dataset-quartz #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-quartz' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quartz dataset and includes a prediction head for multiple choice.\n\nThis adapter wa... |
question-answering | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-quoref` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [quoref](https://huggingface.co/datasets/quoref/) dataset and includes a prediction head for question answering.
This adapter was created for usage wi... | {"language": ["en"], "tags": ["question-answering", "bert", "adapter-transformers"], "datasets": ["quoref"]} | AdapterHub/bert-base-uncased-pf-quoref | null | [
"adapter-transformers",
"bert",
"question-answering",
"en",
"dataset:quoref",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-quoref' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the quoref dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adap... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-quoref' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quoref dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst,... | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-quoref' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quoref dataset and includes a prediction head for question ... | [
36,
78,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #question-answering #en #dataset-quoref #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-quoref' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the quoref dataset and includes a prediction head for question answer... |
null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-race` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/race](https://adapterhub.ml/explore/rc/race/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with th... | {"language": ["en"], "tags": ["adapterhub:rc/race", "bert", "adapter-transformers"], "datasets": ["race"]} | AdapterHub/bert-base-uncased-pf-race | null | [
"adapter-transformers",
"bert",
"adapterhub:rc/race",
"en",
"dataset:race",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-race' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-race' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ins... | [
"TAGS\n#adapter-transformers #bert #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-race' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/race dataset and includes a prediction head for multiple cho... | [
39,
76,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #adapterhub-rc/race #en #dataset-race #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-race' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/race dataset and includes a prediction head for multiple choice.\n... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-record` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rc/record](https://adapterhub.ml/explore/rc/record/) dataset and includes a prediction head for classification.
This adapter was created for usage wi... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:rc/record", "adapter-transformers"]} | AdapterHub/bert-base-uncased-pf-record | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:rc/record",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-record' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the rc/record dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapt... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-record' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/record dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-record' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/record dataset and includes a prediction head for... | [
38,
75,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-rc/record #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-record' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the rc/record dataset and includes a prediction head for class... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-rotten_tomatoes` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/rotten_tomatoes](https://adapterhub.ml/explore/sentiment/rotten_tomatoes/) dataset and includes a prediction head for classificatio... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sentiment/rotten_tomatoes", "adapter-transformers"], "datasets": ["rotten_tomatoes"]} | AdapterHub/bert-base-uncased-pf-rotten_tomatoes | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:sentiment/rotten_tomatoes",
"en",
"dataset:rotten_tomatoes",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-rotten_tomatoes' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usa... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-rotten_tomatoes' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/rotten_tomatoes dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-rotten_tomatoes' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the se... | [
47,
79,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/rotten_tomatoes #en #dataset-rotten_tomatoes #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-rotten_tomatoes' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentimen... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-rte` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/rte](https://adapterhub.ml/explore/nli/rte/) dataset and includes a prediction head for classification.
This adapter was created for usage with the ... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/rte", "adapter-transformers"]} | AdapterHub/bert-base-uncased-pf-rte | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:nli/rte",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-rte' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the nli/rte dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-tr... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-rte' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/rte dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, insta... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-rte' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/rte dataset and includes a prediction head for classi... | [
39,
76,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/rte #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-rte' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/rte dataset and includes a prediction head for classificati... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-scicite` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [scicite](https://huggingface.co/datasets/scicite/) dataset and includes a prediction head for classification.
This adapter was created for usage wit... | {"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["scicite"]} | AdapterHub/bert-base-uncased-pf-scicite | null | [
"adapter-transformers",
"bert",
"text-classification",
"en",
"dataset:scicite",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-scicite' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the scicite dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapte... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-scicite' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the scicite dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, i... | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-scicite' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the scicite dataset and includes a prediction head for class... | [
35,
75,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-scicite #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-scicite' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the scicite dataset and includes a prediction head for classificat... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-scitail` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/scitail](https://adapterhub.ml/explore/nli/scitail/) dataset and includes a prediction head for classification.
This adapter was created for usa... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:nli/scitail", "adapter-transformers"], "datasets": ["scitail"]} | AdapterHub/bert-base-uncased-pf-scitail | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:nli/scitail",
"en",
"dataset:scitail",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-scitail' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the nli/scitail dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'ad... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-scitail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/scitail dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirs... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-scitail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/scitail dataset and includes... | [
46,
78,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/scitail #en #dataset-scitail #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-scitail' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/scitail dataset and includes a pre... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-sick` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [nli/sick](https://adapterhub.ml/explore/nli/sick/) dataset and includes a prediction head for classification.
This adapter was created for usage with t... | {"language": ["en"], "tags": ["text-classification", "adapter-transformers", "bert", "adapterhub:nli/sick"], "datasets": ["sick"]} | AdapterHub/bert-base-uncased-pf-sick | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:nli/sick",
"en",
"dataset:sick",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-sick' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the nli/sick dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-sick' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/sick dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ins... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-sick' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/sick dataset and includes a predictio... | [
44,
76,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-nli/sick #en #dataset-sick #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-sick' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the nli/sick dataset and includes a prediction head... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-snli` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [snli](https://huggingface.co/datasets/snli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[... | {"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["snli"]} | AdapterHub/bert-base-uncased-pf-snli | null | [
"adapter-transformers",
"bert",
"text-classification",
"en",
"dataset:snli",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-snli' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the snli dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-tran... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-snli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the snli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install... | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-snli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the snli dataset and includes a prediction head for classification... | [
36,
77,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-snli #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-snli' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the snli dataset and includes a prediction head for classification.\n\nT... |
null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-social_i_qa` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [social_i_qa](https://huggingface.co/datasets/social_i_qa/) dataset and includes a prediction head for multiple choice.
This adapter was created ... | {"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["social_i_qa"]} | AdapterHub/bert-base-uncased-pf-social_i_qa | null | [
"adapter-transformers",
"bert",
"en",
"dataset:social_i_qa",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #en #dataset-social_i_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-social_i_qa' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, instal... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-social_i_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\... | [
"TAGS\n#adapter-transformers #bert #en #dataset-social_i_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-social_i_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choic... | [
35,
84,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #en #dataset-social_i_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-social_i_qa' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the social_i_qa dataset and includes a prediction head for multiple choice.\n\n... |
question-answering | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-squad` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/squad1](https://adapterhub.ml/explore/qa/squad1/) dataset and includes a prediction head for question answering.
This adapter was created for usage... | {"language": ["en"], "tags": ["question-answering", "bert", "adapterhub:qa/squad1", "adapter-transformers"], "datasets": ["squad"]} | AdapterHub/bert-base-uncased-pf-squad | null | [
"adapter-transformers",
"bert",
"question-answering",
"adapterhub:qa/squad1",
"en",
"dataset:squad",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-squad' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'ad... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-squad' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad1 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirs... | [
"TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-squad' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad1 dataset and includes a predic... | [
45,
78,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/squad1 #en #dataset-squad #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-squad' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad1 dataset and includes a prediction h... |
question-answering | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-squad_v2` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/squad2](https://adapterhub.ml/explore/qa/squad2/) dataset and includes a prediction head for question answering.
This adapter was created for us... | {"language": ["en"], "tags": ["question-answering", "bert", "adapterhub:qa/squad2", "adapter-transformers"], "datasets": ["squad_v2"]} | AdapterHub/bert-base-uncased-pf-squad_v2 | null | [
"adapter-transformers",
"bert",
"question-answering",
"adapterhub:qa/squad2",
"en",
"dataset:squad_v2",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-squad_v2' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install ... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-squad_v2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad2 dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nF... | [
"TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-squad_v2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad2 dataset and includes a ... | [
48,
81,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/squad2 #en #dataset-squad_v2 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-squad_v2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/squad2 dataset and includes a predic... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-sst2` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sentiment/sst-2](https://adapterhub.ml/explore/sentiment/sst-2/) dataset and includes a prediction head for classification.
This adapter was created fo... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sentiment/sst-2", "adapter-transformers"]} | AdapterHub/bert-base-uncased-pf-sst2 | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:sentiment/sst-2",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-sst2' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'a... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-sst2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/sst-2 dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFir... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-sst2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/sst-2 dataset and includes a predictio... | [
41,
80,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sentiment/sst-2 #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-sst2' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sentiment/sst-2 dataset and includes a prediction head... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-stsb` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [sts/sts-b](https://adapterhub.ml/explore/sts/sts-b/) dataset and includes a prediction head for classification.
This adapter was created for usage with... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:sts/sts-b", "adapter-transformers"]} | AdapterHub/bert-base-uncased-pf-stsb | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:sts/sts-b",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-stsb' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the sts/sts-b dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-stsb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/sts-b dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, in... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-stsb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/sts-b dataset and includes a prediction head for c... | [
40,
78,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-sts/sts-b #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-stsb' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the sts/sts-b dataset and includes a prediction head for classif... |
null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-swag` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [swag](https://huggingface.co/datasets/swag/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **... | {"language": ["en"], "tags": ["bert", "adapter-transformers"], "datasets": ["swag"]} | AdapterHub/bert-base-uncased-pf-swag | null | [
"adapter-transformers",
"bert",
"en",
"dataset:swag",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #en #dataset-swag #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-swag' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the swag dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-tra... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-swag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, instal... | [
"TAGS\n#adapter-transformers #bert #en #dataset-swag #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-swag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter wa... | [
31,
76,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #en #dataset-swag #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-swag' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the swag dataset and includes a prediction head for multiple choice.\n\nThis adapter was crea... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-trec` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [trec](https://huggingface.co/datasets/trec/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[... | {"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["trec"]} | AdapterHub/bert-base-uncased-pf-trec | null | [
"adapter-transformers",
"bert",
"text-classification",
"en",
"dataset:trec",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-trec' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the trec dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-tran... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-trec' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the trec dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install... | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-trec' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the trec dataset and includes a prediction head for classification... | [
35,
75,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-trec #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-trec' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the trec dataset and includes a prediction head for classification.\n\nT... |
token-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-ud_deprel` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [deprel/ud_ewt](https://adapterhub.ml/explore/deprel/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usag... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:deprel/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]} | AdapterHub/bert-base-uncased-pf-ud_deprel | null | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:deprel/ud_ewt",
"en",
"dataset:universal_dependencies",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-ud_deprel' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapt... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-ud_deprel' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the deprel/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, ... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-ud_deprel' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the deprel/ud_ew... | [
51,
85,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-deprel/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-ud_deprel' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the deprel/ud_ewt data... |
null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-ud_en_ewt` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [dp/ud_ewt](https://adapterhub.ml/explore/dp/ud_ewt/) dataset and includes a prediction head for dependency parsing.
This adapter was created for u... | {"language": ["en"], "tags": ["bert", "adapterhub:dp/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]} | AdapterHub/bert-base-uncased-pf-ud_en_ewt | null | [
"adapter-transformers",
"bert",
"adapterhub:dp/ud_ewt",
"en",
"dataset:universal_dependencies",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [
"en"
] | TAGS
#adapter-transformers #bert #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us
| Adapter 'AdapterHub/bert-base-uncased-pf-ud_en_ewt' for bert-base-uncased
===========================================================================
An adapter for the 'bert-base-uncased' model that was trained on the dp/ud_ewt dataset and includes a prediction head for dependency parsing.
This adapter was crea... | [] | [
"TAGS\n#adapter-transformers #bert #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us \n"
] | [
35
] | [
"TAGS\n#adapter-transformers #bert #adapterhub-dp/ud_ewt #en #dataset-universal_dependencies #region-us \n"
] |
token-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-ud_pos` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [pos/ud_ewt](https://adapterhub.ml/explore/pos/ud_ewt/) dataset and includes a prediction head for tagging.
This adapter was created for usage with th... | {"language": ["en"], "tags": ["token-classification", "bert", "adapterhub:pos/ud_ewt", "adapter-transformers"], "datasets": ["universal_dependencies"]} | AdapterHub/bert-base-uncased-pf-ud_pos | null | [
"adapter-transformers",
"bert",
"token-classification",
"adapterhub:pos/ud_ewt",
"en",
"dataset:universal_dependencies",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-ud_pos' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-tra... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-ud_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/ud_ewt dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, instal... | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-ud_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/ud_ewt dataset... | [
50,
83,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #token-classification #adapterhub-pos/ud_ewt #en #dataset-universal_dependencies #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-ud_pos' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the pos/ud_ewt dataset and i... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-wic` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [wordsence/wic](https://adapterhub.ml/explore/wordsence/wic/) dataset and includes a prediction head for classification.
This adapter was created for usa... | {"language": ["en"], "tags": ["text-classification", "bert", "adapterhub:wordsence/wic", "adapter-transformers"]} | AdapterHub/bert-base-uncased-pf-wic | null | [
"adapter-transformers",
"bert",
"text-classification",
"adapterhub:wordsence/wic",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-wic' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the wordsence/wic dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adap... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-wic' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wordsence/wic dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst,... | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-wic' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wordsence/wic dataset and includes a prediction hea... | [
40,
78,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #adapterhub-wordsence/wic #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-wic' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wordsence/wic dataset and includes a prediction head for ... |
question-answering | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-wikihop` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [qa/wikihop](https://adapterhub.ml/explore/qa/wikihop/) dataset and includes a prediction head for question answering.
This adapter was created for u... | {"language": ["en"], "tags": ["question-answering", "bert", "adapterhub:qa/wikihop", "adapter-transformers"]} | AdapterHub/bert-base-uncased-pf-wikihop | null | [
"adapter-transformers",
"bert",
"question-answering",
"adapterhub:qa/wikihop",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-wikihop' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install ... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-wikihop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/wikihop dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nF... | [
"TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-wikihop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/wikihop dataset and includes a prediction head f... | [
41,
81,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #question-answering #adapterhub-qa/wikihop #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-wikihop' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the qa/wikihop dataset and includes a prediction head for que... |
null | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-winogrande` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [comsense/winogrande](https://adapterhub.ml/explore/comsense/winogrande/) dataset and includes a prediction head for multiple choice.
This adapter... | {"language": ["en"], "tags": ["bert", "adapterhub:comsense/winogrande", "adapter-transformers"], "datasets": ["winogrande"]} | AdapterHub/bert-base-uncased-pf-winogrande | null | [
"adapter-transformers",
"bert",
"adapterhub:comsense/winogrande",
"en",
"dataset:winogrande",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-winogrande' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First,... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-winogrande' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/winogrande dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## U... | [
"TAGS\n#adapter-transformers #bert #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-winogrande' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/winogrande dataset and include... | [
47,
84,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #adapterhub-comsense/winogrande #en #dataset-winogrande #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-winogrande' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the comsense/winogrande dataset and includes a pr... |
token-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-wnut_17` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [wnut_17](https://huggingface.co/datasets/wnut_17/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the *... | {"language": ["en"], "tags": ["token-classification", "bert", "adapter-transformers"], "datasets": ["wnut_17"]} | AdapterHub/bert-base-uncased-pf-wnut_17 | null | [
"adapter-transformers",
"bert",
"token-classification",
"en",
"dataset:wnut_17",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-wnut_17' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-trans... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-wnut_17' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install ... | [
"TAGS\n#adapter-transformers #bert #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-wnut_17' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wnut_17 dataset and includes a prediction head for tagg... | [
37,
80,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #token-classification #en #dataset-wnut_17 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-wnut_17' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the wnut_17 dataset and includes a prediction head for tagging.\n... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/bert-base-uncased-pf-yelp_polarity` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [yelp_polarity](https://huggingface.co/datasets/yelp_polarity/) dataset and includes a prediction head for classification.
This adapter was cre... | {"language": ["en"], "tags": ["text-classification", "bert", "adapter-transformers"], "datasets": ["yelp_polarity"]} | AdapterHub/bert-base-uncased-pf-yelp_polarity | null | [
"adapter-transformers",
"bert",
"text-classification",
"en",
"dataset:yelp_polarity",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #bert #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/bert-base-uncased-pf-yelp_polarity' for bert-base-uncased
An adapter for the 'bert-base-uncased' model that was trained on the yelp_polarity dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, ins... | [
"# Adapter 'AdapterHub/bert-base-uncased-pf-yelp_polarity' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the yelp_polarity dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage... | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/bert-base-uncased-pf-yelp_polarity' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the yelp_polarity dataset and includes a predict... | [
38,
81,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #bert #text-classification #en #dataset-yelp_polarity #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/bert-base-uncased-pf-yelp_polarity' for bert-base-uncased\n\nAn adapter for the 'bert-base-uncased' model that was trained on the yelp_polarity dataset and includes a prediction he... |
null | adapter-transformers |
# Adapter `AdapterHub/bioASQyesno` for facebook/bart-base
An [adapter](https://adapterhub.ml) for the `facebook/bart-base` model that was trained on the [qa/bioasq](https://adapterhub.ml/explore/qa/bioasq/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/a... | {"tags": ["adapterhub:qa/bioasq", "adapter-transformers", "bart"]} | AdapterHub/bioASQyesno | null | [
"adapter-transformers",
"bart",
"adapterhub:qa/bioasq",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#adapter-transformers #bart #adapterhub-qa/bioasq #region-us
|
# Adapter 'AdapterHub/bioASQyesno' for facebook/bart-base
An adapter for the 'facebook/bart-base' model that was trained on the qa/bioasq dataset.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of tra... | [
"# Adapter 'AdapterHub/bioASQyesno' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/bioasq dataset.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformers... | [
"TAGS\n#adapter-transformers #bart #adapterhub-qa/bioasq #region-us \n",
"# Adapter 'AdapterHub/bioASQyesno' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/bioasq dataset.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage... | [
24,
63,
53,
45,
22
] | [
"TAGS\n#adapter-transformers #bart #adapterhub-qa/bioasq #region-us \n# Adapter 'AdapterHub/bioASQyesno' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/bioasq dataset.\n\nThis adapter was created for usage with the adapter-transformers library.## Usage\n\nFirst, i... |
null | adapter-transformers |
# Adapter `hSterz/narrativeqa` for facebook/bart-base
An [adapter](https://adapterhub.ml) for the `facebook/bart-base` model that was trained on the [qa/narrativeqa](https://adapterhub.ml/explore/qa/narrativeqa/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter... | {"tags": ["adapterhub:qa/narrativeqa", "adapter-transformers", "bart"], "datasets": ["narrativeqa"]} | AdapterHub/narrativeqa | null | [
"adapter-transformers",
"bart",
"adapterhub:qa/narrativeqa",
"dataset:narrativeqa",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [] | [] | TAGS
#adapter-transformers #bart #adapterhub-qa/narrativeqa #dataset-narrativeqa #region-us
|
# Adapter 'hSterz/narrativeqa' for facebook/bart-base
An adapter for the 'facebook/bart-base' model that was trained on the qa/narrativeqa dataset.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_Note: adapter-transformers is a fork of tr... | [
"# Adapter 'hSterz/narrativeqa' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/narrativeqa dataset.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transformers':\n\n\n_Note: adapter-transformer... | [
"TAGS\n#adapter-transformers #bart #adapterhub-qa/narrativeqa #dataset-narrativeqa #region-us \n",
"# Adapter 'hSterz/narrativeqa' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/narrativeqa dataset.\n\nThis adapter was created for usage with the adapter-transfor... | [
29,
58,
53,
5,
4
] | [
"TAGS\n#adapter-transformers #bart #adapterhub-qa/narrativeqa #dataset-narrativeqa #region-us \n# Adapter 'hSterz/narrativeqa' for facebook/bart-base\n\nAn adapter for the 'facebook/bart-base' model that was trained on the qa/narrativeqa dataset.\n\nThis adapter was created for usage with the adapter-transformers l... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-anli_r3` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [anli](https://huggingface.co/datasets/anli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-tran... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapter-transformers"], "datasets": ["anli"]} | AdapterHub/roberta-base-pf-anli_r3 | null | [
"adapter-transformers",
"roberta",
"text-classification",
"en",
"dataset:anli",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-anli_r3' for roberta-base
An adapter for the 'roberta-base' model that was trained on the anli dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
... | [
"# Adapter 'AdapterHub/roberta-base-pf-anli_r3' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the anli dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-tr... | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-anli_r3' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the anli dataset and includes a prediction head for classification.\n\nThis... | [
35,
69,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #en #dataset-anli #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-anli_r3' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the anli dataset and includes a prediction head for classification.\n\nThis adapt... |
null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-art` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [art](https://huggingface.co/datasets/art/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the **[adapter-transform... | {"language": ["en"], "tags": ["roberta", "adapter-transformers"], "datasets": ["art"]} | AdapterHub/roberta-base-pf-art | null | [
"adapter-transformers",
"roberta",
"en",
"dataset:art",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #en #dataset-art #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-art' for roberta-base
An adapter for the 'roberta-base' model that was trained on the art dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':
_No... | [
"# Adapter 'AdapterHub/roberta-base-pf-art' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-transf... | [
"TAGS\n#adapter-transformers #roberta #en #dataset-art #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-art' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for u... | [
30,
65,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #en #dataset-art #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-art' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the art dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage w... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-boolq` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [qa/boolq](https://adapterhub.ml/explore/qa/boolq/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[adapter-... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:qa/boolq", "adapter-transformers"], "datasets": ["boolq"]} | AdapterHub/roberta-base-pf-boolq | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:qa/boolq",
"en",
"dataset:boolq",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-boolq' for roberta-base
An adapter for the 'roberta-base' model that was trained on the qa/boolq dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers':... | [
"# Adapter 'AdapterHub/roberta-base-pf-boolq' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/boolq dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapter-... | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-boolq' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/boolq dataset and includes a prediction head for... | [
48,
71,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-qa/boolq #en #dataset-boolq #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-boolq' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the qa/boolq dataset and includes a prediction head for class... |
text-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-cola` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [lingaccept/cola](https://adapterhub.ml/explore/lingaccept/cola/) dataset and includes a prediction head for classification.
This adapter was created for usage with th... | {"language": ["en"], "tags": ["text-classification", "roberta", "adapterhub:lingaccept/cola", "adapter-transformers"]} | AdapterHub/roberta-base-pf-cola | null | [
"adapter-transformers",
"roberta",
"text-classification",
"adapterhub:lingaccept/cola",
"en",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-cola' for roberta-base
An adapter for the 'roberta-base' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transfor... | [
"# Adapter 'AdapterHub/roberta-base-pf-cola' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the lingaccept/cola dataset and includes a prediction head for classification.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'ad... | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-cola' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the lingaccept/cola dataset and includes a prediction head for c... | [
41,
69,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #text-classification #adapterhub-lingaccept/cola #en #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-cola' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the lingaccept/cola dataset and includes a prediction head for classif... |
null | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-commonsense_qa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [comsense/csqa](https://adapterhub.ml/explore/comsense/csqa/) dataset and includes a prediction head for multiple choice.
This adapter was created for usage ... | {"language": ["en"], "tags": ["roberta", "adapterhub:comsense/csqa", "adapter-transformers"], "datasets": ["commonsense_qa"]} | AdapterHub/roberta-base-pf-commonsense_qa | null | [
"adapter-transformers",
"roberta",
"adapterhub:comsense/csqa",
"en",
"dataset:commonsense_qa",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-commonsense_qa' for roberta-base
An adapter for the 'roberta-base' model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter... | [
"# Adapter 'AdapterHub/roberta-base-pf-commonsense_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/csqa dataset and includes a prediction head for multiple choice.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, in... | [
"TAGS\n#adapter-transformers #roberta #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-commonsense_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/csqa dataset and includes a prediction h... | [
46,
74,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #adapterhub-comsense/csqa #en #dataset-commonsense_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-commonsense_qa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the comsense/csqa dataset and includes a prediction head fo... |
question-answering | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-comqa` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [com_qa](https://huggingface.co/datasets/com_qa/) dataset and includes a prediction head for question answering.
This adapter was created for usage with the **[adapte... | {"language": ["en"], "tags": ["question-answering", "roberta", "adapter-transformers"], "datasets": ["com_qa"]} | AdapterHub/roberta-base-pf-comqa | null | [
"adapter-transformers",
"roberta",
"question-answering",
"en",
"dataset:com_qa",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-comqa' for roberta-base
An adapter for the 'roberta-base' model that was trained on the com_qa dataset and includes a prediction head for question answering.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers... | [
"# Adapter 'AdapterHub/roberta-base-pf-comqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the com_qa dataset and includes a prediction head for question answering.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapte... | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-comqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the com_qa dataset and includes a prediction head for question answering.\n\... | [
37,
69,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #question-answering #en #dataset-com_qa #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-comqa' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the com_qa dataset and includes a prediction head for question answering.\n\nThis ... |
token-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-conll2000` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [chunk/conll2000](https://adapterhub.ml/explore/chunk/conll2000/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the ... | {"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:chunk/conll2000", "adapter-transformers"], "datasets": ["conll2000"]} | AdapterHub/roberta-base-pf-conll2000 | null | [
"adapter-transformers",
"roberta",
"token-classification",
"adapterhub:chunk/conll2000",
"en",
"dataset:conll2000",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-conll2000' for roberta-base
An adapter for the 'roberta-base' model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transforme... | [
"# Adapter 'AdapterHub/roberta-base-pf-conll2000' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the chunk/conll2000 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adap... | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-conll2000' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the chunk/conll2000 dataset and include... | [
49,
73,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-chunk/conll2000 #en #dataset-conll2000 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-conll2000' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the chunk/conll2000 dataset and includes a pr... |
token-classification | adapter-transformers |
# Adapter `AdapterHub/roberta-base-pf-conll2003` for roberta-base
An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [ner/conll2003](https://adapterhub.ml/explore/ner/conll2003/) dataset and includes a prediction head for tagging.
This adapter was created for usage with the **[a... | {"language": ["en"], "tags": ["token-classification", "roberta", "adapterhub:ner/conll2003", "adapter-transformers"], "datasets": ["conll2003"]} | AdapterHub/roberta-base-pf-conll2003 | null | [
"adapter-transformers",
"roberta",
"token-classification",
"adapterhub:ner/conll2003",
"en",
"dataset:conll2003",
"arxiv:2104.08247",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | [
"2104.08247"
] | [
"en"
] | TAGS
#adapter-transformers #roberta #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us
|
# Adapter 'AdapterHub/roberta-base-pf-conll2003' for roberta-base
An adapter for the 'roberta-base' model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging.
This adapter was created for usage with the adapter-transformers library.
## Usage
First, install 'adapter-transformers... | [
"# Adapter 'AdapterHub/roberta-base-pf-conll2003' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ner/conll2003 dataset and includes a prediction head for tagging.\n\nThis adapter was created for usage with the adapter-transformers library.",
"## Usage\n\nFirst, install 'adapte... | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n",
"# Adapter 'AdapterHub/roberta-base-pf-conll2003' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ner/conll2003 dataset and includes a ... | [
50,
74,
53,
30,
39
] | [
"TAGS\n#adapter-transformers #roberta #token-classification #adapterhub-ner/conll2003 #en #dataset-conll2003 #arxiv-2104.08247 #region-us \n# Adapter 'AdapterHub/roberta-base-pf-conll2003' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the ner/conll2003 dataset and includes a predic... |