Datasets:
| Model Name | URL | Crawled Text | text |
|---|---|---|---|
| albert/albert-base-v1 | https://huggingface.co/albert/albert-base-v1 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a mod... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : albert/albert-base-v1 ### Model URL : https://huggingface.co/albert/albert-base-v1 ### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in ... |
| albert/albert-base-v2 | https://huggingface.co/albert/albert-base-v2 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a mod... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : albert/albert-base-v2 ### Model URL : https://huggingface.co/albert/albert-base-v2 ### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in ... |
| albert/albert-large-v1 | https://huggingface.co/albert/albert-large-v1 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a mod... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : albert/albert-large-v1 ### Model URL : https://huggingface.co/albert/albert-large-v1 ### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced i... |
| albert/albert-large-v2 | https://huggingface.co/albert/albert-large-v2 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a mod... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : albert/albert-large-v2 ### Model URL : https://huggingface.co/albert/albert-large-v2 ### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced i... |
| albert/albert-xlarge-v1 | https://huggingface.co/albert/albert-xlarge-v1 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a mod... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : albert/albert-xlarge-v1 ### Model URL : https://huggingface.co/albert/albert-xlarge-v1 ### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced... |
| albert/albert-xlarge-v2 | https://huggingface.co/albert/albert-xlarge-v2 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a mod... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : albert/albert-xlarge-v2 ### Model URL : https://huggingface.co/albert/albert-xlarge-v2 ### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced... |
| albert/albert-xxlarge-v1 | https://huggingface.co/albert/albert-xxlarge-v1 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a mod... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : albert/albert-xxlarge-v1 ### Model URL : https://huggingface.co/albert/albert-xxlarge-v1 ### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduc... |
| albert/albert-xxlarge-v2 | https://huggingface.co/albert/albert-xxlarge-v2 | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team releasing ALBERT did not write a mod... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : albert/albert-xxlarge-v2 ### Model URL : https://huggingface.co/albert/albert-xxlarge-v2 ### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduc... |
| bert-base-cased-finetuned-mrpc | https://huggingface.co/bert-base-cased-finetuned-mrpc | No model card New: Create and edit this model card directly on the website! | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bert-base-cased-finetuned-mrpc ### Model URL : https://huggingface.co/bert-base-cased-finetuned-mrpc ### Model Description : No model card New: Create and edit this model card directly on the website! |
| bert-base-cased | https://huggingface.co/bert-base-cased | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so ... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bert-base-cased ### Model URL : https://huggingface.co/bert-base-cased ### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper a... |
| bert-base-chinese | https://huggingface.co/bert-base-chinese | This model has been pre-trained for Chinese, training and random input masking has been applied independently to word pieces (as in the original BERT paper). This model can be used for masked language modeling CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bert-base-chinese ### Model URL : https://huggingface.co/bert-base-chinese ### Model Description : This model has been pre-trained for Chinese, training and random input masking has been applied independently to wor... |
| bert-base-german-cased | https://huggingface.co/bert-base-german-cased | Language model: bert-base-cased Language: German Training data: Wiki, OpenLegalData, News (~ 12GB) Eval data: Conll03 (NER), GermEval14 (NER), GermEval18 (Classification), GNAD (Classification) Infrastructure: 1x TPU v2 Published: Jun 14th, 2019 Update April 3rd, 2020: we updated the vocabulary file on deepset's s3 to conf... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bert-base-german-cased ### Model URL : https://huggingface.co/bert-base-german-cased ### Model Description : Language model: bert-base-cased Language: German Training data: Wiki, OpenLegalData, News (~ 12GB) Eval data... |
| bert-base-german-dbmdz-cased | https://huggingface.co/bert-base-german-dbmdz-cased | This model is the same as dbmdz/bert-base-german-cased. See the dbmdz/bert-base-german-cased model card for details on the model. | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bert-base-german-dbmdz-cased ### Model URL : https://huggingface.co/bert-base-german-dbmdz-cased ### Model Description : This model is the same as dbmdz/bert-base-german-cased. See the dbmdz/bert-base-german-cased m... |
| bert-base-german-dbmdz-uncased | https://huggingface.co/bert-base-german-dbmdz-uncased | This model is the same as dbmdz/bert-base-german-uncased. See the dbmdz/bert-base-german-cased model card for details on the model. | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bert-base-german-dbmdz-uncased ### Model URL : https://huggingface.co/bert-base-german-dbmdz-uncased ### Model Description : This model is the same as dbmdz/bert-base-german-uncased. See the dbmdz/bert-base-german-c... |
| bert-base-multilingual-cased | https://huggingface.co/bert-base-multilingual-cased | Pretrained model on the top 104 languages with the largest Wikipedia using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case sensitive: it makes a difference between english and English. Disclaimer: The team releasing BERT did not write... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bert-base-multilingual-cased ### Model URL : https://huggingface.co/bert-base-multilingual-cased ### Model Description : Pretrained model on the top 104 languages with the largest Wikipedia using a masked language m... |
| bert-base-multilingual-uncased | https://huggingface.co/bert-base-multilingual-uncased | Pretrained model on the top 102 languages with the largest Wikipedia using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing BERT did not writ... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bert-base-multilingual-uncased ### Model URL : https://huggingface.co/bert-base-multilingual-uncased ### Model Description : Pretrained model on the top 102 languages with the largest Wikipedia using a masked langua... |
| bert-base-uncased | https://huggingface.co/bert-base-uncased | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bert-base-uncased ### Model URL : https://huggingface.co/bert-base-uncased ### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this pap... |
| bert-large-cased-whole-word-masking-finetuned-squad | https://huggingface.co/bert-large-cased-whole-word-masking-finetuned-squad | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is cased: it makes a difference between english and English. Differently to other BERT models, this model was trained with a new technique: Whole Word ... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bert-large-cased-whole-word-masking-finetuned-squad ### Model URL : https://huggingface.co/bert-large-cased-whole-word-masking-finetuned-squad ### Model Description : Pretrained model on English language using a mas... |
| bert-large-cased-whole-word-masking | https://huggingface.co/bert-large-cased-whole-word-masking | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is cased: it makes a difference between english and English. Differently to other BERT models, this model was trained with a new technique: Whole Word ... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bert-large-cased-whole-word-masking ### Model URL : https://huggingface.co/bert-large-cased-whole-word-masking ### Model Description : Pretrained model on English language using a masked language modeling (MLM) obje... |
| bert-large-cased | https://huggingface.co/bert-large-cased | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is cased: it makes a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so this mode... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bert-large-cased ### Model URL : https://huggingface.co/bert-large-cased ### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper... |
| bert-large-uncased-whole-word-masking-finetuned-squad | https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is uncased: it does not make a difference between english and English. Differently to other BERT models, this model was trained with a new technique: W... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bert-large-uncased-whole-word-masking-finetuned-squad ### Model URL : https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad ### Model Description : Pretrained model on English language using a... |
| bert-large-uncased-whole-word-masking | https://huggingface.co/bert-large-uncased-whole-word-masking | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is uncased: it does not make a difference between english and English. Differently to other BERT models, this model was trained with a new technique: W... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bert-large-uncased-whole-word-masking ### Model URL : https://huggingface.co/bert-large-uncased-whole-word-masking ### Model Description : Pretrained model on English language using a masked language modeling (MLM) ... |
| bert-large-uncased | https://huggingface.co/bert-large-uncased | Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing BERT did not write a model card for this model so... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : bert-large-uncased ### Model URL : https://huggingface.co/bert-large-uncased ### Model Description : Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this p... |
| almanach/camembert-base | https://huggingface.co/almanach/camembert-base | CamemBERT is a state-of-the-art language model for French based on the RoBERTa model. It is now available on Hugging Face in 6 different versions with varying number of parameters, amount of pretraining data and pretraining data source domains. For further information or requests, please go to Camembert Website Camem... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : almanach/camembert-base ### Model URL : https://huggingface.co/almanach/camembert-base ### Model Description : CamemBERT is a state-of-the-art language model for French based on the RoBERTa model. It is now availab... |
| Salesforce/ctrl | https://huggingface.co/Salesforce/ctrl | The CTRL model was proposed in CTRL: A Conditional Transformer Language Model for Controllable Generation by Nitish Shirish Keskar*, Bryan McCann*, Lav R. Varshney, Caiming Xiong and Richard Socher. It's a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus of ~140 GB of text ... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : Salesforce/ctrl ### Model URL : https://huggingface.co/Salesforce/ctrl ### Model Description : The CTRL model was proposed in CTRL: A Conditional Transformer Language Model for Controllable Generation by Nitish Shir... |
| distilbert/distilbert-base-cased-distilled-squad | https://huggingface.co/distilbert/distilbert-base-cased-distilled-squad | Model Description: The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT, and the paper DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a small, fast, cheap and light Transformer model trained ... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : distilbert/distilbert-base-cased-distilled-squad ### Model URL : https://huggingface.co/distilbert/distilbert-base-cased-distilled-squad ### Model Description : Model Description: The DistilBERT model was proposed i... |
| distilbert/distilbert-base-cased | https://huggingface.co/distilbert/distilbert-base-cased | This model is a distilled version of the BERT base model. It was introduced in this paper. The code for the distillation process can be found here. This model is cased: it does make a difference between english and English. All the training details on the pre-training, the uses, limitations and potential biases (includ... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : distilbert/distilbert-base-cased ### Model URL : https://huggingface.co/distilbert/distilbert-base-cased ### Model Description : This model is a distilled version of the BERT base model. It was introduced in this pa... |
| distilbert/distilbert-base-german-cased | https://huggingface.co/distilbert/distilbert-base-german-cased | null | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : distilbert/distilbert-base-german-cased ### Model URL : https://huggingface.co/distilbert/distilbert-base-german-cased ### Model Description : |
| distilbert/distilbert-base-multilingual-cased | https://huggingface.co/distilbert/distilbert-base-multilingual-cased | This model is a distilled version of the BERT base multilingual model. The code for the distillation process can be found here. This model is cased: it does make a difference between english and English. The model is trained on the concatenation of Wikipedia in 104 different languages listed here. The model has 6 layer... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : distilbert/distilbert-base-multilingual-cased ### Model URL : https://huggingface.co/distilbert/distilbert-base-multilingual-cased ### Model Description : This model is a distilled version of the BERT base multiling... |
| distilbert/distilbert-base-uncased-distilled-squad | https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad | Model Description: The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT, and the paper DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a small, fast, cheap and light Transformer model trained ... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : distilbert/distilbert-base-uncased-distilled-squad ### Model URL : https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad ### Model Description : Model Description: The DistilBERT model was propos... |
| distilbert/distilbert-base-uncased-finetuned-sst-2-english | https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english | Model Description: This model is a fine-tune checkpoint of DistilBERT-base-uncased, fine-tuned on SST-2. This model reaches an accuracy of 91.3 on the dev set (for comparison, Bert bert-base-uncased version reaches an accuracy of 92.7). Example of single-label classification: This model can be used for topic classi... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : distilbert/distilbert-base-uncased-finetuned-sst-2-english ### Model URL : https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english ### Model Description : Model Description: This model is a... |
| distilbert/distilbert-base-uncased | https://huggingface.co/distilbert/distilbert-base-uncased | This model is a distilled version of the BERT base model. It was introduced in this paper. The code for the distillation process can be found here. This model is uncased: it does not make a difference between english and English. DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on ... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : distilbert/distilbert-base-uncased ### Model URL : https://huggingface.co/distilbert/distilbert-base-uncased ### Model Description : This model is a distilled version of the BERT base model. It was introduced in thi... |
| distilbert/distilgpt2 | https://huggingface.co/distilbert/distilgpt2 | DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limi... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : distilbert/distilgpt2 ### Model URL : https://huggingface.co/distilbert/distilgpt2 ### Model Description : DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the s... |
| distilbert/distilroberta-base | https://huggingface.co/distilbert/distilroberta-base | This model is a distilled version of the RoBERTa-base model. It follows the same training procedure as DistilBERT. The code for the distillation process can be found here. This model is case-sensitive: it makes a difference between english and English. The model has 6 layers, 768 dimension and 12 heads, totalizing 82M ... | Indicators looking for configurations to recommend AI models for configuring AI agents ### Model Name : distilbert/distilroberta-base ### Model URL : https://huggingface.co/distilbert/distilroberta-base ### Model Description : This model is a distilled version of the RoBERTa-base model. It follows the same training pr... |
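Judging from the preview, each record's `text` column appears to be derived from its other fields, laid out as `### Model Name`, `### Model URL`, and `### Model Description` blocks. The sketch below is a minimal illustration of that apparent schema; the field names and the `build_text` helper are assumptions for illustration, not the dataset's actual generation code.

```python
# Sketch of one dataset record and the apparent derivation of its "text"
# column from the other fields. Schema inferred from the preview table;
# field names and the formatting function are illustrative assumptions.

def build_text(name: str, url: str, description: str) -> str:
    """Format a record the way the `text` column appears to be laid out."""
    return (
        f"### Model Name : {name}\n"
        f"### Model URL : {url}\n"
        f"### Model Description : {description}"
    )

record = {
    "model_name": "albert/albert-base-v1",
    "url": "https://huggingface.co/albert/albert-base-v1",
    "crawled_text": (
        "Pretrained model on English language using a masked language "
        "modeling (MLM) objective."
    ),
}

# The crawled description doubles as the "Model Description" body.
record["text"] = build_text(
    record["model_name"], record["url"], record["crawled_text"]
)

print(record["text"])
```

Under this reading, the `Crawled Text` column is the raw model-card excerpt and `text` is a prompt-ready restatement of the same fields.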