Dataset columns and types (as reported by the dataset viewer):

| Column | Type | Range / values |
|---|---|---|
| modelId | string | 4–112 characters |
| lastModified | string | 24 characters (ISO 8601 timestamp) |
| tags | sequence of strings | |
| pipeline_tag | string | 21 classes |
| files | sequence of strings | |
| publishedBy | string | 2–37 characters |
| downloads_last_month | int32 | 0 – 9.44M |
| library | string | 15 classes |
| modelCard | string | 0 – 100k characters |
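For readers working with the data in Python, the row schema can be sketched as a `TypedDict` (field names come from the viewer header; the class itself is our illustration, not something the dataset ships):

```python
from typing import List, TypedDict

class ModelhubRow(TypedDict):
    """One row of the modelhub dataset (illustrative, not an official type)."""
    modelId: str                # 4-112 characters, e.g. "albert-base-v1"
    lastModified: str           # ISO 8601 timestamp, e.g. "2021-01-13T15:08:24.000Z"
    tags: List[str]             # framework, language, dataset:, arxiv:, license: tags
    pipeline_tag: str           # one of 21 task names, e.g. "fill-mask"
    files: List[str]            # filenames in the model repository
    publishedBy: str            # e.g. "huggingface"
    downloads_last_month: int   # int32, observed range 0 to ~9.44M
    library: str                # one of 15 libraries, e.g. "transformers"
    modelCard: str              # raw README text, up to ~100k characters

# A row built from values visible in the preview below:
row: ModelhubRow = {
    "modelId": "albert-base-v1",
    "lastModified": "2021-01-13T15:08:24.000Z",
    "tags": ["pytorch", "tf", "albert"],
    "pipeline_tag": "fill-mask",
    "files": ["config.json", "pytorch_model.bin"],
    "publishedBy": "huggingface",
    "downloads_last_month": 7474,
    "library": "transformers",
    "modelCard": "",
}
```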
Preview of the first rows. All of these models have `publishedBy` = `huggingface` and `library` = `transformers`; the `tags`, `files`, and `modelCard` columns (long lists and truncated README text) are not shown in this preview table:

| modelId | lastModified | pipeline_tag | downloads_last_month |
|---|---|---|---|
| albert-base-v1 | 2021-01-13T15:08:24.000Z | fill-mask | 7,474 |
| albert-base-v2 | 2021-01-13T15:06:44.000Z | fill-mask | 218,776 |
| albert-large-v1 | 2021-01-13T15:29:06.000Z | fill-mask | 768 |
| albert-large-v2 | 2021-01-13T15:35:47.000Z | fill-mask | 7,831 |
| albert-xlarge-v1 | 2021-01-13T15:30:39.000Z | fill-mask | 242 |
| albert-xlarge-v2 | 2021-01-13T15:34:57.000Z | fill-mask | 4,934 |
| albert-xxlarge-v1 | 2021-01-13T15:32:02.000Z | fill-mask | 498 |
| albert-xxlarge-v2 | 2021-01-13T15:33:03.000Z | fill-mask | 33,017 |
| bert-base-cased-finetuned-mrpc | 2021-05-18T16:08:38.000Z | fill-mask | 27,304 |
| bert-base-cased | 2021-05-18T16:12:11.000Z | fill-mask | 1,975,177 |
| bert-base-chinese | 2021-05-18T16:13:18.000Z | fill-mask | 1,354,537 |
| bert-base-german-cased | 2021-05-18T16:14:28.000Z | fill-mask | 95,565 |
| bert-base-german-dbmdz-cased | 2021-05-18T16:15:21.000Z | fill-mask | 2,198 |
| bert-base-german-dbmdz-uncased | 2021-05-18T16:16:25.000Z | fill-mask | 39,983 |
| bert-base-multilingual-cased | 2021-05-18T16:18:16.000Z | fill-mask | 649,885 |
| bert-base-multilingual-uncased | 2021-05-18T16:19:22.000Z | fill-mask | 236,072 |
| bert-base-uncased | 2021-05-18T16:20:13.000Z | fill-mask | 9,435,580 |
| bert-large-cased-whole-word-masking-finetuned-squad | 2021-05-18T16:22:37.000Z | question-answering | 2,773 |
| bert-large-cased-whole-word-masking | 2021-05-18T16:30:05.000Z | fill-mask | 8,481 |
| bert-large-cased | 2021-05-18T16:33:16.000Z | fill-mask | 72,330 |
| bert-large-uncased-whole-word-masking-finetuned-squad | 2021-05-18T16:35:27.000Z | question-answering | 1,023,669 |
| bert-large-uncased-whole-word-masking | 2021-05-18T16:37:36.000Z | fill-mask | 14,760 |
| bert-large-uncased | 2021-05-18T16:40:29.000Z | fill-mask | 384,968 |
| camembert-base | 2021-06-09T00:01:44.000Z | fill-mask | 66,487 |
| ctrl | 2021-04-07T15:20:39.000Z | — | 895 |
| distilbert-base-cased-distilled-squad | 2020-12-11T21:23:50.000Z | question-answering | 80,434 |
| distilbert-base-cased | 2020-12-11T21:23:53.000Z | — | 1,131,632 |
| distilbert-base-german-cased | 2020-12-11T21:23:57.000Z | fill-mask | 19,689 |
| distilbert-base-multilingual-cased | 2020-12-11T21:24:01.000Z | fill-mask | 211,868 |
This dataset contains metadata for all models uploaded to the Hugging Face Model Hub. It was last updated on 15th June 2021 and covers 10,354 models (v1). Only a `train` split is provided. The same dataset is also available on Kaggle.
```python
from datasets import load_dataset

modelhub_dataset = load_dataset("dk-crazydiv/huggingface-modelhub")

modelhub_dataset["train"]           # access the train split (the only split available)
modelhub_dataset["train"][0]        # access dataset elements by index
modelhub_dataset["train"].features  # get the columns present in the dataset
```
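As a sketch of the kind of analysis these columns support, the snippet below aggregates `downloads_last_month` by `pipeline_tag`. The records are hand-copied from the preview above so the example is self-contained; in practice you would iterate over `modelhub_dataset["train"]`:

```python
from collections import Counter

# Sample records copied from the dataset preview (illustrative subset).
records = [
    {"modelId": "albert-base-v1", "pipeline_tag": "fill-mask",
     "downloads_last_month": 7474},
    {"modelId": "albert-base-v2", "pipeline_tag": "fill-mask",
     "downloads_last_month": 218776},
    {"modelId": "distilbert-base-cased-distilled-squad",
     "pipeline_tag": "question-answering", "downloads_last_month": 80434},
]

# Sum downloads per task type.
downloads_per_task = Counter()
for row in records:
    downloads_per_task[row["pipeline_tag"]] += row["downloads_last_month"]

print(downloads_per_task.most_common())
# [('fill-mask', 226250), ('question-answering', 80434)]
```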
An example record:

```json
{
  "downloads_last_month": 7474,
  "files": [
    ".gitattributes",
    "README.md",
    "config.json",
    "pytorch_model.bin",
    "spiece.model",
    "tf_model.h5",
    "tokenizer.json",
    "with-prefix-tf_model.h5"
  ],
  "lastModified": "2021-01-13T15:08:24.000Z",
  "library": "transformers",
  "modelId": "albert-base-v1",
  "pipeline_tag": "fill-mask",
  "publishedBy": "huggingface",
  "tags": [
    "pytorch",
    "tf",
    "albert",
    "masked-lm",
    "en",
    "dataset:bookcorpus",
    "dataset:wikipedia",
    "arxiv:1909.11942",
    "transformers",
    "exbert",
    "license:apache-2.0",
    "fill-mask"
  ],
  "modelCard": "Readme sample data..."
}
```
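Note that the `tags` field packs several kinds of metadata into one list, using prefixes such as `dataset:`, `arxiv:`, and `license:`. A minimal sketch of unpacking it (the `split_tags` helper is our own illustration, not part of the dataset or the `datasets` library):

```python
def split_tags(tags):
    """Split a modelhub `tags` list into prefixed metadata and plain tags."""
    parsed = {"dataset": [], "arxiv": [], "license": [], "plain": []}
    for tag in tags:
        prefix, _, value = tag.partition(":")
        if value and prefix in ("dataset", "arxiv", "license"):
            parsed[prefix].append(value)
        else:
            parsed["plain"].append(tag)
    return parsed

# The tags from the example record above:
tags = [
    "pytorch", "tf", "albert", "masked-lm", "en",
    "dataset:bookcorpus", "dataset:wikipedia",
    "arxiv:1909.11942", "transformers", "exbert",
    "license:apache-2.0", "fill-mask",
]
print(split_tags(tags)["license"])  # ['apache-2.0']
print(split_tags(tags)["dataset"])  # ['bookcorpus', 'wikipedia']
```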
Please report any bugs or suggest improvements to me on Twitter.