# Dataset Viewer

Auto-converted to Parquet.
## Schema

| Column | Type | Range / cardinality |
| --- | --- | --- |
| sha | null | n/a |
| last_modified | null | n/a |
| library_name | stringclasses | 154 values |
| text | stringlengths | 1–900k |
| metadata | stringlengths | 2–348k |
| pipeline_tag | stringclasses | 45 values |
| id | stringlengths | 5–122 |
| tags | sequencelengths | 1–1.84k |
| created_at | stringlengths | 25–25 |
| arxiv | sequencelengths | 0–201 |
| languages | sequencelengths | 0–1.83k |
| tags_str | stringlengths | 17–9.34k |
| text_str | stringlengths | 0–389k |
| text_lists | sequencelengths | 0–722 |
| processed_texts | sequencelengths | 1–723 |
| tokens_length | sequencelengths | 1–723 |
| input_texts | sequencelengths | 1–61 |
| embeddings | sequencelengths | 768–768 |
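The schema above can be sanity-checked programmatically. The sketch below is illustrative only: the helper functions and the abbreviated record are my own names, not part of the dataset's tooling. The bounds come from the column summary above (`stringlengths` gives min/max string length, `sequencelengths` gives min/max list length).

```python
# Minimal sketch: validate an abbreviated preview record against the schema
# above. The values are taken from the first preview row; the helper names
# are hypothetical, not part of any library.

def check_string_length(value, lo, hi):
    """True if value is a string whose length is within [lo, hi]."""
    return isinstance(value, str) and lo <= len(value) <= hi

def check_sequence_length(value, lo, hi):
    """True if value is a list whose length is within [lo, hi]."""
    return isinstance(value, list) and lo <= len(value) <= hi

record = {
    "sha": None,
    "last_modified": None,
    "library_name": "transformers",
    "pipeline_tag": "fill-mask",
    "id": "albert/albert-base-v1",
    "created_at": "2022-03-02T23:29:04+00:00",
    "arxiv": ["1909.11942"],
    "languages": ["en"],
    "tokens_length": [78, 49, 102, 42, 135, 30],
}

assert record["sha"] is None and record["last_modified"] is None
assert check_string_length(record["id"], 5, 122)          # id: stringlengths 5-122
assert check_string_length(record["created_at"], 25, 25)  # fixed-width ISO timestamp
assert check_sequence_length(record["arxiv"], 0, 201)     # arxiv: sequencelengths 0-201
assert check_sequence_length(record["tokens_length"], 1, 723)
print("record matches schema")
```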
## Rows (preview)

**Row 1: `albert/albert-base-v1`**

- sha: null
- last_modified: null
- library_name: transformers
- text: # ALBERT Base v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make...
- metadata: {"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
- pipeline_tag: fill-mask
- id: albert/albert-base-v1
- tags: [ "transformers", "pytorch", "tf", "safetensors", "albert", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- created_at: 2022-03-02T23:29:04+00:00
- arxiv: [ "1909.11942" ]
- languages: [ "en" ]
- tags_str: TAGS #transformers #pytorch #tf #safetensors #albert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
- text_str: ALBERT Base v1 ============== Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team re...
- text_lists: [ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
- processed_texts: [ "TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modelin...
- tokens_length: [ 78, 49, 102, 42, 135, 30 ]
- input_texts: [ "passage: TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modelin...
- embeddings: [ -0.04987791180610657, 0.06891000270843506, -0.0038567720912396908, 0.07576975971460342, 0.060954879969358444, 0.030267491936683655, 0.09114508330821991, 0.056290049105882645, -0.04611608013510704, 0.06996013969182968, 0.020456068217754364, 0.022172600030899048, 0.1111629456281662, 0.130442...
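Reading the first row, `tokens_length` appears to pair with `processed_texts`: one token count per chunk of the chunked model card. Assuming that interpretation (it is not stated in the preview), a few summary numbers fall out directly:

```python
# Per-chunk token counts from row 1 (albert/albert-base-v1); the pairing
# with processed_texts chunks is an assumption, not stated by the dataset.
tokens_length = [78, 49, 102, 42, 135, 30]

num_chunks = len(tokens_length)      # 6 chunks
total_tokens = sum(tokens_length)    # 436 tokens across the whole card
longest_chunk = max(tokens_length)   # 135 tokens in the largest chunk
print(num_chunks, total_tokens, longest_chunk)  # 6 436 135
```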
**Row 2: `albert/albert-base-v2`**

- sha: null
- last_modified: null
- library_name: transformers
- text: # ALBERT Base v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make...
- metadata: {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
- pipeline_tag: fill-mask
- id: albert/albert-base-v2
- tags: [ "transformers", "pytorch", "tf", "jax", "rust", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- created_at: 2022-03-02T23:29:04+00:00
- arxiv: [ "1909.11942" ]
- languages: [ "en" ]
- tags_str: TAGS #transformers #pytorch #tf #jax #rust #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
- text_str: ALBERT Base v2 ============== Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team re...
- text_lists: [ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
- processed_texts: [ "TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modelin...
- tokens_length: [ 80, 49, 102, 42, 135, 11 ]
- input_texts: [ "passage: TAGS\n#transformers #pytorch #tf #jax #rust #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language mode...
- embeddings: [ -0.05055678263306618, 0.09099909663200378, -0.004384537693113089, 0.07027137279510498, 0.04119673743844032, 0.023917432874441147, 0.09916941821575165, 0.05107157677412033, -0.04390188679099083, 0.05791522189974785, 0.021559549495577812, 0.02768748067319393, 0.12242377549409866, 0.138666346...
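The `embeddings` column holds one 768-dimensional vector per row (the schema fixes the sequence length at 768). As a rough illustration of how such vectors are compared, the sketch below computes cosine similarity over just the previewed prefixes of the first two rows' embeddings. The viewer truncates the vectors, so this is not the true similarity of the two cards, only a demonstration on the visible components.

```python
import math

# First 13 previewed components of the embeddings for albert-base-v1 and
# albert-base-v2 (the full vectors are 768-dimensional; the viewer cuts
# them off, so the trailing partial numbers are dropped here).
v1 = [-0.04987791180610657, 0.06891000270843506, -0.0038567720912396908,
      0.07576975971460342, 0.060954879969358444, 0.030267491936683655,
      0.09114508330821991, 0.056290049105882645, -0.04611608013510704,
      0.06996013969182968, 0.020456068217754364, 0.022172600030899048,
      0.1111629456281662]
v2 = [-0.05055678263306618, 0.09099909663200378, -0.004384537693113089,
      0.07027137279510498, 0.04119673743844032, 0.023917432874441147,
      0.09916941821575165, 0.05107157677412033, -0.04390188679099083,
      0.05791522189974785, 0.021559549495577812, 0.02768748067319393,
      0.12242377549409866]

def cosine(a, b):
    """Cosine similarity: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(f"cosine similarity (prefix only): {cosine(v1, v2):.3f}")
```

On these prefixes the two ALBERT base cards come out nearly parallel, which is what one would expect for two versions of the same model card.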
**Row 3: `albert/albert-large-v1`**

- sha: null
- last_modified: null
- library_name: transformers
- text: # ALBERT Large v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not mak...
- metadata: {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
- pipeline_tag: fill-mask
- id: albert/albert-large-v1
- tags: [ "transformers", "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- created_at: 2022-03-02T23:29:04+00:00
- arxiv: [ "1909.11942" ]
- languages: [ "en" ]
- tags_str: TAGS #transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
- text_str: ALBERT Large v1 =============== Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team ...
- text_lists: [ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
- processed_texts: [ "TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to u...
- tokens_length: [ 70, 49, 102, 42, 135, 11 ]
- input_texts: [ "passage: TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how t...
- embeddings: [ -0.05667823925614357, 0.08347521722316742, -0.003313080407679081, 0.07908394187688828, 0.05151635780930519, 0.008104290813207626, 0.08640990406274796, 0.05556471273303032, -0.08198466897010803, 0.06249178946018219, 0.04316376522183418, 0.04599187523126602, 0.11425124853849411, 0.1482848227...
**Row 4: `albert/albert-large-v2`**

- sha: null
- last_modified: null
- library_name: transformers
- text: # ALBERT Large v2 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not mak...
- metadata: {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
- pipeline_tag: fill-mask
- id: albert/albert-large-v2
- tags: [ "transformers", "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- created_at: 2022-03-02T23:29:04+00:00
- arxiv: [ "1909.11942" ]
- languages: [ "en" ]
- tags_str: TAGS #transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
- text_str: ALBERT Large v2 =============== Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The team ...
- text_lists: [ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
- processed_texts: [ "TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHer...
- tokens_length: [ 75, 49, 102, 42, 135, 11 ]
- input_texts: [ "passage: TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n...
- embeddings: [ -0.04551113396883011, 0.0802619680762291, -0.004052036441862583, 0.07860441505908966, 0.04896535351872444, 0.009174984879791737, 0.08155245333909988, 0.04193923622369766, -0.08074910938739777, 0.07420488446950912, 0.03317805007100105, 0.04168064892292023, 0.1137911006808281, 0.128653809428...
**Row 5: `albert/albert-xlarge-v1`**

- sha: null
- last_modified: null
- library_name: transformers
- text: # ALBERT XLarge v1 Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1909.11942) and first released in [this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not ma...
- metadata: {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
- pipeline_tag: fill-mask
- id: albert/albert-xlarge-v1
- tags: [ "transformers", "pytorch", "tf", "safetensors", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
- created_at: 2022-03-02T23:29:04+00:00
- arxiv: [ "1909.11942" ]
- languages: [ "en" ]
- tags_str: TAGS #transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
- text_str: ALBERT XLarge v1 ================ Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, as all ALBERT models, is uncased: it does not make a difference between english and English. Disclaimer: The tea...
- text_lists: [ "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:", "### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fai...
- processed_texts: [ "TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHer...
- tokens_length: [ 75, 49, 102, 42, 135, 11 ]
- input_texts: [ "passage: TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1909.11942 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n...
- embeddings: [ -0.04551113396883011, 0.0802619680762291, -0.004052036441862583, 0.07860441505908966, 0.04896535351872444, 0.009174984879791737, 0.08155245333909988, 0.04193923622369766, -0.08074910938739777, 0.07420488446950912, 0.03317805007100105, 0.04168064892292023, 0.1137911006808281, 0.128653809428...
**Row 6: `albert/albert-xlarge-v2`**

- sha: null
- last_modified: null
- library_name: transformers
- text: "\n# ALBERT XLarge v2\n\nPretrained model on English language using a masked language modeling (MLM)(...TRUNCATED)
- metadata: {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
- pipeline_tag: fill-mask
- id: albert/albert-xlarge-v2
- tags: ["transformers","pytorch","tf","albert","fill-mask","en","dataset:bookcorpus","dataset:wikipedia","a(...TRUNCATED)
- created_at: 2022-03-02T23:29:04+00:00
- arxiv: [ "1909.11942" ]
- languages: [ "en" ]
- tags_str: "TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arx(...TRUNCATED)
- text_str: "ALBERT XLarge v2\n================\n\n\nPretrained model on English language using a masked languag(...TRUNCATED)
- text_lists: ["### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\(...TRUNCATED)
- processed_texts: ["TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #ar(...TRUNCATED)
- tokens_length: [ 70, 49, 102, 42, 135, 11 ]
- input_texts: ["passage: TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wiki(...TRUNCATED)
- embeddings: [-0.05667823925614357,0.08347521722316742,-0.003313080407679081,0.07908394187688828,0.05151635780930(...TRUNCATED)
**Row 7: `albert/albert-xxlarge-v1`**

- sha: null
- last_modified: null
- library_name: transformers
- text: "\n# ALBERT XXLarge v1\n\nPretrained model on English language using a masked language modeling (MLM(...TRUNCATED)
- metadata: {"language": "en", "license": "apache-2.0", "datasets": ["bookcorpus", "wikipedia"]}
- pipeline_tag: fill-mask
- id: albert/albert-xxlarge-v1
- tags: ["transformers","pytorch","tf","albert","fill-mask","en","dataset:bookcorpus","dataset:wikipedia","a(...TRUNCATED)
- created_at: 2022-03-02T23:29:04+00:00
- arxiv: [ "1909.11942" ]
- languages: [ "en" ]
- tags_str: "TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #arx(...TRUNCATED)
- text_str: "ALBERT XXLarge v1\n=================\n\n\nPretrained model on English language using a masked langu(...TRUNCATED)
- text_lists: ["### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\(...TRUNCATED)
- processed_texts: ["TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wikipedia #ar(...TRUNCATED)
- tokens_length: [ 70, 49, 102, 42, 135, 11 ]
- input_texts: ["passage: TAGS\n#transformers #pytorch #tf #albert #fill-mask #en #dataset-bookcorpus #dataset-wiki(...TRUNCATED)
- embeddings: [-0.05667823925614357,0.08347521722316742,-0.003313080407679081,0.07908394187688828,0.05151635780930(...TRUNCATED)
**Row 8: `albert/albert-xxlarge-v2`**

- sha: null
- last_modified: null
- library_name: transformers
- text: "\n# ALBERT XXLarge v2\n\nPretrained model on English language using a masked language modeling (MLM(...TRUNCATED)
- metadata: "{\"language\": \"en\", \"license\": \"apache-2.0\", \"tags\": [\"exbert\"], \"datasets\": [\"bookco(...TRUNCATED)
- pipeline_tag: fill-mask
- id: albert/albert-xxlarge-v2
- tags: ["transformers","pytorch","tf","rust","safetensors","albert","fill-mask","exbert","en","dataset:book(...TRUNCATED)
- created_at: 2022-03-02T23:29:04+00:00
- arxiv: [ "1909.11942" ]
- languages: [ "en" ]
- tags_str: "TAGS\n#transformers #pytorch #tf #rust #safetensors #albert #fill-mask #exbert #en #dataset-bookcor(...TRUNCATED)
- text_str: "ALBERT XXLarge v2\n=================\n\n\nPretrained model on English language using a masked langu(...TRUNCATED)
- text_lists: ["### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\(...TRUNCATED)
- processed_texts: ["TAGS\n#transformers #pytorch #tf #rust #safetensors #albert #fill-mask #exbert #en #dataset-bookco(...TRUNCATED)
- tokens_length: [ 84, 49, 102, 42, 135, 30 ]
- input_texts: ["passage: TAGS\n#transformers #pytorch #tf #rust #safetensors #albert #fill-mask #exbert #en #datas(...TRUNCATED)
- embeddings: [-0.05788842961192131,0.1143597811460495,-0.0035046867560595274,0.08419948816299438,0.05134695395827(...TRUNCATED)
**Row 9: `google-bert/bert-base-cased`**

- sha: null
- last_modified: null
- library_name: transformers
- text: "\n# BERT base model (cased)\n\nPretrained model on English language using a masked language modelin(...TRUNCATED)
- metadata: "{\"language\": \"en\", \"license\": \"apache-2.0\", \"tags\": [\"exbert\"], \"datasets\": [\"bookco(...TRUNCATED)
- pipeline_tag: fill-mask
- id: google-bert/bert-base-cased
- tags: ["transformers","pytorch","tf","jax","safetensors","bert","fill-mask","exbert","en","dataset:bookcor(...TRUNCATED)
- created_at: 2022-03-02T23:29:04+00:00
- arxiv: [ "1810.04805" ]
- languages: [ "en" ]
- tags_str: "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #exbert #en #dataset-bookcorpus(...TRUNCATED)
- text_str: "BERT base model (cased)\n=======================\n\n\nPretrained model on English language using a (...TRUNCATED)
- text_lists: ["### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\(...TRUNCATED)
- processed_texts: ["TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #exbert #en #dataset-bookcorpu(...TRUNCATED)
- tokens_length: [ 85, 49, 101, 218, 163, 30 ]
- input_texts: ["passage: TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #exbert #en #dataset-(...TRUNCATED)
- embeddings: [-0.022694578394293785,0.08986902981996536,-0.006014915183186531,0.04828432947397232,-0.008334090001(...TRUNCATED)
**Row 10: `google-bert/bert-base-chinese`**

- sha: null
- last_modified: null
- library_name: transformers
- text: "\n# Bert-base-chinese\n\n## Table of Contents\n- [Model Details](#model-details)\n- [Uses](#uses)\n(...TRUNCATED)
- metadata: {"language": "zh"}
- pipeline_tag: fill-mask
- id: google-bert/bert-base-chinese
- tags: ["transformers","pytorch","tf","jax","safetensors","bert","fill-mask","zh","arxiv:1810.04805","autot(...TRUNCATED)
- created_at: 2022-03-02T23:29:04+00:00
- arxiv: [ "1810.04805" ]
- languages: [ "zh" ]
- tags_str: "TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #zh #arxiv-1810.04805 #autotrai(...TRUNCATED)
- text_str: "\n# Bert-base-chinese\n\n## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Bi(...TRUNCATED)
- text_lists: ["# Bert-base-chinese","## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Bias(...TRUNCATED)
- processed_texts: ["TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #zh #arxiv-1810.04805 #autotra(...TRUNCATED)
- tokens_length: [ 62, 7, 35, 3, 93, 10, 3, 15, 85, 2, 32, 4, 3, 3, 9 ]
- input_texts: ["passage: TAGS\n#transformers #pytorch #tf #jax #safetensors #bert #fill-mask #zh #arxiv-1810.04805(...TRUNCATED)
- embeddings: [-0.036715470254421234,0.161798894405365,-0.0009507219074293971,0.015533071011304855,0.0713464841246(...TRUNCATED)
README.md exists but content is empty.
Downloads last month: 3