Dataset Viewer

Auto-converted to Parquet. Each row contains the following fields:

  • modelId — string
  • author — string
  • last_modified — timestamp[us, tz=UTC]
  • downloads — int64
  • likes — int64
  • library_name — string
  • tags — sequence
  • pipeline_tag — string
  • createdAt — timestamp[us, tz=UTC]
  • card — string
  • embedding — sequence

Dataset Card for Hugging Face Hub Model Cards with Embeddings

This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.

This dataset is made available to help support users who want to work with a large number of model cards from the Hub. We hope it will support research on model cards and their use, but its format may not suit every use case. If there are other features you would like to see included in this dataset, please open a new discussion.

This dataset is the same as the Hugging Face Hub Model Cards dataset but with the addition of embeddings for each model card. The embeddings are generated using the jinaai/jina-embeddings-v2-base-en model.
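For example, two cards' embeddings can be compared with cosine similarity. The sketch below uses short toy vectors in place of values from the dataset's `embedding` column (the real vectors produced by the embedding model are much longer):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for values from the dataset's `embedding` column.
emb_a = np.array([0.2, -0.6, 0.15, 0.22])
emb_b = np.array([0.18, -0.55, 0.2, 0.3])
emb_c = np.array([-0.5, 0.6, -0.1, -0.2])

print(cosine_similarity(emb_a, emb_b))  # similar direction -> close to 1.0
print(cosine_similarity(emb_a, emb_c))  # opposed direction -> negative
```

The same function applies unchanged to the real embedding vectors once the dataset is loaded.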

Dataset Details

Uses

There are a number of potential uses for this dataset including:

  • text mining to find common themes in model cards
  • analysis of the model card format/content
  • topic modelling of model cards
  • analysis of the model card metadata
  • training language models on model cards
  • building a recommender system for model cards
  • building a search engine for model cards
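The recommender and search-engine ideas above can be sketched as a brute-force nearest-neighbour lookup over the `modelId` and `embedding` columns. This is a minimal sketch with toy three-dimensional vectors standing in for the dataset's actual embeddings:

```python
import numpy as np

# Toy corpus standing in for the dataset's modelId and embedding columns.
model_ids = ["bert-base-uncased", "gpt2", "openai/clip-vit-large-patch14"]
embeddings = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.0, 0.1, 0.9],
])

def top_k(query: np.ndarray, k: int = 2) -> list[str]:
    """Return the k model ids whose embeddings are most similar to the query."""
    norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query)
    sims = embeddings @ query / norms          # cosine similarity per row
    order = np.argsort(-sims)[:k]              # indices of the k best scores
    return [model_ids[i] for i in order]

print(top_k(np.array([1.0, 0.0, 0.0])))
```

For the full dataset, an approximate nearest-neighbour index (e.g. FAISS) would be a natural replacement for the brute-force scan.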

Out-of-Scope Use

[More Information Needed]

Dataset Structure

This dataset has a single split.

Dataset Creation

Curation Rationale

The dataset was created to assist people in working with model cards. In particular, it was created to support research in the area of model cards and their use. It is also possible to use the Hugging Face Hub API or client library to download model cards directly; that option may be preferable if you have a very specific use case or require a different format.

Source Data

The source data is README.md files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.
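Because each card is a README.md, its YAML frontmatter can be separated from the Markdown body with simple string handling. This is a minimal sketch (the helper name is illustrative, and real-world cards may warrant a full YAML parser such as PyYAML):

```python
def split_model_card(card: str) -> tuple[str, str]:
    """Split a model card README into its YAML frontmatter and Markdown body.

    Returns (frontmatter, body); frontmatter is "" when the card has none.
    """
    if card.startswith("---\n"):
        end = card.find("\n---", 4)  # locate the closing frontmatter fence
        if end != -1:
            return card[4:end], card[end + 4:].lstrip("\n")
    return "", card

card = "---\nlanguage: en\nlicense: mit\n---\n# RoBERTa base model\n..."
meta, body = split_model_card(card)
print(meta)  # language: en\nlicense: mit
print(body)  # # RoBERTa base model\n...
```

The frontmatter string can then be fed to a YAML parser for metadata analysis.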

Data Collection and Processing

The data is downloaded daily using a cron job.

Who are the source data producers?

The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository although this information can be gathered from the Hugging Face Hub API.

Annotations [optional]

There are no additional annotations in this dataset beyond the model card content.

Annotation process

N/A

Who are the annotators?

N/A

Personal and Sensitive Information

We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.

Bias, Risks, and Limitations

Model cards are created by the community, and we do not have any control over their content. We do not review model cards, and we make no claims about the accuracy of the information they contain. Some model cards themselves discuss bias, sometimes by providing examples of bias in the training data or in the responses produced by the model. As a result, this dataset may contain examples of bias.

Whilst we do not directly download any images linked to the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.

Recommendations

Citation

No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.

Dataset Card Authors

@davanstrien

Dataset Card Contact

@davanstrien
