Column schema (30 columns):
  repo_id: string (length 4–110)
  author: string (length 2–27)
  model_type: string (length 2–29)
  files_per_repo: int64 (2–15.4k)
  downloads_30d: int64 (0–19.9M)
  library: string (length 2–37)
  likes: int64 (0–4.34k)
  pipeline: string (length 5–30)
  pytorch: bool (2 classes)
  tensorflow: bool (2 classes)
  jax: bool (2 classes)
  license: string (length 2–30)
  languages: string (length 4–1.63k)
  datasets: string (length 2–2.58k)
  co2: string (29 distinct values)
  prs_count: int64 (0–125)
  prs_open: int64 (0–120)
  prs_merged: int64 (0–15)
  prs_closed: int64 (0–28)
  discussions_count: int64 (0–218)
  discussions_open: int64 (0–148)
  discussions_closed: int64 (0–70)
  tags: string (length 2–513)
  has_model_index: bool (2 classes)
  has_metadata: bool (1 class)
  has_text: bool (1 class)
  text_length: int64 (401–598k)
  is_nc: bool (1 class)
  readme: string (length 0–598k)
  hash: string (fixed length 32)
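The schema above can be mirrored as a record type for validation. The sketch below is illustrative only: `ModelRepo` and `validate` are hypothetical names, the fields are a subset of the 30 columns, and the sample values are copied from the first record in this dump.

```python
from dataclasses import dataclass

# Hypothetical record type mirroring part of the schema above.
@dataclass
class ModelRepo:
    repo_id: str
    author: str
    model_type: str
    files_per_repo: int
    downloads_30d: int
    library: str
    likes: int
    pipeline: str
    pytorch: bool
    tensorflow: bool
    jax: bool
    license: str
    hash: str

    def validate(self) -> None:
        # Constraints taken from the schema block: string-length bounds,
        # integer ranges, and the fixed 32-character hash.
        assert 4 <= len(self.repo_id) <= 110
        assert 2 <= self.files_per_repo <= 15_400
        assert 0 <= self.downloads_30d <= 19_900_000
        assert len(self.hash) == 32

# Values copied from the first record in the dump.
repo = ModelRepo(
    repo_id="nejox/distilbert-base-uncased-distilled-squad-coffee20230108",
    author="nejox",
    model_type="distilbert",
    files_per_repo=12,
    downloads_30d=3,
    library="transformers",
    likes=0,
    pipeline="question-answering",
    pytorch=True,
    tensorflow=False,
    jax=False,
    license="apache-2.0",
    hash="ac50711e4c51d792b26d642b1aa8a847",
)
repo.validate()
print(repo.pipeline)
```

`validate()` raises on any record that falls outside the schema's observed ranges, which is a cheap sanity check when re-ingesting rows like the ones below.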
nejox/distilbert-base-uncased-distilled-squad-coffee20230108
  author: nejox · model_type: distilbert · files_per_repo: 12 · downloads_30d: 3 · library: transformers · likes: 0 · pipeline: question-answering
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,969 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-squad-coffee20230108 This model is a fine-tuned version of [distilbert-base-uncased-distilled-...
  hash: ac50711e4c51d792b26d642b1aa8a847
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_stsb_128
  author: gokuls · model_type: mobilebert · files_per_repo: 17 · downloads_30d: 2 · library: transformers · likes: 0 · pipeline: text-classification
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: ['en'] · datasets: ['glue'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 2,040 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_sa_GLUE_Experiment_logit_kd_stsb_128 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggin...
  hash: d67261525a22e75eab30846a0dbc5531
microsoft/xclip-base-patch16-hmdb-2-shot
  author: microsoft · model_type: xclip · files_per_repo: 10 · downloads_30d: 2 · library: transformers · likes: 0 · pipeline: feature-extraction
  pytorch: true · tensorflow: false · jax: false · license: mit · languages: ['en'] · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['vision', 'video-classification'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 2,425 · is_nc: false
  readme: # X-CLIP (base-sized model) X-CLIP model (base-sized, patch resolution of 16) trained in a few-shot fashion (K=2) on [HMDB-51](https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/). It was introduced in the paper [Expanding Language-Image Pretrained Models for General Video Recognition](http...
  hash: c78f56c7cbd357af76c7855b4177f332
facebook/wmt19-en-ru
  author: facebook · model_type: fsmt · files_per_repo: 9 · downloads_30d: 3,395 · library: transformers · likes: 4 · pipeline: translation
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: ['en', 'ru'] · datasets: ['wmt19'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['translation', 'wmt19', 'facebook'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 3,248 · is_nc: false
  readme: # FSMT ## Model description This is a ported version of [fairseq wmt19 transformer](https://github.com/pytorch/fairseq/blob/master/examples/wmt19/README.md) for en-ru. For more details, please see, [Facebook FAIR's WMT19 News Translation Task Submission](https://arxiv.org/abs/1907.06616). The abbreviation FSMT sta...
  hash: 09fd5ca751e6c96921792d1b942ec023
PeterBanning71/t5-small-finetuned-xsum-finetuned-bioMedv3
  author: PeterBanning71 · model_type: t5 · files_per_repo: 12 · downloads_30d: 8 · library: transformers · likes: 0 · pipeline: summarization
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['summarization', 'generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 2,181 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum-finetuned-bioMedv3 This model is a fine-tuned version of [PeterBanning71/t5-small-finetuned-xsum](https:...
  hash: fd41830000499dbb6d5db2af04fc04e4
yip-i/xls-r-53-copy
  author: yip-i · model_type: wav2vec2 · files_per_repo: 6 · downloads_30d: 1 · library: transformers · likes: 0 · pipeline: null
  pytorch: true · tensorflow: false · jax: true · license: apache-2.0 · languages: ['multilingual'] · datasets: ['common_voice'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['speech'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 2,197 · is_nc: false
  readme: # Wav2Vec2-XLSR-53 [Facebook's XLSR-Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned o...
  hash: fb0df48764b64890ae5c043865e65d6e
google/t5-11b-ssm-wq
  author: google · model_type: t5 · files_per_repo: 9 · downloads_30d: 8 · library: transformers · likes: 1 · pipeline: text2text-generation
  pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: ['en'] · datasets: ['c4', 'wikipedia', 'web_questions'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: [] · has_model_index: false · has_metadata: true · has_text: true · text_length: 2,413 · is_nc: false
  readme: [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) for **Closed Book Question Answering**. The model was pre-trained using T5's denoising objective on [C4](https://huggingface.co/datasets/c4), subsequently additionally pre-trained using [REALM](https://arxiv.org/pdf/2002.08909.p...
  hash: 421f2b02195337d45d10a6dd9600d571
josetapia/hygpt2-clm
  author: josetapia · model_type: gpt2 · files_per_repo: 17 · downloads_30d: 4 · library: transformers · likes: 0 · pipeline: text-generation
  pytorch: true · tensorflow: false · jax: false · license: mit · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 980 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hygpt2-clm This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. ## Model description ...
  hash: 4e288d13e1a2a45f6aa2104c6a908f1d
terzimert/bert-finetuned-ner-v2.2
  author: terzimert · model_type: bert · files_per_repo: 12 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: token-classification
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: ['caner'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,545 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner-v2.2 This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-mu...
  hash: c1a5f866053a6d759a96278f6c27ab14
openclimatefix/nowcasting_cnn_v4
  author: openclimatefix · model_type: null · files_per_repo: 4 · downloads_30d: 0 · library: transformers · likes: 1 · pipeline: null
  pytorch: true · tensorflow: false · jax: false · license: mit · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['nowcasting', 'forecasting', 'timeseries', 'remote-sensing'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 962 · is_nc: false
  readme: # Nowcasting CNN ## Model description 3d conv model, that takes in different data streams architecture is roughly 1. satellite image time series goes into many 3d convolution layers. 2. nwp time series goes into many 3d convolution layers. 3. Final convolutional layer goes to full co...
  hash: 409a984bb15368014d80cc8164fc5303
Thant123/distilbert-base-uncased-finetuned-emotion
  author: Thant123 · model_type: distilbert · files_per_repo: 12 · downloads_30d: 1 · library: transformers · likes: 0 · pipeline: text-classification
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: ['emotion'] · co2: null
  prs (count/open/merged/closed): 1/1/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,343 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co...
  hash: ffacf1d2dcc9b780be66d5ad7b68e5e2
philschmid/roberta-base-squad2-optimized
  author: philschmid · model_type: null · files_per_repo: 15 · downloads_30d: 3 · library: generic · likes: 0 · pipeline: null
  pytorch: false · tensorflow: false · jax: false · license: mit · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['endpoints-template', 'optimum'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 9,622 · is_nc: false
  readme: # Optimized and Quantized [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) with a custom handler.py This repository implements a `custom` handler for `question-answering` for 🤗 Inference Endpoints for accelerated inference using [🤗 Optiumum](https://huggingface.co/docs/optimum/inde...
  hash: 8561f0d74d18810e336a6fc8caf0ae6d
MaggieXM/distilbert-base-uncased-finetuned-squad
  author: MaggieXM · model_type: distilbert · files_per_repo: 20 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: question-answering
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: ['squad'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,109 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/d...
  hash: b71ec1cf30fd6b9f371d478067525884
jonatasgrosman/exp_w2v2t_pt_vp-nl_s6
  author: jonatasgrosman · model_type: wav2vec2 · files_per_repo: 10 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: ['pt'] · datasets: ['mozilla-foundation/common_voice_7_0'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['automatic-speech-recognition', 'pt'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 467 · is_nc: false
  readme: # exp_w2v2t_pt_vp-nl_s6 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your...
  hash: 7a000bd97bc0b74b5287e62948946ec7
hsohn3/cchs-bert-visit-uncased-wordlevel-block512-batch8-ep10
  author: hsohn3 · model_type: bert · files_per_repo: 8 · downloads_30d: 4 · library: transformers · likes: 0 · pipeline: fill-mask
  pytorch: false · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,340 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # hsohn3/cchs-bert-visit-uncased-wordlevel-block512-batch8-ep10 This model is a fine-tuned version of [bert-base-uncased](https://huggin...
  hash: d1d10ad0216333d9b17d1427aae2e8d4
shumail/wav2vec2-base-timit-demo-colab
  author: shumail · model_type: wav2vec2 · files_per_repo: 24 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,341 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wa...
  hash: 21419281cc9c65d8413aab2df9d3ffbe
fveredas/xlm-roberta-base-finetuned-panx-de
  author: fveredas · model_type: xlm-roberta · files_per_repo: 16 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: token-classification
  pytorch: true · tensorflow: false · jax: false · license: mit · languages: null · datasets: ['xtreme'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,320 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-b...
  hash: 91b7a03e208a0ae34eca0e47fccabdb1
kurianbenoy/music_genre_classification_baseline
  author: kurianbenoy · model_type: null · files_per_repo: 4 · downloads_30d: 0 · library: fastai · likes: 1 · pipeline: null
  pytorch: false · tensorflow: false · jax: false · license: mit · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['fastai'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 736 · is_nc: false
  readme: # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([docume...
  hash: f36361bbf3a4111abdeda44875b284bc
mkhairil/distillbert-finetuned-indonlusmsa
  author: mkhairil · model_type: distilbert · files_per_repo: 12 · downloads_30d: 8 · library: transformers · likes: 0 · pipeline: text-classification
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: ['indonlu'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 948 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distillbert-finetuned-indonlusmsa This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilb...
  hash: 42b35bdf153d04de3bdfd39be5fd4cfc
Alred/bart-base-finetuned-summarization-cnn-ver2
  author: Alred · model_type: bart · files_per_repo: 15 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: summarization
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: ['cnn_dailymail'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['summarization', 'generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,176 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-finetuned-summarization-cnn-ver2 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/fac...
  hash: 25269342aaf4d9a62e88d8b1b5ab5e8a
manandey/wav2vec2-large-xlsr-_irish
  author: manandey · model_type: wav2vec2 · files_per_repo: 9 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition
  pytorch: true · tensorflow: false · jax: true · license: apache-2.0 · languages: ['ga'] · datasets: ['common_voice'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week', 'hf-asr-leaderboard'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 3,265 · is_nc: false
  readme: # Wav2Vec2-Large-XLSR-53-Irish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Irish using the [Common Voice](https://huggingface.co/datasets/common_voice) When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used ...
  hash: 5019d14cdddfee6804b9e3be5a44eb38
Helsinki-NLP/opus-mt-it-vi
  author: Helsinki-NLP · model_type: marian · files_per_repo: 11 · downloads_30d: 38 · library: transformers · likes: 0 · pipeline: translation
  pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: ['it', 'vi'] · datasets: null · co2: null
  prs (count/open/merged/closed): 1/1/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 2,001 · is_nc: false
  readme: ### ita-vie * source group: Italian * target group: Vietnamese * OPUS readme: [ita-vie](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ita-vie/README.md) * model: transformer-align * source language(s): ita * target language(s): vie * model: transformer-align * pre-processing: normalization...
  hash: 449a6592d1e9c61ddf102c80ed93f5c6
coreml/coreml-stable-diffusion-v1-5
  author: coreml · model_type: null · files_per_repo: 6 · downloads_30d: 0 · library: null · likes: 5 · pipeline: text-to-image
  pytorch: false · tensorflow: false · jax: false · license: creativeml-openrail-m · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['coreml', 'stable-diffusion', 'text-to-image'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 13,867 · is_nc: false
  readme: # Core ML Converted Model This model was converted to Core ML for use on Apple Silicon devices by following Apple's instructions [here](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).<br> Provide the model to an app such as [Mochi Diffusion](https://github.com/godly-devotion/MochiDiffusio...
  hash: ff6dc79182f70d5525127385a73ba0ee
jonatasgrosman/exp_w2v2t_pl_wavlm_s250
  author: jonatasgrosman · model_type: wavlm · files_per_repo: 10 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: ['pl'] · datasets: ['mozilla-foundation/common_voice_7_0'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['automatic-speech-recognition', 'pl'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 439 · is_nc: false
  readme: # exp_w2v2t_pl_wavlm_s250 Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at ...
  hash: 2ae287b7aad1344a15917389c6575372
Manishkalra/finetuning-movie-sentiment-model-9000-samples
  author: Manishkalra · model_type: distilbert · files_per_repo: 13 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: text-classification
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: ['imdb'] · co2: null
  prs (count/open/merged/closed): 1/1/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,061 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-movie-sentiment-model-9000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingfac...
  hash: 7ef4c70555b92d4568f030df0ffc5331
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_pretrain_rte
  author: gokuls · model_type: mobilebert · files_per_repo: 17 · downloads_30d: 2 · library: transformers · likes: 0 · pipeline: text-classification
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: ['en'] · datasets: ['glue'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,629 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_add_GLUE_Experiment_logit_kd_pretrain_rte This model is a fine-tuned version of [gokuls/mobilebert_add_pre-training-c...
  hash: d0cde83f8c26abb80591ae721ba50e2a
Sahara/finetuning-sentiment-model-3000-samples
  author: Sahara · model_type: distilbert · files_per_repo: 13 · downloads_30d: 12 · library: transformers · likes: 0 · pipeline: text-classification
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: ['imdb'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,055 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/d...
  hash: 900f912994c36c1ea4886ea41a8f8ee4
Nadav/camembert-base-squad-fr
  author: Nadav · model_type: camembert · files_per_repo: 10 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: question-answering
  pytorch: true · tensorflow: false · jax: false · license: mit · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,226 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # camembert-base-squad-fr This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the Non...
  hash: 27076f2df421437249dcc32fb253bc30
jonatasgrosman/exp_w2v2r_fr_vp-100k_gender_male-10_female-0_s626
  author: jonatasgrosman · model_type: wav2vec2 · files_per_repo: 10 · downloads_30d: 3 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: ['fr'] · datasets: ['mozilla-foundation/common_voice_7_0'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['automatic-speech-recognition', 'fr'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 499 · is_nc: false
  readme: # exp_w2v2r_fr_vp-100k_gender_male-10_female-0_s626 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using...
  hash: 64dffcb720500b36cba63de43180e27a
karolill/nb-bert-finetuned-on-norec
  author: karolill · model_type: bert · files_per_repo: 8 · downloads_30d: 4 · library: transformers · likes: 0 · pipeline: text-classification
  pytorch: true · tensorflow: false · jax: false · license: mit · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: [] · has_model_index: false · has_metadata: true · has_text: true · text_length: 630 · is_nc: false
  readme: # NB-BERT fine-tuned on NoReC ## Description This model is based on the pre-trained [NB-BERT-large model](https://huggingface.co/NbAiLab/nb-bert-large?text=P%C3%A5+biblioteket+kan+du+l%C3%A5ne+en+%5BMASK%5D.). It is a model for sentiment analysis. ## Data for fine-tuning This model was fine-tuned on 1000 exemples ...
  hash: db8876a697ac2ee74b1e4f99bfbae95c
lmvasque/readability-es-benchmark-mbert-es-sentences-3class
  author: lmvasque · model_type: bert · files_per_repo: 9 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: text-classification
  pytorch: true · tensorflow: false · jax: false · license: cc-by-4.0 · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: [] · has_model_index: false · has_metadata: true · has_text: true · text_length: 6,041 · is_nc: false
  readme: ## Readability benchmark (ES): mbert-es-sentences-3class This project is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish". You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark). ## Models Our mo...
  hash: febbe796a5f094ef6ad3bf1db2d17a6a
AshishBalhara/distilbert-base-uncased-distilled-clinc
  author: AshishBalhara · model_type: distilbert · files_per_repo: 10 · downloads_30d: 2 · library: transformers · likes: 0 · pipeline: text-classification
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: ['clinc_oos'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,730 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/d...
  hash: 9073c9738e9938c44e81a35e81987bb6
kaejo98/bart-base-question-generation
  author: kaejo98 · model_type: bart · files_per_repo: 11 · downloads_30d: 20 · library: transformers · likes: 0 · pipeline: text2text-generation
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,038 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-question-generation This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-ba...
  hash: 57de9dffeaa30d6dea822e0166a216b0
Gausstein26/wav2vec2-base-50k
  author: Gausstein26 · model_type: wav2vec2 · files_per_repo: 10 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: ['common_voice'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,845 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-50k This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-...
  hash: 39af85ca3a18ca97b7a4395f05670bd2
microsoft/resnet-34
  author: microsoft · model_type: resnet · files_per_repo: 6 · downloads_30d: 689 · library: transformers · likes: 2 · pipeline: image-classification
  pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: null · datasets: ['imagenet-1k'] · co2: null
  prs (count/open/merged/closed): 1/0/1/0 · discussions (count/open/closed): 0/0/0
  tags: ['vision', 'image-classification'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 2,572 · is_nc: false
  readme: # ResNet-34 v1.5 ResNet model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by He et al. Disclaimer: The team releasing ResNet did not write a model card for this model so this model card has been wri...
  hash: 3a6fe139d3e20966c9c19b9645d70dca
torchxrayvision/densenet121-res224-rsna
  author: torchxrayvision · model_type: null · files_per_repo: 4 · downloads_30d: 2 · library: null · likes: 0 · pipeline: image-classification
  pytorch: false · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: ['nih-pc-chex-mimic_ch-google-openi-rsna'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['vision', 'image-classification'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 3,755 · is_nc: false
  readme: # densenet121-res224-rsna A DenseNet is a type of convolutional neural network that utilises dense connections between layers, through Dense Blocks, where we connect all layers (with matching feature-map sizes) directly with each other. To preserve the feed-forward nature, each layer obtains additional input...
  hash: 505e2b3064723731502eeedb68525169
luhua/chinese_pretrain_mrc_macbert_large
  author: luhua · model_type: bert · files_per_repo: 7 · downloads_30d: 960 · library: transformers · likes: 7 · pipeline: question-answering
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: ['zh'] · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: [] · has_model_index: false · has_metadata: true · has_text: true · text_length: 848 · is_nc: false
  readme (translated from Chinese): ## Chinese MRC macbert-large * A macbert-large model trained on a large amount of Chinese MRC data; for details see: https://github.com/basketballandlearn/MRC_Competition_Dureader * The retrained models released in this repository bring large improvements on reading-comprehension/classification tasks<br/> (several users have already achieved **top-5** results in Dureader-2021 and other competitions 😁) | Model/Dataset | Dureader-2021 | tencentmedical | | -----------------------...
  hash: 92981deba62ebeb06ea120c1d0cea854
sudheer997/distilbert-base-uncased-finetuned-emotion
  author: sudheer997 · model_type: distilbert · files_per_repo: 12 · downloads_30d: 4 · library: transformers · likes: 0 · pipeline: text-classification
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: ['emotion'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,344 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co...
  hash: 09e96c41e3c29e83426fd262ed70d129
Yehor/wav2vec2-xls-r-300m-uk-with-lm
  author: Yehor · model_type: wav2vec2 · files_per_repo: 19 · downloads_30d: 9 · library: transformers · likes: 3 · pipeline: automatic-speech-recognition
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: ['uk'] · datasets: ['mozilla-foundation/common_voice_7_0'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer', 'uk'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 2,482 · is_nc: false
  readme: # Ukrainian STT model (with Language Model) 🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk ⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk - Have a look on an updated 300m model: https://huggingface.co/Yehor/wav2vec2-xls-r-300m-uk-with-small-l...
  hash: ddc10d9e0cf5c2d5baf616b10f77be7e
it5/it5-efficient-small-el32-repubblica-to-ilgiornale
  author: it5 · model_type: t5 · files_per_repo: 18 · downloads_30d: 3 · library: transformers · likes: 0 · pipeline: text2text-generation
  pytorch: true · tensorflow: true · jax: true · license: apache-2.0 · languages: ['it'] · datasets: ['gsarti/change_it'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['italian', 'sequence-to-sequence', 'efficient', 'newspaper', 'ilgiornale', 'repubblica', 'style-transfer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 4,264 · is_nc: false
  readme: # IT5 Cased Small Efficient EL32 for News Headline Style Transfer (Repubblica to Il Giornale) 🗞️➡️🗞️ 🇮🇹 *Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!* This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingf...
  hash: 70db3b7ae2d781068dff302cf9b67401
LysandreJik/testing
  author: LysandreJik · model_type: distilbert · files_per_repo: 24 · downloads_30d: 4 · library: transformers · likes: 0 · pipeline: text-classification
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: ['en'] · datasets: ['glue'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,061 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # testing This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the G...
  hash: e5aa2fe722ebd3f2e3beefe57cd8446a
SfinOe/stable-diffusion-v2-1
  author: SfinOe · model_type: null · files_per_repo: 18 · downloads_30d: 15 · library: diffusers · likes: 0 · pipeline: text-to-image
  pytorch: false · tensorflow: false · jax: false · license: openrail++ · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['stable-diffusion', 'text-to-image'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 12,114 · is_nc: false
  readme: # Stable Diffusion v2-1 Model Card This model card focuses on the model associated with the Stable Diffusion v2-1 model, codebase available [here](https://github.com/Stability-AI/stablediffusion). This `stable-diffusion-2-1` model is fine-tuned from [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffu...
  hash: fe9e1cdc45333400917d375824dbe07a
hamidov02/wav2vec2-large-xls-hun-53h-colab
  author: hamidov02 · model_type: wav2vec2 · files_per_repo: 9 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: ['common_voice'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 3,350 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-hun-53h-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/...
  hash: af63d676d2f0b56d1f5c8d55d2bef6d9
Charalampos/whisper-new
  author: Charalampos · model_type: whisper · files_per_repo: 14 · downloads_30d: 2 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: ['el'] · datasets: ['mozilla-foundation/common_voice_11_0'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['whisper-event', 'generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,317 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tiny Greek This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on th...
  hash: 81ff0cb7249db2f1c03fa77c00aba031
PaddyP/distilbert-base-uncased-finetuned-emotion
  author: PaddyP · model_type: distilbert · files_per_repo: 12 · downloads_30d: 1 · library: transformers · likes: 0 · pipeline: text-classification
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,335 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co...
  hash: 273a7d6292504c7de9d52f0f6e59d80c
sd-concepts-library/manga-style
  author: sd-concepts-library · model_type: null · files_per_repo: 13 · downloads_30d: 0 · library: null · likes: 6 · pipeline: null
  pytorch: false · tensorflow: false · jax: false · license: mit · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: [] · has_model_index: false · has_metadata: true · has_text: true · text_length: 1,424 · is_nc: false
  readme: ### Manga style on Stable Diffusion This is the `<manga>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also ...
  hash: b9cfe7378f4a2aebab4af1e914f23ef6
google/multiberts-seed_2-step_1000k
  author: google · model_type: bert · files_per_repo: 8 · downloads_30d: 33 · library: transformers · likes: 0 · pipeline: null
  pytorch: true · tensorflow: true · jax: false · license: apache-2.0 · languages: ['en'] · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['multiberts', 'multiberts-seed_2', 'multiberts-seed_2-step_1000k'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 3,527 · is_nc: false
  readme: # MultiBERTs, Intermediate Checkpoint - Seed 2, Step 1000k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with differe...
  hash: eda5f85611a7e6697f8395dc9df69d3f
milyiyo/multi-minilm-finetuned-amazon-review
  author: milyiyo · model_type: bert · files_per_repo: 35 · downloads_30d: 3 · library: transformers · likes: 0 · pipeline: text-classification
  pytorch: true · tensorflow: false · jax: false · license: mit · languages: null · datasets: ['amazon_reviews_multi'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,826 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multi-minilm-finetuned-amazon-review This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://hugg...
  hash: bbea158b23d3bd1f242a66420d2e24b5
thomas0104/whisper_large_v2_zh_tw
  author: thomas0104 · model_type: whisper · files_per_repo: 31 · downloads_30d: 10 · library: transformers · likes: 1 · pipeline: automatic-speech-recognition
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: ['zh'] · datasets: ['mozilla-foundation/common_voice_11_0'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['whisper-event', 'generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,769 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper large-v2 zh-tw This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-lar...
  hash: 98f37609420e3ef0611174e3c40e0038
Helsinki-NLP/opus-mt-tc-big-itc-eu
  author: Helsinki-NLP · model_type: marian · files_per_repo: 13 · downloads_30d: 4 · library: transformers · likes: 0 · pipeline: translation
  pytorch: true · tensorflow: true · jax: false · license: cc-by-4.0 · languages: ['es', 'eu'] · datasets: null · co2: null
  prs (count/open/merged/closed): 1/1/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['translation', 'opus-mt-tc'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 7,012 · is_nc: false
  readme: # opus-mt-tc-big-itc-eu ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Training](#training) - [Evaluation](#evaluation) - [Citation Information](#citatio...
  hash: 1ed70a370ef245ed1976a60c852add9a
scasutt/wav2vec2-large-xlsr-53_toy_train_data_fast_10pct
  author: scasutt · model_type: wav2vec2 · files_per_repo: 7 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 2,418 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53_toy_train_data_fast_10pct This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https:/...
  hash: ad6cf658f624b957a20d1f76b1637d68
kyonimouto/hoyu-ai
  author: kyonimouto · model_type: null · files_per_repo: 9 · downloads_30d: 0 · library: null · likes: 0 · pipeline: null
  pytorch: false · tensorflow: false · jax: false · license: other · languages: ['ja'] · datasets: ['hoyu256'] · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['PyTorch'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 801 · is_nc: false
  readme (translated from Japanese): Built using code called Diffusion GAN: https://github.com/Zhendong-Wang/Diffusion-GAN How to use (untested, so apologies if it doesn't run): - Set up the environment - Installing Linux on a machine with a recent NVIDIA GPU is recommended - Install PyTorch with CUDA support - https://pytorch.org/get-started/locally/ - conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytor...
  hash: 5820d86ecd93456d7e8e645e11ad9c1b
fathyshalab/all-roberta-large-v1-meta-1-16-5
  author: fathyshalab · model_type: roberta · files_per_repo: 11 · downloads_30d: 3 · library: transformers · likes: 0 · pipeline: text-classification
  pytorch: true · tensorflow: false · jax: false · license: apache-2.0 · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0
  tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,507 · is_nc: false
  readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-meta-1-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://hugg...
  hash: 00bb1723ac003e09ff759f2718cdffd8
sd-dreambooth-library/quino
  author: sd-dreambooth-library · model_type: null · files_per_repo: 65 · downloads_30d: 62 · library: diffusers · likes: 7 · pipeline: text-to-image
  pytorch: false · tensorflow: false · jax: false · license: creativeml-openrail-m · languages: null · datasets: null · co2: null
  prs (count/open/merged/closed): 1/1/0/0 · discussions (count/open/closed): 2/2/0
  tags: ['text-to-image'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 6,144 · is_nc: false
  readme: ### quino Dreambooth model trained by machinelearnear with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/note...
  hash: f7376fefdb0e0fbc960890f4996a147f
T-Systems-onsite/cross-en-de-roberta-sentence-transformer
  author: T-Systems-onsite · model_type: xlm-roberta · files_per_repo: 10 · downloads_30d: 106,465 · library: transformers · likes: 14 · pipeline: feature-extraction
  pytorch: true · tensorflow: true · jax: false · license: mit · languages: ['de', 'en', 'multilingual'] · datasets: ['stsb_multi_mt'] · co2: null
  prs (count/open/merged/closed): 2/0/2/0 · discussions (count/open/closed): 1/1/0
  tags: ['sentence_embedding', 'search', 'pytorch', 'xlm-roberta', 'roberta', 'xlm-r-distilroberta-base-paraphrase-v1', 'paraphrase'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 7,627 · is_nc: false
  readme: # Cross English & German RoBERTa for Sentence Embeddings This model is intended to [compute sentence (text) embeddings](https://www.sbert.net/examples/applications/computing-embeddings/README.html) for English and German text. These embeddings can then be compared with [cosine-similarity](https://en.wikipedia.org/wiki...
  hash: 39507c8ae82169a34201c6131c064719
gokuls/distilbert_add_GLUE_Experiment_mrpc
gokuls
distilbert
17
4
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,301
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_add_GLUE_Experiment_mrpc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/disti...
9a7f7f884ae4664ffba19c3a471bd26f
jonatasgrosman/exp_w2v2t_fr_xls-r_s250
jonatasgrosman
wav2vec2
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['fr']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'fr']
false
true
true
453
false
# exp_w2v2t_fr_xls-r_s250 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input ...
fd90a5b962e7ad7a21c5907fb71b16bd
tbosse/bert-base-german-cased-noisy-pretrain-fine-tuned_v1.2
tbosse
bert
13
5
transformers
0
token-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,040
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-german-cased-noisy-pretrain-fine-tuned_v1.2 This model is a fine-tuned version of [tbosse/bert-base-german-cased-finet...
12a2266c62d438271e4b1db92134a6a7
slplab/wav2vec2-large-xlsr-53-korean-nia13-asia-9634_001
slplab
wav2vec2
11
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,215
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-53-korean-samsung-60k This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggin...
aa6ec52905440e0969c9a8fafc6cf76e
ZinebSN/whisper-small-swedish-Test-3000
ZinebSN
whisper
41
6
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['sv']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['hf-asr-leaderboard', 'generated_from_trainer']
true
true
true
1,422
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Swedish -3000 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-s...
fa8176c8386386acd76dd2a3c9c6097c
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_qqp_256
gokuls
mobilebert
17
3
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,201
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_add_GLUE_Experiment_logit_kd_qqp_256 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggin...
8466e0635a43675a0b34bc6de87930da
tkubotake/xlm-roberta-base-finetuned-panx-fr
tkubotake
xlm-roberta
9
8
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,375
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [tkubotake/xlm-roberta-base-finetuned-panx-de](https://...
0ceb5eeffb629c282138058fc91c160c
andreduarte/distilbert-base-uncased-finetuned-cola
andreduarte
distilbert
13
3
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,571
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di...
816c3a137dbc59a8a3c0d2e72b4ccb58
google/multiberts-seed_3-step_20k
google
bert
8
14
transformers
0
null
true
true
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_20k']
false
true
true
3,515
false
# MultiBERTs, Intermediate Checkpoint - Seed 3, Step 20k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different...
5054951bf7630834ee3b7576a7d32102
Zekunli/flan-t5-large-extraction-cnndm_fs0.1-all
Zekunli
t5
10
10
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,397
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-large-extraction-cnndm_fs0.1-all This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/goo...
bbfcc5bbc997d13f85a7be0129ab2efa
praf-choub/bart-CaPE-xsum
praf-choub
bart
9
5
transformers
0
summarization
true
false
false
bsd-3-clause
['en']
['xsum']
null
0
0
0
0
0
0
0
['summarization']
false
true
true
630
false
Citation ``` @misc{https://doi.org/10.48550/arxiv.2110.07166, doi = {10.48550/ARXIV.2110.07166}, url = {https://arxiv.org/abs/2110.07166}, author = {Choubey, Prafulla Kumar and Fabbri, Alexander R. and Vig, Jesse and Wu, Chien-Sheng and Liu, Wenhao and Rajani, Nazneen Fatema}, keywords = {Computation and Langu...
2468d26b38f2b16cbf690b197616995b
DrishtiSharma/wav2vec2-large-xls-r-300m-as-g1
DrishtiSharma
wav2vec2
12
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['as']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'as', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
true
true
true
3,861
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-as-g1 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/face...
06836b810af39f2736d85422bdc5412c
adiharush/tu-nlpweb-w22-g18-e6
adiharush
distilbert
8
16
transformers
0
question-answering
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
920
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # result This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unk...
c4d6bc071fd5b40401576730a87dbdee
PaddlePaddle/uie-medium
PaddlePaddle
ernie
7
0
paddlenlp
0
null
false
false
false
apache-2.0
['zh']
null
null
0
0
0
0
0
0
0
[]
false
true
true
4,353
false
[![paddlenlp-banner](https://user-images.githubusercontent.com/1371212/175816733-8ec25eb0-9af3-4380-9218-27c154518258.png)](https://github.com/PaddlePaddle/PaddleNLP) # PaddlePaddle/uie-medium Information extraction suffers from its varying targets, heterogeneous structures, and demand-specific schemas. The unified ...
23016a311de271c35d1cf8e0e7c41f1a
Salesforce/blip2-flan-t5-xl-coco
Salesforce
blip-2
11
7
transformers
1
image-to-text
true
false
false
mit
['en']
null
null
0
0
0
0
0
0
0
['vision', 'image-to-text', 'image-captioning', 'visual-question-answering']
false
true
true
2,029
false
# BLIP-2, Flan T5-xl, fine-tuned on COCO BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) b...
16fd192d2c19228210efc52bbf85be93
TestZee/t5-small-finetuned-xum-test
TestZee
t5
7
3
transformers
0
text2text-generation
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,169
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # TestZee/t5-small-finetuned-xum-test This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown da...
bc3616ed76b830d500dcfc4b1075ec34
sd-concepts-library/sherhook-painting-v2
sd-concepts-library
null
14
0
null
3
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,648
false
### Sherhook Painting v2 on Stable Diffusion This is the `<sherhook>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. Y...
cf40b9acce2f8d693d45b053c3e2bc82
EIStakovskii/french_toxicity_classifier_plus
EIStakovskii
camembert
8
6
transformers
0
text-classification
true
false
false
other
['fr']
null
null
0
0
0
0
0
0
0
[]
false
true
true
940
false
This model was trained for toxicity labeling. Label_1 means TOXIC, Label_0 means NOT TOXIC The model was fine-tuned from [the CamemBERT language model](https://huggingface.co/camembert-base). The accuracy is 93% on the test split during training and 79% on a manually picked (and thus harder) sample of 200 senten...
b6b0ef02d30aa35570f0dceb32f4b53d
Najeen/bert-finetuned-ner
Najeen
bert
16
13
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,518
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2...
8621ab6ec6418bf39fed49c8333196f3
yingqin/wav2vec2-base-timit-eng
yingqin
wav2vec2
11
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,984
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-eng This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-...
eab774a0e012c13bad9929acc0880242
itzo/bert-base-uncased-fine-tuned-on-clinc_oos-dataset
itzo
bert
14
0
transformers
0
text-classification
true
false
false
apache-2.0
null
['clinc_oos']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,623
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-fine-tuned-on-clinc_oos-dataset This model is a fine-tuned version of [bert-base-uncased](https://huggingface....
ac01da7b3425e866fb056ed1a1333feb
jonatasgrosman/exp_w2v2r_de_xls-r_accent_germany-10_austria-0_s728
jonatasgrosman
wav2vec2
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['de']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'de']
false
true
true
481
false
# exp_w2v2r_de_xls-r_accent_germany-10_austria-0_s728 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make ...
aeaedfa8904f849d3fd51e5afd8c2ca9
Froddan/frost
Froddan
null
12
0
null
3
text-to-image
false
false
false
cc0-1.0
['en']
null
null
0
0
0
0
0
0
0
['stable-diffusion', 'text-to-image']
false
true
true
1,425
false
# Stable Diffusion fine tuned on photographs of frozen nature ### Usage Use by adding the keyword "frostography" to the prompt. The model was trained with the "nature" classname, which can also be added to the prompt. ## Samples I hope it gives you an idea of what kind of styles can be created with this model. <img...
b20d8a7549f6f3dcb45240890a11804d
surfingdoggo/ddpm-butterflies-128
surfingdoggo
null
13
0
diffusers
0
null
false
false
false
apache-2.0
['en']
['huggan/smithsonian_butterflies_subset']
null
0
0
0
0
0
0
0
[]
false
true
true
1,234
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/hu...
2fc542ff3b4b4e735376953a7950d023
MultiBertGunjanPatrick/multiberts-seed-4-400k
MultiBertGunjanPatrick
bert
7
4
transformers
0
null
true
false
false
apache-2.0
['en']
['bookcorpus', 'wikipedia']
null
0
0
0
0
0
0
0
['exbert', 'multiberts', 'multiberts-seed-4']
false
true
true
6,483
false
# MultiBERTs Seed 4 Checkpoint 400k (uncased) Seed 4 intermediate checkpoint 400k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/go...
f4fc4d69fb7b99c4f447db75fa1586f2
sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style
sd-concepts-library
null
17
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,723
false
### Isabell Schulte - PVIII - 12tiles - 3000steps - Style on Stable Diffusion This is the `<isabell-schulte-p8-style-12tiles-3000s>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/ma...
e67a4e0608960c550db5dfa918350859
Celal11/resnet-50-finetuned-FER2013-0.003-CKPlus
Celal11
resnet
9
9
transformers
0
image-classification
true
false
false
apache-2.0
null
['image_folder']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,424
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet-50-finetuned-FER2013-0.003-CKPlus This model is a fine-tuned version of [Celal11/resnet-50-finetuned-FER2013-0.003](https...
258f8c517ed1cc09d06c87b9abe4f706
nvia/distilbert-base-uncased-finetuned-cola
nvia
distilbert
13
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,571
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di...
bc8ac23fd5c6facf328aed17a638f8e1
cahya/gpt2-small-indonesian-522M
cahya
gpt2
10
195
transformers
3
text-generation
true
true
true
mit
['id']
['Indonesian Wikipedia']
null
0
0
0
0
0
0
0
[]
false
true
true
3,000
false
# Indonesian GPT2 small model ## Model description It is a GPT2-small model pre-trained on Indonesian Wikipedia using a causal language modeling (CLM) objective. This model is uncased: it does not make a difference between indonesia and Indonesia. This is one of several other language models that have been pre-tra...
be8f290cf1935e1ff95a2058d3a46791
rmihaylov/gpt2-small-theseus-bg
rmihaylov
gpt2
10
6
transformers
0
text-generation
true
false
false
mit
['bg']
['oscar', 'chitanka', 'wikipedia']
null
0
0
0
0
0
0
0
['torch']
false
true
true
2,748
false
# GPT-2 Pretrained model on Bulgarian language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-langua...
d5d2d5d2d97953e6a68ef4d84c6f1ced
askainet/bart_lfqa
askainet
bart
8
259
transformers
1
text2text-generation
true
false
false
mit
['en']
['vblagoje/lfqa', 'vblagoje/lfqa_support_docs']
null
0
0
0
0
0
0
0
[]
false
true
true
3,175
false
## Introduction See [blog post](https://towardsdatascience.com/long-form-qa-beyond-eli5-an-updated-dataset-and-approach-319cb841aabb) for more details. ## Usage ```python import torch from transformers import AutoTokenizer, AutoModel, AutoModelForSeq2SeqLM model_name = "vblagoje/bart_lfqa" device = torch.device('cu...
5300d763d13a5da45266d46acf0e6fad
Helsinki-NLP/opus-mt-tiv-fr
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
false
### opus-mt-tiv-fr * source languages: tiv * target languages: fr * OPUS readme: [tiv-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tiv-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](...
7b67510791791498937cfc16d663c61f
chintagunta85/test_ner3
chintagunta85
distilbert
12
5
transformers
0
token-classification
true
false
false
apache-2.0
null
['pv_dataset']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,115
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_ner3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the...
8a0cc7b56d4a0f40c904b187353855c0
MeshalAlamr/wav2vec2-xls-r-300m-ar-9
MeshalAlamr
wav2vec2
11
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,848
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-ar-9 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wa...
a6f363e939775d062d96c1642c6d9774
kevinbror/bertbaseuncasedny
kevinbror
bert
4
5
transformers
0
question-answering
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
2,332
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # bertbaseuncasedny This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown da...
c15bb7d6eec15ca4812bb1e404ab0af5
YSKartal/bert-base-turkish-cased-turkish_offensive_trained_model
YSKartal
bert
10
3
transformers
1
text-classification
false
true
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,633
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # YSKartal/bert-base-turkish-cased-turkish_offensive_trained_model This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased]...
91d9eacdbd242cca4314a93c35532887
vijayv500/DialoGPT-small-Big-Bang-Theory-Series-Transcripts
vijayv500
gpt2
8
5
transformers
0
conversational
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['conversational']
false
true
true
1,376
false
## I fine-tuned the DialoGPT-small model on "The Big Bang Theory" TV series dataset from Kaggle (https://www.kaggle.com/mitramir5/the-big-bang-theory-series-transcript) ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("vijayv500/DialoGPT-small...
d240c7a766b3932490dcabd61a635eb1
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_mrpc
gokuls
mobilebert
17
3
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['glue']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,362
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_sa_GLUE_Experiment_logit_kd_mrpc This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingfac...
57cffc13464954ffb0d98f2e3dca23b1
NhatPham/wav2vec2-base-finetuned-ks
NhatPham
wav2vec2
10
7
transformers
0
audio-classification
true
false
false
apache-2.0
null
['superb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,559
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-ks This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2ve...
6e1f259c065268bc3b6931280804e637
igorcadelima/distilbert-base-uncased-finetuned-emotion
igorcadelima
distilbert
12
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['emotion']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,338
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co...
852a2b2c1ba227bf1245d6203986ed9c
hsohn3/ehr-bert-base-uncased-cchs-wordlevel
hsohn3
bert
8
2
transformers
1
fill-mask
false
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_keras_callback']
true
true
true
1,544
false
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # hsohn3/ehr-bert-base-uncased-cchs-wordlevel This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base...
8a9301a30bbdf63cfb3e69f4b2fa51e9
emilios/whisper-medium-el-n3
emilios
whisper
101
25
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['el']
['mozilla-foundation/common_voice_11_0']
null
0
0
0
0
0
0
0
['whisper-event', 'generated_from_trainer']
true
true
true
1,983
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper medium Greek El Greco This model is a fine-tuned version of [emilios/whisper-medium-el-n2](https://huggingface.co/emilio...
fdba85ecbbebd93a2ae4b94d1eeaa4f2
nlpie/bio-miniALBERT-128
nlpie
bert
8
3
transformers
0
fill-mask
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,117
false
# Model miniALBERT is a recursive transformer model which uses cross-layer parameter sharing, embedding factorisation, and bottleneck adapters to achieve high parameter efficiency. Since miniALBERT is a compact model, it is trained using a layer-to-layer distillation technique, using the BioBERT-v1.1 model as the teac...
196599766a4fa28ee1ed67e75b376edc
Lucapro/test-model
Lucapro
t5
13
8
transformers
0
text2text-generation
true
false
false
apache-2.0
['en', 'ro']
['wmt16']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,017
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tst-translation This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 ro-en dataset. It...
dd23cd43c9fcd35b06757c9be3491225
google/multiberts-seed_0-step_1700k
google
bert
8
22
transformers
0
null
true
true
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_1700k']
false
true
true
3,527
false
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1700k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with differe...
0f1f2445c07c2f221b49efc529d7efd5