Columns (name, dtype, observed range or class count):

| column | dtype | range / classes |
|---|---|---|
| repo_id | stringlengths | 4–110 |
| author | stringlengths | 2–27 |
| model_type | stringlengths | 2–29 |
| files_per_repo | int64 | 2–15.4k |
| downloads_30d | int64 | 0–19.9M |
| library | stringlengths | 2–37 |
| likes | int64 | 0–4.34k |
| pipeline | stringlengths | 5–30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | stringlengths | 2–30 |
| languages | stringlengths | 4–1.63k |
| datasets | stringlengths | 2–2.58k |
| co2 | stringclasses | 29 values |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–15 |
| prs_closed | int64 | 0–28 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | stringlengths | 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401–598k |
| is_nc | bool | 1 class |
| readme | stringlengths | 0–598k |
| hash | stringlengths | 32–32 |
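The schema above describes one record per model repository. A minimal sketch of querying such records, assuming each row is represented as a plain dict keyed by the column names (the two sample rows and their values are copied from the data below; only a subset of the 30 columns is shown for brevity):

```python
# Two records transcribed from the dump, using a subset of the schema's columns.
rows = [
    {"repo_id": "nateraw/vit-base-patch16-224-cifar10", "author": "nateraw",
     "model_type": "vit", "downloads_30d": 300, "library": "transformers",
     "pipeline": "image-classification", "pytorch": True, "license": "apache-2.0"},
    {"repo_id": "kha-white/manga-ocr-base", "author": "kha-white",
     "model_type": "vision-encoder-decoder", "downloads_30d": 35_462,
     "library": "transformers", "pipeline": "image-to-text", "pytorch": True,
     "license": "apache-2.0"},
]

# Example query: repos with more than 1,000 downloads in the last 30 days.
popular = [r["repo_id"] for r in rows if r["downloads_30d"] > 1000]
print(popular)
```

The same filter extends to any of the schema's numeric or boolean columns (e.g. `likes`, `has_metadata`) once the full dump is parsed into dicts.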
- `Qalam/Lei` — author: Qalam · model_type: null · files_per_repo: 2 · downloads_30d: 0 · library: null · likes: 0 · pipeline: text-to-image · pytorch/tensorflow/jax: false/false/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: [] · has_model_index: false · has_metadata: true · has_text: true · text_length: 28,989 · is_nc: false
  - readme: <p align="center"> <br> <img src="./docs/source/en/imgs/diffusers_library.jpg" width="400"/> <br> <p> <p align="center"> <a href="https://github.com/huggingface/diffusers/blob/main/LICENSE"> <img alt="GitHub" src="https://img.shields.io/github/license/huggingface/datasets.svg?color=blue"> </...
  - hash: 5649976f381a19c93af23495becb8bf5
- `nateraw/vit-base-patch16-224-cifar10` — author: nateraw · model_type: vit · files_per_repo: 5 · downloads_30d: 300 · library: transformers · likes: 4 · pipeline: image-classification · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: ['cifar10'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['image-classification', 'vision', 'pytorch'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 2,211 · is_nc: false
  - readme: # Vision Transformer Fine Tuned on CIFAR10 Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) and **fine-tuned on CIFAR10** at resolution 224x224. Check out the code at my [my Github repo](https://github.com/nateraw/huggingface-vit-finetune). ## Usage ```python from tran...
  - hash: a7720f05c366487247c0c8ddec5f5f70
- `jeapaul/languagemodel` — author: jeapaul · model_type: wav2vec2 · files_per_repo: 13 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,806 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # languagemodel This model is a fine-tuned version of [monideep2255/XLRS-torgo](https://huggingface.co/monideep2255/XLRS-torgo) on...
  - hash: 04bbbdb7edbd59c5b2c31d25803acb7f
- `Helsinki-NLP/opus-mt-de-efi` — author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 9 · library: transformers · likes: 0 · pipeline: translation · pytorch/tensorflow/jax: true/true/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776 · is_nc: false
  - readme: ### opus-mt-de-efi * source languages: de * target languages: efi * OPUS readme: [de-efi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-efi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](...
  - hash: 78c1aa5620eb159180800cab78b7e81e
- `Cwhgn/DAMO-YOLO-T` — author: Cwhgn · model_type: null · files_per_repo: 5 · downloads_30d: 0 · library: null · likes: 1 · pipeline: null · pytorch/tensorflow/jax: false/false/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: [] · has_model_index: false · has_metadata: true · has_text: true · text_length: 4,105 · is_nc: false
  - readme: ## Model Description This **DAMO-YOLO-T** model is a tiny-size object detection model with fast inference speed and high accuracy, trained by **DAMO-YOLO**. DAMO-YOLO is a fast and accurate object detection method, which is developed by TinyML Team from Alibaba DAMO Data Analytics and Intelligence Lab. And it achieve...
  - hash: 2b6545482d3b485a60db785e800a5f36
- `espnet/realzza-meld-asr-hubert-transformer` — author: espnet · model_type: null · files_per_repo: 21 · downloads_30d: 0 · library: espnet · likes: 0 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: false/false/false · license: cc-by-4.0 · languages: ['en'] · datasets: ['meld'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['espnet', 'audio', 'automatic-speech-recognition', 'spoken-language-understanding'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 1,636 · is_nc: false
  - readme: # ESPnet2: Meld Recipe ## Demo: How to use in ESPnet2 ```bash cd espnet pip install -e . cd egs2/meld/asr1/ ./run.sh ``` ## Environments - date: `Thu Nov 10 09:07:40 EST 2022` - python version: `3.8.6 (default, Dec 17 2020, 16:57:01) [GCC 10.2.0]` - espnet version: `espnet 202207` - pytorch version: `pytorch 1.8.1+c...
  - hash: 7519deaf7b47d3610deb6a523d6f610e
- `gorkemgoknar/gpt2-small-turkish` — author: gorkemgoknar · model_type: gpt2 · files_per_repo: 9 · downloads_30d: 160 · library: transformers · likes: 4 · pipeline: text-generation · pytorch/tensorflow/jax: true/false/true · license: apache-2.0 · languages: ['tr'] · datasets: ['wikipedia-turkish'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['gpt2', 'turkish'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 3,479 · is_nc: false
  - readme: # Turkish GPT2 Model Finetuned # Türkçe GPT2 Modeli ## Model description This is a GPT2-Small English based model finetuned and additionaly trainied with Wikipedia Articles in Turkish as of 28-10-2020 Live demo based on this work at : https://www.metayazar.com/ Fine tuned writer on this model: https://huggingface...
  - hash: 45f1507b46de4efe36497523568a73a3
- `davanstrien/distilbert-base-cased_fine_tuned_food_ner` — author: davanstrien · model_type: distilbert · files_per_repo: 12 · downloads_30d: 12 · library: transformers · likes: 0 · pipeline: token-classification · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 5,875 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-cased_fine_tuned_food_ner This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/d...
  - hash: dfc415a57928faeff60711e7d211362a
- `yip-i/wav2vec2-demo-F04-2` — author: yip-i · model_type: wav2vec2 · files_per_repo: 10 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 3,203 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-demo-F04-2 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2...
  - hash: 9611873b854e5b846fc5f901066a2684
- `rajistics/informal_formal_style_transfer` — author: rajistics · model_type: t5 · files_per_repo: 10 · downloads_30d: 4 · library: transformers · likes: 2 · pipeline: text2text-generation · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: ['en'] · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: [] · has_model_index: false · has_metadata: true · has_text: true · text_length: 1,495 · is_nc: false
  - readme: ## Source A Neural Language Style Transfer framework to transfer natural language text smoothly between fine-grained language styles like formal/casual. The original model is at [https://github.com/PrithivirajDamodaran/Styleformer](https://github.com/PrithivirajDamodaran/Styleformer). ![Style](Styleformer.png) ##...
  - hash: 3e92178b50846e4c0e85b6bddc271780
- `imvladikon/wav2vec2-xls-r-300m-lm-hebrew` — author: imvladikon · model_type: wav2vec2 · files_per_repo: 16 · downloads_30d: 12 · library: transformers · likes: 1 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 1/1/0 · tags: ['generated_from_trainer', 'he', 'robust-speech-event'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,048 · is_nc: false
  - readme: # wav2vec2-xls-r-300m-lm-hebrew This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset with adding ngram models according to [Boosting Wav2Vec2 with n-grams in 🤗 Transformers](https://huggingface.co/blog/wav2vec2-with-ngram) ## ...
  - hash: dd0b6d26ec6bd6985c2566c9b1b831b5
- `TencentMedicalNet/MedicalNet-Resnet10` — author: TencentMedicalNet · model_type: null · files_per_repo: 5 · downloads_30d: 0 · library: null · likes: 2 · pipeline: null · pytorch/tensorflow/jax: false/false/false · license: mit · languages: ['en'] · datasets: ['MRBrainS18'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['MedicalNet', 'medical images', 'medical', '3D', 'Med3D'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 1,531 · is_nc: false
  - readme: # MedicalNet This repository contains a Pytorch implementation of [Med3D: Transfer Learning for 3D Medical Image Analysis](https://arxiv.org/abs/1904.00625). Many studies have shown that the performance on deep learning is significantly affected by volume of training data. The MedicalNet project aggregated the dataset...
  - hash: 3b78cd30983091b59fd000537cc9ab87
- `danieleV9H/hubert-base-libri-clean-ft100h` — author: danieleV9H · model_type: hubert · files_per_repo: 12 · downloads_30d: 4 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: ['librispeech_asr'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 3,400 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hubert-base-libri-clean-ft100h This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/faceboo...
  - hash: 8324194e16045b7cc5cddb2ba388c513
- `DrishtiSharma/whisper-large-v2-hungarian-400-steps` — author: DrishtiSharma · model_type: whisper · files_per_repo: 15 · downloads_30d: 3 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: ['hu'] · datasets: ['mozilla-foundation/common_voice_11_0'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['whisper-event', 'generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,312 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large Nepali - Drishti Sharma This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai...
  - hash: 13caa302125caf15f8975418c5c656a6
- `paola-md/recipe-lr1e05-wd0.01-bs16` — author: paola-md · model_type: roberta · files_per_repo: 6 · downloads_30d: 1 · library: transformers · likes: 0 · pipeline: text-classification · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,467 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # recipe-lr1e05-wd0.01-bs16 This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-...
  - hash: 658cadc2476f5c2ef3581b45b0ea7834
- `sentence-transformers/bert-base-nli-max-tokens` — author: sentence-transformers · model_type: bert · files_per_repo: 15 · downloads_30d: 310 · library: sentence-transformers · likes: 0 · pipeline: sentence-similarity · pytorch/tensorflow/jax: true/true/true · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 3,816 · is_nc: false
  - readme: **⚠️ This model is deprecated. Please don't use it as it produces sentence embeddings of low quality. You can find recommended sentence embedding models here: [SBERT.net - Pretrained Models](https://www.sbert.net/docs/pretrained_models.html)** # sentence-transformers/bert-base-nli-max-tokens This is a [sentence-tran...
  - hash: 01424900dc45c408817091f060f291da
- `kha-white/manga-ocr-base` — author: kha-white · model_type: vision-encoder-decoder · files_per_repo: 8 · downloads_30d: 35,462 · library: transformers · likes: 18 · pipeline: image-to-text · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: ['ja'] · datasets: ['manga109s'] · co2: null · PRs (count/open/merged/closed): 1/0/1/0 · discussions (count/open/closed): 0/0/0 · tags: ['image-to-text'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 620 · is_nc: false
  - readme: # Manga OCR Optical character recognition for Japanese text, with the main focus being Japanese manga. It uses [Vision Encoder Decoder](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder) framework. Manga OCR can be used as a general purpose printed Japanese OCR, but its main goal was to prov...
  - hash: 01ad2a2f436ea34209d9527bd1aa6468
- `xliu128/xlm-roberta-base-finetuned-panx-de` — author: xliu128 · model_type: xlm-roberta · files_per_repo: 12 · downloads_30d: 6 · library: transformers · likes: 0 · pipeline: token-classification · pytorch/tensorflow/jax: true/false/false · license: mit · languages: null · datasets: ['xtreme'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,320 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-b...
  - hash: e8ba46ccc2397d2774a76da7f86d30d6
- `wdcqc/starcraft-platform-terrain-32x32` — author: wdcqc · model_type: null · files_per_repo: 17 · downloads_30d: 21 · library: diffusers · likes: 8 · pipeline: other · pytorch/tensorflow/jax: true/false/false · license: creativeml-openrail-m · languages: null · datasets: ['wdcqc/starcraft-remastered-melee-maps'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'landscape'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 3,157 · is_nc: false
  - readme: # DreamBooth model for Starcraft:Remastered terrain This is a Stable Diffusion model fine-tuned on Starcraft terrain images on the Space Platform tileset with DreamBooth. It can be used by adding the `instance_prompt`: **isometric scspace terrain** It was trained on 32x32 terrain images from 265 melee maps including...
  - hash: 14350a45f4811851417304f551104815
- `jperezv/bert-finetuned-ner` — author: jperezv · model_type: bert · files_per_repo: 12 · downloads_30d: 3 · library: transformers · likes: 0 · pipeline: token-classification · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: ['conll2003'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,518 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2...
  - hash: 113e53e09798f02595c30622bb91e235
- `yanaiela/roberta-base-epoch_69` — author: yanaiela · model_type: roberta · files_per_repo: 9 · downloads_30d: 2 · library: transformers · likes: 0 · pipeline: fill-mask · pytorch/tensorflow/jax: true/false/false · license: mit · languages: ['en'] · datasets: ['wikipedia', 'bookcorpus'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['roberta-base', 'roberta-base-epoch_69'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 2,102 · is_nc: false
  - readme: # RoBERTa, Intermediate Checkpoint - Epoch 69 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randoml...
  - hash: 4a0fe2a00a3cc71cbc560697dc698607
- `projecte-aina/mt-aina-en-ca` — author: projecte-aina · model_type: null · files_per_repo: 5 · downloads_30d: 0 · library: null · likes: 0 · pipeline: null · pytorch/tensorflow/jax: false/false/false · license: cc-by-4.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: [] · has_model_index: false · has_metadata: true · has_text: true · text_length: 8,803 · is_nc: false
  - readme: ## Aina Project's English-Catalan machine translation model ## Table of Contents - [Model Description](#model-description) - [Intended Uses and Limitations](#intended-use) - [How to Use](#how-to-use) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Data ...
  - hash: c00058da64d3154b5ae406178924eaca
- `Helsinki-NLP/opus-mt-sv-mos` — author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 9 · library: transformers · likes: 0 · pipeline: translation · pytorch/tensorflow/jax: true/true/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 776 · is_nc: false
  - readme: ### opus-mt-sv-mos * source languages: sv * target languages: mos * OPUS readme: [sv-mos](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-mos/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](...
  - hash: 43b2b1761968671d22f82fab09dd2ed5
- `joey234/whisper-small-vi` — author: joey234 · model_type: whisper · files_per_repo: 55 · downloads_30d: 5 · library: transformers · likes: 1 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: ['vi'] · datasets: ['mozilla-foundation/common_voice_11_0'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['whisper-event', 'generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,552 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Vietnamese This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-smal...
  - hash: 48f84e65873df9efa4ae9927b20eb30e
- `qanastek/whisper-large-french-uncased` — author: qanastek · model_type: whisper · files_per_repo: 17 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: ['fr'] · datasets: ['mozilla-foundation/common_voice_11_0'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,310 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large French This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) o...
  - hash: f14b5761e15606a957ae0332eb91336e
- `alexjercan/codet5-base-buggy-error-description` — author: alexjercan · model_type: t5 · files_per_repo: 11 · downloads_30d: 5 · library: transformers · likes: 1 · pipeline: text2text-generation · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 948 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codet5-base-buggy-error-description This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesf...
  - hash: aa9c5535d99bba236370cceda837f19e
- `lct-rug-2022/edos-2023-baseline-distilbert-base-uncased-label_sexist` — author: lct-rug-2022 · model_type: distilbert · files_per_repo: 10 · downloads_30d: 4 · library: transformers · likes: 0 · pipeline: text-classification · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,544 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # edos-2023-baseline-distilbert-base-uncased-label_sexist This model is a fine-tuned version of [distilbert-base-uncased](https://...
  - hash: 060ab1672e994f920d3af913bfb5a3d5
- `JovialValley/model_syllable_onSet1` — author: JovialValley · model_type: wav2vec2 · files_per_repo: 13 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 11,452 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_syllable_onSet1 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wa...
  - hash: 74b22347f41a4e124dd113a44611d4fe
- `arampacha/whisper-large-uk` — author: arampacha · model_type: whisper · files_per_repo: 13 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: ['uk'] · datasets: ['mozilla-foundation/common_voice_11_0', 'google/fleurs'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['whisper-event', 'generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 976 · is_nc: false
  - readme: # whisper-base-uk This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - eval_loss: 1.3201 - eval_wer: 10.2869 ## Model description More information needed ## Inten...
  - hash: ab69e623d71ab9b9903e7079b2244bdc
- `jonatasgrosman/exp_w2v2r_en_xls-r_accent_us-5_england-5_s334` — author: jonatasgrosman · model_type: wav2vec2 · files_per_repo: 10 · downloads_30d: 3 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: ['en'] · datasets: ['mozilla-foundation/common_voice_7_0'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['automatic-speech-recognition', 'en'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 475 · is_nc: false
  - readme: # exp_w2v2r_en_xls-r_accent_us-5_england-5_s334 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure t...
  - hash: e50e85324282c1ee7dc346d1d93bae19
- `TheNateTCY/fulltrain_optmodel` — author: TheNateTCY · model_type: opt · files_per_repo: 8 · downloads_30d: 0 · library: transformers · likes: 0 · pipeline: text-generation · pytorch/tensorflow/jax: false/true/false · license: other · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,521 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # TheNateTCY/fulltrain_optmodel This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on a...
  - hash: 3be524e9197c1900fc86c15cc8c37ee3
- `ieborhan/irisg444_4c0-Species-classification` — author: ieborhan · model_type: null · files_per_repo: 4 · downloads_30d: 0 · library: sklearn · likes: 0 · pipeline: tabular-classification · pytorch/tensorflow/jax: false/false/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['tabular-classification', 'baseline-trainer'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 7,540 · is_nc: false
  - readme: ## Baseline Model trained on irisg444_4c0 to apply classification on Species **Metrics of the best model:** accuracy 0.953333 recall_macro 0.953333 precision_macro 0.956229 f1_macro 0.953216 Name: LogisticRegression(class_weight='balanced', max_iter=1000), dtype: float64 **See mod...
  - hash: ec8f759a2cbcd3838ac5a9ae0eee5a5b
- `Helsinki-NLP/opus-mt-fr-bi` — author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: translation · pytorch/tensorflow/jax: true/true/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 768 · is_nc: false
  - readme: ### opus-mt-fr-bi * source languages: fr * target languages: bi * OPUS readme: [fr-bi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-bi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](http...
  - hash: 5649db29ae0810704144daf9ba068e0f
- `SetFit/distilbert-base-uncased__hate_speech_offensive__train-32-4` — author: SetFit · model_type: distilbert · files_per_repo: 10 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: text-classification · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 2,215 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased__hate_speech_offensive__train-32-4 This model is a fine-tuned version of [distilbert-base-uncased](https...
  - hash: 5f378270188a2cd951033abe2aa32a85
- `jonatasgrosman/exp_w2v2t_es_no-pretraining_s807` — author: jonatasgrosman · model_type: wav2vec2 · files_per_repo: 10 · downloads_30d: 4 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: ['es'] · datasets: ['mozilla-foundation/common_voice_7_0'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['automatic-speech-recognition', 'es'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 414 · is_nc: false
  - readme: # exp_w2v2t_es_no-pretraining_s807 Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has be...
  - hash: 1608e6c97878c5322a3d7e3d3d806c08
- `MEDT/ChatBot` — author: MEDT · model_type: gpt2 · files_per_repo: 9 · downloads_30d: 4 · library: transformers · likes: 0 · pipeline: conversational · pytorch/tensorflow/jax: true/false/false · license: mit · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['conversational'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 1,752 · is_nc: false
  - readme: # DialoGPT Trained on the Speech of a Game Character This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script...
  - hash: a2d65dd0fa0e00364c69ac839da931ff
- `k3lana/xlm-roberta-base-finetuned-panx-de-fr` — author: k3lana · model_type: xlm-roberta · files_per_repo: 10 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: token-classification · pytorch/tensorflow/jax: true/false/false · license: mit · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,321 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-robert...
  - hash: cf47ea12a762581cc79bd9c003e3e485
- `csam/finetuning-sentiment-model-3000-samples` — author: csam · model_type: distilbert · files_per_repo: 13 · downloads_30d: 11 · library: transformers · likes: 0 · pipeline: text-classification · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: ['imdb'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,053 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/d...
  - hash: e5b4c5ee4a3ad64138b404e64d7135cb
- `gchhablani/bert-base-cased-finetuned-stsb` — author: gchhablani · model_type: bert · files_per_repo: 52 · downloads_30d: 88 · library: transformers · likes: 0 · pipeline: text-classification · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: ['en'] · datasets: ['glue'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer', 'fnet-bert-base-comparison'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 2,394 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-finetuned-stsb This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) o...
  - hash: 82422fc3000b327511490fcfb35bf262
- `Helsinki-NLP/opus-mt-tc-big-es-zle` — author: Helsinki-NLP · model_type: marian · files_per_repo: 13 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: translation · pytorch/tensorflow/jax: true/true/false · license: cc-by-4.0 · languages: ['be', 'es', 'ru', 'uk', 'zle'] · datasets: null · co2: null · PRs (count/open/merged/closed): 1/0/1/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation', 'opus-mt-tc'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 5,963 · is_nc: false
  - readme: # opus-mt-tc-big-es-zle Neural machine translation model for translating from Spanish (es) to East Slavic languages (zle). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the...
  - hash: 26ea00e38fd0841a9c2ea4611b0ed9b6
- `gostrive/distilbert-base-uncased-finetuned-squad-d5716d28` — author: gostrive · model_type: distilbert · files_per_repo: 8 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: question-answering · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: ['en'] · datasets: ['squad'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['question-answering'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 1,392 · is_nc: false
  - readme: # DistilBERT with a second step of distillation ## Model description This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1)...
  - hash: 206913b81dd6917c52eb8c6176e2b1eb
- `Evelyn18/distilbert-base-uncased-becasv3-1` — author: Evelyn18 · model_type: distilbert · files_per_repo: 19 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: question-answering · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: ['becasv3'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,530 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-becasv3-1 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilb...
  - hash: 8c273b223c303663038329160bf83339
- `Sandeepanie/clinical-finetuned-AgitationModel` — author: Sandeepanie · model_type: bert · files_per_repo: 18 · downloads_30d: 1 · library: transformers · likes: 0 · pipeline: text-classification · pytorch/tensorflow/jax: true/false/false · license: mit · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,584 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clinical-finetuned-AgitationModel This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co...
  - hash: c42143ba868f11ba4d7dc20e46e7983d
- `kSaluja/new-test-model2` — author: kSaluja · model_type: bert · files_per_repo: 14 · downloads_30d: 5 · library: transformers · likes: 0 · pipeline: token-classification · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 2,163 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # new-test-model2 This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unkn...
  - hash: f12660b975911f1b3a692931bcbadc8d
- `Helsinki-NLP/opus-mt-hy-en` — author: Helsinki-NLP · model_type: marian · files_per_repo: 10 · downloads_30d: 192 · library: transformers · likes: 0 · pipeline: translation · pytorch/tensorflow/jax: true/true/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['translation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 770 · is_nc: false
  - readme: ### opus-mt-hy-en * source languages: hy * target languages: en * OPUS readme: [hy-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/hy-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](http...
  - hash: 85ae1b911f80c333af96c75f2d35f3bd
- `popcornell/chime7_task1_asr1_baseline` — author: popcornell · model_type: null · files_per_repo: 23 · downloads_30d: 7 · library: espnet · likes: 0 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: false/false/false · license: cc-by-4.0 · languages: ['en'] · datasets: ['chime7_task1'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['espnet', 'audio', 'automatic-speech-recognition', 'speech separation'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 10,211 · is_nc: false
  - readme: ## ESPnet2 ASR model ### `popcornell/chime7_task1_asr1_baseline` This model was trained by popcornell using chime7_task1 recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [CHiME-7 DASR installation instructions](https://github.com/espnet/espnet/blob/master/egs2/chim...
  - hash: 445aeb1f46c12b854264b9da438a80c1
- `minjibi/test1000v2` — author: minjibi · model_type: wav2vec2 · files_per_repo: 12 · downloads_30d: 3 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,638 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test1000v2 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-larg...
  - hash: 367a1b02d526e0d712b87447f337eb8c
- `ibm/ColD-Fusion-itr21-seed2` — author: ibm · model_type: roberta · files_per_repo: 9 · downloads_30d: 3 · library: transformers · likes: 0 · pipeline: text-classification · pytorch/tensorflow/jax: true/false/false · license: mit · languages: ['en'] · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['exbert'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 3,148 · is_nc: false
  - readme: # ColD Fusion model Finetuned model that aims to be a great base model. It improves over RoBERTa base, trained on 35 datasets. Full details at [this paper](https://arxiv.org/abs/2212.01378). ## Paper Abstract: Pretraining has been shown to scale well with compute, data size and data diversity. Multitask learning t...
  - hash: 5c8fa2b9a466ea10f283dd893ce2d1a5
- `Kurapka/koja` — author: Kurapka · model_type: null · files_per_repo: 18 · downloads_30d: 4 · library: diffusers · likes: 0 · pipeline: text-to-image · pytorch/tensorflow/jax: false/false/false · license: creativeml-openrail-m · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 1/1/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['text-to-image', 'stable-diffusion'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 606 · is_nc: false
  - readme: ### koja Dreambooth model trained by Kurapka with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffu...
  - hash: 469cbb48bbcbaa34569f445e688eabe1
- `jonatasgrosman/exp_w2v2t_sv-se_r-wav2vec2_s418` — author: jonatasgrosman · model_type: wav2vec2 · files_per_repo: 10 · downloads_30d: 7 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: ['sv-SE'] · datasets: ['mozilla-foundation/common_voice_7_0'] · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['automatic-speech-recognition', 'sv-SE'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 468 · is_nc: false
  - readme: # exp_w2v2t_sv-se_r-wav2vec2_s418 Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that you...
  - hash: 68a2706abb7fec9a7d722f709415d789
- `MortalSage/Strange_Dedication` — author: MortalSage · model_type: null · files_per_repo: 38 · downloads_30d: 0 · library: null · likes: 16 · pipeline: text-to-image · pytorch/tensorflow/jax: false/false/false · license: unknown · languages: ['en'] · datasets: null · co2: null · PRs (count/open/merged/closed): 1/0/0/1 · discussions (count/open/closed): 1/1/0 · tags: ['stable-diffusion', 'text-to-image'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 1,952 · is_nc: false
  - readme: .safetensor model for automatic1111 webui. Strange_Dedication_v3 is an improvement to Strange_Dedication_v2 using Anything_v4.5. It's better at the cutesexyrobutts style, without having to use a trigger. Also, it's good at shiny_skin and shiny_clothes and artistical backgrounds. I have only used it with "vae-ft-ms...
  - hash: 2ac69240c9ec8a6839e66c10c790cb88
- `Sa1i/gakki-mix-512-young` — author: Sa1i · model_type: null · files_per_repo: 22 · downloads_30d: 2 · library: diffusers · likes: 1 · pipeline: text-to-image · pytorch/tensorflow/jax: false/false/false · license: creativeml-openrail-m · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['text-to-image', 'stable-diffusion', 'gakki'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 529 · is_nc: false
  - readme: # VAE Highly recommended for use with VAE # legal & risk ⚠️⚠ It is prohibited to use this model for commercial purposes and any scenarios of illegal acts and purposes. Sample pictures of this concept: ![0](https://huggingface.co/Sa1i/gakki-mix/resolve/main/sample_images/00986-2977967196.png) ![1](ht...
  - hash: e2e411d950545015996011bb76f95a94
- `bigmorning/whisper_havest_0015` — author: bigmorning · model_type: whisper · files_per_repo: 7 · downloads_30d: 6 · library: transformers · likes: 0 · pipeline: automatic-speech-recognition · pytorch/tensorflow/jax: false/true/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_keras_callback'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 3,113 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_havest_0015 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unkn...
  - hash: 7d22385a960de6372ace2a5ceff99557
- `jcblaise/electra-tagalog-small-uncased-generator` — author: jcblaise · model_type: electra · files_per_repo: 6 · downloads_30d: 4 · library: transformers · likes: 0 · pipeline: fill-mask · pytorch/tensorflow/jax: true/false/false · license: gpl-3.0 · languages: ['tl'] · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['electra', 'tagalog', 'filipino'] · has_model_index: false · has_metadata: true · has_text: true · text_length: 1,393 · is_nc: false
  - readme: # ELECTRA Tagalog Small Uncased Generator Tagalog ELECTRA model pretrained with a large corpus scraped from the internet. This model is part of a larger research project. We open-source the model to allow greater usage within the Filipino NLP community. This is the generator model used to sample synthetic text and pr...
  - hash: 8b6ffc4dd3c28bb5c24f4a941aa87675
- `arvkevi/nba_pbp_distilgpt2` — author: arvkevi · model_type: gpt2 · files_per_repo: 21 · downloads_30d: 2 · library: transformers · likes: 0 · pipeline: text-generation · pytorch/tensorflow/jax: true/false/false · license: apache-2.0 · languages: null · datasets: null · co2: null · PRs (count/open/merged/closed): 0/0/0/0 · discussions (count/open/closed): 0/0/0 · tags: ['generated_from_trainer'] · has_model_index: true · has_metadata: true · has_text: true · text_length: 1,251 · is_nc: false
  - readme: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nba_pbp_distilgpt2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on text files containin...
  - hash: 8350dd6b5f7786145e6b0bef1e2ad520
muhtasham/small-mlm-glue-qqp-custom-tokenizer
muhtasham
bert
12
0
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,457
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # small-mlm-glue-qqp-custom-tokenizer This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingfac...
aedf9d49499f69fb9e8b14113d294bb2
jonatasgrosman/exp_w2v2r_de_xls-r_age_teens-8_sixties-2_s945
jonatasgrosman
wav2vec2
10
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['de']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'de']
false
true
true
475
false
# exp_w2v2r_de_xls-r_age_teens-8_sixties-2_s945 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure t...
eeff3a02f55872ba95b04bc82f8f8efd
2020uee0139/distilbert-base-uncased-finetuned-squad
2020uee0139
distilbert
12
3
transformers
0
question-answering
true
false
false
apache-2.0
null
['squad']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,284
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/d...
bdd84eba2f64d8fa267d694ea40d857a
Intel/bert-base-uncased-mrpc-int8-dynamic
Intel
bert
9
4
transformers
0
text-classification
true
false
false
apache-2.0
['en']
['mrpc']
null
0
0
0
0
0
0
0
['text-classfication', 'int8', 'Intel® Neural Compressor', 'PostTrainingDynamic', 'onnx']
false
true
true
1,445
false
# INT8 BERT base uncased finetuned MRPC ## Post-training dynamic quantization ### PyTorch This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the usage of [Intel® Neural Compressor](https://github.com/intel/neural-compressor). The origina...
439aac8e766a0d7796c3738f812e06b4
prakharz/DIAL-FLANT5-XL
prakharz
t5
8
729
transformers
3
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,332
false
# InstructDial Instruction tuning is an emergent paradigm in NLP wherein natural language instructions are leveraged with language models to induce zero-shot performance on unseen tasks. Instructions have been shown to enable good performance on unseen tasks and datasets in both large and small language models. Dialo...
8551ede43863e78a31514b0a652dc412
huggingnft/alpacadabraz
huggingnft
null
5
10
transformers
1
unconditional-image-generation
false
false
false
mit
null
['huggingnft/alpacadabraz']
null
0
0
0
0
0
0
0
['huggingnft', 'nft', 'huggan', 'gan', 'image', 'images', 'unconditional-image-generation']
false
true
true
2,190
false
# Hugging NFT: alpacadabraz ## Disclaimer All rights belong to their owners. Models and datasets can be removed from the site at the request of the copyright holder. ## Model description LightWeight GAN model for unconditional generation. NFT collection available [here](https://opensea.io/collection/alpacadabraz)...
8099a5c6818b8263c173d2cc7ee8d440
megantosh/flair-arabic-MSA-aqmar
megantosh
null
11
44
flair
0
token-classification
true
false
false
apache-2.0
['ar']
['AQMAR', 'ANERcorp']
null
0
0
0
0
0
0
0
['flair', 'Text Classification', 'token-classification', 'sequence-tagger-model']
false
true
true
3,938
false
# Arabic NER Model for AQMAR dataset Training was conducted over 86 epochs, using a linear decaying learning rate of 2e-05, starting from 0.3 and a batch size of 48 with fastText and Flair forward and backward embeddings. ## Original Dataset: - [AQMAR](http://www.cs.cmu.edu/~ark/ArabicNER/) ## Results: - F1-score (m...
11e4388388b4286db8293fe9dc815596
roscazo/CTEBMSP_ANAT_DISO
roscazo
roberta
17
1
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
3,589
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CTEBMSP_ANAT_DISO This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-...
98835688bbc75a121215ab68e234fcfc
dxiao/bert-finetuned-ner-20percent
dxiao
bert
12
7
transformers
0
token-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,525
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner-20percent This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on ...
140bd2b8399a76158210abbebd816fef
Simon17/Klassifizierung-Heizung
Simon17
bert
12
1
transformers
0
text-classification
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,318
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Klassifizierung-Heizung This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-c...
92a43128590c7933cb7f3d2552f8f4ec
sd-concepts-library/james-web-space-telescope
sd-concepts-library
null
9
0
null
0
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,150
false
### James Web space Telescope on Stable Diffusion This is the `<James-Web-Telescope>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ip...
1fa34a6ac260d9bca1ad288d3ec7d4a6
tau/bart-base-sled-contractnli
tau
tau/sled
5
0
transformers
0
null
true
false
false
mit
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
4,972
false
# BART-SLED (SLiding-Encoder and Decoder, base-sized model) SLED models use pretrained, short-range encoder-decoder models, and apply them over long-text inputs by splitting the input into multiple overlapping chunks, encoding each independently and performing fusion-in-decoder ## Model description This SLED model i...
652ac6c93ae67a41b4f8d885f27845b1
thyagosme/gpt2-wikitext2
thyagosme
gpt2
9
4
transformers
0
text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,216
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the fo...
db207bf12cddb2e1fba07948e78679ce
omriuz/distilbert-base-uncased-finetuned-mnli
omriuz
distilbert
14
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,291
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-mnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di...
eb2b9fcf50cead783754391fa6a139fb
jonatasgrosman/exp_w2v2t_nl_unispeech_s493
jonatasgrosman
unispeech
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['nl']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'nl']
false
true
true
469
false
# exp_w2v2t_nl_unispeech_s493 Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that yo...
835abbb190bd60eeb21e6199a2587a74
Vishfeb27/wav2vec2-base-timit-demo-colab
Vishfeb27
wav2vec2
14
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,014
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wa...
7812217c55e44d62bc2b6221acd290ad
96harsh56/bert-large-cased-berta-finetuned-subjqa_1
96harsh56
bert
12
2
transformers
0
question-answering
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
939
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-cased-berta-finetuned-subjqa_1 This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-l...
3ee5acb3b2366a86a716270a5f0d353e
lmqg/mt5-base-itquad-ae
lmqg
mt5
13
66
transformers
0
text2text-generation
true
false
false
cc-by-4.0
['it']
['lmqg/qg_itquad']
null
0
0
0
0
0
0
0
['answer extraction']
true
true
true
4,612
false
# Model Card of `lmqg/mt5-base-itquad-ae` This model is fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for answer extraction on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation)....
ce2b2e616da685f4f5bc6498b07e925d
Raccourci/t5-sentiment
Raccourci
t5
11
1
transformers
0
text2text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,807
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-sentiment-hub This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achie...
3b6f02c97009fbf9ab363eed36da4aed
flax-community/alberti-bert-base-multilingual-cased
flax-community
bert
47
96
transformers
4
fill-mask
true
false
true
cc-by-4.0
['es']
null
null
0
0
0
0
0
0
0
['multilingual', 'bert']
false
true
true
4,318
false
# ALBERTI ALBERTI is a set of two BERT-based multilingual models for poetry: one for verses and another for stanzas. This model has been further trained with the PULPO corpus for verses using [Flax](https://github.com/google/flax), including training scripts. This is part of the [Flax/Jax Community Week](https://...
ea460dc4946fd092a8847a10d71f798a
Rgl73/xlm-roberta-base-finetuned-panx-de
Rgl73
xlm-roberta
26
11
transformers
0
token-classification
true
false
false
mit
null
['xtreme']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,314
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-b...
76067d9f672e706ab5a9b7a4af0d61ce
willcai/wav2vec2_common_voice_accents_indian_only_rerun
willcai
wav2vec2
11
4
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,504
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2_common_voice_accents_indian_only_rerun This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://hug...
d42e48f0246fc894f956177536df3fa4
KarelDO/bert-base-uncased.CEBaB_confounding.food_service_positive.sa.5-class.seed_44
KarelDO
bert
14
2
transformers
0
null
true
false
false
apache-2.0
['en']
['OpenTable']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,131
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased.CEBaB_confounding.food_service_positive.sa.5-class.seed_44 This model is a fine-tuned version of [bert-base-un...
7cafd07dc3b5011af25ccac708b96d7f
tahazakir/wav2vec2-base-timit-demo-colab0
tahazakir
wav2vec2
12
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,342
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab0 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/w...
53254c2d38088ab00e6ed79748fd605b
Geotrend/bert-base-pt-cased
Geotrend
bert
8
39
transformers
0
fill-mask
true
true
true
apache-2.0
['pt']
['wikipedia']
null
0
0
0
0
0
0
0
[]
false
true
true
1,283
false
# bert-base-pt-cased We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages. Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the s...
54f05ce2de162a9d4d61144222fbe932
BrianT/distilbert-base-uncased-finetuned-cola
BrianT
distilbert
13
1
transformers
0
text-classification
true
false
false
apache-2.0
null
['glue']
null
1
1
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,571
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di...
aca2c0303da4cf2f214551eeecec8f45
Mascariddu8/distilbert-base-uncased-finetuned-imdb
Mascariddu8
distilbert
9
4
transformers
0
fill-mask
true
false
false
apache-2.0
null
['imdb']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,318
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di...
b3aa13b2f8da3c0f2b3d34b10d34cd60
tuwonga/marblesh
tuwonga
null
5
0
null
20
text-to-image
false
false
false
creativeml-openrail-m
null
null
null
6
5
1
0
0
0
0
['stable-diffusion', 'text-to-image']
false
true
true
1,692
false
### marblesh This is a fine-tuned Stable Diffusion model (based on v1.5) trained on screenshots from marble statues. This model is a merge from 2 checkpoints trained on different marble statues. Use the token "**marblesh**" in your prompt for people and animals. If you have vehicles or other objects in your prompt use t...
d474d47365c22203b8db9f6e421bc723
osanseviero/test123
osanseviero
null
2
0
spacy
0
token-classification
false
false
false
cc-by-sa-4.0
['de']
null
null
0
0
0
0
0
0
0
['spacy', 'token-classification']
false
true
true
598,363
false
UD v2.5 benchmarking pipeline for UD_German-HDT | Feature | Description | | --- | --- | | **Name** | `de_udv25_germanhdt_trf` | | **Version** | `0.0.1` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_t...
d0c98d305581ae211f1adb30ae12cb24
frgfm/resnet18
frgfm
null
5
6
transformers
0
image-classification
true
false
false
apache-2.0
null
['frgfm/imagenette']
null
0
0
0
0
0
0
0
['image-classification', 'pytorch', 'onnx']
false
true
true
2,771
false
# ResNet-18 model Pretrained on [ImageNette](https://github.com/fastai/imagenette). The ResNet architecture was introduced in [this paper](https://arxiv.org/pdf/1512.03385.pdf). ## Model description The core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection...
7e4410f0dc2025ea66303aa8771819d5
Andranik/blinding1
Andranik
bert
13
1
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,376
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # blinding This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT...
1f25cd0d415930a9d50353e050f3623a
matthh/gpt2-poetry-model
matthh
gpt2
11
3
transformers
0
text-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
864
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-poetry-model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model des...
d7f19035e03e97b8ab4f657e72e40a7b
rycont/emoji-diffusion
rycont
null
7
0
diffusers
0
null
false
false
false
apache-2.0
['en']
['microsoft/fluentui-emoji']
null
0
0
0
0
0
0
0
[]
false
true
true
1,206
false
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # emoji-diffusion ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/hugging...
7434bb6f78f35e77c4e48f035615162b
MBMMurad/wav2vec2_murad_with_some_new_data
MBMMurad
wav2vec2
17
1
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['cvbn']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,221
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2_murad_with_some_new_data This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/fa...
eeca045623bd9d5bee14e118e5634262
jiobiala24/wav2vec2-base-checkpoint-12
jiobiala24
wav2vec2
13
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
['common_voice']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,362
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-checkpoint-12 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-11.1](https://huggingface...
6c56fb12474a265924d879f8f2b7f773
jvkape/WikiHowSDModel
jvkape
null
6
0
null
8
null
false
false
false
openrail
null
null
null
1
0
1
0
0
0
0
[]
false
true
true
1,523
false
This model card is a copy-paste from https://www.reddit.com/r/StableDiffusion/comments/ybavif/wikihow_db_model_entirely_free_model_trained_with/ The template is not 100% accurate and sometimes creates erroneous images, but it is incomparable to the natural quality of SD. The images used for training were all CC from ...
e5bbfa0346938d29f138804b6a0f0ab1
jonatasgrosman/exp_w2v2r_es_xls-r_gender_male-2_female-8_s772
jonatasgrosman
wav2vec2
10
3
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['es']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'es']
false
true
true
476
false
# exp_w2v2r_es_xls-r_gender_male-2_female-8_s772 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure ...
f1fbfe95cd65f1bb60b50d6ceea758f3
Tune-A-Video-library/redshift-man-skiing
Tune-A-Video-library
null
17
0
diffusers
2
null
false
false
false
creativeml-openrail-m
null
null
null
0
0
0
0
0
0
0
['tune-a-video', 'text-to-video', 'diffusers']
false
true
true
1,575
false
# Tune-A-Video - Redshift ## Model Description - Base model: [nitrosocke/redshift-diffusion](https://huggingface.co/nitrosocke/redshift-diffusion) - Training prompt: a man is skiing. ![sample-train](samples/train.gif) ## Samples ![sample-500](samples/sample-500.gif) Test prompt: (redshift style) [spider man/black w...
21eb22879334f07d09f7a8e87916ef5f
infinitejoy/wav2vec2-large-xls-r-300m-breton-cv8
infinitejoy
wav2vec2
13
7
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['br']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'br', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
true
true
true
2,271
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # XLS-R-300M - Breton This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec...
e213b5597dbcc2dd6e875fce53a06e0f
utsavnandi/fashion-mnist-ddpm-32px-5000_steps
utsavnandi
null
3
0
null
0
unconditional-image-generation
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['unconditional-image-generation']
false
true
true
487
false
Fashion MNIST unconditional Unet Model trained using DDPM Model Hyperparams: - Model size: 51,834,625 params - 3 stages: 128, 256, 512 channels - Linear Attention in 2nd and 3rd stages, Self Attention in Middle Stage - Optimizer: Adam - LR: 3e-4 - Batch Size: 64 - Grad Accumulation: 8 steps - Effective Batch Size: 51...
fe49b47cd35280ba30fc8f3f9a78511f
fathyshalab/all-roberta-large-v1-banking-1000-16-5-oos
fathyshalab
roberta
11
4
transformers
0
text-classification
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,519
false
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # all-roberta-large-v1-banking-1000-16-5-oos This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](ht...
5cea3565c5a9ccc67ced3ed0dd1b6f13
a1noack/bart-large-gigaword
a1noack
bart
6
115
transformers
0
summarization
true
false
false
mit
null
['gigaword']
null
0
0
0
0
0
0
0
['summarization']
false
true
true
1,234
false
# BART for Gigaword - This model was created by fine-tuning the `facebook/bart-large-cnn` weights (also on HuggingFace) for the Gigaword dataset. The model was fine-tuned on the Gigaword training set for 3 epochs, and the model with the highest ROUGE-1 score on the training set batches was kept. - The BART Tokenizer ...
7bc82302bc5f9e9bd8ccc20d98f05e11
sd-concepts-library/liminalspaces
sd-concepts-library
null
11
0
null
3
null
false
false
false
mit
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,286
false
### Liminalspaces on Stable Diffusion This is the `<liminal image>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You...
4b47449245e502dbc42065b3b16d5ce5
jonatasgrosman/exp_w2v2t_pt_unispeech-ml_s610
jonatasgrosman
unispeech
10
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['pt']
['mozilla-foundation/common_voice_7_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'pt']
false
true
true
500
false
# exp_w2v2t_pt_unispeech-ml_s610 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When usin...
cc854b4b4b2721b5c73842a575abad18
olgaduchovny/t5-base-ner-mit-movie
olgaduchovny
t5
8
1
transformers
0
text2text-generation
true
false
false
mit
['en']
['conll2003']
null
0
0
0
0
0
0
0
['pytorch', 'ner', 'text generation', 'seq2seq']
false
true
true
1,174
false
# t5-base-qa-ner-conll Unofficial implementation of [InstructionNER](https://arxiv.org/pdf/2203.03903v1.pdf). t5-base model tuned on conll2003 dataset. https://github.com/ovbystrova/InstructionNER ## Inference ```shell git clone https://github.com/ovbystrova/InstructionNER cd InstructionNER ``` ```python from in...
e076b3955885d24a9b530b32e46cfec8