| pipeline_tag (string, 48 classes) | library_name (string, 205 classes) | text (string, 0–18.3M chars) | metadata (string, 2–1.07B chars) | id (string, 5–122 chars) | last_modified (null) | tags (list, 1–1.84k items) | sha (null) | created_at (string, 25 chars) |
|---|---|---|---|---|---|---|---|---|
null | null | {} | 202015004/wav2vec2-base-TLT-Shreya-trial | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
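Every row ends with an ISO-8601 `created_at` timestamp (the 25-character string noted in the header). A minimal standard-library sketch for parsing it, using the value repeated on the rows of this dump:

```python
from datetime import datetime, timezone

# `created_at` value as it appears on every row of this dump.
created_at = "2022-03-02T23:29:04+00:00"

dt = datetime.fromisoformat(created_at)   # offset-aware datetime
dt_utc = dt.astimezone(timezone.utc)      # normalize to UTC for comparisons
```

Because the offset is explicit, `fromisoformat` returns an aware datetime that can be compared safely across rows.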
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wa... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]} | 202015004/wav2vec2-base-timit-demo-colab | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
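The `metadata` column is a JSON string. A small sketch, standard library only, that parses the metadata of the `wav2vec2-base-timit-demo-colab` row above (the string is copied verbatim from that row):

```python
import json

# Metadata string copied from the wav2vec2-base-timit-demo-colab row.
metadata = (
    '{"license": "apache-2.0", "tags": ["generated_from_trainer"], '
    '"model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}'
)

card = json.loads(metadata)
license_id = card["license"]                 # "apache-2.0"
model_name = card["model-index"][0]["name"]  # model-index entries carry the name
```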
null | null | {} | 202015004/wav2vec2-base-timit-trial_by_SHREYA | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
automatic-speech-recognition | transformers | {} | 275Gameplay/test | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
text-generation | transformers |
# Deadpool DialoGPT Model | {"tags": ["conversational"]} | 2early4coffee/DialoGPT-medium-deadpool | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Deadpool DialoGPT Model | {"tags": ["conversational"]} | 2early4coffee/DialoGPT-small-deadpool | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | 2umm3r/bert-base-uncased-finetuned-cls | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "ar... | 2umm3r/distilbert-base-uncased-finetuned-cola | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
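Entries in the `tags` list mix plain tags with namespaced `prefix:value` facets such as `dataset:glue`, `license:apache-2.0`, and `region:us`. A sketch that splits the tag list of the row above into the two kinds:

```python
# Tag list copied from the distilbert-base-uncased-finetuned-cola row above.
tags = [
    "transformers", "pytorch", "tensorboard", "distilbert",
    "text-classification", "generated_from_trainer", "dataset:glue",
    "license:apache-2.0", "model-index", "autotrain_compatible",
    "endpoints_compatible", "region:us",
]

facets = {}   # namespaced tags, e.g. {"dataset": ["glue"], ...}
plain = []    # everything without a "prefix:value" shape
for tag in tags:
    prefix, sep, value = tag.partition(":")
    if sep:
        facets.setdefault(prefix, []).append(value)
    else:
        plain.append(tag)
```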
feature-extraction | transformers | This is a fine-tuned GPT-2 text generation model trained on a Hunter x Hunter TV anime series subtitle dataset.
You can find the dataset here: https://www.kaggle.com/bkoozy/hunter-x-hunter-subtitles
You can find a Colab notebook for fine-tuning the GPT-2 model here: https://github.com/3koozy/fine-tune-gpt2-HxH/ | {} | 3koozy/gpt2-HxH | null | [
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | 3zooze/Dd | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | Akshay-Vs/AI | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | 511663/bert_finetuning_test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | 54Tor/test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | 5dimension/test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | 609ead0502/test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | 61birds/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | 842458199/model_name | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | 850886470/xxy_gpt2_chinese | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | 873101411/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | 91Rodman/111 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | 923/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
token-classification | transformers |
## Model description
This model is a fine-tuned version of MacBERT for spell checking in medical application scenarios. We fine-tuned the MacBERT Chinese base version on a 300M dataset including 60K+ authorized medical articles. We proposed to randomly confuse 30% of the sentences in these articles by adding n... | {"language": "zh", "license": "apache-2.0", "tags": ["Token Classification"], "metrics": ["precision", "recall", "f1", "accuracy"]} | 9pinus/macbert-base-chinese-medical-collation | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"Token Classification",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
## Model description
This model is a fine-tuned version of bert-base-chinese for the purpose of medicine name recognition. We fine-tuned bert-base-chinese on a 500M dataset including 100K+ authorized medical articles on which we labeled all the medicine names. The model achieves 92% accuracy on our test dataset.
... | {"language": ["zh"], "license": "apache-2.0", "tags": ["Token Classification"]} | 9pinus/macbert-base-chinese-medicine-recognition | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"Token Classification",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
bert-base-cased model trained on the Quora Question Pairs (QQP) dataset. The task is to predict whether two given sentences (or questions) are `not_duplicate` (label 0) or `duplicate` (label 1). The model achieves 89% evaluation accuracy.
| {"datasets": ["qqp"], "inference": false} | A-bhimany-u08/bert-base-cased-qqp | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"dataset:qqp",
"autotrain_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | AAli/bert-base-cased-wikitext2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AAli/bert-base-uncased-finetuned-swag | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AAli/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AAli/distilbert-base-uncased-finetuned-ner | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AAli/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AAli/distilgpt2-finetuned-wikitext2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AAli/gpt2-wikitext2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AAli/my-new-shiny-tokenizer | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AAli/opus-mt-en-ro-finetuned-en-to-ro | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AAli/t5-small-finetuned-xsum | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AAli/wav2vec2-base-demo-colab | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AAli/wav2vec2-base-finetuned-ks | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
text-generation | transformers |
# Harry Potter DialoGPT model | {"tags": ["conversational"]} | ABBHISHEK/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
feature-extraction | transformers | Pre-trained on the clus_ chapter only. | {} | AG/pretraining | null | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | AHussain0418/distillbert-truth-detector | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {"license": "apache-2.0"} | AI-Ahmed/DisDistilBert-sst-N-Grams-en | null | [
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
sentence-similarity | sentence-transformers |
# PatentSBERTa
## PatentSBERTa: A Deep NLP based Hybrid Model for Patent Distance and Classification using Augmented SBERT
### Aalborg University Business School, AI: Growth-Lab
https://arxiv.org/abs/2103.11933
https://github.com/AI-Growth-Lab/PatentSBERTa
This is a [sentence-transformers](https://www.SBERT.ne... | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | AI-Growth-Lab/PatentSBERTa | null | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2103.11933",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
# Model Trained Using AutoNLP
- Problem type: Machine Translation
- Model ID: 474612462
- CO2 Emissions (in grams): 133.0219882109991
## Validation Metrics
- Loss: 1.336498737335205
- Rouge1: 52.5404
- Rouge2: 31.6639
- RougeL: 50.1696
- RougeLsum: 50.3398
- Gen Len: 39.046
## Usage
You can use cURL to access thi... | {"language": "unk", "tags": "autonlp", "datasets": ["Eric Peter/autonlp-data-EN-LUG"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 133.0219882109991} | AI-Lab-Makerere/en_lg | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autonlp",
"unk",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
# Model Trained Using AutoNLP
- Problem type: Machine Translation
- Model ID: 475112539
- CO2 Emissions (in grams): 126.34446293851818
## Validation Metrics
- Loss: 1.5376628637313843
- Rouge1: 62.4613
- Rouge2: 39.4759
- RougeL: 58.183
- RougeLsum: 58.226
- Gen Len: 26.5644
## Usage
You can use cURL to access th... | {"language": "unk", "tags": "autonlp", "datasets": ["EricPeter/autonlp-data-MarianMT_lg_en"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 126.34446293851818} | AI-Lab-Makerere/lg_en | null | [
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"autonlp",
"unk",
"dataset:EricPeter/autonlp-data-MarianMT_lg_en",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# A Swedish BERT model
## Model description
This model follows the BERT Large model architecture as implemented in the [Megatron-LM framework](https://github.com/NVIDIA/Megatron-LM). It was trained with a batch size of 512 for 600k steps. The model contains the following parameters:
<figure>
| Hyperparameter | Value ... | {"language": "sv"} | AI-Nordics/bert-large-swedish-cased | null | [
"transformers",
"pytorch",
"megatron-bert",
"fill-mask",
"sv",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | IssakaAI/wav2vec2-large-xls-r-300m-turkish-colab | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
token-classification | transformers | {"license": "mit"} | AI4Sec/cyner-xlm-roberta-base | null | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
token-classification | transformers | {"license": "mit"} | AI4Sec/cyner-xlm-roberta-large | null | [
"transformers",
"xlm-roberta",
"token-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
sentence-similarity | sentence-transformers |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when ... | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | AIDA-UPM/MSTSb_paraphrase-multilingual-MiniLM-L12-v2 | null | [
"sentence-transformers",
"pytorch",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
sentence-similarity | sentence-transformers |
# AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
... | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | AIDA-UPM/MSTSb_paraphrase-xlm-r-multilingual-v1 | null | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
sentence-similarity | sentence-transformers |
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when ... | {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | AIDA-UPM/MSTSb_stsb-xlm-r-multilingual | null | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# bertweet-base-multi-mami
This is a BERTweet model: it maps sentences & paragraphs to a 768-dimensional dense vector space and classifies them into 5 labels (multi-label).
# Multilabels
label2id={
"misogynous": 0,
"shaming": 1,
"stereotype": 2,
"objectification": 3,
"violence": 4,... | {"language": "en", "license": "apache-2.0", "tags": ["text-classification", "misogyny"], "pipeline_tag": "text-classification", "widget": [{"text": "Women wear yoga pants because men don't stare at their personality", "example_title": "Misogyny detection"}]} | AIDA-UPM/bertweet-base-multi-mami | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"misogyny",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
sentence-similarity | transformers |
# mstsb-paraphrase-multilingual-mpnet-base-v2
This is a fine-tuned version of `paraphrase-multilingual-mpnet-base-v2` from [sentence-transformers](https://www.SBERT.net) model with [Semantic Textual Similarity Benchmark](http://ixa2.si.ehu.eus/stswiki/index.php/Main_Page) extended to 15 languages: It maps sentences &... | {"language": "multilingual", "tags": ["feature-extraction", "sentence-similarity", "transformers", "multilingual"], "pipeline_tag": "sentence-similarity"} | AIDA-UPM/mstsb-paraphrase-multilingual-mpnet-base-v2 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"multilingual",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
This is a fine-tuned XLM-RoBERTa model for natural language inference. It has been trained on a massive amount of data following the ANLI training pipeline. We include data from:
- [mnli](https://cims.nyu.edu/~sbowman/multinli/) {train, dev and test}
- [snli](https://nlp.stanford.edu/projects/snli/) {train, dev and ... | {"language": "en", "license": "apache-2.0", "tags": ["natural-language-inference", "misogyny"], "pipeline_tag": "text-classification", "widget": [{"text": "Las mascarillas causan hipoxia. Wearing masks is harmful to human health", "example_title": "Natural Language Inference"}]} | AIDA-UPM/xlm-roberta-large-snli_mnli_xnli_fever_r1_r2_r3 | null | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"natural-language-inference",
"misogyny",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# tests | {"tags": ["conversational"]} | AIDynamics/DialoGPT-medium-MentorDealerGuy | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Uses DialoGPT | {"tags": ["conversational"]} | AJ/DialoGPT-small-ricksanchez | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | AJ/rick-ai | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AJ/rick-bot | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
text-generation | transformers |
# It's Rick from Rick and Morty | {"tags": ["conversational", "humor"]} | AJ/rick-discord-bot | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"humor",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | null | # Uses DialoGPT | {"tags": ["conversational", "funny"]} | AJ/rick-sanchez-bot | null | [
"conversational",
"funny",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Harry Potter DialoGPT model | {"tags": ["conversational"]} | AJ-Dude/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | {} | AK/ak_nlp | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | AK270802/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | AKMyscich/VetTrain-v1.2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AKulk/wav2vec2-base-timit-demo-colab | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-epochs10
This model is a fine-tuned version of [AKulk/wav2vec2-base-timit-epochs5](https://huggingface.co/AK... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-epochs10", "results": []}]} | AKulk/wav2vec2-base-timit-epochs10 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-epochs15
This model is a fine-tuned version of [AKulk/wav2vec2-base-timit-epochs10](https://huggingface.co/A... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-epochs15", "results": []}]} | AKulk/wav2vec2-base-timit-epochs15 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | AKulk/wav2vec2-base-timit-epochs20 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-epochs5
This model is a fine-tuned version of [facebook/wav2vec2-lv-60-espeak-cv-ft](https://huggingface.co/... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-epochs5", "results": []}]} | AKulk/wav2vec2-base-timit-epochs5 | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | {} | ALINEAR/albert-japanese-v2 | null | [
"transformers",
"pytorch",
"albert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
fill-mask | transformers | {} | ALINEAR/albert-japanese | null | [
"transformers",
"pytorch",
"albert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | ALaks96/distilbart-cnn-12-6 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | ARATHI/electra-small-discriminator-fintuned-cola | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | ARCYVILK/gpt2-bot | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
summarization | transformers |
# summarization_fanpage128
This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on Fanpage dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 1.5348
- Rouge1: 34.1882
- Rouge2: 15.7866
- Rougel: 25.141
- Rougelsum: 28.4882
- Gen Len: 69.3041
... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/fanpage"], "metrics": ["rouge"], "base_model": "gsarti/it5-base", "model-index": [{"name": "summarization_fanpage128", "results": []}]} | ARTeLab/it5-summarization-fanpage | null | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/fanpage",
"base_model:gsarti/it5-base",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
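The summarization cards above report ROUGE scores (Rouge1, Rouge2, RougeL, RougeLsum). As a rough illustration of what ROUGE-1 F1 measures (unigram overlap between a reference and a candidate summary), here is a toy sketch; it is not the official scorer that produced the numbers above, and it ignores stemming and tokenization details:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Toy ROUGE-1 F1: unigram-overlap F-score over whitespace tokens."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat on the mat", "the cat lay on the mat")
```

The reported Rouge1 figures (e.g. 34.1882) are this quantity, averaged over a test set and scaled to 0–100, computed with a proper ROUGE implementation.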
summarization | transformers |
# summarization_ilpost
This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on IlPost dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 1.6020
- Rouge1: 33.7802
- Rouge2: 16.2953
- Rougel: 27.4797
- Rougelsum: 30.2273
- Gen Len: 45.3175
## U... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/ilpost"], "metrics": ["rouge"], "base_model": "gsarti/it5-base", "model-index": [{"name": "summarization_ilpost", "results": []}]} | ARTeLab/it5-summarization-ilpost | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/ilpost",
"base_model:gsarti/it5-base",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
summarization | transformers |
# summarization_mlsum
This model is a fine-tuned version of [gsarti/it5-base](https://huggingface.co/gsarti/it5-base) on MLSum-it for Abstractive Summarization.
It achieves the following results:
- Loss: 2.0190
- Rouge1: 19.3739
- Rouge2: 5.9753
- Rougel: 16.691
- Rougelsum: 16.7862
- Gen Len: 32.5268
## Usage
```... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/mlsum-it"], "metrics": ["rouge"], "base_model": "gsarti/it5-base", "model-index": [{"name": "summarization_mlsum", "results": []}]} | ARTeLab/it5-summarization-mlsum | null | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/mlsum-it",
"base_model:gsarti/it5-base",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
summarization | transformers |
# mbart-summarization-fanpage
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on Fanpage dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 2.1833
- Rouge1: 36.5027
- Rouge2: 17.4428
- Rougel: 26.1734
- Rougelsum: 30.2... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/fanpage"], "metrics": ["rouge"], "base_model": "facebook/mbart-large-cc25", "model-index": [{"name": "summarization_mbart_fanpage4epoch", "results": []}]} | ARTeLab/mbart-summarization-fanpage | null | [
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/fanpage",
"base_model:facebook/mbart-large-cc25",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
summarization | transformers |
# mbart_summarization_ilpost
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on IlPost dataset for Abstractive Summarization.
It achieves the following results:
- Loss: 2.3640
- Rouge1: 38.9101
- Rouge2: 21.384
- Rougel: 32.0517
- Rougelsum: 35.0743... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/ilpost"], "metrics": ["rouge"], "base_model": "facebook/mbart-large-cc25", "model-index": [{"name": "summarization_mbart_ilpost", "results": []}]} | ARTeLab/mbart-summarization-ilpost | null | [
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/ilpost",
"base_model:facebook/mbart-large-cc25",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
summarization | transformers |
# mbart_summarization_mlsum
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on mlsum-it for Abstractive Summarization.
It achieves the following results:
- Loss: 3.3336
- Rouge1: 19.3489
- Rouge2: 6.4028
- Rougel: 16.3497
- Rougelsum: 16.5387
- Gen ... | {"language": ["it"], "tags": ["summarization"], "datasets": ["ARTeLab/mlsum-it"], "metrics": ["rouge"], "base_model": "facebook/mbart-large-cc25", "model-index": [{"name": "summarization_mbart_mlsum", "results": []}]} | ARTeLab/mbart-summarization-mlsum | null | [
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"summarization",
"it",
"dataset:ARTeLab/mlsum-it",
"base_model:facebook/mbart-large-cc25",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PENGMENGJIE-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model_index": [{"name": "PENGMENGJIE-finetuned-emotion", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}}]}]} | ASCCCCCCCC/PENGMENGJIE-finetuned-emotion | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {"license": "apache-2.0"} | ASCCCCCCCC/PENGMENGJIE | null | [
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | ASCCCCCCCC/PMJ | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
text-classification | transformers | {} | ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-amazon_zh_20000
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/ber... | {"tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "bert-base-chinese-finetuned-amazon_zh_20000", "results": []}]} | ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-chinese-amazon_zh_20000
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-ba... | {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-chinese-amazon_zh_20000", "results": []}]} | ASCCCCCCCC/distilbert-base-chinese-amazon_zh_20000 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-amazon_zh_20000
This model is a fine-tuned version of [distilbert-base-multilingual-cased](ht... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-multilingual-cased-amazon_zh_20000", "results": []}]} | ASCCCCCCCC/distilbert-base-multilingual-cased-amazon_zh_20000 | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-amazon_zh_20000
This model is a fine-tuned version of [distilbert-base-uncased](https://huggin... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-amazon_zh_20000", "results": []}]} | ASCCCCCCCC/distilbert-base-uncased-finetuned-amazon_zh_20000 | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/d... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model_index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}}]}]} | ASCCCCCCCC/distilbert-base-uncased-finetuned-clinc | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | AT/bert-base-uncased-finetuned-wikitext2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AT/distilbert-base-cased-finetuned-wikitext2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AT/distilgpt2-finetuned-wikitext2 | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilr... | {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilroberta-base-finetuned-wikitext2", "results": []}]} | AT/distilroberta-base-finetuned-wikitext2 | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
#Harry Potter DialoGPT Model | {"tags": ["conversational"]} | ATGdev/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
null | null | {} | ATGdev/ai_ironman | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {"license": "cc-by-nc-4.0"} | AUBMC-AIM/MammoGANesis | null | [
"license:cc-by-nc-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {"license": "cc-by-nc-4.0"} | AUBMC-AIM/OCTaGAN | null | [
"license:cc-by-nc-4.0",
"has_space",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
null | null | {} | AVAIYA/python-test | null | [
"region:us"
] | null | 2022-03-02T23:29:04+00:00 | |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-lar... | {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "result", "results": []}]} | AVSilva/bertimbau-large-fine-tuned-md | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-lar... | {"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "result", "results": []}]} | AVSilva/bertimbau-large-fine-tuned-sd | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
#Tony Stark DialoGPT model | {"tags": ["conversational"]} | AVeryRealHuman/DialoGPT-small-TonyStark | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2022-03-02T23:29:04+00:00 |