librarian-bots/model-card-dataset-mentions
Text Classification • 0.1B
| text (string, length 21–2.11k) | label (class label, 2 classes: 0 = dataset_mention, 1 = no_dataset_mention) |
|---|---|
Intended uses & limitations More information needed | 1no_dataset_mention |
Training and evaluation data More information needed | 1no_dataset_mention |
Training procedure | 1no_dataset_mention |
Training results | 1no_dataset_mention |
Training and evaluation data More information needed | 1no_dataset_mention |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> | 1no_dataset_mention |
Training procedure | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
all-roberta-large-v1-utility-5-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3728 - Accuracy: 0.3956 | 0dataset_mention |
donut-base-sroie This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. | 0dataset_mention |
all-roberta-large-v1-travel-9-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1384 - Accuracy: 0.4289 | 0dataset_mention |
DevRoBERTa DevRoBERTa is a Devanagari RoBERTa model. It is a multilingual RoBERTa (xlm-roberta-base) model fine-tuned on publicly available Hindi and Marathi monolingual datasets. [project link] (https://github.com/l3cube-pune/MarathiNLP) More details on the dataset, models, and baseline results can be found in our ... | 0dataset_mention |
Training procedure | 1no_dataset_mention |
mobilebert_add_GLUE_Experiment_logit_kd_rte_128 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.3914 - Accuracy: 0.5271 | 0dataset_mention |
K-12BERT model K-12BERT is a model trained by performing continued pretraining on the K-12Corpus. Since, performance of BERT like models on domain adaptive tasks have shown great progress, we noticed the lack of such a model for the education domain (especially K-12 education). On that end we present K-12BERT, a BERT ... | 0dataset_mention |
Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. | 0dataset_mention |
Performance on Text Dataset We conducted experiments on the in-house test sets of the three different domains of Internet, medical care, and finance: <table> <tr><th row_span='2'><th colspan='2'>finance<th colspan='2'>healthcare<th colspan='2'>internet <tr><td><th>0-shot<th>5-shot<th>0-shot<th>5-shot<th>0-shot<th>5-... | 0dataset_mention |
distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4626 | 0dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 | 1no_dataset_mention |
Training and evaluation data More information needed | 1no_dataset_mention |
Training procedure | 1no_dataset_mention |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> | 1no_dataset_mention |
Training procedure | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00034 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_sc... | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Training procedure | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
JimmyWu/bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0086 - Validation Loss: 0.0791 - Epoch: 4 | 0dataset_mention |
Model Description This is a RoBERTa model pre-trained on 青空文庫 texts for POS-tagging and dependency-parsing, derived from [roberta-small-japanese-aozora-char](https://huggingface.co/KoichiYasuoka/roberta-small-japanese-aozora-char). Every long-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Un... | 0dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> | 1no_dataset_mention |
Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The inten... | 0dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Training procedure | 1no_dataset_mention |
base-mlm-tweet This model is a fine-tuned version of [google/bert_uncased_L-12_H-768_A-12](https://huggingface.co/google/bert_uncased_L-12_H-768_A-12) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2872 | 0dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3184 - Accuracy: 0.8667 - F1: 0.8684 | 0dataset_mention |
Training procedure | 1no_dataset_mention |
distilbert-base-uncased-finetuned-ft500_4class This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1343 - Accuracy: 0.4853 - F1: 0.4777 | 0dataset_mention |
distilbert-base-uncased-qa This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1925 | 0dataset_mention |
MiniLM-L12-H384-uncased__sst2__all-train This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2632 - Accuracy: 0.9055 | 0dataset_mention |
distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0604 - Precision: 0.9271 - Recall: 0.9381 - F1: 0.9326 - Accuracy: 0.9836 | 0dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> | 1no_dataset_mention |
token_fine_tunned_flipkart_2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3435 - Precision: 0.8797 - Recall: 0.9039 - F1: 0.8916 - Accuracy: 0.9061 | 0dataset_mention |
Training details cT5 models used T5's weights as a starting point, and then it was finetuned on the English [wikipedia](https://huggingface.co/datasets/wikipedia) for 3 epochs, achieving ~74% validation accuracy (ct5-small). The training script is in JAX + Flax and can be found in `pretrain_ct5.py`. Flax checkpoin... | 0dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Training procedure | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Haakf/allsides_right_text_headline_padded_overfit This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.8995 - Validation Loss: 1.7970 - Epoch: 19 | 0dataset_mention |
bert-base-multilingual-cased-tuned-smartcat This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0000 | 0dataset_mention |
Model Description This model was captured during a reproduction of [BERT-base uncased](https://github.com/google-research/bert), for English: it is a Transformers model pretrained on a large corpus of English data, using the Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP) objectives. The inten... | 0dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Training procedure | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Training procedure | 1no_dataset_mention |
Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). | 0dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
sachinsahu/Human_Development_Index-clustered This model is a fine-tuned version of [nandysoham16/4-clustered_aug](https://huggingface.co/nandysoham16/4-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2111 - Train End Logits Accuracy: 0.9583 - Train Start ... | 0dataset_mention |
rdpatilds/distilbert-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.6914 - Validation Loss: 2.5383 - Epoch: 0 | 0dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
BERiT_2000 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.7293 | 0dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Training data The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). | 0dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Training procedure | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Training procedure | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
distilbert-base-uncased-finetuned2-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4725 | 0dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Training procedure | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Romanian paraphrase  Fine-tune t5-small model for paraphrase. Since there is no Romanian dataset for paraphrasing, I had to create my own [dataset](https://huggingface.co/datasets/BlackKakapo/paraphrase-ro-v1). The dataset contains ~60k examples. | 0dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> | 1no_dataset_mention |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Evaluation The model can be evaluated as follows on the Portuguese test data of Common Voice. You need to install Enelvo, an open-source spell correction trained with Twitter user posts `pip install enelvo` ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers imp... | 0dataset_mention |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> | 1no_dataset_mention |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> | 1no_dataset_mention |
Training procedure | 1no_dataset_mention |
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [Tianyi98/opt-350m-finetuned-cola](https://huggingface.co/Tianyi98/opt-350m-finetuned-cola) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4133 - Accuracy: 0.92 - F1: 0.9205 | 0dataset_mention |
A tiny GPT2 model for generating Hebrew text A distilGPT2 sized model. <br> Training data was hewiki-20200701-pages-articles-multistream.xml.bz2 from https://dumps.wikimedia.org/hewiki/20200701/ <br> XML has been converted to plain text using Wikipedia Extractor http://medialab.di.unipi.it/wiki/Wikipedia_Extractor ... | 0dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
Training procedure | 1no_dataset_mention |
Intended uses & limitations More information needed | 1no_dataset_mention |
🚀 Text Punctuator Based on Transformers model T5. T5 model fine-tuned for punctuation restoration. Model currently supports only French Language. More language supports will be added later using mT5. Train Datasets : Model trained using 2 french datasets (around 500k records): - [orange_sum](https://huggingface.co... | 0dataset_mention |
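For reference, a minimal sketch of loading these rows with the 🤗 `datasets` library. The `train` split name is an assumption, and the 0/1 class ordering is inferred from the fused labels shown in the viewer above.

```python
# Minimal sketch: load the dataset and map integer labels back to class names.
# Assumes a "train" split exists; the class order (0 = dataset_mention,
# 1 = no_dataset_mention) is inferred from the viewer labels, not confirmed
# by the card itself.
from datasets import load_dataset

ds = load_dataset("librarian-bots/model-card-dataset-mentions", split="train")

# `label` is a ClassLabel feature, so its integer values can be decoded to names.
label_names = ds.features["label"].names

for row in ds.select(range(3)):
    snippet = row["text"][:80].replace("\n", " ")
    print(snippet, "->", label_names[row["label"]])
```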