Dataset schema:
docs: string (4 classes)
category: string (3–31 chars)
thread: string (7–255 chars)
href: string (42–278 chars)
question: string (0–30.3k chars)
context: string (0–24.9k chars)
marked: int64 (0 or 1)
huggingface
Beginners
Can’t download (some) models although they are in the hub
https://discuss.huggingface.co/t/cant-download-some-models-although-they-are-in-the-hub/13944
Can’t download (some) models in PyTorch, although they are in the Hub (also tried the from_tf flag). Error: 404 Client Error: Not Found for url: https://huggingface.co/umarayub/t5-small-finetuned-xsum/resolve/main/config.json Models for example: all of those models give 404 when trying to download them [ “SvPolina/t5-...
Looking at umarayub/t5-small-finetuned-xsum at main, there are indeed no files in there. There’s no config.json uploaded in that repo.
0
huggingface
Beginners
Trainer.push_to_hub is taking a lot of time, is this expected behaviour?
https://discuss.huggingface.co/t/trainer-push-to-hub-is-taking-lot-of-time-is-this-expected-behaviour/14162
Hi, I’m trying to push my model to the HF Hub via trainer.push_to_hub and I see that it is taking a lot of time. Is this the expected behaviour? Below is the output: Saving model checkpoint to xlm-roberta-base Configuration saved in xlm-roberta-base/config.json Model weights saved in xlm-roberta-base/pytorch_model.bin tokeniz...
@sgugger can you please help me out with this error?
0
huggingface
Beginners
SSLCertVerificationError when loading a model
https://discuss.huggingface.co/t/sslcertverificationerror-when-loading-a-model/12005
I am exploring potential opportunities of using HuggingFace “Transformers”. I have been trying to check some basic examples from the introductory course, but I came across a problem that I have not been able to solve. I have successfully installed transformers on my laptop using pip, and I have tried to run your “sentime...
I’m also getting the same error. Please let me know if anyone has a resolution for this.
0
huggingface
Beginners
How to use embeddings to compute similarity?
https://discuss.huggingface.co/t/how-to-use-embeddings-to-compute-similarity/13876
Hi, I would like to compute sentence similarity from an input text and output text using cosine similarity and the embeddings I can get from the Feature Extraction task. However, I noticed that it returns matrices of different dimensions, so I cannot perform the matrix calculation. For example, in facebook/bart-base · Huggin...
With transformers, the feature-extraction pipeline will retrieve one embedding per token. If you want a single embedding for the full sentence, you probably want to use the sentence-transformers library. There are some hundreds of sentence-transformers models on the HF Hub you can use (Models - Hugging Face). You might also want to use a transform...
0
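The answer above notes that the feature-extraction pipeline returns one vector per token. The usual trick for turning that into a sentence similarity score is mean pooling followed by cosine similarity; here is a minimal pure-Python sketch of that math (the toy 2-dimensional vectors stand in for real model outputs, which would typically be 768-dimensional):

```python
import math

def mean_pool(token_embeddings):
    """Average per-token vectors (as returned by the feature-extraction
    pipeline for one sentence) into a single sentence vector."""
    dim = len(token_embeddings[0])
    n = len(token_embeddings)
    return [sum(vec[i] for vec in token_embeddings) / n for i in range(dim)]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy per-token embeddings standing in for pipeline output.
sent_a = mean_pool([[1.0, 0.0], [0.0, 1.0]])
sent_b = mean_pool([[2.0, 0.0], [0.0, 2.0]])
score = cosine_similarity(sent_a, sent_b)  # same direction, so close to 1.0
```

sentence-transformers does this pooling for you, which is why the answer recommends it for sentence-level similarity.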
huggingface
Beginners
How to use additional input features for NER?
https://discuss.huggingface.co/t/how-to-use-additional-input-features-for-ner/4364
Hello, I’ve been following the documentation on fine-tuning custom datasets (Fine-tuning with custom datasets — transformers 4.3.0 documentation), and I was wondering how additional token-level features can be used as input (e.g. POS tags). My intuition was to concatenate each token with the tag before feeding it into a ...
mhl: e.g [“Arizona_NNP”, “Ice_NNP”, “Tea_NNP”]). Is this the right way to do it? Actually no, because the pre-trained tokenizer only knows tokens, not tokens + POS tags. A better way to do this would be to create an additional input to the model (besides input_ids and token_type_ids) called pos_tag_ids, for which yo...
0
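The idea in the answer, an extra pos_tag_ids input with its own embedding table whose vectors are combined with the token embeddings, can be sketched in plain Python (the table sizes, the ids, and the choice to sum rather than concatenate are illustrative assumptions; in transformers you would add an nn.Embedding module to the model instead):

```python
import random

random.seed(0)

def make_table(n_rows, dim):
    """Toy embedding table: one random vector per id."""
    return [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_rows)]

token_emb = make_table(100, 4)  # toy vocab of 100 tokens, dim 4
pos_emb = make_table(17, 4)     # e.g. 17 POS tags, same dim so we can sum

input_ids = [12, 40, 7]   # hypothetical ids for "Arizona", "Ice", "Tea"
pos_tag_ids = [3, 3, 3]   # hypothetical id for NNP, one per token

# Combine the two lookups elementwise, mirroring how token, position and
# segment embeddings are summed inside BERT-style models.
hidden = [
    [t + p for t, p in zip(token_emb[i], pos_emb[j])]
    for i, j in zip(input_ids, pos_tag_ids)
]
```

The key property is that the POS information enters as a learned vector per tag, instead of exploding the tokenizer vocabulary with "word_TAG" strings.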
huggingface
Beginners
How to get ‘sequences_scores’ from ‘scores’ in ‘generate()’ method
https://discuss.huggingface.co/t/how-to-get-sequences-scores-from-scores-in-generate-method/6048
Hi all I was wondering if I can ask you some questions about how to use .generate() for BART or other pre-trained models. The example code is, from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig path = 'facebook/bart-large' model = BartForConditionalGeneration.from_pretrained(path) tokenize...
I am having the same problem as in 4. @patrickvonplaten , can you help us with this? Thanks!
0
huggingface
Beginners
How to prepare local dataset for load_dataset() and mimic its behavior when loading HF’s existing online dataset
https://discuss.huggingface.co/t/how-to-prepare-local-dataset-for-load-dataset-and-mimic-its-behavior-when-loading-hfs-existing-online-dataset/6368
Good day! Thank you very much for reading this question. I am working on private dataset in local storage and I want to mimic the program that loads dataset with load_dataset(). In order not to modify the training loop, I would like to convert my private dataset into the exact format the online dataset is stored; so th...
hey @jenniferL, to have the same behaviour as your example you’ll need to create a dataset loading script (see docs) which defines the configuration (de-en in your example), along with the column names and types. once your script is ready, you should be able to do something like: from datasets import load_dataset da...
0
huggingface
Beginners
RagRetriver Import error
https://discuss.huggingface.co/t/ragretriver-import-error/3475
I am getting an error cannot import name RagRetriever (unknown location). Using base terminal under conda python3.8. transformers 4.2.2
Does it work under 4.2.1 (stable version)? Do either of these help at all: stackoverflow.com cannot import name 'pipline' from 'transformers' (unknown location) ...
0
huggingface
Beginners
Class weights for bertForSequenceClassification
https://discuss.huggingface.co/t/class-weights-for-bertforsequenceclassification/1674
I have unbalanced data with a couple of classes with relatively smaller sample sizes. I am wondering if there is a way to assign class weights to the BertForSequenceClassification class, maybe in BertConfig?, as we can do in nn.CrossEntropyLoss. Thank you in advance!
No, you need to compute the loss outside of the model for this. If you’re using Trainer, see here on how to change the loss from the default computed by the model.
0
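For reference, this is what nn.CrossEntropyLoss(weight=...) computes for a single example, written out in plain Python. It is the loss you would compute in an overridden Trainer.compute_loss (a sketch of the math only, not transformers code):

```python
import math

def weighted_cross_entropy(logits, target, weights):
    """Weighted cross-entropy for one example:
    -weights[target] * log_softmax(logits)[target].
    Uses the max-subtraction trick for numerical stability."""
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    log_prob = logits[target] - log_sum
    return -weights[target] * log_prob

# Uniform logits over two classes give log(2); doubling the class weight
# doubles the loss contribution of that class.
base = weighted_cross_entropy([0.0, 0.0], 0, [1.0, 1.0])
boosted = weighted_cross_entropy([0.0, 0.0], 0, [2.0, 1.0])
```

Up-weighting rare classes this way makes their mistakes cost more during training, which is the point of passing class weights for imbalanced data.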
huggingface
Beginners
How to show the learning rate during training
https://discuss.huggingface.co/t/how-to-show-the-learning-rate-during-training/13914
Hi everyone, I would like to know if it is possible to include the learning rate value as part of the information presented during training. The columns Accuracy, F1, Precision and Recall were added after setting a custom compute_metrics function, and I would like to have the learning rate as well. ...
Hi Alberto, yes it is possible to include learning rate in the evaluation logs! Fortunately, the log() method of the Trainer class is one of the methods that you can “subclass” to inject custom behaviour: Trainer So, all you have to do is create your own Trainer subclass and override the log() method like so: class MyT...
1
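The subclass-and-override pattern the answer describes can be illustrated with a small stub (TrainerStub is a hypothetical stand-in for transformers.Trainer, just enough to show the override; with the real class you would read the current rate from the optimizer or scheduler inside your log() method):

```python
class TrainerStub:
    """Hypothetical stand-in for transformers.Trainer: it only records
    whatever dict is passed to log()."""
    def __init__(self, learning_rate):
        self._lr = learning_rate
        self.history = []

    def log(self, logs):
        self.history.append(logs)

class MyTrainer(TrainerStub):
    def log(self, logs):
        # Inject the current learning rate before delegating to the
        # parent implementation, as the answer above suggests.
        logs["learning_rate"] = self._lr
        super().log(logs)

trainer = MyTrainer(learning_rate=5e-5)
trainer.log({"eval_f1": 0.9})
```

Because log() is called for both training and evaluation steps, the injected key shows up alongside your custom metrics in every logged row.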
huggingface
Beginners
IndexError: index out of bounds
https://discuss.huggingface.co/t/indexerror-index-out-of-bounds/2859
Hi, I am trying to further pretrain “allenai/scibert_scivocab_uncased” on my own dataset using MLM. I am using following command - python3 ./transformers/examples/language-modeling/run_mlm.py --model_name_or_path "allenai/scibert_scivocab_uncased" --train_file train.txt --validation_file validation.txt --do_train --do_...
Any progress on this? I’m currently facing the same issue.
0
huggingface
Beginners
What is ‘model is currently loading;’
https://discuss.huggingface.co/t/what-is-model-is-currently-loading/13917
Hi. I’m a beginner at NLP. I’m going to summarize sentences using the T5 model’s inference API. “Model is currently loading” keeps popping up and it does not proceed. Can you tell me what this error is? Also, I want to summarize more than 5,000 characters into 1,000 to 2,000 characters. How should I wr...
wait_for_model is documented in the link shared above. If false, you will get a 503 when the model is loading. If true, your process will hang waiting for the response, which might take a bit while the model is loading. You can pin models for instant loading (see Hugging Face – Pricing).
1
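A sketch of the request body with the wait_for_model option set, assuming the hosted inference API's usual JSON shape with an "inputs" field and an "options" object:

```python
import json

def build_summarization_request(text, wait_for_model=True):
    """Build the JSON body for a hosted inference API call. With
    options.wait_for_model set to true the call blocks until the model
    is loaded instead of returning a 503."""
    return json.dumps({
        "inputs": text,
        "options": {"wait_for_model": wait_for_model},
    })

body = build_summarization_request("Long article text to summarize...")
```

You would POST this body to the model's inference endpoint with your API token in the Authorization header.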
huggingface
Beginners
How to specify labels-column in BERT
https://discuss.huggingface.co/t/how-to-specify-labels-column-in-bert/13649
Hi, I’m trying to follow the huggingface datasets tutorial to fine-tune a BERT model on a custom dataset for sentiment analysis. The quicktour states: rename our label column in labels which is the expected input name for labels in BertForSequenceClassification. In the docs for to_tf_dataset it states: label_cols –...
Hi @fogx, this is a good question! Here’s what’s happening in to_tf_dataset: columns specifies the list of columns to be passed as the input to the model, and label_cols specifies the list of columns to be passed to Keras as the labels. For most tasks (including sentiment analysis), you will usually only want one column...
1
huggingface
Beginners
Dataloader_num_workers in a torch.distributed setup using HF Trainer
https://discuss.huggingface.co/t/dataloader-num-workers-in-a-torch-distributed-setup-using-hf-trainer/13905
Hi everyone - super quick question. I looked around and I couldn’t find this previously asked, but my apologies if I missed something! Wondering if I have HF trainer set up using torch.distributed.launch on 8 gpus… if my dataloader_num_workers = 10… is that 10 total processes for dataloaders or 10*8=80 processes? Thank...
10 * 8 = 80
1
huggingface
Beginners
Memory use of GPT-J-6B
https://discuss.huggingface.co/t/memory-use-of-gpt-j-6b/10078
Hello everyone! I am trying to install GPT-J-6B on a powerful (more or less “powerful”) computer and I have encountered some problems. I have followed the documentation examples (GPT-J — transformers 4.11.0.dev0 documentation) and also this guide (Use GPT-J 6 Billion Parameters Model with Huggingface). The follow...
You need at least 12GB of GPU RAM to put the model on the GPU, and your GPU has less memory than that, so you won’t be able to use it on the GPU of this machine. You can’t use it in half precision on CPU because not all layers of the model are implemented for half precision (like the layernorm layer), so you need to...
0
huggingface
Beginners
How is the dataset loaded?
https://discuss.huggingface.co/t/how-is-the-dataset-loaded/13790
Hi everyone, I’m trying to pre-train BERT on a cluster server, using the classic run_mlm script. I have a dataset of 27M sentences divided into 27 files. When I was testing my script with 2-5 files, it all worked perfectly, but when I try to use all the dataset, it seems that the execution remains stuck before training...
Hi! What do you get when you run print(dset.cache_files) on your dataset object?
0
huggingface
Beginners
Handling long text in BERT for Question Answering
https://discuss.huggingface.co/t/handling-long-text-in-bert-for-question-answering/382
I’ve read a post which explains how the sliding window works, but I cannot find any information on how it is actually implemented. From what I understand, if the input is too long, a sliding window can be used to process the text. Please correct me if I am wrong. Say I have a text "In June 2017 Kaggle announced that it pass...
Hi @benj, Sylvain has a nice tutorial (link) on question answering that provides a lot of detail on how the sliding window approach is implemented. The short answer is that all chunks are used to train the model, which is why there is a fairly complex amount of post-processing required to combine everything back to...
0
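The sliding-window idea can be sketched independently of the tokenizer: split a long token sequence into overlapping chunks, where the overlap plays the role doc_stride plays in the QA preprocessing (a simplified illustration, not the transformers implementation):

```python
def sliding_chunks(tokens, max_len, overlap):
    """Split a token sequence into windows of at most max_len tokens,
    with 'overlap' tokens shared between consecutive windows so an
    answer near a boundary appears whole in at least one chunk."""
    step = max_len - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
    return chunks

chunks = sliding_chunks(list(range(12)), max_len=6, overlap=2)
```

Each chunk is paired with the same question and scored separately; the post-processing the answer mentions then merges the per-chunk predictions back into document positions.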
huggingface
Beginners
Continue fine-tuning with Trainer() after completing the initial training process
https://discuss.huggingface.co/t/continue-fine-tuning-with-trainer-after-completing-the-initial-training-process/9842
Hey all, Let’s say I’ve fine-tuned a model after loading it using from_pretrained() for 40 epochs. After looking at my resulting plots, I can see that there’s still some room for improvement, and perhaps I could train it for a few more epochs. I realize that in order to continue training, I have to use the code trainer...
Yes, you will need to restart a new training with new training arguments, since you are not resuming from a checkpoint. The Trainer uses a linear decay by default, not the 1cycle policy, so your learning rate did end up at 0 at the end of the first training, and will restart at the value you set in your new training arg...
0
huggingface
Beginners
Can one get an embeddings from an inference API that computes Sentence Similarity?
https://discuss.huggingface.co/t/can-one-get-an-embeddings-from-an-inference-api-that-computes-sentence-similarity/9433
Hi there, I’m new to using Huggingface’s inference API and wanted to check if a model whose task is to return Sentence Similarity can return sentence embeddings instead. For example, in this sentence-transformers model, the model task is to return sentence similarity. Instead, I would like to just get the embeddings of...
Hi there! Yes, you can compute sentence embeddings. Here is an example import requests API_URL = "https://api-inference.huggingface.co/pipeline/feature-extraction/sentence-transformers/all-mpnet-base-v2" headers = {"Authorization": "Bearer API_TOKEN"} def query(payload): response = requests.post(API_URL, headers=hea...
0
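A stdlib-only version of the query helper from the answer above (the endpoint URL follows the pattern shown in the answer; API_TOKEN is a placeholder you would replace with a real token, and actually calling query() needs network access):

```python
import json
from urllib import request

API_URL = ("https://api-inference.huggingface.co/pipeline/"
           "feature-extraction/sentence-transformers/all-mpnet-base-v2")

def query(payload, token="API_TOKEN"):
    """POST a payload to the feature-extraction pipeline endpoint and
    return the parsed JSON response: one embedding vector per input
    sentence."""
    req = request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# query({"inputs": ["First sentence", "Second sentence"]}) would return
# two embedding vectors (requires a valid token and network access).
```

Note the /pipeline/feature-extraction/ prefix: it overrides the model's default sentence-similarity task so the API returns raw embeddings instead of scores.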
huggingface
Beginners
Obtaining word-embeddings from Roberta
https://discuss.huggingface.co/t/obtaining-word-embeddings-from-roberta/7735
Hello Everyone, I am fine-tuning a pretrained masked LM (distil-roberta) on a custom dataset. Post-training, I would like to use the word embeddings in a downstream task. How does one go about obtaining embeddings for whole words when the model uses sub-word tokenising? For example, tokeniser.tokenize('floral') will gi...
hey @okkular i believe the standard approach to dealing with this is to simply average the token embeddings of the subwords to generate an embedding for the whole word. having said that, aggregating the embeddings this way might have a negative effect on your downstream performance, so trying both approaches would be a...
0
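The averaging approach can be sketched with a word_ids-style mapping like the one fast tokenizers return, one word index per sub-word token, with None for special tokens (toy 2-dimensional vectors; real hidden states would come from the model):

```python
def word_embeddings(token_vectors, word_ids):
    """Average sub-word vectors that belong to the same word.
    'word_ids' mirrors tokenizers' word_ids(): one word index per
    sub-word token, None for special tokens like [CLS]/[SEP]."""
    grouped = {}
    for vec, wid in zip(token_vectors, word_ids):
        if wid is None:
            continue  # skip special tokens
        grouped.setdefault(wid, []).append(vec)
    return [
        [sum(col) / len(vecs) for col in zip(*vecs)]
        for _, vecs in sorted(grouped.items())
    ]

# 'floral' split into two sub-words ('fl', '##oral') collapses to one
# averaged vector; the surrounding special tokens are dropped.
words = word_embeddings(
    [[0.0, 0.0], [1.0, 1.0], [3.0, 3.0], [0.0, 0.0]],
    [None, 0, 0, None],
)
```

As the answer warns, averaging is lossy, so it is worth comparing against simply keeping the first sub-word's vector per word on your downstream task.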
huggingface
Beginners
Comparing output of BERT model - why do two runs differ even with fixed seed?
https://discuss.huggingface.co/t/comparing-output-of-bert-model-why-do-two-runs-differ-even-with-fixed-seed/11412
I have example code as below. If I instantiate two models as below and compare the outputs, I see different outputs. Wondering why this would be the case? – code snippet – # fix seed torch.manual_seed(10) tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") config = BertConfig(vocab_size_or_config_json_fi...
Resolved: used the pretrained weights for both models so they match.
0
huggingface
Beginners
Which dataset can be used to evaluate a model for sentiment analysis?
https://discuss.huggingface.co/t/which-dataset-can-be-used-to-evaluate-a-model-for-sentiment-analysis/13633
Which dataset can we use as a reference to evaluate a model for sentiment analysis? The model has been fine-tuned on a human-labeled dataset, so as a reference, is there any metric that can be used to evaluate it?
SST-2 is a bit general. If you have a domain-specific use case I doubt there would be a benchmark for that, but your best option is SST-2 imho.
1
huggingface
Beginners
Email confirmation?
https://discuss.huggingface.co/t/email-confirmation/13755
It’s been an hour and I can’t access my token because apparently my email isn’t confirmed yet. I checked my inbox and spam folder, and I didn’t see the confirmation email from Hugging Face.
It’s not working for me either, same here. You will get the confirmation link, but then it says the token is invalid.
0
huggingface
Beginners
Confirmation link
https://discuss.huggingface.co/t/confirmation-link/13741
I haven’t received any confirmation link
hello! i’m having the same issue… did you get it now?
0
huggingface
Beginners
Instances in tensorflow serving DialoGPT-large model
https://discuss.huggingface.co/t/instances-in-tensorflow-serving-dialogpt-large-model/13586
Hi. I have managed to successfully use pretrained DialoGPT model in tensorflow serving (thanks to Merve). Rest API is up and running as it should. The issue occurs when I try to send the data to it. When I try to pass in the example input (available at Huggingface’s API documentation under conversational model section)...
Hello Here you can find a neat tutorial on how to use Hugging Face models with TF Serving 2. As you guessed, instances are your examples you want your model to infer. batch = tokenizer(sentence) batch = dict(batch) batch = [batch] input_data = {"instances": batch} Your payload input works just fine in Inference API b...
0
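A minimal sketch of building that payload (the token ids are toy values; with a real tokenizer, batch_encodings would be the dict it returns for your sentence):

```python
import json

def build_tf_serving_payload(batch_encodings):
    """Wrap tokenizer output (a dict of lists such as input_ids and
    attention_mask) into the {"instances": [...]} shape that TF
    Serving's REST predict endpoint expects: one dict per example."""
    return json.dumps({"instances": [dict(batch_encodings)]})

payload = build_tf_serving_payload(
    {"input_ids": [101, 7592, 102], "attention_mask": [1, 1, 1]}
)
```

This mirrors the answer's batch → dict(batch) → [batch] steps: the Inference API accepts a bare encoding, but TF Serving needs each example wrapped in the instances list.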
huggingface
Beginners
How can i get the word representation using BERT?
https://discuss.huggingface.co/t/how-can-i-get-the-word-representation-using-bert/13739
I need to get word-level embeddings using BERT, not sub-word. I found several functions; one of them gets the embeddings from the last layers of the model, but the results are still at the sub-word level. How can I start, please? I hope someone can help. Appreciating your time.
This previous discussion can be useful: Obtaining word-embeddings from Roberta
0
huggingface
Beginners
RuntimeError: CUDA out of memory even with simple inference
https://discuss.huggingface.co/t/runtimeerror-cuda-out-of-memory-even-with-simple-inference/11984
Hi everyone, I am trying to use the pre-trained DistilBERT model to perform sentiment analysis on some stock data. When trying to feed the input sentences to the model, though, I get the following error: “RuntimeError: CUDA out of memory. Tried to allocate 968.00 MiB (GPU 0; 11.17 GiB total capacity; 8.86 GiB alread...
Hi @totoro02, I faced the same error fine-tuning another model, but in my case I needed to lower the batch size from 64 to 16. I did not apply torch.no_grad(). In your case, what was the lowest batch size you tried?
0
huggingface
Beginners
Error 403! What to do about it?
https://discuss.huggingface.co/t/error-403-what-to-do-about-it/12983
I can not save the model; it gives HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/repos/create - You don’t have the rights to create a model under this namespace. How do I fix this?
This is an English-focused forum. Please translate your message - or use a translation engine.
0
huggingface
Beginners
Validation loss always 0.0 for BERT Sequence Tagger
https://discuss.huggingface.co/t/validation-loss-always-0-0-for-bert-sequence-tagger/13654
I want to implement a BERT sequence tagger following this tutorial. My dataset is rather small, so the validation dataset is only around 10 texts. The training loss decreases quite well, but the validation loss is always 0.0. I don’t know what is going on there. When I look at the predicted and the true outp...
Update: I tried out different data and also fine-tuning only the last linear layer of the BERT model vs. fine-tuning the pretrained + linear layer. I still don’t get why there is no validation loss.
0
huggingface
Beginners
Exporting wav2vec model to ONNX
https://discuss.huggingface.co/t/exporting-wav2vec-model-to-onnx/6695
Hello, I am trying to export a wav2vec model (cahya/wav2vec2-base-turkish-artificial-cv) to ONNX format with the convert_graph_to_onnx.py script provided in the transformers repository. When I run the script with this command: python convert_graph_to_onnx.py --framework pt --model cahya/wav2vec2-base-turkish-artificia...
Hi, Were you able to get this working? I am running into the same issue.
0
huggingface
Beginners
Pinned model still needs to load
https://discuss.huggingface.co/t/pinned-model-still-needs-to-load/12006
Hello, I have a model pinned. After a short amount of idle time the inference API still needs to load the model, i.e. it returns the message ‘Model <username>/<model_name> is currently loading’. This is not supposed to happen, right? As I understand it, this is the whole purpose of pinning models. I have confirmed it i...
We’re encountering the same issue.
0
huggingface
Beginners
FlaxVisionEncoderDecoderModel decoder_start_token_id
https://discuss.huggingface.co/t/flaxvisionencoderdecodermodel-decoder-start-token-id/13635
I’m following the documentation here to instantiate a FlaxVisionEncoderDecoderModel but am unable to do so. I’m on Transformers 4.15.0. huggingface.co Vision Encoder Decoder Models ...
Hi. Do you have Flax/Jax installed on your computer? It’s required in order to use FlaxVisionEncoderDecoderModel. (There should be a better error message for this situation; it will be fixed.)
0
huggingface
Beginners
Controlling bos, eos, etc in api-inference
https://discuss.huggingface.co/t/controlling-bos-eos-etc-in-api-inference/5701
Is there a way to control the beginning-of-sentence and end-of-sentence tokens through the inference API? I could not find it in the documentation.
Hi @cristivlad, Currently there is no way to override those within the API. We are adding an end_sequence parameter to enable stopping the generation when using prompt-like generation (for GPT-Neo, for instance). For BOS, what did you want to do with it? Cheers, Nicolas
0
huggingface
Beginners
Custom Loss: compute_loss() got an unexpected keyword argument ‘return_outputs’
https://discuss.huggingface.co/t/custom-loss-compute-loss-got-an-unexpected-keyword-argument-return-outputs/4148
Hello, I have created my own trainer with a custom loss function; from torch.nn import CrossEntropyLoss device = torch.device("cuda") class_weights = torch.from_numpy(class_weights).float().to(device) class MyTrainer(Trainer): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def...
Hi @theudster, I ran into just this problem today. The solution is to change the signature of your compute_loss to reflect what is implemented in the source code: def compute_loss(self, model, inputs, return_outputs=False): ... return (loss, outputs) if return_outputs else loss It seems that the example in ...
0
huggingface
Beginners
Do transformers need Cross-Validation
https://discuss.huggingface.co/t/do-transformers-need-cross-validation/4074
Hello, I am training a model and according to the standard documentation, I split the data into training and validation and pass those on to Trainer(), where I calculate various metrics. In previous ML projects I used to do K-fold validation. I have not found any examples of people doing this with Transformers and was ...
hi @theudster, i believe the main reasons you don’t see cross-validation with transformers (or deep learning more generally) is because a) transfer learning is quite effective and b) most public applications involve large enough datasets where the effect of picking a bad train/test split is diminished. having said that...
0
huggingface
Beginners
Boilerplate for Trainer using torch.distributed
https://discuss.huggingface.co/t/boilerplate-for-trainer-using-torch-distributed/13567
Hi everyone, Thanks in advance for helping! First I just have to say, as my first post here, Huggingface is awesome. We have been using the tools/libraries for a while for NLP work and it is just a pleasure to use and so applicable to real-world problems!!! We are “graduating” if you will from single GPU to multi-GPU m...
The Trainer code will run on distributed or one GPU without any change. Regarding your other questions: you need to define your model in all processes, they will see different part of the data each and all copies will be kept the same. the tokenizer and dataset preprocessing can either be done on all processes if it d...
1
huggingface
Beginners
Fine-Tune Wav2Vec2 for English ASR with Transformers article bug
https://discuss.huggingface.co/t/fine-tune-wav2vec2-for-english-asr-with-transformers-article-bug/10440
Hello @patrickvonplaten! Thank you so much for the tutorials. Super helpful! I’ve been having troubles with setting up a CTC head on a pre-trained model with an existing CTC head and wanted to point out a possible problem in the tutorial 3 that in part led me to having my main problem. First, I’ll pinpoint the problem ...
Well, the label error was my error with adding a special char to the vocabulary… But the addition of a token with zero id is still under my investigation
0
huggingface
Beginners
How to load my own BILOU/IOB labels for training?
https://discuss.huggingface.co/t/how-to-load-my-own-bilou-iob-labels-for-training/12877
Hi everyone, I’m not sure if I just missed it in the documentation, but I’m looking to fine-tune a model with my own annotated data (out of Doccano). I’m comfortable manipulating the Doccano output into a format specific to what HuggingFace needs, but I’m not actually sure how to load my own data with the IOB labels....
I was having the same problem and this helped. Basically you still need to create your own data loader. Based on what they described in their documents I thought the Datasets library could automatically identify and load common data formats; guess I was wrong…
0
huggingface
Beginners
Should we save the tokenizer state over domain adaptation?
https://discuss.huggingface.co/t/should-we-save-the-tokenizer-state-over-domain-adaptation/13324
I am going to do domain adaptation over my dataset with BERT. When I train the BERT model, should I save the tokenizer state? I mean, will tokenizer state change over training the model?
No, the tokenizer is not changed by the model fine-tuning on a new dataset.
1
huggingface
Beginners
Why am I getting KeyError: ‘loss’?
https://discuss.huggingface.co/t/why-am-i-getting-keyerror-loss/6948
Why, when I run trainer.train(), does it give me KeyError: 'loss'? Previously I used something like start_text and stop_text, and I read in a previous solution that this was the cause of the error, so I deleted it, but it still gives the same error. Do you have any solution? Thanks. from transformers import AutoTokenizer, AutoModelWithLMHead toke...
There are no labels in your dataset, so it can’t train (and the model does not produce a loss, hence your error). Maybe you wanted to use DataCollatorForLanguageModeling to generate those labels automatically?
0
huggingface
Beginners
Helsinki-NLP/opus-mt-en-fr missing tf_model.h5 file
https://discuss.huggingface.co/t/helsinki-nlp-opus-mt-en-fr-missing-tf-model-h5-file/13467
Hi there, I have been following the tensorflow track of the HF course and got an http 404 error when running the below: from transformers import TFAutoModelForSeq2SeqLM model = TFAutoModelForSeq2SeqLM.from_pretrained(model_checkpoint) error message: 404 Client Error: Not Found for url: https://huggingface.co/Helsinki...
Hello You need to set from_pt = True when loading. from transformers import TFAutoModelForSeq2SeqLM model_checkpoint = "Helsinki-NLP/opus-mt-en-fr" model = TFAutoModelForSeq2SeqLM.from_pretrained(model_checkpoint, from_pt = True) Downloading: 100% 1.26k/1.26k [00:00<00:00, 34.4kB/s] Downloading: 100% 287M/287M [...
1
huggingface
Beginners
SpanBERT, ELECTRA, MARGE from scratch?
https://discuss.huggingface.co/t/spanbert-electra-marge-from-scratch/13374
Hey everyone! I am incredibly grateful for this tutorial on training a language model from scratch: How to train a new language model from scratch using Transformers and Tokenizers I really want to expand this to contiguous masking of longer token sequences (e.g. [mask-5], [mask-8]). I have begun looking into how to ...
Found something useful on StackOverflow for this: stackoverflow.com Best way of using hugging face's Mask Filling for more than 1 masked token at a time
0
huggingface
Beginners
How to get embedding matrix of bert in hugging face
https://discuss.huggingface.co/t/how-to-get-embedding-matrix-of-bert-in-hugging-face/10261
I have tried to build sentence-pooling by bert provided by hugging face from transformers import BertModel, BertTokenizer model_name = 'bert-base-uncased' tokenizer = BertTokenizer.from_pretrained(model_name) # load model = BertModel.from_pretrained(model_name) input_text = "Here is some text to encode" # tokenizer-> ...
Betacat: actually I want to get the word that my last_hidden_state refers to Actually, that’s not possible, unless you compute cosine similarity between the mean of the last hidden state and the embedding vectors of each token in BERT’s vocabulary. You can do that easily using sklearn. The embedding matrix of BERT ca...
1
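The cosine-similarity lookup the answer describes can be sketched in plain Python (with a real model you would take the matrix from model.get_input_embeddings().weight and the vocabulary from the tokenizer; the tiny vocabulary and 2-dimensional vectors here are illustrative):

```python
import math

def nearest_token(vector, embedding_matrix, vocab):
    """Return the vocabulary token whose embedding row has the highest
    cosine similarity to the given vector."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    best = max(range(len(vocab)),
               key=lambda i: cos(vector, embedding_matrix[i]))
    return vocab[best]

# A vector pointing mostly along the first axis is closest to the first row.
token = nearest_token([0.9, 0.1], [[1.0, 0.0], [0.0, 1.0]], ["hello", "world"])
```

As the answer notes, this gives the nearest token to a pooled hidden state, not a true inverse of the model, since hidden states mix information from the whole context.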
huggingface
Beginners
How to load a pipeline saved with pipeline.save_pretrained?
https://discuss.huggingface.co/t/how-to-load-a-pipeline-saved-with-pipeline-save-pretrained/5373
Hi, I have a system saving an HF pipeline with the following code: from transformers import pipeline text_generator = pipeline('...') text_generator.save_pretrained('modeldir') How can I re-instantiate that model from a different system? What code snippet can do that? I’m looking for something like p = pipeline.from_pr...
I think pipeline(task, 'modeldir') should work to reload it.
0
huggingface
Beginners
How do I message someone?
https://discuss.huggingface.co/t/how-do-i-message-someone/13459
I’ve found a space and I would like to message the developer. Is there any way I can contact the person in the platform or get the contact email?
You can’t message them directly via their Hugging Face profile at the moment, but you can send them a message via the forum
0
huggingface
Beginners
Do we use pre-trained weights in Trainer?
https://discuss.huggingface.co/t/do-we-use-pre-trained-weights-in-trainer/13472
When we use Trainer to build a language model with MLM, based on which model we use (suppose DistilBERT), do we use the pre-trained weights in Trainer, or are the weights supposed to be updated from scratch?
You can do either – it depends on how you create your model. Trainer just handles the training aspect, not the model initialization. # Model randomly initialized (starting from scratch) config = AutoConfig.for_model("distilbert") # Update config if you'd like # config.update({"param": value}) model = AutoModelForMasked...
0
huggingface
Beginners
Longformer for Encoder Decoder with gradient checkpointing
https://discuss.huggingface.co/t/longformer-for-encoder-decoder-with-gradient-checkpointing/13428
I’m struggling to find the right transformers class for my task. I want to solve a seq2seq problem with an encoder-decoder Longformer. I generated one with this German RoBERTa model using this script. I know that I could use EncoderDecoderModel(), but the issue is that it doesn’t support gradient checkpointing, w...
I found a solution which at least helps a little: When using EncoderDecoderModel(), it is possible to set gradient checkpointing at least on the encoder part: model.encoder.config.gradient_checkpointing = True
0
huggingface
Beginners
Trouble with fine tuning DialoGPT-large
https://discuss.huggingface.co/t/trouble-with-fine-tuning-dialogpt-large/8161
I’m trying to fine tune the DialoGPT-large model but I’m still really new to ML and am probably misusing the trainer API. I already went through the tutorial and the colab examples but I still can’t figure out the issue. error: Traceback (most recent call last): File "/.../main.py", line 26, in <module> tokenized...
Hello, I’ve had this same error. It’s due to there being NoneType values in your data. So when you are creating your dataframe from the data, change your code to specify values and convert it to a list: for i in data.index.values.tolist(): If you’re still having issues after that, try the following code on your dataframe o...
0
huggingface
Beginners
How can I view total downloads of my model?
https://discuss.huggingface.co/t/how-can-i-view-total-downloads-of-my-model/10476
Hello, I have uploaded my fine-tuned ‘PEGASUS’ model more than 3 months ago. Is there a way I can view my total downloads rather than just my last month downloads?
cc @julien-c
0
huggingface
Beginners
How can I enforce reproducibility for Longformer?
https://discuss.huggingface.co/t/how-can-i-enforce-reproducibility-for-longformer/8862
Hi all, I’m struggling with ensuring reproducible results with the Longformer. Here is the result of transformer-cli env: transformers version: 4.9.1 Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29 Python version: 3.8.10 PyTorch version (GPU?): 1.8.1+cu102 (True) Tensorflow version (GPU?): 2.5.0 (False) Flax v...
Hi @DavidPfl, were you able to figure this out?
0
huggingface
Beginners
Any pretrained model for Grammatical Error Correction(GEC)?
https://discuss.huggingface.co/t/any-pretrained-model-for-grammatical-error-correction-gec/4662
Hi, is there any pre-trained model for GEC task? It is often treated as an MT task.
As I couldn’t find one, I developed a model using Marian NMT and then migrated this model to Huggingface to use it as pre trained. I wrote a post describing my path. Hope this helps anyone. Medium – 24 Dec 21 Training a Grammar Error Correction (GEC) Model from Scratch with Marian NMT... 2 Ob...
0
huggingface
Beginners
Bert embedding layer
https://discuss.huggingface.co/t/bert-embedding-layer/13355
I have taken specific word embeddings and considered bert model with those embeddings self.bert = BertModel.from_pretrained(‘bert-base-uncased’) self.bert(inputs_embeds=x,attention_mask=attention_mask, *args, **kwargs) Does this means I’m replacing the bert input embeddings(token+position+segment embeddings) How to con...
Hi, As you can see here 2, if you provide inputs_embeds yourself, they will only be used to replace the token embeddings. The token type and position embeddings will be added separately.
1
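The answer above can be sketched with plain NumPy (toy sizes and a made-up embedding table, not the actual BERT internals): supplying inputs_embeds replaces only the token-embedding lookup, and the model still adds position (and token-type) embeddings on top, so the two paths agree.

```python
import numpy as np

# Toy sizes; real BERT uses a ~30K vocabulary and hidden_size 768.
vocab, hidden, seq = 10, 4, 3
rng = np.random.default_rng(0)
tok_emb = rng.normal(size=(vocab, hidden))   # token embedding table
pos_emb = rng.normal(size=(seq, hidden))     # position embeddings

input_ids = np.array([2, 5, 1])

# Normal path: the model looks up token embeddings, then adds positions.
via_ids = tok_emb[input_ids] + pos_emb

# inputs_embeds path: you supply the token embeddings yourself; the model
# still adds position (and token-type) embeddings on top.
inputs_embeds = tok_emb[input_ids]
via_embeds = inputs_embeds + pos_emb

print(np.allclose(via_ids, via_embeds))  # True
```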
huggingface
Beginners
Best model to use for Abstract Summarization
https://discuss.huggingface.co/t/best-model-to-use-for-abstract-summarization/13240
I am looking for a pre-trained model for abstractive summarization. I have used the Google Pegasus-xsum and Pegasus-Large; the xsum one seems good but only provides a one-line summary, while Pegasus-Large seems to be providing an extractive summary rather than an abstractive one, it just picks up the sentences from parag...
I have tested a few different models and have found facebook/bart-large-cnn to be the best for my use-case.
0
huggingface
Beginners
Tokenizer issue in Huggingface Inference on uploaded models
https://discuss.huggingface.co/t/tokenizer-issue-in-huggingface-inference-on-uploaded-models/3724
While inferencing on the uploaded model in huggingface, I am getting the below error, Can’t load tokenizer using from_pretrained, please update its configuration: Can’t load tokenizer for ‘bala1802/model_1_test’. Make sure that: - ‘bala1802/model_1_test’ is a correct model identifier listed on ‘Hugging Face – On a miss...
Hi @bala1802, I just tried your text generation model on the inference API (link 11) and it seems to work without any error - perhaps you found a way to solve your problem?
0
huggingface
Beginners
BERT and RoBERTA giving same outputs
https://discuss.huggingface.co/t/bert-and-roberta-giving-same-outputs/10214
Hi All. I tried using Roberta model in two different models. In both these models, I’ve faced same problem of getting same output for different test input during evaluation process. Earlier, I thought it might be due to some implementation problem and hence I took a small dataset to overfit the dataset and predict the ...
@theguywithblacktie have you figured out what was wrong with your code as I am facing the same issue while using roberta from transformer library.
0
huggingface
Beginners
Push_to_hub usage errors?
https://discuss.huggingface.co/t/push-to-hub-usage-errors/9132
Trying to push my model back to the hub from python (not notebook) and failing so far: I am using a T5 model with the latest development version of the example “run_summarization.py” and pass a load of runtime parameters in and my model works fine. There are some parameters that seem to relate to pushing the model back...
As the error indicates, you are trying to clone an existing repository in a folder that is not a git repository, so you should use an empty folder, or an ID for a new repository.
0
huggingface
Beginners
Hugging Face Tutorials - Basics / Classification tasks
https://discuss.huggingface.co/t/hugging-face-tutorials-basics-classification-tasks/13345
Hi everyone, I recently decided to make practical coding guides for hugging face because I thought videos like these would have been useful for when I was learning the basics. I have made one on the basics of the Hugging Face library / website and its layout (and BERT basics). The second one is a guide using pytorch / ...
This is awesome! Thanks for sharing
0
huggingface
Beginners
Where to put use_auth_token in the code if you can’t run hugginface-cli login command?
https://discuss.huggingface.co/t/where-to-put-use-auth-token-in-the-code-if-you-cant-run-hugginface-cli-login-command/11701
I am training a bart_large_cnn model for summarization, where I have used : - training_args = Seq2SeqTrainingArguments( output_dir="results", num_train_epochs=1, # demo do_train=True, do_eval=True, per_device_train_batch_size=4, # demo per_device_eval_batch_size=4, # learning_rate=3e-05, ...
I added the following parameter (hub_token) in the training_args as below :- training_args = Seq2SeqTrainingArguments( output_dir="results", num_train_epochs=1, # demo do_train=True, do_eval=True, per_device_train_batch_size=4, # demo per_device_eval_batch_size=4, # learning_rate=3e-05, ...
0
huggingface
Beginners
Errors when fine-tuning T5
https://discuss.huggingface.co/t/errors-when-fine-tuning-t5/3527
Hi everyone, I’m trying to fine-tune a T5 model. I followed most (all?) the tutorials, notebooks and code snippets from the Transformers library to understand what to do, but so far, I’m only getting errors. The end goal is giving T5 a task such as finding the max/min of a sequence of numbers, for example, but I’m star...
Good job posting your issue in a very clear fashion, it was very easy to reproduce and debug So the problem is that you are using model_inputs wrong. It contains one key each for input_ids, labels and attention_mask, and each value is a 2d tensor with the first dimension being 3 (your number of sentences). Your dataset shoul...
0
huggingface
Beginners
Inference API in JavaScript
https://discuss.huggingface.co/t/inference-api-in-javascript/11537
Dear Community, Good day. How can I use Inference API in pure JavaScript. My goal is to test the ML model on the web by sending API request and getting a response. Is it possible to do it? Can someone show me an example? Akbar
I would also be interested in this. All help highly appreciated
0
huggingface
Beginners
Shuffle a Single Feature (column) in a Dataset
https://discuss.huggingface.co/t/shuffle-a-single-feature-column-in-a-dataset/13195
Hi, I am learning the dataset API 2. The shuffle API states that it rearranges the values of a column but from my experimentations, it shuffles the rows. The code documentation 2 is more clear and states that the rows are shuffled. To achieve column shuffle I used the map functionality (batch=True) and created the foll...
Using the imdb (movie review dataset) data as an example, this is 1000s of movie reviews, with columns being the text for the movie review and then the label (0 or 1). We wouldn’t want to shuffle the columns - this would only be swapping the text and the label - there is no benefit to that. We care about shuffling the ...
0
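The point of that answer can be checked with a tiny pure-Python sketch (toy rows standing in for imdb, no datasets library): shuffling rows keeps each text paired with its own label, whereas shuffling a single column would silently re-pair texts with other rows' labels.

```python
import random

# Toy "imdb"-style rows: text paired with a label (here label = i % 2).
rows = [{"text": f"review {i}", "label": i % 2} for i in range(6)]

# Row shuffle (what Dataset.shuffle does): examples move as a whole,
# so each text keeps its own label.
random.seed(0)
row_shuffled = random.sample(rows, k=len(rows))

# Column shuffle (the map trick from the question): only one column moves,
# so texts can end up paired with other rows' labels.
labels = [r["label"] for r in rows]
random.shuffle(labels)
col_shuffled = [{"text": r["text"], "label": l} for r, l in zip(rows, labels)]

# Every row-shuffled example still satisfies its own label rule.
print(all(int(r["text"].split()[1]) % 2 == r["label"] for r in row_shuffled))  # True
```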
huggingface
Beginners
Can I download the raw text of a dataset?
https://discuss.huggingface.co/t/can-i-download-the-raw-text-of-a-dataset/13298
Hi, Can I download the raw text as a file of a dataset? Thanks, Steve
There are a couple of options. Using the imdb dataset as an example. As an arrow file: from datasets import load_dataset imdb = load_dataset('imdb') imdb.save_to_disk(dataset_dict_path='./imdb') Will save your files in the imdb directory. Or convert to pandas and then save as csv / json from datasets import load_dataset...
0
huggingface
Beginners
How do organizations work?
https://discuss.huggingface.co/t/how-do-organizations-work/8918
After creating a new user account, I see an option to create a new organization. As I am hoping to create some datasets that might be shared within my organization, this seems like a good choice. However, I don’t see a way to search existing organizations to see if (unlikely) someone has previously created a huggingfac...
hey @jnemecek here’s a few quick answers, but feel free to open an issue on the huggingface_hub repo 2 if you think the docs could be improved further. jnemecek: However, I don’t see a way to search existing organizations to see if (unlikely) someone has previously created a huggingface organization that I should ...
0
huggingface
Beginners
Cnn_dailymail dataset loading problem with Colab
https://discuss.huggingface.co/t/cnn-dailymail-dataset-loading-problem-with-colab/13281
The cnn_dailymail dataset was rarely downloaded successfully in the past few days. import datasets test_dataset = datasets.load_dataset(“cnn_dailymail”, “3.0.0”, split=“test”) Most of the time when I try to load this dataset using Colab, it throws a “Not a directory” error: NotADirectoryError: [Errno 20] Not a direc...
I would either try streaming 3 or clear the cache, mount drive & let it save under ‘/content’.
0
huggingface
Beginners
Deploying to Model Hub for Inference with custom tokenizer
https://discuss.huggingface.co/t/deploying-to-model-hub-for-inference-with-custom-tokenizer/3595
Hello everyone, I have a working model for a text generation task and I would to use the huggingface Inference API to simplify calling this model to generate new text. However I used a custom WordLevel tokenizer due to the nature of my domain and the docs aren’t very clear on how I would make this work since I didn’t u...
@jodiak Did you ever find a solution to this?
0
huggingface
Beginners
Trainer.evaluate() with text generation
https://discuss.huggingface.co/t/trainer-evaluate-with-text-generation/591
Hi everyone, I’m fine-tuning XLNet for generation. For training, I’ve edited the permutation_mask to predict the target sequence one word at a time. I’m evaluating my trained model and am trying to decide between trainer.evaluate() and model.generate(). Running the same input/model with both methods yields different...
Hi, I encountered a similar problem when trying to use EncoderDecoderModel for seq2seq tasks. It seems like Trainer does not support text-generation tasks for now, as their website https://huggingface.co/transformers/examples.html 36 shows.
0
huggingface
Beginners
XLM-Roberta for many-topic classification
https://discuss.huggingface.co/t/xlm-roberta-for-many-topic-classification/12638
Hi, I am working on a multi-label topic classifier that can classify webpages into some of our ~100 topics. The classifier currently uses a basic Neural Network and I wish to adapt the XLM-R model provided by Huggingface to give the classifier multi-lingual capabilities. However, when I train a classifier using XLM-R t...
I don’t have an answer, unfortunately, but I have the same issue and can add some details. I have a classification problem with 60 classes. I have training data of ~70k documents, unfortunately, with very unbalanced class distribution. My baseline is a FastText classifier trained on the same data which achieves an accu...
0
huggingface
Beginners
Pegasus - how to get summary of more than 1 line?
https://discuss.huggingface.co/t/pegasus-how-to-get-summary-of-more-than-1-line/11014
I am trying Abstractive summary with Pegasus by following the example code give here: huggingface.co Pegasus 2 DISCLAIMER: If you see something strange, file a Github Issue and assign @patrickvonplaten. Overview: The Pegasus model was proposed in PEGASUS: Pre-training... ...
kapilkathuria: […] That is because you are probably using the pegasus-xsum model. All the xsum models generate a one-line summary.
0
huggingface
Beginners
How to change my account to organization?
https://discuss.huggingface.co/t/how-to-change-my-account-to-organization/13025
hi, I want to sign up as an organization (company). Is there any way to register this account as a company? If not, we request you to delete this account. thank you.
As a user, you can create an organization by clicking your profile picture + New Organization. There is no concept of an account being a company; instead, you and more people can belong to an organization. Let me know if you need any further help!
0
huggingface
Beginners
Entity type classification
https://discuss.huggingface.co/t/entity-type-classification/13146
Certain entities are wrongly classified. For example in some cases ORG are classified as people. How to correct this?
Welcome @dorait Is it possible if you could send me the model you’re inferring with and an example input?
0
huggingface
Beginners
Trainer .train (resume _from _checkpoint =True)
https://discuss.huggingface.co/t/trainer-train-resume-from-checkpoint-true/13118
Hi all, I’m trying to resume my training from a checkpoint my training argument: training_args = TrainingArguments( output_dir=repo_name, group_by_length=True, per_device_train_batch_size=16, per_device_eval_batch_size=1, gradient_accumulation_steps=8, evaluation_strategy=“steps”, num_train_epochs=50, fp16=True, save_s...
maher13: trainer.train(resume_from_checkpoint=True) Probably you need to check whether the models are being saved in the checkpoint directory. You can also provide the checkpoint directory explicitly via resume_from_checkpoint=‘checkpoint_dir’
0
huggingface
Beginners
Using DialoGPT for Text Classification
https://discuss.huggingface.co/t/using-dialogpt-for-text-classification/13123
I have labelled set of 21000 chat transcripts between agent and customer. My intent is to classify what they are talking about? I have already tried few models but wanted to try something that was trained on similar kind of data. I stumbled upon DialoGPT. How can I finetune DialoGPT for Multi-class Text Classification?
Hello, You have two ways of doing this. First is through an intent action chatbot (which looks like it’s something you need instead of a generative model). If you want a better control over the responses your chatbot gives and have a set of answers (actions) to user inputs (intents) you should rather solve this as a se...
0
huggingface
Beginners
Adding new Entities to Flair NER models
https://discuss.huggingface.co/t/adding-new-entities-to-flair-ner-models/3913
Hi, I hope it’s not inapproriate to ask question about Flair here. I noticed that Flair models were also hosted on the model hub and I could not find the answer to my question anywhere else. I have a NER problem that I need to tackle and there is a nearly perfect existing model. The problem is however, that it lacks a ...
Hi @neuralpat, I’ve never tried this but I wonder whether you could fine-tune the existing NER model on a small corpus composed of a mix of the original annotation and the new ones you’d like to extend with (I think the mix is needed so the original model doesn’t forget the original entities)? Alternatively you could t...
0
huggingface
Beginners
The inputs into BERT are token IDs. How do we get the corresponding input token VECTORS?
https://discuss.huggingface.co/t/the-inputs-into-bert-are-token-ids-how-do-we-get-the-corresponding-input-token-vectors/11273
Hi, I am new and learning about transformers. In alot of BERT tutorials i see the input is just the token id of the words. But surely we need to convert this token ID to a vector representation (it can be one hot encoding, or any initial vector representation for each token ID) so that it can be used by the model? My q...
The token ID specifically is used in the embedding layer, which you can see as a matrix whose row indices are all possible token IDs (so one row for each item in the total vocabulary, for instance 30K rows). Every token therefore has a (learned!) representation. Beware, though, that this is not the same as word2vec ...
1
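The lookup described above can be sketched with plain NumPy (a hypothetical 10-token vocabulary; real models use 30K+ rows and learn the matrix during training): the embedding layer is just a row lookup indexed by token ID.

```python
import numpy as np

# Hypothetical embedding matrix: one learned row per token ID.
vocab_size, hidden_size = 10, 4
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(vocab_size, hidden_size))

token_ids = np.array([3, 7, 1])              # what the tokenizer produces
token_vectors = embedding_matrix[token_ids]  # the embedding layer is a row lookup

print(token_vectors.shape)  # (3, 4)
```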
huggingface
Beginners
T5 [ input & target ] text
https://discuss.huggingface.co/t/t5-input-target-text/13134
Hello, I am trying to fine tune mt5 on seq2seq tasks with my own dataset, and the dataset has input and target columns. How can I tell the model about those input & target to be trained on ?
Hi, The T5 docs are quite extensive (the same applies to mT5): T5 1. Next to that, I do have some notebooks that illustrate how to fine-tune T5 models: Transformers-Tutorials/T5 at master · NielsRogge/Transformers-Tutorials · GitHub 3
0
huggingface
Beginners
Getting started with GPT2
https://discuss.huggingface.co/t/getting-started-with-gpt2/13125
Hi I am new to HuggingFace. I want to build a ReactNative mobile app that can leverage the HuggingFace GPT2 model. I wanted to host a GPT2 model (as is, without fine-tuning) on HuggingFace servers so I can invoke the model via an API from my mobile app. Can someone please guide me on how to do this? Your help is really...
Hey @shivenpandey21, Check out the following links: How to programmatically access the Inference API 4 Inference API Docs Pricing 1
0
huggingface
Beginners
Error while downloading BertForQuestionAnswering
https://discuss.huggingface.co/t/error-while-downloading-bertforquestionanswering/13120
Hi, I just ran this code: ""from transformers import BertTokenizer, BertForQuestionAnswering modelname = ‘deepset/bert-base-cased-squad2’ tokenizer = BertTokenizer.from_pretrained(modelname) model = BertForQuestionAnswering.from_pretrained(modelname)"" but I got an error like this: ""OSError: Can’t load weights for ‘de...
Hey, I ran this code in Colab and works perfectly fine for me. Can you please try the below code again? from transformers import BertTokenizer, BertForQuestionAnswering modelname = "deepset/bert-base-cased-squad2" tokenizer = BertTokenizer.from_pretrained(modelname) model = BertForQuestionAnswering.from_pretrained(mo...
0
huggingface
Beginners
Unfreeze BERT vs pre-train BERT for Sentiment Analysis
https://discuss.huggingface.co/t/unfreeze-bert-vs-pre-train-bert-for-sentiment-analysis/13041
I am doing Sentiment Analysis over some text reviews, but I do not get good results. I use BERT for feature extraction and a fully connected layer as classifier. I am going to do these experiments, but I do not have any overview of the results in general. I have two options: 1- Unfreeze some Transformer layers and let th...
Currently, it seems that the consensus is that to get the best results when fine-tuning on a downstream task you don’t freeze any layers at all. If you’re freezing the weights to save up on memory, then I’d suggest considering Adapter Framework. The idea of it is, basically, to insert additional trainable layers in-bet...
1
huggingface
Beginners
Vector similarity in Python representing music notes
https://discuss.huggingface.co/t/vector-similarity-in-python-representing-music-notes/13094
I have the following vector, which represents 5 notes played on a guitar: [(0, 0.06365224719047546), (41, 0.6289597749710083), (42, 0.6319441795349121), (43, 0.632896363735199), (44, 0.631447434425354), (45, 0.6318693161010742), (46, 0.6315509080886841), (47, 0.6318208575248718), (48, 0.6322312355041504), (49, 0.631270...
Visual representation of a second vector: [image attachment]
0
huggingface
Beginners
Is last_hidden_state the output of Encoder block?
https://discuss.huggingface.co/t/is-last-hidden-state-the-output-of-encoder-block/13084
When we use BertModel.forward() , is the last_hidden_state the output of Encoder in Transformers block?
Yes! It’s a tensor of shape (batch_size, seq_len, hidden_size).
1
huggingface
Beginners
Replacing last layer of a fine-tuned model to use different set of labels
https://discuss.huggingface.co/t/replacing-last-layer-of-a-fine-tuned-model-to-use-different-set-of-labels/12995
I’m trying to fine-tune dslim/bert-base-NER using the wnut_17 dataset. Since the number of NER labels is different, I manually replaced these parameters in the model to get rid of the size mismatch error : model.config.id2label = my_id2label model.config.label2id = my_label2id model.config._num_labels = len(my_id2label...
Thank you @nielsr for being responsive. That error is resolved now, but the question is "does simply changing the number of labels mean that we have changed the classifier head?!" By the way, for my problem, I had to do these modifications: model_name = "dslim/bert-base-NER" mymodel = AutoModelForTokenClassification.fr...
1
huggingface
Beginners
Key Error ‘loss’ - finetuning [ arabert , mbert ]
https://discuss.huggingface.co/t/key-error-loss-finetuning-arabert-mbert/13052
Hello all! I am trying to fine-tune mbert and arabert models on translation task as explained here, however, I am getting this error Key Error ‘loss’ the input to the model : DatasetDict({ train: Dataset({ features: ['attention_mask', 'input_ids', 'labels', 'src', 'token_type_ids', 'trg'], num_rows:...
You can use the Trainer on BertModel as it has no objective, it’s the body of the model with no particular head. You should pick a model with a head suitable for the task at hand.
0
huggingface
Beginners
Predicting On New Text With Fine-Tuned Multi-Label Model
https://discuss.huggingface.co/t/predicting-on-new-text-with-fine-tuned-multi-label-model/13046
This is very much a beginner question. I am trying to load a fine tuned model for multi-label text classification. Fine-tuned on 11 labels, bert-base-uncased. I just want to feed new text to the model and get the labels predicted to be associated with the text. I have looked everywhere and cannot find an example of ho...
To get all scores, pipeline has a parameter clf("text", return_all_scores=True) For the label being LABEL_7, you need to check out the config.json in your repo. See for example id2label and label2id in config.json · distilbert-base-uncased-finetuned-sst-2-english at main 1.
1
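A small sketch of how a label like LABEL_7 arises (the config contents and scores below are made up): when config.json carries no human-readable names, id2label maps index i to the placeholder LABEL_i, and the pipeline reports the placeholder for the highest-scoring index.

```python
# Hypothetical config for an 11-label classifier whose author never set
# human-readable names: id2label falls back to LABEL_<index>.
config = {"id2label": {str(i): f"LABEL_{i}" for i in range(11)}}

# Scores as returned with return_all_scores=True (made-up numbers).
scores = [0.01, 0.02, 0.05, 0.10, 0.03, 0.02, 0.04, 0.60, 0.05, 0.04, 0.04]
best = max(range(len(scores)), key=scores.__getitem__)
print(config["id2label"][str(best)])  # LABEL_7
```

Editing id2label / label2id in the repo's config.json replaces these placeholders with meaningful names.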
huggingface
Beginners
Deployed GPT-2 models vs “Model Card” question
https://discuss.huggingface.co/t/deployed-gpt-2-models-vs-model-card-question/12961
Hi, I’ve noticed a difference in performance between a GPT-2 trained conversational chat bot deployed as a Discord chat bot, and it’s “Model Card” page. (Not sure if that is the correct term) huggingface.co Jonesy/DialoGPT-medium_Barney · Hugging Face When you repeat...
Hello The inference widget and your bot in discord might be using different temperature and sampling parameters (here’s a great blog post if you’re interested btw), at least this is my guess.
0
huggingface
Beginners
HOW TO determine the best threshold for predictions when making inference with a finetune model?
https://discuss.huggingface.co/t/how-to-determine-the-best-threshold-for-predictions-when-making-inference-with-a-finetune-model/13001
Hello, I finetuned a model but the F-score is not quite good for certain classes. To avoid a lot of false positives I decided to set a threshold on the probabilities, and I would like to know how to determine the best threshold. Should I use the mean, median, or just look at the accuracy of the model on the test_data?
Hi, The best way to determine the threshold is to compute the true positive rate (TPR) and false positive rate (FPR) at different thresholds, and then plot the so-called ROC-curve. The ROC curve plots, for every threshold, the corresponding true positive rate and false positive rate. Then, selecting the point (i.e. thr...
0
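That procedure can be sketched self-contained (toy labels and probabilities, NumPy only): sweep candidate thresholds, compute TPR and FPR at each, and keep the threshold maximizing Youden's J = TPR - FPR, one common way to read the best operating point off a ROC curve.

```python
import numpy as np

# Toy labels and predicted probabilities (hypothetical model outputs).
y_true = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])
y_prob = np.array([0.1, 0.2, 0.3, 0.35, 0.4, 0.45, 0.6, 0.7, 0.8, 0.9])

def best_threshold(y_true, y_prob):
    """Sweep thresholds, compute TPR/FPR at each, keep the max of TPR - FPR."""
    best_t, best_j = 0.5, -1.0
    for t in np.unique(y_prob):
        y_pred = (y_prob >= t).astype(int)
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        tn = np.sum((y_pred == 0) & (y_true == 0))
        j = tp / (tp + fn) - fp / (fp + tn)  # Youden's J = TPR - FPR
        if j > best_j:
            best_j, best_t = j, t
    return best_t

print(best_threshold(y_true, y_prob))  # 0.4
```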
huggingface
Beginners
Which model can use to pre-train a BERT model?
https://discuss.huggingface.co/t/which-model-can-use-to-pre-train-a-bert-model/13027
I am going to do pre-train a BERT model on specific dataset aiming for Sentiment Analysis. To self-train the model, which method will be better to use: Masked Language Modeling or Next Sentence Prediction? Or maybe there is not specific answer.
Choosing depends on what you want to do. Using masked language modeling is good when you want good representations of the data with which it was trained. Next sentence prediction, or rather causal language modeling (such as GPT), is better when you want to focus on generation. The course has a section on how to fine...
1
huggingface
Beginners
Which parameter is causing the decrease in Learning rate every epoch?
https://discuss.huggingface.co/t/which-parameter-is-causing-the-decrease-in-learning-rate-every-epoch/13015
Hey, I have been trying to train my model on mnli and the learning rate seems to keep decreasing for no reason. Can someone help me? - train_args = TrainingArguments( output_dir=f'./resultsv3/output', logging_dir=f'./resultsv3/output/logs', learning_rate=3e-6, per_device_train_batch_size=4, per_devi...
The learning_rate parameter is just the initial learning rate; it is usually changed during training. You can find the default values of TrainingArguments at Trainer. You can see that lr_scheduler_type is linear by default. As specified in its documentation (Optimization), linear creates a schedule with a learning ...
0
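The default linear schedule described above can be sketched as follows (total_steps and warmup_steps are illustrative; the actual implementation is transformers' get_linear_schedule_with_warmup): the LR ramps up to its peak during warmup, then decays linearly to 0, which is why it drops every epoch.

```python
def linear_lr(step, total_steps, warmup_steps, base_lr=3e-6):
    """Sketch of a linear schedule: warm up to base_lr, then decay linearly to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

for step in (0, 10, 55, 100):
    print(step, linear_lr(step, total_steps=100, warmup_steps=10))
```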
huggingface
Beginners
Batch size vs gradient accumulation
https://discuss.huggingface.co/t/batch-size-vs-gradient-accumulation/5260
Hi, I have a basic theoretical question. Which one is better for the model and GPU usage? First option: --per_device_train_batch_size 8 --gradient_accumulation_steps 2 Second option: --per_device_train_batch_size 16
Using gradient accumulation loops over your forward and backward pass (the number of steps in the loop being the number of gradient accumulation steps). A for loop over the model is less efficient than feeding more data to the model, as you’re not taking advantage of the parallelization your hardware can offer. The onl...
1
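The equivalence of the two options (and why only memory/throughput differ) can be checked numerically; this NumPy sketch assumes the loss is a per-batch mean, in which case averaging the mean gradients of two micro-batches of 8 equals the mean gradient of one batch of 16.

```python
import numpy as np

rng = np.random.default_rng(0)
grads = rng.normal(size=(16, 3))      # one per-sample gradient vector each

full_batch = grads.mean(axis=0)       # --per_device_train_batch_size 16
micro_1 = grads[:8].mean(axis=0)      # --per_device_train_batch_size 8
micro_2 = grads[8:].mean(axis=0)      # --gradient_accumulation_steps 2
accumulated = (micro_1 + micro_2) / 2

print(np.allclose(full_batch, accumulated))  # True: identical optimizer update
```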
huggingface
Beginners
Is it possible to do inference on gpt-j-6B via Colab?
https://discuss.huggingface.co/t/is-it-possible-to-do-inference-on-gpt-j-6b-via-colab/13007
When I use the pipeline API, it crashes Colab with an out of memory error (fills 25.5GB of RAM). I think it should be possible to do the inference on TPUv2? But how do I tell the pipeline to start using the TPUs from the start? from transformers import pipeline model_name = 'EleutherAI/gpt-j-6B' generator = pipeline('t...
Hi, Inference is only possible on Colab Pro. You can check my notebook here 3 for more info.
1
huggingface
Beginners
Different results predicting from trainer and model
https://discuss.huggingface.co/t/different-results-predicting-from-trainer-and-model/12922
Hi, I’m training a simple classification model and I’m experiencing an unexpected behaviour: When the training ends, I predict with the model loaded at the end with: predictions = trainer.predict(tokenized_test_dataset) list(np.argmax(predictions.predictions, axis=-1)) and I obtain predictions which match the accuracy ...
It’s hard to know where the problem lies without seeing the whole code. It could be that your model_inputs are defined differently than in the tokenized_test_dataset for instance.
0
huggingface
Beginners
GPT-2 trained models output repeated “!”
https://discuss.huggingface.co/t/gpt-2-trained-models-output-repeated/12962
Hello, separate question from my last post. Same example model: huggingface.co Jonesy/DialoGPT-medium_Barney · Hugging Face When I repeat the same input on my models, they appear to have a nervous breakdown (repeated exclamation marks over and over, it is a little di...
Hello, I feel like you can make use of temperature parameter when inferring to avoid repetition and put more randomness to your conversations. I found a nice model card showing how to infer 1 with DialoGPT. Hope it helps. There’s a nice blog post by Patrick that explains generative models. from transformers import Auto...
0
huggingface
Beginners
Trainer.save_pretrained(modeldir) AttributeError: ‘Trainer’ object has no attribute ‘save_pretrained’
https://discuss.huggingface.co/t/trainer-save-pretrained-modeldir-attributeerror-trainer-object-has-no-attribute-save-pretrained/12950
I am trying to save a model during finetuning but I get this error ? trainer, outdir = prepare_fine_tuning(PRE_TRAINED_MODEL_NAME, train_dataset, val_dataset, tokenizer, sigle, train_name, elt_train.name) trainer.train() trainer.evaluate() #trainer.save_model(modeldir) trainer.save_pretrained(modeldir) to...
I don’t know where you read that code, but Trainer does not have a save_pretrained method. Check out the documentation 3 for a list of its methods!
1
huggingface
Beginners
NER fine-tuning
https://discuss.huggingface.co/t/ner-fine-tuning/12980
Hi, I want to fine tune a NER model on my own dataset and entities. Is that possible? If yes how ? Thanks in advance
Hello, You can take a look at token classification notebooks here 3 for guidance. There’s also course chapter on token classification. 5
0
huggingface
Beginners
Log training accuracy using Trainer class
https://discuss.huggingface.co/t/log-training-accuracy-using-trainer-class/5529
Hello, I am running BertForSequenceClassification and I would like to log the accuracy as well as other metrics that I have already defined for my training set . I saw in another issue that I have to add a self.evaluate(self.train_dataset) somewhere in the code, but I am a beginner when it comes to Python and deep lear...
Hi, I am having a similar issue. I used the Trainer similar to the example and all I see in the output is the training loss but I don’t see any training accuracy. I wonder if you found out how to log accuracy. Thanks.
0
huggingface
Beginners
How can I get the last value of the tensor token obtained from model.generate?
https://discuss.huggingface.co/t/how-can-i-get-the-last-value-of-the-tensor-token-obtained-from-model-generate/12937
tensor([[ 2337, 2112, 387, 3458, 385, 378, 1559, 379, 1823, 1674, 427, 547, 2158, 803, 3328, 26186, 409, 1751, 4194, 971, 395, 1591, 418, 1670, 4427, 2518, 107, 978, 461, 758, 463, 418, 549, 402, 1959, 393, 499, 409, 17263, 792]], ...
I found a solution. I was able to do it with token[0][-1] (indexing). I guess I should study tensors.
1
huggingface
Beginners
Question answering bot: fine-tuning with custom dataset
https://discuss.huggingface.co/t/question-answering-bot-fine-tuning-with-custom-dataset/4412
Hello everybody I would like to fine-tune a custom QAbot that will work on italian texts (I was thinking about using the model ‘dbmdz/bert-base-italian-cased’) in a very specific field (medical reports). I already followed this guide 7 and fine-tuned an english model by using the default train and dev file. The problem...
Hi @Neuroinformatica, from the datasets docs 3 it seems that the ideal format is line-separated JSON, so what I usually do is convert the SQuAD format as follows: import json from datasets import load_dataset input_filename = "dev-v2.0.json" output_filename = "dev-v2.0.jsonl" with open(input_filename) as f: datas...
0
huggingface
Beginners
Does Trainer prefetch data?
https://discuss.huggingface.co/t/does-trainer-prefetch-data/12777
Hi everyone, I’m pretty new to this. I’m trying to train a transformer model on a GPU using transformers.Trainer. I’m doing my prototyping at home on a Windows 10 machine with a 4-core CPU with a 1060 gtx. I have my data, model, and trainer all set up, and my dataset is of type torch.utils.data.Dataset. Based on what I...
In case anyone else is wondering about this, I figured it out. Trainer indeed appears to prefetch data. The problem was that my data loader was too slow to keep up with the GPU. After optimizing my data loading routine, I’m able to keep the GPU busy constantly.
1
huggingface
Beginners
Problems and solution on Trainer
https://discuss.huggingface.co/t/problems-and-solution-on-trainer/11498
I am using the trainer to train an ASR model; the dataset and the output dimension are huge. This causes some problems during training. I struggled with it for many days, so I am posting my solution here; I hope it can help. compute_metrics out of memory issue during compute_metrics, it will save all the logits in an array, w...
Note that for 3, you can have it in your datasets.Dataset computed once and for all with the map method; you just have to store the results in a "lengths" column. It will then use the Dataset features and not try to access every element.
1
huggingface
Beginners
Can i export a VisionEncoderDecoder checkpoint to onnx
https://discuss.huggingface.co/t/can-i-export-a-visionencoderdecoder-checkpoint-to-onnx/12885
Can I export any hugging face checkpoint to onnx? If not, how do I go about it @nielsr I want to export trocr checkpoint to onnx, is it possible. I tried doing the same with a fine-tuned checkpoint of mine, it gives a KEY ERROR KeyError: <class 'transformers.models.vision_encoder_decoder.configuration_vision_encoder_de...
Pinging @lewtun here, to let him know people are also interested in exporting EncoderDecoder model classes to ONNX.
0
huggingface
Beginners
How do I fine-tune roberta-large for text classification
https://discuss.huggingface.co/t/how-do-i-fine-tune-roberta-large-for-text-classification/12845
Hi there, I have been doing the HF course and decided to apply what I have learned but I have unfortunately encountered some errors at the model.fit() stage. I extracted BBC text data as an excel file from kaggle and converted it to a DatasetDict as below: [image attachment] Loaded the tokenizer and tokenized the ...
Hi @nickmuchi, the key is in the warning that pops up when you compile()! When you compile without specifying a loss, the model will compute loss internally. For this to work, though, the labels need to be in your input dict. We talk about this in the Debugging your Training Pipeline 1 section of the course. There are ...
1