Dataset columns:
docs: string (4 classes)
category: string (length 3–31)
thread: string (length 7–255)
href: string (length 42–278)
question: string (length 0–30.3k)
context: string (length 0–24.9k)
marked: int64 (0 or 1)
huggingface
Beginners
Pre-Train LayoutLM
https://discuss.huggingface.co/t/pre-train-layoutlm/1350
Hello, We are using the pretrained LayoutLM model, which works well, but only for English. We have many forms and invoices in different languages. How can I pre-train a LayoutLM model on my own corpus? Thank you.
Hi sharathmk99, LayoutLM model is not currently available in the huggingface transformers library. If you want to add it that should be possible (though not simple). Alternatively, you could put in a suggestion and hope that someone else will incorporate it. If you decide instead to pre-train a LayoutLM model usi...
0
huggingface
Beginners
ONNX Conversion - transformers.onnx vs convert_graph_to_onnx.py
https://discuss.huggingface.co/t/onnx-conversion-transformers-onnx-vs-convert-graph-to-onnx-py/10278
Hey Folks, I am attempting to convert a RobertaForSequenceClassification pytorch model (fine tuned for classification from distilroberta-base) to ONNX using transformers 4.9.2. When using the transformers.onnx package, the classifier seems to be lost: Some weights of the model checkpoint at {} were not used when initia...
Hi @meggers. I’m like you: I know how to use the old method (transformers/convert_graph_to_onnx.py) but not the new one (transformers.onnx) to get the quantized onnx version of a Hugging Face task model (for example: a Question-Answering model). In order to illustrate it, I did publish this notebook in Colab: ONNX Runt...
0
huggingface
Beginners
Visualize Loss without tensorboard
https://discuss.huggingface.co/t/visualize-loss-without-tensorboard/9523
Hello, is there any way to visualize the loss functions of a Trainer model without tensorboard? I am using Jupyter Lab, pytorch and tensorboard refuses to work. Cheers
Met the same problem. I’ve set the eval_step and eval_strategy, but there’s no log in the logging_dir at all.
0
huggingface
Beginners
When finetuning Bert on classification task raised TypeError(f’Object of type {o.__class__.__name__} ’ TypeError: Object of type ndarray is not JSON serializable
https://discuss.huggingface.co/t/when-finetuning-bert-on-classification-task-raised-typeerror-fobject-of-type-o-class-name-typeerror-object-of-type-ndarray-is-not-json-serializable/11370
Hello, I am trying to finetune bert on classification task but I am getting this error during the training. e[ASaving model checkpoint to /gpfswork/rech/kpf/umg16uw/results_hf/checkpoint-500 Configuration saved in /gpfswork/rech/kpf/umg16uw/results_hf/checkpoint-500/config.json Model weights saved in /gpfswork/rech/kpf...
It looks like your compute_metrics function is returning NumPy arrays, which is not supported.
0
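The reply above points at the usual culprit: the Trainer tries to JSON-serialize the metrics dict, and NumPy scalars/arrays are not serializable. A minimal sketch of a conversion helper you could apply inside compute_metrics before returning (the helper name and sample values are illustrative, not from the thread):

```python
import numpy as np

def to_serializable(metrics: dict) -> dict:
    """Convert NumPy scalars/arrays in a metrics dict to plain Python types."""
    out = {}
    for name, value in metrics.items():
        if isinstance(value, np.ndarray):
            out[name] = value.tolist()      # array -> list of Python numbers
        elif isinstance(value, np.generic):
            out[name] = value.item()        # NumPy scalar -> Python float/int
        else:
            out[name] = value
    return out

metrics = {"accuracy": np.float64(0.91), "per_class_f1": np.array([0.9, 0.8])}
serializable = to_serializable(metrics)
```

Returning `serializable` instead of the raw NumPy values keeps the Trainer's JSON logging happy.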
huggingface
Beginners
Wav2vec: how to run decoding with a language model?
https://discuss.huggingface.co/t/wav2vec-how-to-run-decoding-with-a-language-model/6055
Hello. I am finetuning wav2vec “wav2vec2-large-lv60” using my own dataset. I followed Patrick’s tutorial (Fine-Tune Wav2Vec2 for English ASR in Hugging Face with 🤗 Transformers 40) and successfully finished the finetuning (thanks for a very nice tutorial). Now, I would like to run decoding with a language model and hav...
Oh, I found the following previous discussion from the forum. Sorry for missing this one. Language model for wav2vec2.0 decoding Models Hello, I implemented wav2vec2.0 code and a language model is not used for decoding. How can I add a language model (let’s say a language model which is trai...
0
huggingface
Beginners
How to tokenize input if I plan to train a Machine Translation model. I’m having difficulties with text_pair argument of Tokenizer()
https://discuss.huggingface.co/t/how-to-tokenize-input-if-i-plan-to-train-a-machine-translation-model-im-having-difficulties-with-text-pair-argument-of-tokenizer/11333
Hi! If I want to use an already trained Machine Translation model for inference, I do something along these lines: from transformers import MarianMTModel, MarianTokenizer tokenizer = MarianTokenizer.from_pretrained(“Helsinki-NLP/opus-mt-en-de”) model=MarianMTModel.from_pretrained(“Helsinki-NLP/opus-mt-en-de”) sentence...
Hi, To fine-tune a MarianMT (or any other seq2seq model in the library), you don’t need to feed the source and target sentences at once to the tokenizer. Instead, they should be tokenized separately: from transformers import MarianTokenizer tokenizer = MarianTokenizer.from_pretrained(“Helsinki-NLP/opus-mt-en-de”) in...
0
huggingface
Beginners
Loading custom audio dataset and fine-tuning model
https://discuss.huggingface.co/t/loading-custom-audio-dataset-and-fine-tuning-model/8836
Hi all. I’m very new to HuggingFace and I have a question that I hope someone can help with. I was suggested the XLSR-53 (Wav2Vec) model for my use-case which is a speech to text model. However, the languages I require aren’t supported so I was told I need to fine-tune the model per my requirements. I’ve seen several d...
@patrickvonplaten I am also trying it out for a similar use case but couldn't find any example script so far for audio datasets other than CommonVoice. I have several datasets with me which aren't available on huggingface datasets but because almost all the scripts rely so much on the usage of huggingface datasets its h...
0
huggingface
Beginners
Is BERT document embedding model?
https://discuss.huggingface.co/t/is-bert-document-embedding-model/11205
Are BERT and its derivatives(like DistilBert, RoBertA,…) document embedding methods like Doc2Vec?
Do you mean they will map the words to vectors? Yes, they do, but it’s different from some methods like word2vec; I am not sure about Doc2Vec, though. For example, in word2vec, we give each word only one vector, and that’s it. This is not ideal since some words have different meanings in different contexts; for example...
0
huggingface
Beginners
Use custom loss function for training ML task
https://discuss.huggingface.co/t/use-custom-loss-function-for-training-ml-task/11351
Hello. I’d like to train BERT from scratch on my custom corpus for the Masked Language Modeling task. But the corpus has one specific property: it is a sequence of numbers, and the absolute value of the difference of two words corresponds to their proximity. Therefore I guess I should use this difference (or something similar) as the loss funct...
You can compute the loss outside of your model since it returns the logits, and apply any function you like. If your question was related to the Trainer, you should define your subclass with a compute_loss method. There is an example in the documentation 53 (scroll a bit down).
0
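As a rough illustration of the first suggestion (computing a custom loss from the logits, outside the model), here is a torch-only sketch of a proximity-style loss where a prediction's cost is its numeric distance to the target token id. The function name and the soft-prediction trick are assumptions for illustration, not code from the thread; the same computation could equally live in a Trainer subclass's compute_loss:

```python
import torch
import torch.nn.functional as F

def proximity_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """L1-style loss: cost is the numeric distance between the (soft) predicted
    token id and the target id, for a vocabulary where |id_a - id_b| encodes
    proximity. Positions labeled -100 (padding/unmasked) are ignored."""
    vocab_size = logits.size(-1)
    probs = F.softmax(logits, dim=-1)                         # (batch, seq, vocab)
    token_values = torch.arange(vocab_size, dtype=probs.dtype)
    expected = probs @ token_values                           # soft predicted id per position
    mask = labels != -100
    return (expected[mask] - labels[mask].to(probs.dtype)).abs().mean()

logits = torch.randn(2, 4, 10)
labels = torch.tensor([[1, 5, -100, 3], [0, -100, 9, 2]])
loss = proximity_loss(logits, labels)
```

The softmax-weighted "expected id" keeps the loss differentiable; a hard argmax would not be.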
huggingface
Beginners
Model.generate() – IndexError: too many indices for tensor of dimension 2
https://discuss.huggingface.co/t/model-generate-indexerror-too-many-indices-for-tensor-of-dimension-2/11316
I’ve tried merging most of the code blocks below; but to sum up: DistilGPT2 with extra tokens. Google Colab from transformers import AutoModelForSequenceClassification from transformers import AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=2,) tokenizer = Auto...
Could it be the vocabulary embedding dimensions not carrying over to text generation?
0
huggingface
Beginners
Fine-tuning XLM-RoBERTa for binary sentiment classification
https://discuss.huggingface.co/t/fine-tuning-xlm-roberta-for-binary-sentiment-classification/11337
I’m trying to fine-tune xlm-roberta-base model for binary sentiment classification problem on review data. I’ve implemented the code as follows: Split data into train, validation set. from sklearn.model_selection import train_test_split train_texts, val_texts, train_labels, val_labels = train_test_split(sample['text']...
Can you verify that you prepared the data correctly for the model? What I typically do is check random elements of the dataset, i.e. encoding = train_dataset[0] then verify some things like: for k,v in encoding.items(): print(k, v.shape) I also decode the input_ids of an example back to text to see whether it’s ...
0
huggingface
Beginners
How to finetune Bert on aspect based sentiment analysis?
https://discuss.huggingface.co/t/how-to-finetune-bert-on-aspect-based-sentiment-analysis/11350
Hello, I checked the huggingface library and I only saw notebooks for finetuning on text classification. I am looking for a tutorial on finetuning for aspect-based sentiment analysis. Does it exist for Bert?
Can you clarify what you mean by aspect based sentiment analysis?
0
huggingface
Beginners
Unsupported value type BatchEncoding returned by IteratorSpec._serialize
https://discuss.huggingface.co/t/unsupported-value-type-batchencoding-returned-by-iteratorspec-serialize/7535
Hi all! I’m having a go at fine tuning BERT for a regression problem (given a passage of text, predict its readability score) as a part of a Kaggle competition 1. To do so I’m doing the following: 1. Loading BERT tokenizer and applying it to the dataset from transformers import BertTokenizer, TFBertModel tokenizer = B...
Eventually got this working - appears that the error was in step 2, where I was combining the target column with the tokenized labels before creating the dataset. I also needed to turn the tokenized labels into a dict. # This version works train_dataset = tf.data.Dataset.from_tensor_slices(( dict(tokens_w_labels), ...
0
huggingface
Beginners
How to deal with of new vocabulary?
https://discuss.huggingface.co/t/how-to-deal-with-of-new-vocabulary/11295
Hi, the project that I am working on has a lot of domain-specific vocabulary. Could you please suggest techniques for tuning BERT on domain data? I do have over 1 million unlabeled sentences. Hoping that should be enough to pre-train the language model. My end goal is to train a multi-class classification model. But, ...
I’d also be very interested to see if/how this could be done for BART’s encoder since this might be a solution to this problem 9
0
huggingface
Beginners
How to ensure fast inference on both CPU and GPU with BertForSequenceClassification?
https://discuss.huggingface.co/t/how-to-ensure-fast-inference-on-both-cpu-and-gpu-with-bertforsequenceclassification/1694
Hi! I’d like to perform fast inference using BertForSequenceClassification on both CPUs and GPUs. For the purpose, I thought that torch DataLoaders could be useful, and indeed on GPU they are. Given a set of sentences sents I encode them and employ a DataLoader as in encoded_data_val = tokenizer.batch_encode_plus(sents...
Using ONNX Runtime can run about 2x faster on CPU. You can check my repo: https://github.com/BinhMinhs10/transformers_onnx 50 or the Microsoft repo: https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/notebooks/PyTorch_Bert-Squad_OnnxRuntime_CPU.ipynb 30. And I note that the notebook huggingf...
0
huggingface
Beginners
True/False or Yes/No Question-answering?
https://discuss.huggingface.co/t/true-false-or-yes-no-question-answering/11271
How can I perform a Question-Answering system that returns Yes or No to my questions? For example, I give the context “Machine Learning is lorem ipsum etc etc”, and I ask a question “Does it talks about Machine Learning?” and returns to me “Yes”. Is it possible to do this? If so, what path I need to follow to perform t...
This kind of problem is a task in the SuperGLUE 1 benchmark. Generally, the approach is to fine-tune a BERT-like model with question/context pairs with a SEP token (or equivalent) separating them. Accordingly, labels correspond to yes/no answers. You can use BERT for sequence classification 3 model to do so. Do note th...
1
huggingface
Beginners
Adding examples in GPT-J
https://discuss.huggingface.co/t/adding-examples-in-gpt-j/11183
In GPT-3, we can add examples before generating the output from the prompt. How do we add examples in GPT-J?
You might take a look at my demo notebook here 5, which illustrates how to use GPT-J for inference, including adding examples to the prompt.
1
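The pattern the reply describes is ordinary few-shot prompting: the "examples" are simply prepended to the prompt as text before calling generate, which works the same way for GPT-J as for GPT-3. A minimal, model-agnostic sketch of building such a prompt (the review/sentiment format is an invented example):

```python
# Few-shot prompting: prepend worked examples, then let the model continue.
examples = [
    ("I loved this film!", "positive"),
    ("Utterly boring.", "negative"),
]
query = "The acting was superb."

prompt = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
prompt += f"\nReview: {query}\nSentiment:"
# `prompt` is then tokenized and passed to model.generate(...)
```

The model completes the text after the final "Sentiment:", imitating the labeled examples above it.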
huggingface
Beginners
Using HuggingFace on no-free-internet-access server
https://discuss.huggingface.co/t/using-huggingface-on-no-free-internet-access-server/11202
How can I use pre-trained models from HuggingFace when I run a model on a server with no free internet access? The server cannot download the pre-trained model parameters. Is there any way to download whatever is needed for a model like BERT to my local machine and then transfer it to my server?
When loading a model from pretrained, you can pass the model’s name to be obtained from Hugging Face servers, or you can pass the path of the pretrained model. On a computer with internet access, load a pretrained model by passing the name of the model to be downloaded, then save it and move it to the computer without ...
1
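The workflow in the reply can be sketched as two small helpers, assuming the transformers Auto classes; the function names and directory layout are placeholders, and the saved folder is transferred to the offline server by whatever means are available (scp, USB drive, etc.):

```python
def download_for_offline(model_name: str, out_dir: str) -> None:
    """Run on a machine WITH internet: fetch the model and save it to a local folder."""
    from transformers import AutoModel, AutoTokenizer
    AutoTokenizer.from_pretrained(model_name).save_pretrained(out_dir)
    AutoModel.from_pretrained(model_name).save_pretrained(out_dir)

def load_offline(local_dir: str):
    """Run on the offline server: load from the copied directory, no network needed."""
    from transformers import AutoModel, AutoTokenizer
    return AutoModel.from_pretrained(local_dir), AutoTokenizer.from_pretrained(local_dir)
```

Passing a filesystem path instead of a Hub name is what tells from_pretrained to skip the download.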
huggingface
Beginners
I get a “You have to specify either input_ids or inputs_embeds” error, but I do specify the input ids
https://discuss.huggingface.co/t/i-get-a-you-have-to-specify-either-input-ids-or-inputs-embeds-error-but-i-do-specify-the-input-ids/6535
I trained a BERT based encoder decoder model: ed_model I tokenized the input with: txt = "I love huggingface" inputs = input_tokenizer(txt, return_tensors="pt").to(device) print(inputs) The output clearly shows that a input_ids is the return dict {'input_ids': tensor([[ 101, 5660, 7975, 2127, 2053, 2936, 5061, 102]]...
Does this help: ValueError: You have to specify either input_ids or inputs_embeds! · Issue #3626 · huggingface/transformers · GitHub 422
0
huggingface
Beginners
Which model for inference on 11 GB GPU?
https://discuss.huggingface.co/t/which-model-for-inference-on-11-gb-gpu/11147
Hello everybody I’ve just found the amazing Huggingface library. It is an awesome piece of work. I would like to train a chatbot on some existing dataset or several datasets (e.g. the Pile). For training (or fine-tuning) the model I have no GPU memory limitations (48 GB GPU is available). For inference, I only have a G...
Does anybody have some input? Any input is highly appreciated.
0
huggingface
Beginners
How give weight to some specific tokens in BERT?
https://discuss.huggingface.co/t/how-give-weight-to-some-specific-tokens-in-bert/11110
I am going to do Opinion Mining on twitter posts. According to the hashtags, users who are against the topic mostly use some specific hashtags and also users who are with that topic use other hashtags. Can we give more importance to these hashtags (weight up)? First, is that a good idea and second is that possible to d...
Hi Mahdi, My guess is that with enough training data, the transformer model, and in particular its attention heads, will learn to recognise what they should be paying most attention to, i.e. which parts of the text are more important for the classification, and which parts are not that relevant, so this will happen imp...
1
huggingface
Beginners
Fine-tuning T5 with custom datasets
https://discuss.huggingface.co/t/fine-tuning-t5-with-custom-datasets/8858
Hi folks, I am a newbie to T5 and transformers in general so apologies in advance for any stupidity or incorrect assumptions on my part! I am trying to put together an example of fine-tuning the T5 model to use a custom dataset for a custom task. I have the “How to fine-tune a model on summarization 72” example noteboo...
I think I may have found a way around this issue (or at least the trainer starts and completes!). The subclassing of a torch.utils.data.Dataset object for the distilbert example in “Fine-tuning with custom datasets 7” needs changing as follows. I guess because the distilbert model provides just a list of integers where...
0
huggingface
Beginners
Seq2Seq Loss computation in Trainer
https://discuss.huggingface.co/t/seq2seq-loss-computation-in-trainer/10988
Hello, I’m using the EncoderDecoderModel to do the summarization task. I have questions on the loss computation in Trainer class. For text summarization task, as far as I know, the encoder input is the content, the decoder input and the label is the summary. The EncoderDecoderModel utilizes CausalLMModel as the Decoder...
Note that the loss is only popped if you use label smoothing. The default behavior is indeed that the loss is calculated within the forward.
0
huggingface
Beginners
Source code for model definition
https://discuss.huggingface.co/t/source-code-for-model-definition/11108
Hi, I am new to Huggingface transformers. Could you please point me to the source code containing the definition of BERT model (uncased) Thanks
Yes, it can be found here: transformers/modeling_bert.py at master · huggingface/transformers · GitHub 4
0
huggingface
Beginners
How to calculate perplexity properly
https://discuss.huggingface.co/t/how-to-calculate-perplexity-properly/11121
Hey guys, I’m trying to evaluate my model through its perplexity on my test set and started to read this guide: Perplexity of fixed-length models — transformers 4.11.3 documentation 1 However, I don’t understand why joining our texts like this would not damage my model’s predictions: from datasets import load_dataset t...
It allows the model to generalize across sentence or document boundaries, which is typically what you want in generative models. This is not a requirement, by the way, but combined with a strided window it is quite powerful.
1
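The strided-window idea mentioned in the reply (and in the guide the question links) can be sketched without any model: split the long joined token sequence into overlapping windows and score only the tokens not already covered by the previous window, so each scored token keeps some left context. This is a simplified pure-Python sketch of the windowing logic, not the documentation's exact code (it assumes stride <= max_len):

```python
def strided_windows(token_ids, max_len=512, stride=256):
    """Return (window, n_scored) pairs. Windows overlap by max_len - stride
    tokens; only the last n_scored tokens of each window contribute to the
    loss, so every token is scored exactly once, with left context."""
    windows = []
    prev_end = 0
    for begin in range(0, len(token_ids), stride):
        end = min(begin + max_len, len(token_ids))
        n_scored = end - prev_end          # tokens not scored by earlier windows
        windows.append((token_ids[begin:end], n_scored))
        prev_end = end
        if end == len(token_ids):
            break
    return windows

ids = list(range(1000))
wins = strided_windows(ids, max_len=512, stride=256)
```

In the real evaluation, the "context-only" tokens of each window get label -100 so they are masked out of the loss.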
huggingface
Beginners
NER model fine tuning with labeled spans
https://discuss.huggingface.co/t/ner-model-fine-tuning-with-labeled-spans/10633
Hi! I’m looking to fine-tune an NER model (dslim/bert-base-NER-uncased) with my own data. My annotations are of this form: for each example I have a piece of raw text (str) and a list of annotated spans of this form: {start_index: int, end_index: int, tag: str} However, to fine-tune the NER model, I need to prepare X (...
Hi folks! Does this make sense?
0
huggingface
Beginners
EncoderDecoderModel converts classifier layer of decoder
https://discuss.huggingface.co/t/encoderdecodermodel-converts-classifier-layer-of-decoder/11072
I am trying to do named entity recognition using a Sequence-to-Sequence-model. My output is simple IOB-tags, and thus I only want to predict probabilities for 3 labels for each token (IOB). I am trying a EncoderDecoderModel using the HuggingFace-implementation with a DistilBert as my encoder, and a BertForTokenClassifi...
The EncoderDecoderModel class is not meant to do token classification. It is meant to do text generation (like summarization, translation). Hence, the head on top of the decoder will be a language modeling head. To do token classification, you can use any xxxForTokenClassification model in the library, such as BertForT...
0
huggingface
Beginners
Does GPT-J support api access?
https://discuss.huggingface.co/t/does-gpt-j-support-api-access/11086
Hi, i’m trying to use the gpt-j model to test through the api but i’m getting this error >> The model EleutherAI/gpt-j-6B is too large to be loaded automatically. can you please tell me how to resolve this? should i purchase the paid package or am i doing something wrong? i used this code, i took it from the official w...
Can you try again? I just tested the inference widget 3, and it works for me.
1
huggingface
Beginners
Generating actual answers from QA models
https://discuss.huggingface.co/t/generating-actual-answers-from-qa-models/11048
Hi everybody, I’m planning or training a BARTQA model, but before training I would like to test how to actually generate an answer at inference time. I’ve looked through the documentation, but I couldn’t find an obvious answer. Can I do it using the BARTForQuestionAnswering model or do I have to use the BARTForConditio...
Hi, Solving question-answering using Transformers is usually done in one of 2 ways: either extractive, where the model predicts start_scores and end_scores. In other words, the model predicts which token it believes is at the start of the answer, and which token is at the end of the answer. This was introduced in the ...
0
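For the extractive variant described in the reply, turning the predicted start/end scores into an actual answer span is a small post-processing step. A naive sketch (exhaustive search over valid pairs; real QA pipelines use a top-k version of the same idea):

```python
import torch

def decode_answer_span(start_logits, end_logits, max_answer_len=30):
    """Pick the (start, end) token pair with the highest combined score,
    subject to start <= end and a maximum answer length."""
    best, best_score = (0, 0), float("-inf")
    seq_len = start_logits.size(0)
    for s in range(seq_len):
        for e in range(s, min(s + max_answer_len, seq_len)):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

start = torch.tensor([0.1, 2.0, 0.3, 0.0])
end = torch.tensor([0.0, 0.1, 3.0, 0.2])
span = decode_answer_span(start, end)   # → (1, 2)
```

The returned indices are then mapped back to text by decoding the corresponding input tokens.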
huggingface
Beginners
Refresh of API Key
https://discuss.huggingface.co/t/refresh-of-api-key/10651
Hello, I was wondering if I could get a refresh on my API key? CCing @pierric @julien-c Thank you!
Same here. I would also need my API key resetting. Thanks.
0
huggingface
Beginners
DataCollatorWithPaddings without Tokenizer
https://discuss.huggingface.co/t/datacollatorwithpaddings-without-tokenizer/11068
I want to fine-tune a model… model = BertForTokenClassification.from_pretrained('monilouise/ner_pt_br') with this dataset: raw_datasets = load_dataset('lener_br') The raw_datasets loaded are already tokenized and encoded, and I don’t know how they were tokenized. Now, I want to pad the inputs, but I don’t know how to use D...
You can use the base BERT tokenizer I would say (since it’s a BERT model). Just make sure the pad token is compatible with what the model expects.
1
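If you would rather not rely on a tokenizer object at all, the padding that DataCollatorWithPadding performs can be sketched by hand; the only model-specific assumption is the pad token id (0 for BERT-family vocabularies, which, as the reply notes, must match what the model expects), and -100 as the ignored label id:

```python
import torch

def pad_collate(batch, pad_token_id=0, label_pad_id=-100):
    """Pad a batch of variable-length examples to the longest sequence in it."""
    max_len = max(len(ex["input_ids"]) for ex in batch)

    def pad(seq, value):
        return seq + [value] * (max_len - len(seq))

    return {
        "input_ids": torch.tensor([pad(ex["input_ids"], pad_token_id) for ex in batch]),
        "attention_mask": torch.tensor([pad([1] * len(ex["input_ids"]), 0) for ex in batch]),
        "labels": torch.tensor([pad(ex["labels"], label_pad_id) for ex in batch]),
    }

batch = [
    {"input_ids": [101, 7592, 102], "labels": [0, 1, 0]},
    {"input_ids": [101, 102], "labels": [0, 0]},
]
out = pad_collate(batch)
```

Such a function can be passed directly as collate_fn to a torch DataLoader.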
huggingface
Beginners
How to convert string labels into ClassLabel classes for custom set in pandas
https://discuss.huggingface.co/t/how-to-convert-string-labels-into-classlabel-classes-for-custom-set-in-pandas/8473
I am trying to fine tune bert-base-uncased model, but after loading datasets from pandas dataframe I get the following error with the trainer.train(): ValueError: Target size (torch.Size([16])) must be the same as input size (torch.Size([16, 5])) I tried to understand the problem and I think it is related to the wrong ...
hi @Krzysztof, i think you can get what you want by using the features argument of Dataset.from_pandas: from datasets import Dataset, Value, ClassLabel, Features text = ["John", "snake", "car", "tree", "cloud", "clerk", "bike"] labels = [0,1,2,3,4,0,2] df = pd.DataFrame({"text": text, "label": labels})# define data se...
0
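If the pandas frame still holds string labels, one way to get integer class ids before applying the Features/ClassLabel approach from the reply is a plain pandas map; the label names below are invented for illustration:

```python
import pandas as pd

label_names = ["person", "animal", "vehicle", "plant", "weather"]
name_to_id = {name: i for i, name in enumerate(label_names)}   # string -> class id

df = pd.DataFrame({
    "text": ["John", "snake", "car", "tree", "cloud"],
    "label": ["person", "animal", "vehicle", "plant", "weather"],
})
df["label"] = df["label"].map(name_to_id)   # now an integer column
```

With integer labels in place, Dataset.from_pandas(df, features=...) with a ClassLabel(names=label_names) column should line up with the model's num_labels.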
huggingface
Beginners
Train GPT2 on wikitext from scratch
https://discuss.huggingface.co/t/train-gpt2-on-wikitext-from-scratch/5276
Hello everyone, I would like to train GPT2 on wikitext from scratch (not fine-tune a pre-trained model). I launched the following script in this 62 folder. python run_clm.py --model_type gpt2 --tokenizer_name gpt2 --block_size 256 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --over...
I can confirm the command is correct if you want to train from scratch. As for hyperparameters, you will need to tune them a bit, but the defaults should not be too bad.
0
huggingface
Beginners
Getting KeyError: ‘logits’ when trying to run deberta model
https://discuss.huggingface.co/t/getting-keyerror-logits-when-trying-to-run-deberta-model/11012
!pip install sentencepiece --upgrade !pip install transformers --upgrade from transformers import pipeline,AutoModel,AutoTokenizer model =AutoModel.from_pretrained("microsoft/deberta-v2-xxlarge-mnli") tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xxlarge") classifier = pipeline("zero-shot-classificat...
also, how do I fine-tune such a model for my purpose?
0
huggingface
Beginners
Bigbirdmodel: Problem with running code provided in documentation
https://discuss.huggingface.co/t/bigbirdmodel-problem-with-running-code-provided-in-documentation/5657
Hey folks, QQ: Has anyone tried running the provided code in Bigbird documentation and run into problems? I’m simply trying to embed some input using the pre-trained model 2 for initial exploration, and I’m running into an error: IndexError: index out of range in self Has anyone come across this error before or seen a ...
cc @vasudevgupta
0
huggingface
Beginners
CUDA out of memory for Longformer
https://discuss.huggingface.co/t/cuda-out-of-memory-for-longformer/1472
I have issue training the longformer on custom dataset, even on a small batch number, it says CUDA out of memory, RuntimeError Traceback (most recent call last) in () ----> 1 trainer.train() 18 frames /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in _pad(input, pad, mode, v...
Did you try smaller batch sizes? How much memory does a single batch take up in your RAM?
0
huggingface
Beginners
How to specify which metric to use for earlystopping?
https://discuss.huggingface.co/t/how-to-specify-which-metric-to-use-for-earlystopping/10990
For example, I define a custom compute_metrics which returns a dict including rouge and bleu. How to tell early stopping callback to stop training based on bleu? Thank you!
You can specify it in the field metric_for_best_model (it should be the name of the key in the dictionary returned by your compute_metrics).
0
huggingface
Beginners
How to use specified GPUs with Accelerator to train the model?
https://discuss.huggingface.co/t/how-to-use-specified-gpus-with-accelerator-to-train-the-model/10967
I’m training my own prompt-tuning model using the transformers package. I’m following the training framework in the official example to train the model. My training environment is a one-machine-multiple-gpu setup. My current machine has 8 gpu cards and I only want to use some of them. However, the Accelerator fails to w...
No, it needs to be done before the launching command: CUDA_VISIBLE_DEVICES="3,4,5,6" accelerate launch training_script.py
1
huggingface
Beginners
Why the HF tokenizer time is bigger when launched just once?
https://discuss.huggingface.co/t/why-the-hf-tokenizer-time-is-bigger-when-launched-just-once/10948
Hello, I guess this is more a python for loop issue and/or a Colab one but as I tested it with a HF tokenizer, I’m sending this question to this forum: Why the HF tokenizer time is bigger when launched just once? I published a colab notebook 2 to explain this issue that is showed in the following graph: If I launch ju...
I think it is just the first run of the tokenization that takes longer, and subsequent calls are faster. I noticed that this only happens with the fast tokenizer; I think this is due to how the fast tokenizer is designed, although I don’t know the details beneath it, maybe a delayed operation?
0
huggingface
Beginners
Spaces not running latest version of Streamlit
https://discuss.huggingface.co/t/spaces-not-running-latest-version-of-streamlit/10790
Hello, I am not sure if this is the right place to drop but I noticed that some of my implementations that use some of the functionalities included in the more recent versions of Streamlit were displaying errors when deployed on Spaces, even though they work fine in my local environment. My efforts: I created a require...
cc @julien-c
0
huggingface
Beginners
Extract hidden layers from a Roberta model in sagemaker
https://discuss.huggingface.co/t/extract-hidden-layers-from-a-roberta-model-in-sagemaker/10748
Hello, I have fine tuned a Camembert Model (inherits from Roberta) on a custom dataset using sagemaker. My goal is to have a language model able to extract embedding to be used in my search engine. Camembert is trained for a “fill-mask” task. Using the Huggingface API outputting hidden_layers (thus computing embedding)...
For those having the same issue I found a solution. Train the model on masked ML and at inference time use the pipeline ‘feature_extraction’ by setting the HF_TASK environment variable. hub = { 'HF_TASK': 'feature_extraction' } huggingface_model = HuggingFaceModel( env=hub, model_data="s3://bucket/model.tar.g...
0
huggingface
Beginners
Load CLIP pretrained model on GPU
https://discuss.huggingface.co/t/load-clip-pretrained-model-on-gpu/10940
I’m using CLIP for finding similarities between text and images, but I realized the pretrained models load on CPU, and I want to load them on GPU since on CPU it is not fast. How can I load them on GPU? model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") processor = CLIPProcessor.from_pretrained("open...
Here’s how you can put a model on GPU (same for any PyTorch model): import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") model.to(device)
0
huggingface
Beginners
Metrics in Comet.ml from Transformers
https://discuss.huggingface.co/t/metrics-in-comet-ml-from-transformers/10777
Hello, I am using Comet.ml to log my transformer training. For the purposes of logging the confusion matrix, I am not able to use CometCallback because it becomes practically impossible to then calculate these matrices. Due to this, I am getting a very different interpretation of steps. While my output shows, total optimizatio...
I think we have these questions answered over here: Comet and Transformers · Issue #434 · comet-ml/issue-tracking · GitHub 13
0
huggingface
Beginners
GPU OOM when training
https://discuss.huggingface.co/t/gpu-oom-when-training/10945
I’m running the language modeling script provided here. I’m training a Roberta-base model and I have an RTX 3090 with 24 GB. When training, it runs well until 9k steps, then an OOM error is thrown. The memory usage at the start of training is 12 GB; it runs a few steps and keeps growing until the OOM error. It seems to be ...
My guess would be that you have a specific sample in your dataset that is very long. Your collate function (not shown) might then be padding up to that length. That means that, for instance, your first <9k steps are of size 128x64 (seq_len x batch_size), which does not lead to an OOM. But then, around 9k steps you have...
1
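A quick way to test the long-sample hypothesis from the reply is to scan the tokenized dataset for length outliers before training; tokenized_inputs below is a toy stand-in for your dataset's input_ids column:

```python
# Toy stand-in for the tokenized dataset: lists of token ids per example.
tokenized_inputs = [[1] * 120, [1] * 128, [1] * 4096]

# Sort lengths descending; a lone outlier here can explain an OOM that only
# appears once that sample's batch is reached, thousands of steps in.
lengths = sorted((len(ids) for ids in tokenized_inputs), reverse=True)
print(lengths[:5])
```

If an outlier shows up, setting truncation=True with a max_length at tokenization time bounds the padded batch size.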
huggingface
Beginners
Cuda out of memory while using Trainer API
https://discuss.huggingface.co/t/cuda-out-of-memory-while-using-trainer-api/7138
Hi I am trying to test the trainer API of huggingface through this small code snippet on a toy small data. Unfortunately I am getting cuda memory error although I have 32gb cuda memory which seems sufficient for this small data. Any help will be greatly appreciated from datasets import load_dataset,load_metric import ...
Could this be related to this issue 59?
0
huggingface
Beginners
Sst2 dataset labels look worng
https://discuss.huggingface.co/t/sst2-dataset-labels-look-worng/10895
Hello all, I feel like this is a stupid question but I can't figure it out. I was looking at the GLUE SST2 dataset through the huggingface datasets viewer and all the labels for the test set are -1. They are 0 and 1 for the training and validation sets but all -1 for the test set. Shouldn’t the test labels match the t...
GLUE is a benchmark, so the true labels are hidden, and only known by its creators. One can submit a script to the official website, which is then run on the test set. In that way, one can create a leaderboard with the best performing algorithms.
1
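Since the true test labels are hidden, a common workaround when you only want local evaluation is to evaluate on the validation split, or to drop the unlabeled rows explicitly. A toy sketch of the filter (with the datasets library this would be dataset.filter(lambda ex: ex["label"] != -1)):

```python
# -1 marks the hidden labels of the GLUE test split; drop those rows
# before computing any local metric.
examples = [
    {"sentence": "great movie", "label": 1},
    {"sentence": "hidden test item", "label": -1},
    {"sentence": "terrible plot", "label": 0},
]
labeled = [ex for ex in examples if ex["label"] != -1]
```

For an official score on the real test labels, predictions must be submitted to the GLUE leaderboard instead.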
huggingface
Beginners
How to use Trainer with Vision Transformer
https://discuss.huggingface.co/t/how-to-use-trainer-with-vision-transformer/10852
What changes should be made for using Trainer with the Vision Transformer, are the keys expected by the trainer from dataset input_ids, attention_mask, and labels? class OCRDataset(torch.utils.data.Dataset): def __init__(self, texts, tokenizer, transforms = None): self.texts = texts self.tokenizer ...
Hi, I do have a demo notebook on using the Trainer for fine-tuning the Vision Transformer here: Transformers-Tutorials/Fine_tuning_the_Vision_Transformer_on_CIFAR_10_with_the_🤗_Trainer.ipynb at master · NielsRogge/Transformers-Tutorials · GitHub 8. ViT doesn’t expect input_ids and attention_mask as input, but pixel_va...
1
huggingface
Beginners
Wav2VecForPreTraining - Not able to run trainer.train()
https://discuss.huggingface.co/t/wav2vecforpretraining-not-able-to-run-trainer-train/10884
I am trying to use Wav2VecForPreTraining to train the model from scratch on own audio dataset. from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForPreTraining, TrainingArguments, Trainer feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("patrickvonplaten/wav2vec2-base") model = Wav2Vec2ForPreTrain...
You can’t use this model with the Trainer as it does not compute the loss. The Trainer API is only compatible with models that compute the loss when they are provided with labels.
0
huggingface
Beginners
How to install latest version of transformers?
https://discuss.huggingface.co/t/how-to-install-latest-version-of-transformers/10876
No matter what command I try, I always end up with version 2.1.1 of transformers, where models such as the GPT2 do not accept the input_embeds arguments in the forward pass, which I really need. How can I install the latest version? I have tried with conda, with pip, I have tried updating but so far I am stuck with 2.1...
Making a new virtual environment in which I re-installed everything solved my issue but I do not know why… Weird.
1
huggingface
Beginners
Binary classification of text files in two directories
https://discuss.huggingface.co/t/binary-classification-of-text-files-in-two-directories/10807
I am trying to do transfer learning on GPT-Neo to distinguish scam websites and normal websites from their content and I am completely confused as to what I should do next. I have already used some code to scrape websites content and parsed them using bs4. Now only the website text is stored in different directories us...
Yes, that’s already a good idea. You can indeed make 1 dataset with 2 columns, text and label. Next, there are several options: Either you create your dataset as a csv file, and then you turn it into a HuggingFace Dataset object, as follows: from datasets import load_dataset dataset = load_dataset('csv', data_files='...
0
huggingface
Beginners
Tokenizing two sentences with the tokenizer
https://discuss.huggingface.co/t/tokenizing-two-sentences-with-the-tokenizer/10858
Hello, everyone. I’m working on an NLI task (such as MNLI, RTE, etc.) where two sentences are given to predict if the first sentence entails the second one or not. I’d like to know how the huggingface tokenizer behaves when the length of the first sentence exceeds the maximum sequence length of the model. I’m using enc...
Hi, As explained in the docs 2, you can specify several possible strategies for the truncation parameter, including 'only_first'. Also, the encode_plus method is outdated actually. It is recommended to just call the tokenizer, both on single sentence or pair of sentences. TLDR: inputs = tokenizer(text_a, text_b, trunca...
1
huggingface
Beginners
How to use the test set in those beginner examples?
https://discuss.huggingface.co/t/how-to-use-the-test-set-in-those-beginner-examples/10824
Hi, maybe a stupid question but I can't find the answer either in the docs or on Google. In these examples notebooks/text_classification.ipynb at master · huggingface/notebooks · GitHub the datasets contain a test set, but the examples finish after showing how to train and evaluate (on the validation set, if I understand it correctly). B...
trainer.evaluate takes a dataset, so you can pass the test split if you want to evaluate on the test set. Or there is trainer.predict if you just want the predictions.
0
huggingface
Beginners
Fine-tuning MT5 on XNLI
https://discuss.huggingface.co/t/fine-tuning-mt5-on-xnli/7892
Hi there, I am trying to fine-tune a MT5-base model to test it over the Spanish portion of the XNLI dataset. My training dataset is the NLI dataset machine translated to Spanish by a MarianMT model, so the quality isn’t the best, but I have still managed to get good results while training it with other models such as x...
Hello, I had the same issue with mT5 (both small and base) on BoolQ dataset (~9.5k train samples) and found out something that may be useful to you. No matter what settings I used, how long I trained, and whether I oversampled the minority class on training set, all predictions on validation set were the same. Interest...
0
huggingface
Beginners
GPT-2 Perplexity Score Normalized on Sentence Lenght?
https://discuss.huggingface.co/t/gpt-2-perplexity-score-normalized-on-sentence-lenght/5205
I am using the following code to calculate the perplexity of sentences and I need to know whether the score is normalized on sentence length. If not, what do I need to change to normalize it? Thanks! import torch import sys import numpy as np from transformers import GPT2Tokenizer, GPT2LMHeadModel # Load pre-trained ...
Hey, did you find an answer to your question? What is the right way (if there is a need) to normalize the perplexity number based on sentence length? Should I divide by the number of tokens ? I have a reason to believe that they must already be doing it on the inside in the loss computation. Not sure though
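To make the normalization concrete, here is a minimal stdlib-only sketch (the NLL values are toy numbers, not real model outputs): length-normalized perplexity is the exponential of the *mean* per-token negative log-likelihood, so dividing by the token count is exactly what makes sentences of different lengths comparable.

```python
import math

def perplexity(token_nlls):
    """Length-normalized perplexity: exponentiate the mean
    negative log-likelihood, so sentence length cancels out."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Two toy "sentences" with the same average per-token NLL but
# different lengths get the same normalized perplexity.
short = [2.0, 4.0]              # mean NLL = 3.0
long_ = [2.0, 4.0, 3.0, 3.0]    # mean NLL = 3.0
print(perplexity(short))   # exp(3.0)
print(perplexity(long_))   # same value: length no longer matters
```

And indeed, the loss returned by the library's LM head models is already a mean cross-entropy over tokens, so exponentiating it directly yields a length-normalized perplexity, which supports the suspicion that it is "already doing it on the inside".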
0
huggingface
Beginners
Error in spaces/akhaliq/T0pp_11B
https://discuss.huggingface.co/t/error-in-spaces-akhaliq-t0pp-11b/10791
I want to try the model but there is an error!! Why??
The demo is calling https://huggingface.co/bigscience/T0pp_11B 1 which does not exist at the moment.
0
huggingface
Beginners
How can I reset the API token?
https://discuss.huggingface.co/t/how-can-i-reset-the-api-token/10763
Is there a way to reset the API token? I leaked mine when pushing code to an online repository.
Neil46: API token I leaked mine when pushing the code cc @pierric @julien-c
0
huggingface
Beginners
Dimension mismatch when training BART with Trainer
https://discuss.huggingface.co/t/dimension-mismatch-when-training-bart-with-trainer/6430
Hi all, I encountered a ValueError when training the facebook/bart-base Transformer with a sequence classification head. It seems that the dimensions of the predictions are different to e.g. the bart-base-uncased model for sequence classification. I am using transformers version 4.6.1. Here is an example script which y...
Hi, @DavidPfl, I am facing a similar issue. Were you able to fix this? Thanks!
0
huggingface
Beginners
Fine-tuning: Token Classification with W-NUT Emerging Entities
https://discuss.huggingface.co/t/fine-tuning-token-classification-with-w-nut-emerging-entities/9054
Issue I’d like to run the sample code for Token Classification with W-NUT Emerging Entities 2 on Google Colaboratory, but I cannot run it both CPU and GPU environment. How can I check default values of Trainer for each pre-trained model? Errors I didn’t set Target on my code. Where can I fix it and what number is appr...
Did you get any solution for this? I am also facing same issue
0
huggingface
Beginners
What is the purpose of this fine-tuning?
https://discuss.huggingface.co/t/what-is-the-purpose-of-this-fine-tuning/10729
Hi, I found 🤗 Transformers Notebooks — transformers 4.12.0.dev0 documentation and then Google Colab . The notebook will create examples which have the same text in the input and the labels. What is the purpose of such a model? Is it training some autoencoder task? I would think a more interesting challenge would be: G...
As mentioned in the notebooks, the task is causal language modeling at first, so predict the next word. They also explicitly say that: First note that we duplicate the inputs for our labels. This is because the model of the Transformers library apply the shifting to the right, so we don’t need to do it manually. Whi...
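A tiny pure-Python sketch of that internal shift (toy token ids, purely illustrative):

```python
# The model receives identical input_ids and labels; internally it
# shifts them so that the logits at position i are scored against the
# token at position i + 1 (predict-the-next-word).
input_ids = [101, 7592, 2088, 102]   # toy token ids
labels = list(input_ids)             # duplicated, as in the notebook

# What the loss actually compares after the internal right-shift:
shift_logits_positions = input_ids[:-1]  # predictions for steps 0..n-2
shift_labels = labels[1:]                # targets are the *next* tokens

for pos, target in zip(shift_logits_positions, shift_labels):
    print(f"logits at token {pos} are scored against next token {target}")
```

So duplicating the inputs as labels is not an autoencoder task: thanks to the shift, each position is still trained to predict the following token.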
0
huggingface
Beginners
BertForTokenClassification with IOB2 Tagging
https://discuss.huggingface.co/t/bertfortokenclassification-with-iob2-tagging/10603
Dear members of this forum, I am using BertForTokenClassification for named entity recognition. The labels are encoded using the beginning-inside-outside tagging format (IOB2 format to be precise). My overall setup works. However I am observing two things where I don’t know the proper solution to: In order to obtain ...
I think I found a solution to the first problem I described. The option aggregation_strategy in TokenClassificationPipeline 3 lists all the possible options to deal with inconsistencies.
0
huggingface
Beginners
Is there a way to know how many epoch or steps the model has trained with Trainer API?
https://discuss.huggingface.co/t/is-there-a-way-to-know-how-many-epoch-or-steps-the-model-has-trained-with-trainer-api/10702
For instance, I want to add a new loss after 5 epochs’ training.
All the relevant information is kept in the Trainer state 20
0
huggingface
Beginners
I am beginner, I need guide and help
https://discuss.huggingface.co/t/i-am-beginner-i-need-guide-and-help/10725
Hello, as you can see in the title, I am a beginner with transformers and Python. My task is abstractive summarization in the Arabic language using a transformer (fine-tuning a model). As a starting point, I was trying to apply the code in this link: Deep-Learning/Tune_T5_WikiHow-Github.ipynb at master · priya-dwivedi/Deep-Learning · G...
You can start by following our free course 6. It will teach you everything about Transformers, but note that it assumes some basic knowledge about deep learning.
0
huggingface
Beginners
Using ViTForClassification for regression?
https://discuss.huggingface.co/t/using-vitforclassification-for-regression/10716
Hi, I would like to use the ViT model for classification and adapt it to a regression task; is that feasible? Can the model work just by changing the loss function? How can I define the classes in the _info method of my custom dataset, since infinitely many are possible? What are all the other changes to make ...
If you set the num_labels of the config to 1, it will automatically use the MSE loss for regression, as can be seen here 7. So yes, it’s totally possible.
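As a hedged sketch (randomly initialized, deliberately scaled-down config so nothing is downloaded; the sizes are arbitrary example values, not recommended hyperparameters):

```python
import torch
from transformers import ViTConfig, ViTForImageClassification

# With num_labels=1 the classification head has a single output neuron,
# and the forward pass falls back to MSE loss when float labels are given.
config = ViTConfig(
    num_labels=1,
    image_size=32, patch_size=8,          # tiny toy dimensions
    hidden_size=64, num_hidden_layers=2,
    num_attention_heads=2, intermediate_size=128,
)
model = ViTForImageClassification(config)

pixel_values = torch.randn(2, 3, 32, 32)   # batch of 2 fake images
labels = torch.tensor([0.7, 1.3])          # continuous regression targets
outputs = model(pixel_values=pixel_values, labels=labels)
print(outputs.logits.shape)  # torch.Size([2, 1])
print(outputs.loss)          # MSE between logits and the float labels
```

For the _info question: since the targets are continuous, you would declare the label column as a float Value feature rather than a ClassLabel with named classes.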
0
huggingface
Beginners
Does HuBERT need text as well as audio for fine-tuning? / How to achieve sub-5% WER?
https://discuss.huggingface.co/t/does-hubert-need-text-as-well-as-audio-for-fine-tuning-how-to-achieve-sub-5-wer/6905
There’s a fine-tuning guide provided here that was for wav2vec2: facebook/hubert-xlarge-ll60k · Hugging Face 26 However, I’m interested in achieving the actual performance of wav2vec2 (of 3% WER not 18%). Because this wav2vec2 implementation does not use a language model it suffers at 18%. However, with HuBERT, if I un...
which parts did you change from the Wav2vec2 example to get hubert to work?
0
huggingface
Beginners
How do make sure I am using the transformer version/code from source?
https://discuss.huggingface.co/t/how-do-make-sure-i-am-using-the-transformer-version-code-from-source/9511
How do I know whether my jupyter notebook is using the transformer version/code that I cloned from github? My steps: I did fork the transformers repo in my github account then I did: git clone --recursive https://github.com/myaccount/transformers.git cd transformers/ conda create -n hf-dev-py380 python=3.8.0 conda acti...
Just to complement my question: the documentation says that the pip install -e . command links the folder the repository was cloned into to my Python library paths. So the Python package would be installed from the directory I cloned the repo into, which is …/transformers/. But I still don’t see the site-packages folder t...
0
huggingface
Beginners
ERROR: could not find a version that satisfies the requirement torch==1.9.1
https://discuss.huggingface.co/t/error-could-not-find-a-version-that-satisfies-the-requirement-torch-1-9-1/10670
Hi, I am having a problem when I type pip install -r requirements.txt: ERROR: could not find a version that satisfies the requirement torch==1.9.1+cu111 (from versions 0.1.2, 0.1.2post1, 0.1.2post2) ERROR: No matching distribution found for torch==1.9.1+cu111
I use Python 3.10 btw; torch 1.9.1 predates Python 3.10 support and ships no wheels for it, which is why pip finds no matching distribution.
0
huggingface
Beginners
Token classification
https://discuss.huggingface.co/t/token-classification/10680
huggingface.co Fine-tuning with custom datasets 1 This tutorial will take you through several examples of using 🤗 Transformers models with your own datasets. The guide shows one of many valid workflows for u... I followed the token classification tutorial at the link above and trained c...
You can check out this thread 7 I just wrote on how to convert predictions to actual labels for token classification models.
0
huggingface
Beginners
Evaluate question answering with squad dataset
https://discuss.huggingface.co/t/evaluate-question-answering-with-squad-dataset/10586
Hello everybody, I want to build a question answering system by fine-tuning BERT on SQuAD 1.1 or SQuAD 2.0. I would like to ask about evaluating the system: I know there are squad and squad_v2 metrics, but how can we use them when fine-tuning BERT with PyTorch? Thank you.
This example should hopefully answer your question. github.com huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py 5 #!/usr/bin/env python # coding=utf-8 # Copyright 2020 The HuggingFace Team All rights reserved. # # Licensed under the Apache License, Version 2...
0
huggingface
Beginners
Moving my own trained model to huggingface hub
https://discuss.huggingface.co/t/moving-my-own-trained-model-to-huggingface-hub/10622
Hi , I trained a BERT and Pytorch model with my data set on Google Colab, now I want to move the trained model to Hub so I can use it on my account like other pre-trained models. I do not know how to save and move the model I trained to huggingface. Thank you
Hi @staceythompson, We have a guide on how to upload models via usual Git approach here. If you want programmatic access, you can also use our huggingface_hub Python library. There’s documentation on how to upload models here.
0
huggingface
Beginners
How does the GPT-J inference API work?
https://discuss.huggingface.co/t/how-does-the-gpt-j-inference-api-work/10337
Hi All. I started a 7 day trial for the startup plan. I need to use GPT-J through HF inference API. I pinned it in on org account to work on GPU and, after sending a request, all I get back is a single generated word. The max token param is set to 100. Could you please let me know how should I make it generate more tha...
Hi, Normally it should become available. I’ll ask the team and get back to you.
0
huggingface
Beginners
“Initializing global attention on CLS token” on Longformer Training
https://discuss.huggingface.co/t/initializing-global-attention-on-cls-token-on-longformer-training/10601
I have this text classification task that follows the mnli run_glue.py task. The premise is a text that is on average 2k tokens long and the hypothesis is a text that is 200 tokens long. The labels remain the same (0 for entailment, 1 for neutral, 2 for contradiction). I set the train and eval batch size to 1 as anythi...
Oh thank goodness, the output started showing training iterations.
0
huggingface
Beginners
Hyperparameter tuning practical guide?
https://discuss.huggingface.co/t/hyperparameter-tuning-practical-guide/10297
Hi, I have been having problems doing hyperparameter tuning on Google Colab, where it's always the GPU that runs out of memory. Is there any practical advice you could give me for tuning BERT models? In terms of environment settings I need, for example the number of GPUs, so I don’t run out of memory. It is to be noted that when doing tuni...
If your GPU can only take 16 as batch_size then make sure that multiplication of batch_size and gradient_accumulation does not go beyond 16. You need to specify range for both these parameters such that any combination of elements from both ranges does not take the effective batch_size beyond 16.
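As a sketch of that constraint when building the search space (the limit of 16 and the candidate ranges are just example values from this thread):

```python
# Keep only (per_device_batch_size, gradient_accumulation_steps) pairs
# whose product stays within the largest batch the GPU can handle,
# so no sampled combination triggers an out-of-memory error.
max_effective = 16
batch_sizes = [2, 4, 8, 16]
accum_steps = [1, 2, 4, 8]

valid = [(b, a) for b in batch_sizes for a in accum_steps
         if b * a <= max_effective]
for b, a in valid:
    print(f"batch_size={b}, grad_accum={a} -> effective batch {b * a}")
```

You could feed a pruned list like this into your hyperparameter search instead of letting the tuner sample the two ranges independently.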
0
huggingface
Beginners
Questions when doing Transformer-XL Finetune with Trainer
https://discuss.huggingface.co/t/questions-when-doing-transformer-xl-finetune-with-trainer/5280
Hi everyone, Nice to see you here. I’m new to the Transformer-XL model. I’m following Fine-tuning with custom datasets 1 to finetune Transformer-XL with Trainer.(sequence classification task) First, I used exactly the same way as the instruction above except for: tokenizer = TransfoXLTokenizer.from_pretrained(‘transf...
Note that TransformerXL is the only model of the library that does not work with Trainer as the loss it returns is not reduced (it’s an array and not a scalar). You might get away with it by implementing your own subclass of Trainer and override the compute_loss function to convert that array to a scalar.
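A minimal sketch of such a subclass, assuming the unreduced loss array is the first element of the model output (the class name is hypothetical):

```python
from transformers import Trainer

class TransfoXLTrainer(Trainer):
    """Reduce TransformerXL's per-token loss array to the scalar
    that the Trainer API expects."""

    def compute_loss(self, model, inputs, return_outputs=False):
        outputs = model(**inputs)
        # outputs[0] is assumed to be an unreduced loss array;
        # average it into a single scalar for backprop.
        loss = outputs[0].mean()
        return (loss, outputs) if return_outputs else loss
```

You would then instantiate TransfoXLTrainer exactly like a normal Trainer.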
0
huggingface
Beginners
Batch[k] = torch.tensor([f[k] for f in features]) ValueError: expected sequence of length 3 at dim 1 (got 4)
https://discuss.huggingface.co/t/batch-k-torch-tensor-f-k-for-f-in-features-valueerror-expected-sequence-of-length-3-at-dim-1-got-4/1354
Hi there, I am trying to build a multiple-choice question solver and I am getting the following error. Any thoughts what could be the cause of this error? File "../src/run_multiple_choice.py", line 195, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None Fi...
Looks like my instances were not of the same size. Making them the same size fixes the problem.
0
huggingface
Beginners
Why aren’t all weights of BertForPreTraining initialized from the model checkpoint?
https://discuss.huggingface.co/t/why-arent-all-weights-of-bertforpretraining-initialized-from-the-model-checkpoint/10509
When I load a BertForPretraining with pretrained weights with model_pretrain = BertForPreTraining.from_pretrained('bert-base-uncased') I get the following warning: Some weights of BertForPreTraining were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['cls.predictions.decoder...
mgreenbe: BertForPreTraining The BertForPreTraining model is BERT with 2 heads on top (the ones used for pre-training BERT, namely next sentence prediction and masked language modeling). The bert-base-uncased checkpoint on the hub only includes the language modeling head (it’s actually suited to be loaded into a Ber...
0
huggingface
Beginners
Adding Blenderbot 2.0 to Huggingface
https://discuss.huggingface.co/t/adding-blenderbot-2-0-to-huggingface/10503
I noticed that Huggingface has the original Blenderbot model but not the current new version of it. I was wondering how we can possibly add it to Huggingface?
Contributing a model to HuggingFace Transformers involves first forking the original Github repository 7, in order to understand the model, do a basic forward pass, etc. Next, you can start implementing the model. This 3 and this guide 7 explain in detail how to do this. If you want to start working on this and you nee...
1
huggingface
Beginners
When is a generative model said to overfit?
https://discuss.huggingface.co/t/when-is-a-generative-model-said-to-overfit/10489
If I train a causal language model, should I be worried about overfitting? If so, what would that imply? That it cannot generalize well to unseen prompts? I am used to validating on downstream tasks and selecting the best checkpoint there where validation loss is not worse than training loss (overfitting), but I am not...
For generative models, one typically measures the perplexity on a held-out dataset. As long as perplexity keeps improving, keep training.
0
huggingface
Beginners
Generate raw word embeddings using transformer models like BERT for downstream process
https://discuss.huggingface.co/t/generate-raw-word-embeddings-using-transformer-models-like-bert-for-downstream-process/2958
Hi, I am new to using transformer based models. I have a few basic questions, hopefully, someone can shed light, please. I’ve been training GloVe and word2vec on my corpus to generate word embedding, where a unique word has a vector to use in the downstream process. Now, my questions are: Can we generate a similar emb...
Yes you can get a word embedding for a specific word in a sentence. You have to take care though, because in language models we often use a subword tokenizer. It chops words into smaller pieces. That means that you do not necessarily get one output for every word in a sentence, but probably more than one, namely one fo...
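As a toy illustration (made-up vectors, WordPiece-style "##" continuation marker), pooling the subword outputs back into one word-level vector might look like this; averaging is shown, but summing, max-pooling, or taking the first subword are common alternatives:

```python
# A subword tokenizer split one word into two pieces, each of which
# gets its own output vector from the model.
subword_vectors = {
    "embed": [0.2, 0.4, 0.6],
    "##ding": [0.4, 0.0, 0.2],
}

def pool_word(vectors):
    """Element-wise mean over a word's subword vectors."""
    n = len(vectors)
    return [sum(dims) / n for dims in zip(*vectors)]

word_vector = pool_word(list(subword_vectors.values()))
print(word_vector)  # approximately [0.3, 0.2, 0.4]
```

With a real fast tokenizer, the word_ids() of the encoding tell you which output positions belong to which original word, so you know which vectors to pool together.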
0
huggingface
Beginners
Logs of training and validation loss
https://discuss.huggingface.co/t/logs-of-training-and-validation-loss/1974
Hi, I made this post to see if anyone knows how I can save the results of my training and validation loss in the logs. I’m using this code: training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=3, # total number of training epochs per_de...
Hi! I was also recently trying to save my loss values at each logging_steps into a .txt file. There might be a parameter I am unaware of, but meanwhile I pulled from git the latest version of the transformer library and slightly modified the trainer.py to include in def log(self, logs: Dict[str, float]) -> None: the fo...
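Instead of patching trainer.py, a TrainerCallback can hook the same log event from the outside. A hedged sketch (the file name and formatting are arbitrary choices):

```python
from transformers import TrainerCallback

class FileLoggingCallback(TrainerCallback):
    """Append every logged metrics dict (including the loss emitted at
    each logging_steps) to a plain text file."""

    def __init__(self, path="training_logs.txt"):
        self.path = path

    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs is not None:
            with open(self.path, "a") as f:
                f.write(f"step {state.global_step}: {logs}\n")

# Hypothetical usage: Trainer(..., callbacks=[FileLoggingCallback()])
```

This survives library upgrades, unlike a hand-edited trainer.py.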
0
huggingface
Beginners
Easily save DatasetDict as community dataset?
https://discuss.huggingface.co/t/easily-save-datasetdict-as-community-dataset/10467
I expected that dataset_dict.save_to_disk("./my_dataset") would produce a suitable format, but it appears not. I can’t seem to find a simple way of getting from a DatasetDict object to a community dataset. Does this not exist?
Hi ! We are working on it
0
huggingface
Beginners
How can I put multiple questions in the same context at once using Question-Answering technique (i’m using BERT)?
https://discuss.huggingface.co/t/how-can-i-put-multiple-questions-in-the-same-context-at-once-using-question-answering-technique-im-using-bert/10416
Is that possible? If so, how can I do that?
Yes that’s possible, like so: from transformers import BertTokenizer, BertForQuestionAnswering tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForQuestionAnswering.from_pretrained('bert-base-uncased') context = "Jim Henson was a nice puppet" questions = ["Who was Jim Henson?", "What is Jim'...
1
huggingface
Beginners
Do you train all layers when fine-tuning T5?
https://discuss.huggingface.co/t/do-you-train-all-layers-when-fine-tuning-t5/1034
From my beginner-level understanding, when it comes to BERT, sometimes people will train just the last layer, or sometimes they’ll train all layers a couple epochs and then the last layer a few more epochs. Does T5 have any similar practices? Or is it normal to just train the whole thing when fine-tuning? And very tang...
I haven’t seen many experiments on this, but IMO it’s better to fine-tune the whole model. Also, when you pass the labels argument to T5ForConditionalGeneration's forward method, it calculates the loss for you and returns it as the first value in the returned tuple. And you can use the finetune.py script here 41 to f...
0
huggingface
Beginners
RuntimeError: CUDA out of memory. Tried to allocate 384.00 MiB (GPU 0; 11.17 GiB total capacity; 10.62 GiB already allocated; 145.81 MiB free; 10.66 GiB reserved in total by PyTorch)
https://discuss.huggingface.co/t/runtimeerror-cuda-out-of-memory-tried-to-allocate-384-00-mib-gpu-0-11-17-gib-total-capacity-10-62-gib-already-allocated-145-81-mib-free-10-66-gib-reserved-in-total-by-pytorch/444
Hi Huggingface team, I am trying to fine-tune my MLM RoBERTa model on a binary classification dataset. I’m able to successfully tokenize my entire dataset, but during training, I keep getting the same CUDA memory error. I’m not sure where the memory is taken up, but have attached the entire notebook here 26 for refer...
You can try lowering your batch size. "Reserved by PyTorch" means that the memory is used for the data, model, gradients, etc.
0
huggingface
Beginners
Can we use tokenizer from one architecture and model from another one?
https://discuss.huggingface.co/t/can-we-use-tokenizer-from-one-architecture-and-model-from-another-one/10377
I have a BERT tokenizer that is pre-trained on some dataset. Now I want to fine-tune the task in hand with a RoBERTa model. In this scenario, can I use the BERT tokenizer's output as input to the RoBERTa model? Does such a setup make sense between autoregressive and non-autoregressive models, i.e., using a BERT tokenize...
hi sps. I think it would be possible to use a Bert tokenizer with a Roberta Model, but you would have to train the Roberta model from scratch. You wouldn’t be able to take advantage of transfer learning by using a pre-trained Roberta. Why would you want to do that? You might run into problems with things like the sep...
0
huggingface
Beginners
RecursionError: Maximum recursion depth exceeded in comparison
https://discuss.huggingface.co/t/recursionerror-maximum-recursion-depth-exceeded-in-comparison/10230
I got this error: RecursionError: maximum recursion depth exceeded in comparison when I was trying to run this line: bert = TFAutoModel.from_pretrained('bert-base-cased') I also increased the maximum recursion limit via sys. I want this in order to fine-tune a model.
Full stack trace: RecursionError Traceback (most recent call last) /tmp/ipykernel_1850345/304243349.py in <module> 5 from transformers import AutoModel 6 #bert = AutoModel.from_pretrained('bert-base-cased') ----> 7 bert = TFAutoModel.from_pretrained('bert-base-uncased') 8 ...
0
huggingface
Beginners
Keyerror when trying to download GPT-J-6B checkpoint
https://discuss.huggingface.co/t/keyerror-when-trying-to-download-gpt-j-6b-checkpoint/10395
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float32) results in a KeyError 'gptj' when attempting to download the checkpoint. Running transformers library version 4.10.2. Similar to topic: How to get "EleutherAI/gpt-j-6B" working? Models I’m trying ...
GPT-J has been merged and is part of version 4.11, so you should be able to update your transformers version to the latest one and use GPT-J.
1
huggingface
Beginners
Import Error for timm with facebook/detr-resnet-50
https://discuss.huggingface.co/t/import-error-for-timm-with-facebook-detr-resnet-50/10372
I am working through implementing the How to Use section from the Facebook Detr Resnet 50 model card here: https://huggingface.co/facebook/detr-resnet-50 and am getting the error below when calling DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50'). Even after I pip install timm. Any suggestions or help ...
What is your environment? In Colab notebooks, it might help to restart the runtime.
1
huggingface
Beginners
Using Hugging Face dataset class as pytorch class
https://discuss.huggingface.co/t/using-hugging-face-dataset-class-as-pytorch-class/10385
Hi, I have created a custom dataset class using hugging face, and for some reason I would like to use this class as a pytorch dataset class. (with get_item etc…) Is it possible ? Thanks
This is possible by default. What exactly do you want to do? You can simply use such dataset in a PT dataloader as well, as long as you set the format to torch. For instance: dataset.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask', 'labels'])
0
huggingface
Beginners
Key error: 0 in DataCollatorForSeq2Seq for BERT
https://discuss.huggingface.co/t/key-error-0-in-datacollatorforseq2seq-for-bert/7260
Hello everyone, I am trying to fine-tune a german BERT2BERT model for text summarization unsing bert-base-german-cased and want to use dynamic padding. However, when calling Trainer.train() I receive an error, that tensors cannot be created and I should use padding. I was able to trace this error back to my DataCollato...
This is because the datasets library returns a slice of the dataset as a dictionary with lists for each key. The data collator however expects a list of dataset elements, so a list of dictionaries. Practically, I think you need to do: samples = [tokenized_traindata[i] for i in range(8)] batch = data_collator(samples)
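For intuition, the list comprehension above is doing this dict-of-lists to list-of-dicts conversion (toy values):

```python
# Slicing a Dataset yields a dict of columns (lists), but the data
# collator expects a list of examples (dicts), one per row.
batch_as_dict = {
    "input_ids": [[101, 2023, 102], [101, 2008, 102]],
    "labels": [[101, 2742, 102], [101, 2742, 102]],
}

# Re-zip the columns into per-example dictionaries:
keys = list(batch_as_dict)
samples = [dict(zip(keys, values))
           for values in zip(*batch_as_dict.values())]
print(samples[0])
# {'input_ids': [101, 2023, 102], 'labels': [101, 2742, 102]}
```

Indexing a Dataset with a single integer already returns one such per-example dict, which is why the `tokenized_traindata[i]` form works.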
0
huggingface
Beginners
Loading model from checkpoint after error in training
https://discuss.huggingface.co/t/loading-model-from-checkpoint-after-error-in-training/758
Let’s say I am finetuning a model and during training an error is encountered and the training stops. Let’s also say that, using Trainer, I have it configured to save checkpoints along the way in training. How would I go about loading the model from the last checkpoint before it encountered the error? For reference, he...
The checkpoint should be saved in a directory that will allow you to go model = XXXModel.from_pretrained(that_directory).
0
huggingface
Beginners
Filtering Dataset
https://discuss.huggingface.co/t/filtering-dataset/10228
I’m trying to filter a dataset based on the ids in a list. This approach is too slow. The dataset is an Arrow dataset. responses = load_dataset('peixian/rtGender', 'responses', split = 'train') # post_id_test_list contains list of ids responses_test = responses.filter(lambda x: x['post_id'] in post_id_test_list)
Hi baumstan. I’m not sure I understand the question. Why does it matter if it is slow? I would expect you to create and then save your train/test datasets only once, before you start using your model. If it takes a long time, just leave it running. Are you trying to use a dynamic post_id_test_list, or to train with ...
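That said, one likely cause of the slowness: x['post_id'] in post_id_test_list scans the whole list once per row. Converting the list to a set first makes each lookup O(1) instead of O(n); a minimal sketch with synthetic ids:

```python
# filter() calls the predicate once per row, so the membership test's
# cost is multiplied by the dataset size. A set keeps it constant-time.
post_id_test_list = list(range(100_000))
post_id_test_set = set(post_id_test_list)   # one-time conversion

def keep(example):
    return example["post_id"] in post_id_test_set   # O(1) lookup

print(keep({"post_id": 42}))   # True
print(keep({"post_id": -1}))   # False
# Hypothetical usage with datasets: responses.filter(keep)
```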
0
huggingface
Beginners
Issue uploading model: “fatal: cannot exec ‘.git/hooks/pre-push’: Permission denied”
https://discuss.huggingface.co/t/issue-uploading-model-fatal-cannot-exec-git-hooks-pre-push-permission-denied/2947
Hi there, I’m trying to upload my first model to the model hub. I’ve followed step-by-step the docs here 3, but encountered different issues: I got it to sort of work on my local machine, but it was extremely slow (20~kbit per sec) and I had to abandon it. I saw this topic and that git lfs install is important, but it...
Hi @MoritzLaurer, the doc could be improved but in your notebook it looks like you have two git repos (one you create with git init, the other you clone from huggingface.co) whereas you only should have one. Here’s a minimal Colab here: https://colab.research.google.com/drive/1TFgta6bYHte2lLiQoJ0U0myRd67bIuiu?usp=shari...
0
huggingface
Beginners
What is the difference between forward() and generate()?
https://discuss.huggingface.co/t/what-is-the-difference-between-forward-and-generate/10235
Hi! It seems like some models implement both functions and semantically they behave similarly, but might be implemented differently? What is the difference? In both cases, for an input sequence, the model produces a prediction (inference)? Thank you, wilornel
Hi, forward() can be used both for training and inference. generate() can only be used at inference time, and uses forward() behind the scenes. It is used for several decoding strategies such as beam search, top k sampling, and so on (a detailed blog post can be found here 5).
1
huggingface
Beginners
Questions about the shape of T5 logits
https://discuss.huggingface.co/t/questions-about-the-shape-of-t5-logits/10207
The shape of logits in output is (batch,sequence_length,vocab_size). I don’t understand the sequence_length part. I thought decoder should predict one word at a time and the logits should be (batch,vocab_size) . Thank you in advance for any replies!
Hi, Yes, but you always have a sequence length dimension. At the start of generation, we give the decoder start token to the T5 decoder. Suppose you have trained a T5 model to translate language from English to French, and that we now want to test it on the English sentence “Welcome to Paris”. In that case, the T5 enco...
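A toy greedy-decoding loop may make this concrete. Here toy_model is a made-up stand-in for the decoder, not the real T5 API: like the real thing, it returns one logits row per position already generated, but only the last row is used to pick the next token.

```python
VOCAB = 5
EOS = 0

def toy_model(decoder_input_ids):
    # Deterministic fake logits: each position "votes" for (token + 1) % VOCAB.
    return [[1.0 if v == (tok + 1) % VOCAB else 0.0 for v in range(VOCAB)]
            for tok in decoder_input_ids]

decoder_input_ids = [1]                      # decoder start token
for _ in range(10):
    logits = toy_model(decoder_input_ids)    # shape (seq_len, vocab)
    last = logits[-1]                        # only the final position matters
    next_token = last.index(max(last))       # greedy argmax
    decoder_input_ids.append(next_token)
    if next_token == EOS:
        break

print(decoder_input_ids)  # [1, 2, 3, 4, 0]
```

So the (batch, sequence_length, vocab_size) shape exists because the decoder scores every position it has seen so far; generation simply reads off the last slice at each step.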
0
huggingface
Beginners
ImportError while loading huggingface tokenizer
https://discuss.huggingface.co/t/importerror-while-loading-huggingface-tokenizer/10193
My broad goal is to be able to run this Keras demo. 1 I’m trying to load a huggingface tokenizer using the following code: import os import re import json import string import numpy as np import pandas as pd import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tokenizers import ...
I solved this issue by installing the C++ Visual Studio Compiler.
0
huggingface
Beginners
Collapsing Wav2Vec2 pretraining loss
https://discuss.huggingface.co/t/collapsing-wav2vec2-pretraining-loss/10104
I’m trying to pretrain a Wav2Vec2 model based on the example given here 1. I was initially getting a contrastive loss like the graph on the left, which seemed very slow, so I upped the learning rate and got the graph on the right after only a few steps. I’m not familiar with the nuts and bolts of ...
The solution in the end was to set return_attention_mask to True in the feature extractor, or use a pretrained feature extractor and model that prefers attention masks (i.e. not wav2vec2-base).
0
huggingface
Beginners
Advise on model design, fine-tune model to output text given numerical values
https://discuss.huggingface.co/t/advise-on-model-design-fine-tune-model-to-output-text-given-numerical-values/10088
I have recently done some work on gesture recognition using sensors attached to e.g. gloves. With a defined set of distinct gestures the model works fairly well. However, an idea that sprang up is whether it would be possible to use pretrained “general knowledge” models to also predict other gestures. Deep down in, let's say,...
Hi johank. (I am not an expert in any of these areas.) I don’t think GPT-2 has any knowledge of anything “Deep down”. The way the model works is only probabilistic. It doesn’t automatically “know” even simple things like sizes. If you ask it how many pints to a gallon, it might be able to tell you, but it might al...
0
huggingface
Beginners
How to determine if a sentence is correct?
https://discuss.huggingface.co/t/how-to-determine-if-a-sentence-is-correct/10036
Is there any way to calculate whether a sentence is correct? I have tried to calculate sentence perplexity using GPT-2 as here - GPT-2 Perplexity Score Normalized on Sentence Lenght?. There I get quite close results, even though it is obvious that the other sentence is wrong by all means. I am a man. 50.63967 I is an ma...
Although I cannot vouch for their quality, there are a number of grammar correction models in the model hub: Models - Hugging Face. They seem to fine-tune T5 or GPT as you mentioned. However, there will never be a guarantee that the model output is 100% grammatically correct. I think a rule-based approach suits grammar the...
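For reference, the perplexity score mentioned above is just the exponential of the mean negative log-likelihood of the tokens. A minimal sketch with made-up per-token probabilities (a real score would come from a language model such as GPT-2):

```python
import math

# Sentence perplexity from per-token probabilities: exp of the mean
# negative log-likelihood. The probabilities below are invented for
# illustration; a real model would assign them.
def perplexity(token_probs):
    nll = [-math.log(p) for p in token_probs]     # per-token surprise
    return math.exp(sum(nll) / len(nll))          # geometric-mean inverse prob.

good = [0.3, 0.4, 0.5, 0.6]   # model finds every token fairly likely
bad  = [0.3, 0.01, 0.5, 0.6]  # one very surprising token

print(perplexity(good) < perplexity(bad))  # True: the odd token raises perplexity
```

This also shows why perplexity alone cannot certify grammaticality: it only measures how surprising the model finds the tokens, and an ungrammatical but common pattern can still score low.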
0
huggingface
Beginners
Fine-Tune BERT with two Classification Heads “next to each other”?
https://discuss.huggingface.co/t/fine-tune-bert-with-two-classification-heads-next-to-each-other/9984
I am currently working on a project to fine-tune BERT models on a multi-class classification task with the goal to classify job ads into some broader categories like “doctors” or “sales” via AutoModelForSequenceClassification (which works quite well). Now I am wondering whether it would be possible to add a second c...
Sure, you can just use any default model, e.g. BertModel, and add your custom classification heads on top of that. Have a look at the existing classification implementation. You can basically duplicate that, but add another classifier layer. Of course you’ll also have to adapt the forward method accordingly. ...
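A minimal PyTorch sketch of that idea. `TinyEncoder` is a made-up stand-in for `BertModel` (which you would load with `from_pretrained` in practice and whose pooled output you would feed to both heads); the point is one shared encoder feeding two independent classification heads whose losses are summed:

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in for BertModel: embeds tokens and mean-pools them."""
    def __init__(self, vocab_size=100, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)

    def forward(self, input_ids):
        return self.embed(input_ids).mean(dim=1)  # crude "pooled" output

class DualHeadClassifier(nn.Module):
    def __init__(self, hidden=32, n_broad=5, n_fine=20):
        super().__init__()
        self.encoder = TinyEncoder(hidden=hidden)
        self.broad_head = nn.Linear(hidden, n_broad)  # e.g. "doctors", "sales"
        self.fine_head = nn.Linear(hidden, n_fine)    # finer-grained categories

    def forward(self, input_ids, broad_labels=None, fine_labels=None):
        pooled = self.encoder(input_ids)
        broad_logits = self.broad_head(pooled)
        fine_logits = self.fine_head(pooled)
        loss = None
        if broad_labels is not None and fine_labels is not None:
            ce = nn.CrossEntropyLoss()
            # sum the two losses so both heads train jointly
            loss = ce(broad_logits, broad_labels) + ce(fine_logits, fine_labels)
        return loss, broad_logits, fine_logits

model = DualHeadClassifier()
ids = torch.randint(0, 100, (4, 10))  # batch of 4 token sequences
loss, b, f = model(ids, torch.tensor([0, 1, 2, 3]), torch.tensor([5, 6, 7, 8]))
print(b.shape, f.shape)  # torch.Size([4, 5]) torch.Size([4, 20])
```

Summing the losses is the simplest joint-training choice; weighting the two terms is a common refinement when one task dominates.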
0
huggingface
Beginners
How to fine-tune a pre-trained model and then get the embeddings?
https://discuss.huggingface.co/t/how-to-fine-tune-a-pre-trained-model-and-then-get-the-embeddings/10061
I would like to fine-tune a pre-trained model. This is the model: from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT") model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT") This is the data (I know it is not clinical but let’s rol...
Please, before asking questions look on the internet for a minute or two. This is a VERY common use case, as you may have expected. It takes us too much time to keep answering the same questions. Thanks. The first hit that I got on Google already gives you a tutorial on fine-tuning: Fine-tuning a pretrained model —...
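The two-phase recipe behind that tutorial (fine-tune the encoder with a task head attached, then drop the head and use the encoder output as the embedding) can be sketched with a toy PyTorch model. `Pooler` is a made-up stand-in for a pretrained encoder such as Bio_ClinicalBERT:

```python
import torch
import torch.nn as nn

class Pooler(nn.Module):
    """Stand-in encoder: embeds token ids and mean-pools them."""
    def __init__(self, vocab=50, hidden=16):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)

    def forward(self, ids):
        return self.embed(ids).mean(dim=1)

encoder = Pooler()
head = nn.Linear(16, 2)  # temporary classification head used only for fine-tuning

opt = torch.optim.SGD(list(encoder.parameters()) + list(head.parameters()), lr=0.1)
ids = torch.randint(0, 50, (8, 6))      # toy batch of 8 token sequences
labels = torch.randint(0, 2, (8,))      # toy binary labels

# Phase 1: fine-tune encoder and head together on the labelled data.
for _ in range(5):
    loss = nn.functional.cross_entropy(head(encoder(ids)), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2: discard the head; the (now fine-tuned) encoder output is the embedding.
with torch.no_grad():
    embeddings = encoder(ids)
print(embeddings.shape)  # torch.Size([8, 16])
```

With transformers, the analogous move is to save the fine-tuned checkpoint and reload it with `AutoModel.from_pretrained`, which gives you the base encoder (hidden states) without the classification head.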
0