docs stringclasses 4
values | category stringlengths 3 31 | thread stringlengths 7 255 | href stringlengths 42 278 | question stringlengths 0 30.3k | context stringlengths 0 24.9k | marked int64 0 1 |
|---|---|---|---|---|---|---|
huggingface | Beginners | Training stops when I try Fine-Tune XLSR-Wav2Vec2 for low-resource ASR | https://discuss.huggingface.co/t/training-stops-when-i-try-fine-tune-xlsr-wav2vec2-for-low-resource-asr/8981 | Hi,
I’m learning Wav2Vec2 according to the blog link:
huggingface.co
Fine-Tune XLSR-Wav2Vec2 for low-resource ASR with 🤗 Transformers
And I downloaded the ipynb file and tried to run it locally.
Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_🤗_Transformers.ipynb
All looks fine... | It probably stops because you don’t have enough resources to run the script; I recommend trying to run it on Google Colab | 0 |
huggingface | Beginners | Wav2Vec2ForCTC and Wav2Vec2Tokenizer | https://discuss.huggingface.co/t/wav2vec2forctc-and-wav2vec2tokenizer/3587 | Having installed transformers and trying:
import transformers
import librosa
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC
from transformers import Wav2Vec2Tokenizer
#load model and tokenizer
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCT... | This probably means you don’t have the latest version. You should check your version of Transformers with
import transformers
print(transformers.__version__)
and if you don’t see at least 4.3.0, you will need to upgrade your install. | 0 |
huggingface | Beginners | Trainer .evaluate() method returns one less prediction, but training runs fine (GPT-2 fine-tuning) | https://discuss.huggingface.co/t/trainer-evaluate-method-returns-one-less-prediction-but-training-runs-fine-gpt-2-fine-tuning/12846 | I’ve been breaking my head about this bug in my code for two days now. I have a set of german texts that I want to classify into one of 10 classes. The training runs smoothly, I have problems with the evaluation. Obviously I don’t share the whole texts, let me know if that is required, but they are confidential, so I’d... | I have somehow solved the issue - not sure why, but it runs when my custom trainer is the following. I also updated torch to the newest version, i.e. 1.10
class TrainerCustom(transformers.Trainer):
# def __init__(self):
# super().__init__()
def __init__(self, weights, *args, **kwargs):
super().... | 0 |
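The fix above hinges on a custom `compute_loss` that applies per-class weights. As a pure-Python sketch of the quantity such a trainer computes (hypothetical helper names — in the real code this would be `torch.nn.CrossEntropyLoss(weight=...)` inside `TrainerCustom`), the weighted cross-entropy looks like:

```python
import math

def softmax(logits):
    # numerically stable softmax over one example's logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def weighted_cross_entropy(logits, label, weights):
    # class-weighted negative log-likelihood for a single example,
    # mirroring CrossEntropyLoss(weight=...) up to batch normalization
    probs = softmax(logits)
    return -weights[label] * math.log(probs[label])

# a rare class (index 2) gets a larger weight, so mistakes on it cost more
weights = [1.0, 1.0, 3.0]
loss_common = weighted_cross_entropy([2.0, 0.1, 0.1], 0, weights)
loss_rare = weighted_cross_entropy([2.0, 0.1, 0.1], 2, weights)
assert loss_rare > loss_common
```

The weight vector is what the `weights` argument in the `__init__` above would carry into the loss.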
huggingface | Beginners | Extremely confusing or non-existent documentation about the Seq2Seq trainer | https://discuss.huggingface.co/t/extremely-confusing-or-non-existent-documentation-about-the-seq2seq-trainer/12880 | I’ve been trying to train a model to translate database metadata + human requests into valid SQL.
Initially, I used a wiki SQL base + a custom pytorch script (worked fine) but I decided I want to train my own from scratch and I’d better go with the “modern” method of using a trainer.
The code I currently have is:
... | Have you tried following the relevant course sections? (I linked to translation but summarization should be the same as well).
Basically you are supplying raw datasets to the Seq2SeqTrainer and this can’t work, as it will need the inputs to the models (input_ids, labels, attention_mask etc.) so you need to tokenize y... | 0 |
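To make that advice concrete, here is a toy sketch — the whitespace "tokenizer" and tiny vocab are stand-ins, not the real tokenizer — of the feature dict (`input_ids`, `attention_mask`, `labels`) that preprocessing must hand the Seq2SeqTrainer instead of raw text:

```python
# Toy vocab; id 0 is reserved for padding.
VOCAB = {"<pad>": 0, "translate": 1, "hello": 2, "bonjour": 3}

def toy_encode(text, max_length=4):
    # stand-in for tokenizer(...): ids padded/truncated to max_length
    ids = [VOCAB[w] for w in text.split()][:max_length]
    ids = ids + [0] * (max_length - len(ids))
    mask = [1 if t != 0 else 0 for t in ids]
    return ids, mask

def preprocess(example):
    # shape of the features Seq2SeqTrainer expects per example
    input_ids, attention_mask = toy_encode(example["source"])
    labels, _ = toy_encode(example["target"])
    return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}

features = preprocess({"source": "translate hello", "target": "bonjour"})
assert set(features) == {"input_ids", "attention_mask", "labels"}
```

In practice you would produce the same dict with the real tokenizer and `dataset.map(preprocess)`.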
huggingface | Beginners | Question regarding TF DistilBert For Sequence Classification | https://discuss.huggingface.co/t/question-regarding-tf-distilbert-for-sequence-classification/12882 | I have successfully fine tuned “TF DistilBert For Sequence” Classification to distinguish comments that are toxic vs. not in my datasets. Is there a way to use the same model to gauge which sentence in a pair of toxic sentences is more (or less) toxic? Is there a way to access the probability produced by the classifie... | Hi,
You can access the probability as follows:
from transformers import DistilBertTokenizer, TFDistilBertForSequenceClassification
import tensorflow as tf
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
... | 0 |
huggingface | Beginners | PEGASUS&ProphetNet EncoderDecoderModel gives “Request-URI Too Large for url” error | https://discuss.huggingface.co/t/pegasus-prophetnet-encoderdecodermodel-gives-request-uri-too-large-for-url-error/4347 | Hello, I am trying to set up an EncoderDecoderModel using PEGASUS encoder and ProphetNet decoder.
First, I initialize a PegasusModel and access its encoder:
from transformers import PegasusModel, PegasusConfig, EncoderDecoderModel
pegasus = PegasusModel(PegasusConfig()).encoder
Then I try to pass that to the EncoderD... | Hello there! Were you able to resolve this issue? I am facing a similar issue with a model | 0 |
huggingface | Beginners | Training t5-based seq to seq suddenly reaches loss of `nan` and starts predicting only `<pad>` | https://discuss.huggingface.co/t/training-t5-based-seq-to-seq-suddenly-reaches-loss-of-nan-and-starts-predicting-only-pad/12884 | I’m trying to train a t5 based LM head model (mrm8488/t5-base-finetuned-wikiSQL) using my custom data to turn text into SQL (based roughly on the SPIDER dataset).
The current training loop I have is something like this:
parameters = self.model.parameters()
optimizer = AdamW(parameters, lr=1e-5) # imported from `transfo... | I also cross-posted this on stack overflow, in case anyone is helped by that: python - How to avoid huggingface t5-based seq to seq suddenly reaching a loss of `nan` and start predicting only `<pad>`? - Stack Overflow | 0 |
huggingface | Beginners | Warm-started encoder-decoder models (Bert2Gpt2 and Bert2Bert) | https://discuss.huggingface.co/t/warm-started-encoder-decoder-models-bert2gpt2-and-bert2bert/12728 | I am working on warm starting models for the summarization task based on @patrickvonplaten 's great blog: Leveraging Pre-trained Language Model Checkpoints for Encoder-Decoder Models. However, I have a few questions regarding these models, especially for Bert2Gpt2 and Bert2Bert models:
1- As we all know, the summarizat... | Hi,
looking at the files: Ayham/roberta_gpt2_summarization_cnn_dailymail at main
It indeed looks like only the weights (pytorch_model.bin) and model configuration (config.json) are uploaded, but not the tokenizer files.
You can upload the tokenizer files programmatically using the huggingface_hub library. First, make s... | 1 |
huggingface | Beginners | Passing gradient_checkpointing to a config initialization is deprecated | https://discuss.huggingface.co/t/passing-gradient-checkpointing-to-a-config-initialization-is-deprecated/12851 | When initializing a wav2vec2 model, as follows:
feature_extractor = Wav2Vec2Processor.from_pretrained('facebook/wav2vec2-base')
wav_to_vec_model = Wav2Vec2Model.from_pretrained('facebook/wav2vec2-base')
I get the following warning:
UserWarning: Passing gradient_checkpointing to a config initialization is deprecated a... | Not sure why this pretrained model has gradient_checkpointing enabled in its config @patrickvonplaten ? It will make everyone who wants to fine-tune it use gradient checkpointing by default which is not something we want. | 0 |
huggingface | Beginners | How to finetune RAG model with mini batches? | https://discuss.huggingface.co/t/how-to-finetune-rag-model-with-mini-batches/12724 | Dear authors of RAG model,
I know I can finetune with the rag with following example.
retriever = RagRetriever.from_pretrained(rag_example_args.rag_model_name, index_name="custom", passages_path=passages_path, index_path=index_path)
model = RagSequenceForGeneration.from_pretrained(rag_example_args.rag_model_name, retri... | Hi! I think you can just pass a list of questions and answers to the tokenizer, and the rest of the code should work fine | 0 |
huggingface | Beginners | EvalPrediction returning one less prediction than label id for each batch | https://discuss.huggingface.co/t/evalprediction-returning-one-less-prediction-than-label-id-for-each-batch/6958 | Hi there,
I am attempting to recreate R2Bert (see paper here: https://www.aclweb.org/anthology/2020.findings-emnlp.141.pdf) which combines regression and ranking as part of the loss function when training a model to correctly predict an essay score. I have successfully built the model with native Pytorch to train. Ho... | hey @cameronstronge, looking at your error I think the problem is that both your predictions and ground truth labels are floats, while your compute_accuracy function expects integers.
if you fix that, does the problem resolve itself? | 0 |
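A minimal sketch of that fix, assuming the floats are close to integer class ids (function name mirrors the thread's `compute_accuracy` but the body here is illustrative pure Python):

```python
def compute_accuracy(predictions, labels):
    # cast float scores/labels to ints before comparing; rounding is one
    # simple choice when both come out of the model as floats
    preds = [int(round(p)) for p in predictions]
    labs = [int(round(l)) for l in labels]
    correct = sum(p == l for p, l in zip(preds, labs))
    return correct / len(labs)

acc = compute_accuracy([1.2, 2.9, 0.1], [1.0, 3.0, 1.0])
assert abs(acc - 2 / 3) < 1e-9
```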
huggingface | Beginners | Xnli is not loading | https://discuss.huggingface.co/t/xnli-is-not-loading/12799 | I’m trying to load the xnli dataset like this:
xnli = nlp.load_dataset(path='xnli')
and i got an error:
ConnectionError: Couldn’t reach https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip
Can someone tell me what’s the problem and can i get this dataset some other way?
Thanks in advance | Hi,
the nlp project was renamed to datasets over a year ago, so I’d suggest you install that package and let us know if you still have issues downloading the dataset. | 0 |
huggingface | Beginners | How to insert a end-sequence | https://discuss.huggingface.co/t/how-to-insert-a-end-sequence/9935 | I am new to HuggingFaces and I am trying to use the GPT-Neo model to generate the next sentence in a conversation (basically like a chatbot).
I tried around with GPT-3 before, and there I was using “Me:” as an End-Sequence to ensure the model would stop generating when it genrated the Text “Me:” (which indicates that i... | Just hopping in to say I have the exact same question in the hopes it’ll encourage someone to answer. | 0 |
huggingface | Beginners | Sample evaluation script on custom dataset | https://discuss.huggingface.co/t/sample-evaluation-script-on-custom-dataset/12654 | Hey, I have a custom dataset. can you send a sample script to get the accuracy on such a dataset? I was going through examples and I couldn’t get a code that does that. Can someone send me a resource?
my dataset is of the format-
premise , hypothesis, label(0 or 1)
and my model is deberta
Thanks
@lewtun | Hey @NDugar if you’re using the Trainer my suggestion would be to run Trainer.predict(your_test_dataset) so you can get all the predictions. Then you should be able to feed those into the accuracy metric in a second step (or whatever metric you’re interested in).
If you’re still having trouble, I suggest providing a mi... | 1 |
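The two-step recipe — predict, then score — can be sketched in pure Python (the `Trainer.predict` call itself appears only in the comment; names are illustrative):

```python
def accuracy_from_logits(logits, label_ids):
    # Trainer.predict(test_dataset) returns (predictions, label_ids, metrics);
    # predictions are per-class logits, so take the argmax per row first
    preds = [max(range(len(row)), key=row.__getitem__) for row in logits]
    return sum(p == l for p, l in zip(preds, label_ids)) / len(label_ids)

logits = [[0.1, 2.3], [1.5, 0.2], [0.0, 0.9]]
labels = [1, 0, 0]
assert abs(accuracy_from_logits(logits, labels) - 2 / 3) < 1e-9
```

Any other metric (F1, precision, recall) slots in the same way once you have the argmaxed predictions.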
huggingface | Beginners | How to edit different classes in transformers and have the transformer installed with the changes? | https://discuss.huggingface.co/t/how-to-edit-different-classes-in-transformers-and-have-the-transformer-installed-with-the-changes/12781 | I wanted to edit some classes in the transformers for example BertEmbeddings transformers/modeling_bert.py at 4c32f9f26e6a84f0d9843fec8757e6ce640bb44e · huggingface/transformers · GitHub
and pre-train Bert from scratch on a custom dataset. But I am stuck on how to make the edit work.
The process I am following is:
Clo... | Apparently, the problem was the editable version
This works:
cd transformers
pip install .
An editable install was not what I thought it was at first. | 1 |
huggingface | Beginners | Error occurs when loading additional parameters in multi-gpu training | https://discuss.huggingface.co/t/error-occurs-when-loading-additional-parameters-in-multi-gpu-training/12667 | I’m training plugins (e.g. adapter) on top of the language model on multiple GPUs using huggingface Accelerate. However, strange things occur when I try to load additional parameters into the model. The training cannot move on successfully and I find the process state not as expected.
My running command is like this:
... | Turn out to be a stupid mistake by me.
embed_pool = torch.load(os.path.join(args.saved_plugin_dir, 'embed_pool.pth'))
should be changed to
embed_pool = torch.load(os.path.join(args.saved_plugin_dir, 'embed_pool.pth'), map_location=torch.device('cpu'))
torch.load() will automatically map the file to device:0, and this... | 1 |
huggingface | Beginners | <extra_id> when using fine-tuned MT5 for generation | https://discuss.huggingface.co/t/extra-id-when-using-fine-tuned-mt5-for-generation/3535 | Hi, I am trying to summarize the text in Japanese.
And I found that recently you updated a new script for fine-tuning Seq2Seq model.
github.com
huggingface/transformers
master/examples/seq2seq
🤗Transformers: State-of-the-art Natural Language Processing for Pytorch and TensorFlow 2.0.... | Having the same issue: the extra Id token kind of replace the first word in a sentence. Anyone knows why? | 0 |
huggingface | Beginners | How to change the batch size in a pipeline? | https://discuss.huggingface.co/t/how-to-change-the-batch-size-in-a-pipeline/8738 | Hello!
Sorry for the simple question but I was wondering how can I change the batch size when I load a pipeline for sentiment classification.
I use classifier = pipeline('sentiment-analysis') but the list of sentences I feed the classifier is too big to be processed in one batch.
Thanks! | You can do it in the method call:
examples = ["I hate everyone"] * 100
classifier(examples, batch_size=10) | 1 |
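If your transformers version's pipeline call does not accept `batch_size`, you can chunk the list yourself and call the classifier once per chunk; the chunking itself is just:

```python
def batched(items, batch_size):
    # what batch_size=10 does under the hood: process the list in chunks
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

examples = ["I hate everyone"] * 100
batches = list(batched(examples, 10))
assert len(batches) == 10 and all(len(b) == 10 for b in batches)
```

Each chunk would then be passed to `classifier(chunk)` and the results concatenated.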
huggingface | Beginners | Data augmentation for image (ViT) using Hugging Face | https://discuss.huggingface.co/t/data-augmentation-for-image-vit-using-hugging-face/9750 | Hi everyone,
I am currently doing the training of a ViT on a local dataset of mine. I have used the dataset template of hugging face to create my own dataset class.
To train my model I use pytorch functions (Trainer etc…), and I would like to do some data augmentation on my images.
Does hugging face allow data augmenta... | Hi,
the feature extractors (like ViTFeatureExtractor) are fairly minimal, and typically only support resizing of images and normalizing the channels. For all kinds of image augmentations, you can use torchvision’s transforms or albumentations, for example. | 0 |
huggingface | Beginners | Are dynamic padding and smart batching in the library? | https://discuss.huggingface.co/t/are-dynamic-padding-and-smart-batching-in-the-library/10404 | my code:
return tokenizer(list(dataset['sentense']),
padding = True,
truncation = True,
max_length = 128 )
training_args = TrainingArguments(
output_dir='./results', # output directory
save_total_limit=5, # nu... | Hi,
This video makes it quite clear: What is dynamic padding? - YouTube
In order to use dynamic padding in combination with the Trainer, one typically postpones the padding, by only specifying truncation=True when preprocessing the dataset, and then using the DataCollatorWithPadding when defining the data loaders, w... | 0 |
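The idea behind DataCollatorWithPadding can be sketched without torch: pad each batch only to the longest sequence *in that batch* (a simplified stand-in for the real collator, not its implementation):

```python
def collate_with_padding(batch, pad_id=0):
    # dynamic padding: the pad length is decided per batch, not globally,
    # so short-sequence batches waste no compute on padding tokens
    longest = max(len(seq) for seq in batch)
    return [seq + [pad_id] * (longest - len(seq)) for seq in batch]

batch = [[5, 6], [7, 8, 9, 10], [11]]
padded = collate_with_padding(batch)
assert all(len(seq) == 4 for seq in padded)
```

A batch of all-short sequences would be padded to a much smaller length, which is the whole benefit over global `padding=True`.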
huggingface | Beginners | How to properly add new vocabulary to BPE tokenizers (like Roberta)? | https://discuss.huggingface.co/t/how-to-properly-add-new-vocabulary-to-bpe-tokenizers-like-roberta/12635 | I would like to fine-tune RoBERTa on a domain-specific English-based vocabulary.
For that, I have done a TF-IDF on a corpus of mine, and extracted 500 words that are not yet in RoBERTa tokenizer.
As they represent only 1 percent of total tokenizer length, I don’t want to train the tokenizer from scratch.
So I just did ... | Hello Pataleros,
I stumbled on the same issue some time ago. I am no huggingface savvy, but here is what I dug up.
The bad news is that it turns out a BPE tokenizer “learns” how to split text into tokens (a token may correspond to a full word or only a part of one), and I don’t think there is any clean way to add some vocabulary aft... | 0 |
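A drastically simplified toy — real BPE uses learned merge rules, not a plain lookup — of why an unseen domain word gets split into pieces and what registering it as a whole token changes:

```python
# Toy BPE-ish lookup: a word missing from the vocab falls back to
# per-character pieces; "adding vocabulary" means registering it whole.
vocab = {"the": 0, "m": 1, "r": 2, "i": 3, "n": 4, "a": 5}

def tokenize(word):
    if word in vocab:
        return [word]
    return list(word)  # fallback: character-level pieces

assert tokenize("mrna") == ["m", "r", "n", "a"]
vocab["mrna"] = len(vocab)  # register the domain word as one token
assert tokenize("mrna") == ["mrna"]
```

In transformers the analogous step also requires resizing the model's embedding matrix, since the new ids need fresh (initially untrained) embedding rows.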
huggingface | Beginners | Multilabel text classification Trainer API | https://discuss.huggingface.co/t/multilabel-text-classification-trainer-api/11508 | Hi all,
Can someone help me to do a multilabel classification with the Trainer API ? | Sure, all you need to do is make sure the problem_type of the model’s configuration is set to multi_label_classification, e.g.:
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=10, problem_type="multi_label_classification")
Th... | 1 |
huggingface | Beginners | Memory Efficient Dataset Creation for NSP Training | https://discuss.huggingface.co/t/memory-efficient-dataset-creation-for-nsp-training/12385 | We want to fine tune BERT with Next Sentence Prediction (NSP) objective and we have a list of files which contains the conversations. To prepare the training dataset for the fine tuning, currently we read through all the files, load all conversation sentences into memory, create positive examples for adjacent sentences... | Hi,
Instead of generating a dataset with load_dataset, it should be easier to create dataset chunks with Dataset.from_dict, which we can then save to disk with save_to_disk, reload and concatenate to get a memory-mapped dataset.
The code could look as follows:
# distribute files in multiple dirs (chunkify dir) to avoi... | 0 |
huggingface | Beginners | Wav2vec2 finetuned model’s strange truncated predictions | https://discuss.huggingface.co/t/wav2vec2-finetuned-models-strange-truncated-predictions/12319 | What is your question?
I’m getting strange truncation of prediction at different steps of training. Please help to understand what is the issue?
At the first steps of training like 800-1600 (2-3 epochs) I’m getting predictions of valid length and words count but with low accuracy (which is ok at the first steps), After... | @patrickvonplaten kindly asking you to shed some light on this issue. what could be the possible reasons? | 0 |
huggingface | Beginners | Need help to give inputs to my fine tuned model | https://discuss.huggingface.co/t/need-help-to-give-inputs-to-my-fine-tuned-model/12582 | I finetuned a distilbert-base-uncased model in google colab. I also downloaded it (h5 file) to my laptop. But I don’t understand how to load it on my laptop and give some inputs to check how it performs. | Hi,
You can check out the code example in the docs of TFDistilBertForSequenceClassification.
The model outputs logits, which are unnormalized scores for each of the classes, for every example in the batch. It’s a tensor of shape (batch_size, num_labels).
To turn it into an actual prediction, one takes the highest score... | 0 |
huggingface | Beginners | Accelerated Inference API Automatic Speech Recognition | https://discuss.huggingface.co/t/accelerated-inference-api-automatic-speech-recognition/8239 | Hi I’m trying to use the Automatic Speech Recognition API but the docs are … light.
When I copy/paste the example code from the docs (below for convenience):
import json
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://api-inference.huggingface.co/models/facebook/wav2vec2-base-960... | @boxabirds I got same problem. looks like no-one responded to your post… did you work out the error? | 0 |
huggingface | Beginners | Fine-tune BERT and Camembert for regression problem | https://discuss.huggingface.co/t/fine-tune-bert-and-camembert-for-regression-problem/332 | I am fine tuning the Bert model on sentence ratings given on a scale of 1 to 9, but rather than measuring its accuracy of classifying into the same score/category/bin as the judges, I just want BERT’s score on a continuous scale, like 1,1.1,1.2… to 9. I also need to figure out how to do this using CamemBERT as well. What a... | Hi @sundaravel, you can check the source code for BertForSequenceClassification here. It also has code for a regression problem.
Specifically for regression your last layer will be of shape (hidden_size, 1) and use MSE loss instead of cross entropy | 0 |
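A pure-Python sketch of that setup — a `(hidden_size, 1)` head is just a dot product producing one continuous score, trained with MSE (names illustrative, not the transformers code):

```python
def regression_head(hidden, weights, bias):
    # last layer of shape (hidden_size, 1): one score per example
    return sum(h * w for h, w in zip(hidden, weights)) + bias

def mse(preds, targets):
    # mean squared error, the regression loss mentioned above
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

score = regression_head([0.5, -1.0, 2.0], [0.2, 0.1, 0.3], bias=4.0)
assert abs(score - 4.6) < 1e-9
assert mse([1.0, 2.0], [1.0, 4.0]) == 2.0
```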
huggingface | Beginners | Loss error for bert token classifier | https://discuss.huggingface.co/t/loss-error-for-bert-token-classifier/12460 | So I am doing my first BERT token classifier. I am using a German polyglot dataset, meaning tokenised words and lists of NER labels.
a row is ['word1', 'word2', …] ['ORG', 'LOC', …]
This is my code
tokenizer = BertTokenizer.from_pretrained('bert-base-german-cased')
encoded_dataset = [tokenizer(item['words'], is_split_into_word... | Could you post the error ? | 0 |
huggingface | Beginners | Why do probabilities output for a model does not correspond to label predicted by the finetune model? | https://discuss.huggingface.co/t/why-do-probabilities-output-for-a-model-does-not-correspond-to-label-predicted-by-the-finetune-model/12464 | Hello, I fine-tuned a model from Hugging Face on a classification task: a multi-class classification with 3 labels encoded as 0, 1, and 2. I use the cross-entropy loss function to compute the loss.
When training I tried to get the probabilities, but I observe that the probabilities do not correspond t... | From your probabilities, it looks like the predicted labels should be 1 as you expected, since it’s the highest probability.
How are you generating 2 as the predicted label? | 0 |
huggingface | Beginners | Finetuning GPT2 with user defined loss | https://discuss.huggingface.co/t/finetuning-gpt2-with-user-defined-loss/163 | I have a dataset of scientific abstracts that I would like to use to finetune GPT2. However, I want to use a loss between the output of GPT2 and an N-grams model I have to adjust the weights. Is it possible to do this using huggingface transformers and if so, how? Thank you in advance!
EDIT:
Let me be a little more exp... | GPT2’s forward has a labels argument that you can use to automatically get the standard LM loss, but you don’t have to use this. You can take the model outputs and define any loss you’d like, whether using PyTorch or TF2. If you want to use Trainer, just define your own PT module that returns your custom loss as the fi... | 0 |
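One hedged, pure-Python sketch of such a combined objective — the interpolation scheme here is hypothetical, chosen only to illustrate mixing the standard LM loss with a penalty for diverging from the n-gram model; in practice you would compute it from GPT-2's logits inside a custom `compute_loss` or training loop:

```python
import math

def lm_loss(target_probs):
    # standard LM loss: mean negative log-likelihood of the target tokens
    return -sum(math.log(p) for p in target_probs) / len(target_probs)

def combined_loss(model_target_probs, ngram_target_probs, alpha=0.5):
    # hypothetical combination: interpolate the LM loss with the mean
    # absolute gap between the model's and the n-gram model's probabilities
    gap = sum(abs(m - n) for m, n in zip(model_target_probs, ngram_target_probs))
    gap /= len(model_target_probs)
    return alpha * lm_loss(model_target_probs) + (1 - alpha) * gap

loss = combined_loss([0.9, 0.8], [0.5, 0.7], alpha=0.5)
assert loss > 0
```

Whatever the exact mixing rule, Trainer only requires that the custom loss is returned as the first element of the model (or `compute_loss`) output.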
huggingface | Beginners | Python crashes without error message when I try to use this custom tokenizer | https://discuss.huggingface.co/t/python-crashes-without-error-message-when-i-try-to-use-this-custom-tokenizer/12443 | I’m hoping to retrain a GPT-2 model from scratch, where the sentences are protein chains, and the words are single-ASCII-character representation of amino acids, e.g. “A” for alanine and “B” for asparagine. There are no spaces or other separators between words.
Due to constraints in other parts of my code, I would stro... | It’s on me; the issue was solved with a single line of code:
tokenizer.add_special_tokens(['>', '=', 'X', '_']) | 1 |
huggingface | Beginners | KeyError: Field “..” does not exist in table schema | https://discuss.huggingface.co/t/keyerror-field-does-not-exist-in-table-schema/12367 | Hi everyone! I’m trying to run the run_ner.py 1 script to perform a NER task on a custom dataset. The dataset was originally composed by 3 tsv files that I converted in csv files in order to run that script. Unfortunately, I got this error:
Traceback (most recent call last):
File “C:\Users\User\Desktop\NLP\run_ner.py”... | I solved the problem, the csv files generated with pandas needed a post processing in Excel: words and labels had to be in two separated columns.
They were like:
col A
word,label
They have to be:
col A col B
word label | 1 |
huggingface | Beginners | Getting an error when loading up model | https://discuss.huggingface.co/t/getting-an-error-when-loading-up-model/12453 | I had used the push_to_hub api like normal when updating it but all of a sudden I am getting this error? Any help? | Please post your code so that we may see how to proceed; just an image is not enough. | 0 |
huggingface | Beginners | How to get probabilities per label in finetuning classification task? | https://discuss.huggingface.co/t/how-to-get-probabilities-per-label-in-finetuning-classification-task/12301 | Hello, I foloow the huggingface web site to finetune FlauBert for classification task. What I would like to know is how to get probabilities for the classification. Something like this [0.75,0.85,0.25], because I have 3 classes, so far when priinting the results I get this : but it seems to correspond to the logits a... | Hi,
What models in the Transformers library output are called logits (they are called predictions in your case), these are the unnormalized scores for each class, for every example in a batch. You can turn them into probabilities by applying a softmax operation on the last dimension, like so:
import tensorflow as tf
p... | 0 |
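The softmax step itself can be shown in pure Python for clarity (in the thread's TensorFlow code this corresponds to applying `tf.nn.softmax` to the logits' last dimension):

```python
import math

def softmax(logits):
    # exponentiate relative to the max for numerical stability, then normalize
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [1.2, 3.1, -0.4]          # unnormalized scores for 3 classes
probs = softmax(logits)
assert abs(sum(probs) - 1.0) < 1e-9
assert max(range(3), key=probs.__getitem__) == 1   # class 1 has top probability
```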
huggingface | Beginners | Character-level tokenizer | https://discuss.huggingface.co/t/character-level-tokenizer/12450 | Hi,
I would like to use a character-level tokenizer to implement a use-case similar to minGPT play_char that could be used in HuggingFace hub.
My question is: is there an existing HF char-level tokenizer that can be used together with a HF autoregressive model (a.k.a. GPT-like model)?
Thanks! | Hi,
We do have character-level tokenizers in the library, but those are not for decoder-only models.
Current character-based tokenizers include:
CANINE (encoder-only)
ByT5 (encoder-decoder) | 0 |
huggingface | Beginners | Modify generation params for a model in the Model hub | https://discuss.huggingface.co/t/modify-generation-params-for-a-model-in-the-model-hub/12402 | Hello, I would like to increase the max_length param on my model nouamanetazi/cover-letter-t5-base · Hugging Face but I can’t seem to find how to do it in docs. Should I edit the config.json file? Can anyone provide an example please. | Hey there! You can change the inference parameters as documented here.
In short, you can do something like this in the metadata of the model card (README.md file):
inference:
parameters:
temperature: 0.7 | 1 |
huggingface | Beginners | How to reset a layer? | https://discuss.huggingface.co/t/how-to-reset-a-layer/4065 | Hi, I am looking for a solution to reset a layer in the pre-trained model. For example, like in a BART model, if I am going to reset the last layer of the decoder, how should I implement it?
I notice we have the _init_weights(), which should be helpful. So I am wondering if the code should be like:
# load the pre-train... | I am also stuck on this. Looking at the code of _init_weights, it looks like it expects individual modules like nn.Linear.
This would require looping over all the modules of your model that you would like to re-initialize and passing them to _init_weights. But this might not translate to a new model, as their layer str... | 0 |
huggingface | Beginners | Fine tuning Wav2vec for wolof | https://discuss.huggingface.co/t/fine-tuning-wav2vec-for-wolof/12279 | I’m fine-tuning a wav2vec model; the training is going on but I don’t see any log output | Can you share a screenshot? | 0 |
huggingface | Beginners | ‘Type Error: list object cannot be interpreted as integer’ while evaluating a summarization model (seq2seq,BART) | https://discuss.huggingface.co/t/type-error-list-object-cannot-be-interpreted-as-integer-while-evaluating-a-summarization-model-seq2seq-bart/11590 | Hello all, I have been using this code:-
colab.research.google.com
Google Colaboratory
to learn training a summarization model. However, since I needed an extractive model, I replaced ‘sshleifer/distilbart-xsum-12-3’ with “facebook/bart-large-cnn” for both
A... | Hey @gildesh I’m not sure why you say BART will provide extractive summaries - my understanding is that it is an encoder-decoder Transformer, so the decoder will generate summaries if trained to do so.
In any case, the reason why you get an error with predict_with_generate=False is because the Trainer won’t call the mo... | 0 |
huggingface | Beginners | Convert transformer to SavedModel | https://discuss.huggingface.co/t/convert-transformer-to-savedmodel/353 | Hi! I found out that this is a common unresolved problem.
So, I need to convert transformers’ DistilBERT to TensorFlow’s SavedModel format. I’ve converted it, but I can’t run inference on it.
Conversion code
import tensorflow as tf
from transformers import TFAutoModel, AutoTokenizer
dir = "distilbert_savedmodel"
model = TFAutoMo... | In pytorch, you could save the model with something like
torch.save(model.state_dict(), '/content/drive/My Drive/ftmodelname')
Then you could create a model using the pre-trained weights
tuned_model = BertForSequenceClassification.from_pretrained('bert-base-uncased',
num_labels=NCLASSES,
output_attentions=True)
and... | 0 |
huggingface | Beginners | TrOCR repeated generation | https://discuss.huggingface.co/t/trocr-repeated-generation/12361 | @nielsr I am using microsoft/trocr-large-printed
there is a slight issue, the model generates repeated predictions on my dataset.
if you see the left is ground_truth and the right is the model prediction
after generating the right text, it does not stop and goes on repeating the same.
Do you know what might be the iss... | Hi,
After investigation, it turns out the generate() method currently does not take into account config.decoder.eos_token_id, only config.eos_token_id.
You can fix it by setting model.config.eos_token_id = 2.
We will fix this soon. | 1 |
huggingface | Beginners | Longformer for text summarization | https://discuss.huggingface.co/t/longformer-for-text-summarization/478 | Hello! Does anyone know how to summarize long documents/news articles using the Longformer library? I am aware that using T5, the token limit is 512.
I would really appreciate any help in this area! Thank you | Hi, it’s possible to use Longformer for summerization, the way its done now, is taking BART model and then replacing it’s self attention with longformer sliding window attention so that it can take longer sequences. Check this two issues, first 75, second 49, and this branch 60 of longformer repo | 0 |
huggingface | Beginners | Tensorflow training fails with “Method `strategy` requires TF” error | https://discuss.huggingface.co/t/tensorflow-training-failes-with-method-strategy-requires-tf-error/12330 | I am doing the tensorflow example from here:
https://huggingface.co/transformers/custom_datasets.html
I get the error
“Method strategy requires TF”.
after some digging I find the issue is in
https://github.com/huggingface/transformers/blob/69e16abf98c94b8a6d2cf7d60ca36f13e4fbee58/src/transformers/file_utils.py#L82 2
... | Adding the full code to reproduce. This is run on a SageMaker jupyter instance using the tensorflow_python36 kernel.
!pip install "sagemaker>=2.48.0" "transformers==4.6.1" "datasets[s3]==1.6.2" --upgrade
!wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -xf aclImdb_v1.tar.gz
from pathlib impor... | 0 |
huggingface | Beginners | BART Paraphrasing | https://discuss.huggingface.co/t/bart-paraphrasing/312 | I’ve been using BART to summarize, and I have noticed some of the outputs resembling paraphrases.
Is there a way for me to build on this, and use the model for paraphrasing primarily?
from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig
import torch
model = BartForConditionalGeneration.from... | hi @zanderbush, sure BART should also work for paraphrasing. Just fine-tune it on a paraphrasing dataset.
There’s a small mistake in the way you are using .generate. If you want to do sampling you’ll need to set num_beams to 1 and do_sample to True. And set do_sample to False and num_beams to >1 for beam search... | 0 |
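The difference those flags control can be sketched in pure Python — greedy decoding always takes the top-scoring token, while sampling draws from the softmax distribution (a toy over a single decoding step, not the transformers `generate` implementation):

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_pick(logits):
    # do_sample=False with a single beam: always the highest-scoring token
    return max(range(len(logits)), key=logits.__getitem__)

def sample_pick(logits, rng):
    # do_sample=True: draw a token from the softmax distribution instead
    probs = softmax(logits)
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [0.2, 2.5, 0.1]
assert greedy_pick(logits) == 1
rng = random.Random(0)
picks = {sample_pick(logits, rng) for _ in range(200)}
assert 1 in picks  # the likely token dominates, but others can appear
```

Sampling is what lets paraphrases vary between runs; greedy decoding returns the same output every time.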
huggingface | Beginners | Cost to fine tune large transformer models on the cloud? | https://discuss.huggingface.co/t/cost-to-fine-tune-large-transformer-models-on-the-cloud/12355 | hi folks
curious if anyone has experience fine tuning RoBERTa for purposes of text classification for sentiment analysis on a dataset of ~1000 sentences on a model like RoBERTa or BERT large?
similarly, any idea how much it would cost to further pretrain the language model first on 1GB of uncompressed text?
thank you,
... | Didn’t use RoBERTa, did use BERT. Fine-tuning BERT can be done with Google Colab in decent time, i.e. it’s sort of free.
Pretraining I cannot say in advance. 1 GB of text data is a lot. Try 10MB for a few epochs first to make a rough estimation. Results are also not guaranteed to improve | 0 |
huggingface | Beginners | Fine-tuning T5 on Tensorflow | https://discuss.huggingface.co/t/fine-tuning-t5-on-tensorflow/12253 | Hi NLP Gurus,
I recently went through the brand new Hugging Face course and decided to pick a project from the project list: Personal Writing Assistant. In this project, Lewis proposes to use T5 and the JFleg dataset. I struggled a lot to get something close to working, but I’m blocked at the training stage. Important point: I’m... | I already did a lot of research and found this:
this issue, but unfortunately without a real answer
And this one, same. | 0 |
huggingface | Beginners | Chatbot with a knowledge base & mining a knowledge base automatically | https://discuss.huggingface.co/t/chatbot-with-a-knowledge-base-mining-a-knowledge-base-automatically/12048 | Hello everybody
I’m currently developing a chatbot using transformer models (e.g., GPT 2 or BlenderBot). I would like to incorporate a Knowledge Base to give the chatbot a persona. That means a few sentences describing who it is. For example, the knowledge base could contain the sentences “I am an artist”, “I have two ... | You should look into prompt engineering, from experience it might be a bit difficult to get GPT2 to catch your prompt correctly, so if you are able I would go with a bigger model.
(Any article about prompt engineering will tell you this but, make sure you make the prompt read as something you would see in a book)
As fo... | 0 |
huggingface | Beginners | TypeError: ‘>’ not supported between instances of ‘NoneType’ and ‘int’ - Error while training distill bert | https://discuss.huggingface.co/t/typeerror-not-supported-between-instances-of-nonetype-and-int-error-while-training-distill-bert/10137 | Hi,
I had an error while finetuning distilbert model.Screen shot is given
image1356×371 21.3 KB
Screenshot of Code is(except data preprocessing):
import pandas as pd
import numpy as np
import seaborn as sns
import transformers
from transformers import AutoTokenizer,TFBertModel,TFDistilBertModel, DistilBertConfig
tok... | I came across same problem, which seems to be an issue according to a stackoverflow answer:deep learning - HUGGINGFACE TypeError: '>' not supported between instances of 'NoneType' and 'int' - Stack Overflow 22 | 0 |
huggingface | Beginners | How can I pad the vocab to a set multiple? | https://discuss.huggingface.co/t/how-can-i-pad-the-vocab-to-a-set-multiple/12290 | Probably an easy one, but not having any luck in finding the solution, so thought I’d make a post.
To use tensor cores effectively with mixed precision training a NVIDIA guide recommends to “pad vocabulary to be a multiple of 8”.
I’ve searched the tokenizers documentation for answers but haven’t found much luck. The cl... | You can resize the embedding matrix of a Transformer model using the resize_token_embeddings method (see docs 1). | 1 |
huggingface | Beginners | Calculation cross entropy for batch of two tensors | https://discuss.huggingface.co/t/calculation-cross-entropy-for-batch-of-two-tensors/12338 | I’d like to calculate cross entropy for batch of two tensors:
x = torch.tensor([[[ 2.1137, -1.3133, 0.7930, 0.3330, 0.9407],
[-0.8380, -2.0299, -1.1218, 0.3150, 0.4797],
[-0.7439, 0.0753, -0.1121, 0.0096, -1.2621]]])
y = torch.tensor([[1,2,3]])
loss = nn.CrossEntropyLoss()(x, y)
bu... | Try
x = torch.tensor([[ 2.1137, -1.3133, 0.7930, 0.3330, 0.9407],[-0.8380, -2.0299, -1.1218, 0.3150, 0.4797],[-0.7439,0.0753,-0.1121,0.0096,-1.2621]])
y = torch.tensor([1,2,3])
loss = torch.nn.CrossEntropyLoss()(x, y) | 0 |
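For intuition about why the answer flattens the batch, here is what CrossEntropyLoss computes per example, sketched as a didactic pure-Python re-implementation (not the PyTorch API itself):

```python
import math

def cross_entropy(logits, targets):
    # Mean negative log-likelihood: softmax each row of logits, take
    # -log of the probability assigned to the target class, average
    total = 0.0
    for row, t in zip(logits, targets):
        denom = sum(math.exp(v) for v in row)
        total += -math.log(math.exp(row[t]) / denom)
    return total / len(targets)
```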
huggingface | Beginners | Concurrent inference on a single GPU | https://discuss.huggingface.co/t/concurrent-inference-on-a-single-gpu/12046 | Hello
I’m building a chatbot using a transformer model (e.g., GPT 2 or BlenderBot) and I would like to let it run on a server (Windows or Linux). The server has one 11GB GPU. If there is only one inference of the chatbot model at the same time there is no problem. But if there are several concurrent calls, the calls ne... | Does somebody have any suggestions? I’m happy about every input. | 0 |
huggingface | Beginners | Generate sentences from keywords only | https://discuss.huggingface.co/t/generate-sentences-from-keywords-only/12315 | Hi everyone, I am trying to generate sentences from a few keywords only. May I know which model and function I shall use? Thank you and I am a very beginner.
E.g.,
Input: “dinner”, “delicious”, “wonderful”, “steak”
Output: “We had a wonderful dinner yesterday and the steak was super delicious.” | Seq2seq models like T5 and BART are well-suited for this. You can fine-tune them on (list of keywords, sentence) pairs in a supervised manner. | 0 |
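One common way to frame this for a seq2seq model is to serialize the keywords into a single source string; the prefix and separator below are illustrative assumptions, not a fixed convention:

```python
# Hypothetical input/target pair for keyword-to-sentence fine-tuning
keywords = ["dinner", "delicious", "wonderful", "steak"]
source = "generate sentence: " + " | ".join(keywords)
target = "We had a wonderful dinner yesterday and the steak was super delicious."
print(source)  # → generate sentence: dinner | delicious | wonderful | steak
```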
huggingface | Beginners | How to turn WanDB off in trainer? | https://discuss.huggingface.co/t/how-to-turn-wandb-off-in-trainer/6237 | I am trying to use the trainer to fine tune a bert model but it keeps trying to connect to wandb and I dont know what that is and just want it off. is there a config I am missing? | import os
os.environ["WANDB_DISABLED"] = "true"
This works for me. | 0 |
huggingface | Beginners | glue_data/MNLI dataset | https://discuss.huggingface.co/t/glue-data-mnli-dataset/12265 | Hi,
I am trying to download the MNLI data set from hugging face, but I can’t. I can see the preview of the data but can’t download it. Does anybody have any idea?
Thanks. | Hi,
could you please open an issue 1 on GH where you copy and paste the error stack trace you are getting? | 0 |
huggingface | Beginners | Why TrOCR processor has a feature extractor? | https://discuss.huggingface.co/t/why-trocr-processor-has-a-feature-extractor/11939 | When we are using an image transformer, why do we need a feature extractor (TrOCR processor is Feature Extractor + Roberta Tokenizer)?
And I saw that the output image given by the processor is the same as the original image, just resized to a smaller shape.
@nielsr is the processor doing any type of image ... | Yes feature extractors also have a from_pretrained method, to just load the same configuration as the one of a particular checkpoint on the hub.
e.g. if you do ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224"), it will make sure the size attribute of the feature extractor is set to 224. You could of co... | 1 |
huggingface | Beginners | RoBERTa MLM fine-tuning | https://discuss.huggingface.co/t/roberta-mlm-fine-tuning/1330 | Hello,
I want to fine-tune RoBERTa for MLM on a dataset of about 200k texts. The texts are reviews from online forums ranging from basic conversations to technical descriptions with a very specific vocabulary.
I have two questions regarding data preparation:
Can I simply use RobertaTokenizer.from_pretrained("roberta-b... | Hello there,
I am currently trying to do the same : fine-tune Roberta on a very specific vocabulary of mine (let’s say : biology stuff).
About your first question, you should at least add some new words, specific to your vocabulary, in the Tokenizer vocabulary. See this discussion : how can i finetune BertTokenizer?... | 0 |
huggingface | Beginners | TrOCR inference | https://discuss.huggingface.co/t/trocr-inference/12237 | @nielsr, i am using trocr-printed for inference using the below code and its working fine except for
the len of (tuple_of_logits) , it’s always 19, no matter what batch_size i use, even when i override the
model.decoder.config.max_length from 20 to 10, the len(tuple_of_logits) is always 19.
can you please help me figu... | Hi,
You can adjust the maximum number of tokens by specifying the max_length parameter of the generate method:
outputs = model.generate(pixel_values, output_scores=True, return_dict_in_generate=True, max_length=10)
Note that this is explained in the docs 2. | 1 |
huggingface | Beginners | Diff between trocr-printed and trocr-handwritten | https://discuss.huggingface.co/t/diff-between-trocr-printed-and-trocr-handwritten/12239 | What’s the difference between trocr-printed and trocr-handwritten, other than the dataset they were trained on? Because I ran inference on an image with both models and found that handwritten gave correct output but printed missed a character | so the handwritten model prediction being correct and the printed model being incorrect would be random | 1 |
huggingface | Beginners | Plot Loss Curve with Trainer() | https://discuss.huggingface.co/t/plot-loss-curve-with-trainer/9767 | Hey,
I am fine tuning a BERT model for a Multiclass Classification problem. While training my losses seem to look a bit “unhealthy” as my validation loss is always smaller (eval_steps=20) than my training loss. How can I plot a loss curve with a Trainer() model? | Scott from Weights & Biases here. Don’t want to be spammy so will delete this if it’s not helpful. You can plot losses to W&B by passing report_to to TrainingArguments.
from transformers import TrainingArguments, Trainer
args = TrainingArguments(... , report_to="wandb")
trainer = Trainer(... , args=args)
More info he... | 0 |
huggingface | Beginners | Annotate a NER dataset (for BERT) | https://discuss.huggingface.co/t/annotate-a-ner-dataset-for-bert/9687 | I am working on annotating a dataset for the purpose of named entity recognition.
In principle, I have seen that for multi-phrase (not single word) elements, annotations work like this (see this example below):
Romania ( B-CNT )
United States of America ( B-CNT C-CNT C-CNT C-CNT )
where B-CNT stands for “beginning-co... | Did you find a solution to this problem? I am working on this right now and want to label entities that are multiword. So far I have just labelled them all as individual words but its a pretty bad way to do this. | 0 |
huggingface | Beginners | Tokenizer.batch_encode_plus uses all my RAM | https://discuss.huggingface.co/t/tokenizer-batch-encode-plus-uses-all-my-ram/4828 | I only have 25GB RAM and everytime I try to run the below code my google colab crashes. Any idea how to prevent his from happening. Batch wise would work? If so, how does that look like?
max_q_len = 128
max_a_len = 64
def batch_encode(text, max_seq_len):
return tokenizer.batch_encode_plus(
text.tolist(),
... | Are you positive it’s actually the encoding that does it and not some other part of your code? Maybe you can show us the traceback? | 0 |
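If the encoding really is the culprit, one way to cap peak memory is to tokenize in slices rather than one giant batch_encode_plus call. A sketch (the tokenizer is passed in as a callable; names and defaults are illustrative):

```python
def encode_in_batches(texts, tokenizer, max_seq_len, batch_size=1000):
    # Tokenize `texts` in slices of `batch_size` so that only one
    # slice's encodings are materialized at a time
    chunks = []
    for i in range(0, len(texts), batch_size):
        chunks.append(tokenizer(texts[i:i + batch_size],
                                max_length=max_seq_len,
                                padding="max_length",
                                truncation=True))
    return chunks
```

The chunks can then be concatenated, or fed to the model one at a time.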
huggingface | Beginners | ValueError: Target size (torch.Size([8])) must be the same as input size (torch.Size([8, 8])) | https://discuss.huggingface.co/t/valueerror-target-size-torch-size-8-must-be-the-same-as-input-size-torch-size-8-8/12133 | I’m having trouble getting my model to train. It keeps returning the error:
ValueError: Target size (torch.Size([8])) must be the same as input size (torch.Size([8, 8]))
for the given code:
from transformers import BertTokenizer, BertForSequenceClassification, TrainingArguments, Trainer
from datasets import load_datase... | Can you print out one or two examples of small_train_dataset?
In that way, you can verify whether the data is prepared correctly for the model. You can for example decode the input_ids back to text, verify the labels, etc. | 0 |
huggingface | Beginners | How to become involved in the community? | https://discuss.huggingface.co/t/how-to-become-involved-in-the-community/12168 | Hello! I am sure this is a widely asked question but I would appreciate some insight! I am a developer coming from using GPT-3, and I have some understanding of transformer models and such. I work with CV and NLP, and wanted to look into HuggingFace as I have heard its name everywhere.
I am looking to develop my own N... | Hey, welcome to HuggingFace. If you want to learn, the best way is to go through the course (rather short, but it provides ample knowledge) - Transformer models - Hugging Face Course - and if you have doubts, ask them on the Forum.
Join the discord for discussions, interaction with the community and much more.
Have a good day... | 0 |
huggingface | Beginners | Accuracy metric throws during evaluation on sequence classification task | https://discuss.huggingface.co/t/accuracy-metric-throws-during-evaluation-on-sequence-classification-task/12103 | I’m fine-tuning a BertFor SequenceClassification model and I want to compute the accuracy on the evaluation set after each training epoch. However, the evaluation step fails with:
TypeError: 'list' object is not callable" when calling evaluate
Here’s a minimal example showing the error (see also this Colab notebook):
... | compute_metrics should be a function that takes a namedtuple (of type EvalPredictions) and returns a dictionary metric nane/metric value.
Look at the text classification example or the course section on the Trainer. | 1 |
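The expected shape of such a function can be sketched in plain Python (real code would typically use np.argmax over the logits; all names here are illustrative):

```python
def compute_metrics(eval_pred):
    # eval_pred is a (predictions, label_ids) pair; return a dict of
    # metric-name -> value, e.g. accuracy via argmax over each logit row
    logits, labels = eval_pred
    preds = [max(range(len(row)), key=row.__getitem__) for row in logits]
    correct = sum(int(p == l) for p, l in zip(preds, labels))
    return {"accuracy": correct / len(labels)}
```

This is the callable you pass as Trainer(compute_metrics=compute_metrics), rather than a list.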
huggingface | Beginners | Question-Answering/Text-generation/Summarizing: Fine-tune on multiple answers | https://discuss.huggingface.co/t/question-answering-text-generation-summarizing-fine-tune-on-multiple-answers/2778 | Hi all!
Looking to fine-tune a model for QA/Text-Generation (not sure how to frame this) and I’m wondering how to best prepare the dataset in a way that I can feed multiple answers to the same question?
My goal is to facilitate the creation of a unique answer to a given question that is based on the input answers. The ... | Maybe @valhalla has some pointers? | 0 |
huggingface | Beginners | Retrieving whole words with fill-mask pipeline | https://discuss.huggingface.co/t/retrieving-whole-words-with-fill-mask-pipeline/11323 | Hi,
I’ve recently discovered the power of the fill-mask pipeline from Huggingface, and while playing with it, I discovered that it has issues handling non-vocabulary words.
For example, in the sentence, “The internal analysis indicates that the company has reached a [MASK] level.”, I would like to know which one of the... | I’m not sure but you would average the probability of these tokens together and then compare with each other | 0 |
huggingface | Beginners | Reuse context for more questions in question answering | https://discuss.huggingface.co/t/reuse-context-for-more-questions-in-question-answering/9614 | I want to reuse the context in my QA system. I want to answer more questions on the same context and I want to avoid to load the context for any answer.
I’m tying to use the code below. There is a way to reuse the context, i.e. to load the context only once?
from transformers import pipeline
nlp_qa = pipeline(
'qu... | With most implementations, this is not possible. The models combine question and context such that both can attend to each other since the first layer.
There may be some implementation that only performs question/context attention as a last operation but I am not aware of it. | 0 |
huggingface | Beginners | Output Includes Input | https://discuss.huggingface.co/t/output-includes-input/6831 | Whenever I am generating text the input is included in the output. When the input is close to the maximum length the model barely produces any useful output.
Information
When using transformers.pipeline or transformers.from_pretrianed, the model is only generating the input, when the input is long. For example,
genera... | I am experiencing the same issue with another model … Help with this would be appreciated | 0 |
huggingface | Beginners | Converting an HF dataset to pandas | https://discuss.huggingface.co/t/converting-an-hf-dataset-to-pandas/12043 | Wondering if there is a way to convert a dataset downloaded using load_dataset to pandas? | Hi,
we have a method for that - Dataset.to_pandas. However, note that this will load the entire dataset into memory by default to create a DataFrame. If your dataset is too big to fit in RAM, load it in chunks as follows:
dset = load_dataset(...)
for df in dset.to_pandas(batch_size=..., batched=True):
# process dat... | 0 |
huggingface | Beginners | Img2seq model with pretrained weights | https://discuss.huggingface.co/t/img2seq-model-with-pretrained-weights/728 | Hi there,
I’m very new to this transformers business, and have been playing around with the HF code to learn things as I tinker.
the context:
One thing I would like to do is build an encoder/decoder out of a CNN and a Transformer Decoder, for generating text from images. My wife likes to pretend that she’s seen movies,... | Interesting project. My suggestion would be to take the Transformer based ViT and merge that with a decoder as a sequence to sequence function but with cross attention. | 0 |
huggingface | Beginners | What is temperature? | https://discuss.huggingface.co/t/what-is-temperature/11924 | I see the word “temperature” being used at various places like:
in Models — transformers 4.12.4 documentation 5
temperature ( float , optional, defaults to 1.0) – The value used to module the next token probabilities.
temperature scaling for calibration
temperature of distillation
can anyone please explain what... | This is very well explained in this Stackoverflow answer.
You can also check out our blog post on generating text with Transformers, which also includes a description of the temperature. | 1 |
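Concretely, the temperature just divides the logits before the softmax, which is easy to see in a minimal pure-Python illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    # t < 1 sharpens the distribution (peakier), t > 1 flattens it
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

print(softmax_with_temperature([2.0, 1.0, 0.0], 1.0))
print(softmax_with_temperature([2.0, 1.0, 0.0], 0.5))  # top prob grows
```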
huggingface | Beginners | Loading WiC dataset for fine tuning | https://discuss.huggingface.co/t/loading-wic-dataset-for-fine-tuning/11936 | I’m attempting to load the WiC dataset given to me for a class final project to fine-tune a BERT model but keep getting errors.
I think it could be a problem with getitems but I’m not certain how I would change that to fit this dataset.
My Code is:
from transformers import BertTokenizer, BertForSequenceClassification, ... | Hi,
I see you are first working with a HuggingFace Dataset (that is returned by the load_dataset function), and that you are then converting it to a PyTorch Dataset.
Actually, the latter is not required. Also, you can tokenize your training and test splits in one go:
from transformers import BertTokenizer, BertForSeque... | 1 |
huggingface | Beginners | How to fine tune bert on entity recognition? | https://discuss.huggingface.co/t/how-to-fine-tune-bert-on-entity-recognition/11309 | I have a paragraph for example below
Either party may terminate this Agreement by written notice at any time if the other party defaults in the performance of its material obligations hereunder. In the event of such default, the party declaring the default shall provide the defaulting party with written notice setting ... | Named-entity recognition (NER) is typically solved as a sequence tagging task, i.e. the model is trained to predict a label for every word. Typically one annotates NER datasets using the IOB annotation format 1 (or one of its variants, like BIOES). Let’s take the example sentence from your paragraph. It would have to b... | 0 |
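The IOB scheme described in the answer can be shown on a multiword entity (tags here are illustrative; B- begins an entity, I- continues it, O marks tokens outside any entity):

```python
# One tag per word, aligned with the token list
tokens = ["United", "States", "of", "America", "may", "terminate"]
tags   = ["B-CNT",  "I-CNT",  "I-CNT", "I-CNT", "O",   "O"]
assert len(tokens) == len(tags)
```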
huggingface | Beginners | Zero shot classification for long form text | https://discuss.huggingface.co/t/zero-shot-classification-for-long-form-text/5536 | I’m looking to do topic prediction/classification on long form text (podcasts/transcripts) and I’m curious if anyone knows of a model for this? I’ve looked through the existing zero shot classification models but they all appear to be optimized for short form text like questions.
If anyone knows of such a model I would... | cc @joeddav who is the zero-shot expert here | 0 |
huggingface | Beginners | How to fine tune TrOCR model properly? | https://discuss.huggingface.co/t/how-to-fine-tune-trocr-model-properly/11699 | Machine learning neophyte here, so apologies in advance for a “dumb” question.
I have been trying to build a TrOCR model using the VisionEncoderDecoderModel with a checkpoint ‘microsoft/trocr-base-handwritten’ . I have tried the other ones too, but my fine tuning messes up the model instead of improving it. Wanted to ... | Hi,
Thanks for your interest in TrOCR! Actually, the checkpoint you are loading (i.e. microsoft/trocr-base-handwritten) is one that is already fine-tuned on the IAM dataset. So I guess further fine-tuning on this dataset is not really helpful.
Instead, it makes sense to start from a pre-trained-only checkpoint (namely ... | 1 |
huggingface | Beginners | Best model for entity recognition, text classification and sentiment analysis? | https://discuss.huggingface.co/t/best-model-for-entity-recognition-text-classification-and-sentiment-analysis/11776 | What is the best pre-trained model I can use on my python 2.6.6 web site for:
Entity Recognition,
Text Classification (topics, sub-topics, etc) and
Sentiment Analysis?
I’m not interested in paid subscriptions to apis.
I’m looking for open source solutions that I can run on my server.
Any help would be greatly appreciat... | I prefer myself as a random person before someone comes here to help you.
I guess a Bert or distilled Bert. I think a pre-trained model can do the job. | 0 |
huggingface | Beginners | Add default examples for the Inference API | https://discuss.huggingface.co/t/add-default-examples-for-the-inference-api/11757 | Hi there,
I recently fine-tuned a model and add it to the Hub.
My model is BERT based with a classification head used for sentiment analysis.
I’m wondering how to set-up default examples to be selected by users. The Hub has already set an example in english but my model uses Arabic language.
Here you can find my model.... | UPDATE:
I found the answer in another model README.md
Apparently, I had to add in the top of my README.md the following text:
---
language: ar # <-- my language
widget:
- text: "my example goes here in the requested language"
---
Hope this helps other beginners too and would be awesome if someone put the link... | 0 |
huggingface | Beginners | How to increase the length of the summary in Bart_large_cnn model used via transformers.Auto_Model_frompretrained? | https://discuss.huggingface.co/t/how-to-increase-the-length-of-the-summary-in-bart-large-cnn-model-used-via-transformers-auto-model-frompretrained/11622 | Hello, I used this code to train a bart model and generate summaries
(Google Colab)
However, the summaries are coming about to be only 200-350 characters in length.
Is there some way to increase that length?
What I thought was the following options: -
encoder_max_length = 256 # demo
decoder_max_length = 64
which are u... | Hello, can anyone help
Bump | 0 |
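The usual levers for longer outputs live on generate(); the parameter names are standard generation arguments, but the values below are guesses to illustrate, not recommendations:

```python
# Hypothetical generation settings to allow longer summaries
gen_kwargs = dict(
    max_length=256,      # raise the hard cap on output tokens
    min_length=100,      # force the decoder past very short summaries
    length_penalty=2.0,  # >1.0 nudges beam search toward longer outputs
    num_beams=4,
)
# Usage (assumed): summary_ids = model.generate(input_ids, **gen_kwargs)
```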
huggingface | Beginners | How to pass continuous input in addition to text into pretrained model? | https://discuss.huggingface.co/t/how-to-pass-continuous-input-in-addition-to-text-into-pretrained-model/11414 | Hi, from here 1 it looks like the inputs to a model/pipeline will need to be either the words or tokenizer output, which are dictionary outputs of id and mask. In my project, I need to use a combination of words + some extracted features. How would one do this? Thank you. | No i think in that case you would implement a featurizer yourself and would pass the final inputs_embeds to the model | 1 |
huggingface | Beginners | Huggingfac and GSoC22 | https://discuss.huggingface.co/t/huggingfac-and-gsoc22/11478 | Hello team,
I was wondering: Hugging Face has a big community and, to my mind, it’s one of the best open source organizations, so why do I never see it in GSoC?
Actually, I want to contribute to Hugging Face soon, and at the same time I want to select the organization that I will start contributing to for GSoC
So I comin... | Hi there! The GSoC timelines have not been published for 2022. Last year the proposals were submitted around end of March and organizations applications were around mid January, so it’s a bit early to know how we’ll participate in the GSoC program.
In any case, if you want to contribute with Hugging Face to the Open So... | 0 |
huggingface | Beginners | Why are transformers called transformers? | https://discuss.huggingface.co/t/why-are-transformers-called-transformers/11607 | hi all,
does anyone know why transformer models are called transformer models?
Is it related to some kind of meme like the inception module? | Because they were a radical new architecture, compared to RNNs and CNNs, i.e. they transformed the architecture landscape. | 0 |
huggingface | Beginners | How to train a model to do QnA - other than English language? | https://discuss.huggingface.co/t/how-to-train-a-model-to-do-qna-other-than-english-language/11584 | Please help me - point me to resources where I can learn –
How to train a model to understand a paragraph in different language
Train to answer questions
I tested a couple of models.-- but they are not working.
It looks that I need to train the model…but I don’t know how to. | If you want to perform QnA tasks in other languages, you have to either use a multilingual model or use the pretrained models trained on your target language. See this post 1 for more information. | 0 |
huggingface | Beginners | Not sure how to compute BLEU through compute_metrics | https://discuss.huggingface.co/t/not-sure-how-to-compute-bleu-through-compute-metrics/9653 | Here is my code
from transformers import Seq2SeqTrainer,Seq2SeqTrainingArguments, EarlyStoppingCallback, BertTokenizer,MT5ForConditionalGeneration
from transformers.data.data_collator import DataCollatorForSeq2Seq,default_data_collator
from torch.utils.data import Dataset, DataLoader
import pandas as pd
import math,os
... | Hi, I encounter the same situation that trying to use BLEU as the evaluation metric but having the same error as you. Did you find out a solution? | 0 |
huggingface | Beginners | Use custom LogitsProcessor in `model.generate()` | https://discuss.huggingface.co/t/use-custom-logitsprocessor-in-model-generate/11603 | This post is related to Whitelist specific tokens in beam search - #4 by 100worte 1
I see methods such as beam_search() and sample() has a parameter logits_processor, but generate() does not. As of 4.12.3, generate() seems to be calling _get_logits_processor() without any way to pass additional logits processors.
From ... | As it turns out, you cannot add a custom logits processor list to the model.generate(...) call. You need to use your own beam scorer… Similiar to this piece of code I had lying around from a research project.
bad_words_t = bad_words_ids
if extra_bad_words is not None:
bad_words_t += extra_bad_words
model_out=... | 0 |
huggingface | Beginners | Popping `inputs[labels]` when self.label_smoother is not None (in trainer.py) | https://discuss.huggingface.co/t/popping-inputs-labels-when-self-label-smoother-is-not-none-in-trainer-py/11589 | Hi,
I was training my seq2seq model (I’m using Seq2seqTrainer) with label-smoothing and have encountered an error that input_ids was required in my training dataset, whereas I checked that I put them in the dataset.
While debugging it, I found that when self.label_smoother is not None, the labels item was popped out f... | Hey @jbeh can you share a minimal reproducible example? For example, something simple that just shows:
How you load and tokenize the datasets
How you define the training arguments
How you define the trainer
That will help us understand better what is causing the issue | 0 |
huggingface | Beginners | KeyError: ‘loss’ even though my dataset has labels | https://discuss.huggingface.co/t/keyerror-loss-even-though-my-dataset-has-labels/11563 | Hi everyone! I’m trying to fine-tune on a NER task the Musixmatch/umberto-commoncrawl-cased-v1 model, on the italian section of the wikiann dataset. The notebook I’m looking up to is this: notebooks/token_classification.ipynb at master · huggingface/notebooks · GitHub.
Dataset’s initial structure is:
DatasetDict({
... | marcomatta:
It has no labels but the DataCollatorForTokenClassification should help me out generating them.
No, you need to preprocess your dataset to generate them. The data collator is only there to pad those labels as well as the inputs. Have a look at one of the token classification example script 1 or example n... | 0 |
huggingface | Beginners | How to view the changes in a model after training? | https://discuss.huggingface.co/t/how-to-view-the-changes-in-a-model-after-training/11490 | Hello,
I trained a BART model (facebook-cnn) for summarization and compared summaries with a pretrained model
model_before_tuning_1 = AutoModelForSeq2SeqLM.from_pretrained(model_name)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_data,
eval_dataset=validatio... | When fine-tuning a Transformer-based model, such as BART, all parameters of the model are updated. This means, any tensor that is present in model.parameters() can have updated values after fine-tuning.
The configuration of a model (config.json) before and after fine-tuning can be identical. The configuration just defi... | 0 |
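One simple way to see what actually changed is to diff the named parameters before and after fine-tuning. Sketched here with plain lists standing in for tensors; with torch you would compare the pairs from model.named_parameters() instead:

```python
def param_drift(before, after):
    # Mean absolute change per named tensor
    # (before/after are dicts mapping name -> flat list of values)
    return {name: sum(abs(a - b) for b, a in zip(before[name], after[name]))
                  / len(before[name])
            for name in before}

print(param_drift({"encoder.w": [0.0, 1.0]}, {"encoder.w": [0.5, 1.0]}))
# → {'encoder.w': 0.25}
```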
huggingface | Beginners | Using jiant to evaluate sentence-transformer based model | https://discuss.huggingface.co/t/using-jiant-to-evaluate-sentence-transformer-based-model/11540 | Basically, the question is in the title, is it possible to use the NLP toolkit jiant to score models based on sentence transformers | I think that depends on the base model. So for example, a RoBERTa-based model is likely going to work just fine, while a ViT-based one won’t. | 1 |
huggingface | Beginners | Cuda OOM Error When Finetuning GPT Neo 2.7B | https://discuss.huggingface.co/t/cuda-oom-error-when-finetuning-gpt-neo-2-7b/6302 | I’m trying to finetune the 2.7B model with some data I gathered. I’m running on Google Colab Pro with a T-100 16GB. When I run:
from happytransformer import HappyGeneration, GENTrainArgs
model= HappyGeneration("GPT-NEO", "EleutherAI/gpt-neo-2.7B")
args = GENTrainArgs(num_train_epochs = 1, learning_rate =1e-5)
model.tra... | Check Keep getting CUDA OOM error with Pytorch failing to allocate all free memory - PyTorch Forums 8 for the pytorch part of it.
I am seeing something similar for XLM, but in my case pytorch config override are not getting recognized. Need to check if hugging face is overriding it internally. | 0 |
huggingface | Beginners | What is a “word”? | https://discuss.huggingface.co/t/what-is-a-word/11517 | Trying to understand the char_to_word method on transformers.BatchEncoding. The description in the docs is:
Get the word in the original string corresponding to a character in the original string of a sequence of the batch.
Is a “word” defined to be a collection of nonwhitespace characters mapped, by the tokenizer, t... | The notion of word depends on the tokenizer, and the text words are the result of the pre-tokenziation operation. Depending on the tokenizer, it can be split by whitespace, or by whitespace and punctuation, or other more advanced stuff | 0 |
huggingface | Beginners | Error occurs when saving model in multi-gpu settings | https://discuss.huggingface.co/t/error-occurs-when-saving-model-in-multi-gpu-settings/11407 | I’m finetuning a language model on multiple gpus. However, I met some problems with saving the model. After saving the model using .save_pretrained(output_dir), I tried to load the saved model using .from_pretrained(output_dir), but got the following error message.
OSError: Unable to load weights from pytorch checkpoin... | Is your training a multinode training? What may have happened is that you saved the model on the main process only, so only on one machine. The other machines then don’t find your model when you try to load it.
You can use the is_local_main_process attribute of the accelerator to save once per machine. | 0 |
huggingface | Beginners | Trainer never invokes compute_metrics | https://discuss.huggingface.co/t/trainer-never-invokes-compute-metrics/11440 | def compute_metrics(p: EvalPrediction):
print("***Computing Metrics***") # THIS LINE NEVER PRINTED
preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
preds = np.squeeze(preds) if is_regression else np.argmax(preds, axis=1)
if data_args.task_name is not None:
... | You can see the batches that will be passed to your model for evaluation with:
for batch in trainer.get_eval_dataloader(eval_dataset):
break
And see if it does contain the "labels" key. | 1 |
huggingface | Beginners | Changing a CML Language | https://discuss.huggingface.co/t/changing-a-cml-language/11489 | Hi all!
I just wanted to know if is there any shortcut for changing a causal language model language (in my case from English to Persian) - like fine-tuning a existing one - or I should train one from scratch? | You can start from a pre-trained English one and fine-tune it on another language. The effectiveness of this has been shown (among others) in the paper As Good as New. How to Successfully Recycle English GPT-2 to Make Models for Other Languages.
cc @wietsedv, who is the main author of that paper. | 1 |
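The recycling approach keeps the transformer layers and relearns the embeddings for the new language's vocabulary. The core step in 🤗 Transformers is resizing the token embeddings to the new tokenizer's size; the sketch below uses a tiny randomly initialized GPT-2 stand-in (the real flow would load the pre-trained "gpt2" checkpoint, and the new vocab size would come from a Persian tokenizer):

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Tiny config so the sketch runs quickly; a real run would use
# GPT2LMHeadModel.from_pretrained("gpt2") instead.
config = GPT2Config(vocab_size=100, n_embd=32, n_layer=2, n_head=2)
model = GPT2LMHeadModel(config)

new_vocab_size = 120  # len(persian_tokenizer) in the real setting (hypothetical)
model.resize_token_embeddings(new_vocab_size)
# ...then fine-tune on text in the target language as usual.
```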
huggingface | Beginners | T5 for closed book QA | https://discuss.huggingface.co/t/t5-for-closed-book-qa/11475 | How can I use T5 for abstractive QA? I don’t want to work on a SQuAD-like dataset, but rather get answers to general questions. Is there a prefix for this kind of QA for T5?
Thank you in advance! | Hi,
For open-domain question answering, no prefix is required. Google released several checkpoints (which you can find on our hub, such as this one 5) from their paper 2, you can use them as follows:
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("goog... | 0 |
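The generation flow those checkpoints use can be wrapped in a small helper; a sketch, where the model and tokenizer are whichever seq2seq checkpoint was loaded above:

```python
def answer_question(question, tokenizer, model):
    # Closed-book QA checkpoints take the bare question: no task prefix.
    inputs = tokenizer(question, return_tensors="pt")
    output_ids = model.generate(**inputs)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```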
huggingface | Beginners | How to enable set device as GPU with Tensorflow? | https://discuss.huggingface.co/t/how-to-enable-set-device-as-gpu-with-tensorflow/11454 | Hi,
I am trying to figure out how I can set the device to GPU with TensorFlow. With other frameworks I have been able to use the GPU on the same instance. I have the code and error below. It runs OOM on CPU instantly.
I know with PyTorch I can call torch.cuda.set_device(); can you help me understand the equivalent method in Tenso... | Normally, Keras automatically uses the GPU if it’s available.
You can check the available GPUs as follows:
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU'))) | 0 |
huggingface | Beginners | LayoutLM for table detection and extraction | https://discuss.huggingface.co/t/layoutlm-for-table-detection-and-extraction/7015 | Can the LayoutLM model be used or tuned for table detection and extraction?
The paper says that it works on forms, receipts and for document classification tasks. | @ujjayants LayoutLM is not designed for that but you may want to look at
GitHub - Layout-Parser/layout-parser: A Unified Toolkit for Deep Learning Based Document Image Analysis
huggingface | Beginners | Imbalance memory usage on multi_gpus | https://discuss.huggingface.co/t/imbalance-memory-usage-on-multi-gpus/11423 | Hi,
I am using the Trainer API for training a Bart model.
training_args = Seq2SeqTrainingArguments(
    output_dir='./models/bart',
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
warmup_steps=500,... | The reason for this, as far as I know, is that the models on GPUs 1-7 each have a copy on GPU 0. The gradients computed on GPUs 1-7 are brought back to GPU 0 to synchronize all the copies. After backpropagation, the newly obtained model parameters are distributed again to GPUs 1-7. For... | 1 |
huggingface | Beginners | Bert Data Preparation | https://discuss.huggingface.co/t/bert-data-preparation/11435 | I am trying to pre-train a BERT-type model from scratch. It will be a BERT-tiny model.
I will combine the Wikipedia data with some of my own data.
For the wiki data, I can download it using the Hugging Face datasets library. The question I have is what kind of cleaning I need to do after that. There are some non-ASCII charac... | Moving this topic to the Beginners category. | 0 |
huggingface | Beginners | Get label to id / id to label mapping | https://discuss.huggingface.co/t/get-label-to-id-id-to-label-mapping/11457 | hello,
I have been having trouble finding where I can get the label-to-id mapping after loading a dataset from the Hugging Face Hub. Is it already in the dataset object, or is it done with the model? Because I did not define the label2id configuration before training and I don’t know how to get back the corresponding label after... | It is done within the dataset AFAIK.
dataset = load_dataset(...)
# For a ClassLabel column (add `.feature` after ["label"] if it is a Sequence of labels):
dataset.features["label"].names                  # all label names
dataset.features["label"].int2str(0)             # id -> label name
dataset.features["label"].str2int("some_label")  # label name -> id | 1 |
huggingface | Beginners | Coreference Resolution | https://discuss.huggingface.co/t/coreference-resolution/11394 | Hi,
I’m quite familiar with the Hugging Face ecosystem and I have used it a lot.
However, I cannot find resources/models/tutorials for coreference resolution, except for neuralcoref, whose last commit was years ago…
I also saw some models, but there is no clue on how to use them (I guess with a TokenClassification head?)
Doe... | Hi,
I suggest to take a look at this repo: GitHub - mandarjoshi90/coref: BERT for Coreference Resolution 5
It includes multiple models (BERT, SpanBERT) fine-tuned on OntoNotes, an important benchmark for coreference resolution.
There’s also a demo notebook 2, showcasing how to run inference for a new piece of text to f... | 1 |