I can't make this answer my prompts
Hello, I'm having issues getting this model to answer my prompts...
Basically this is my code:
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM  # AutoModelWithLMHead is deprecated
tokenizer = AutoTokenizer.from_pretrained("unicamp-dl/ptt5-large-t5-vocab")
model = AutoModelForSeq2SeqLM.from_pretrained("unicamp-dl/ptt5-large-t5-vocab")
# Pass the loaded model and tokenizer to the pipeline so they aren't loaded twice
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
result = pipe("Como você está?")
print(result[0]['generated_text'])
The output just echoes my own question back... How can I make this work? I'm new to this. 😉
Hi @IDKWhy17, thanks for your interest in our models. This checkpoint is a pretrained T5 (encoder–decoder), not a chat/instruction model, so it won't answer arbitrary questions out of the box (it will most likely generate unhelpful text or simply echo the input, as you saw).
To get meaningful outputs, you'll need to fine-tune it on a supervised text-to-text dataset (pairs of "input" and "target" text) that matches your use case (question answering, summarization, etc.). Once fine-tuned, the same text2text-generation pipeline will start producing useful generations.
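In case it helps, here is a minimal sketch of that fine-tuning setup using the Transformers `Seq2SeqTrainer`. The two question/answer pairs in `build_examples` are invented placeholders, and the output directory name is arbitrary — swap in your real dataset and paths:

```python
# Minimal fine-tuning sketch for unicamp-dl/ptt5-large-t5-vocab.
# The toy question/answer pairs below are made-up placeholders;
# replace them with your real supervised dataset.

MODEL_NAME = "unicamp-dl/ptt5-large-t5-vocab"

def build_examples():
    """One supervised pair per example: input text -> target text."""
    return [
        {"input": "Pergunta: Qual é a capital do Brasil?", "target": "Brasília."},
        {"input": "Pergunta: Como você está?", "target": "Estou bem, obrigado!"},
    ]

def finetune(output_dir="ptt5-finetuned", epochs=3):
    # Heavy imports kept inside the function so the dataset helper above
    # can be inspected without pulling in transformers/torch.
    from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                              DataCollatorForSeq2Seq, Seq2SeqTrainer,
                              Seq2SeqTrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

    # Tokenize inputs and targets; the target token ids become the labels
    # the decoder is trained to produce.
    features = []
    for ex in build_examples():
        enc = tokenizer(ex["input"], truncation=True, max_length=64)
        enc["labels"] = tokenizer(text_target=ex["target"], truncation=True,
                                  max_length=64)["input_ids"]
        features.append(enc)

    trainer = Seq2SeqTrainer(
        model=model,
        args=Seq2SeqTrainingArguments(output_dir=output_dir,
                                      num_train_epochs=epochs,
                                      per_device_train_batch_size=2),
        train_dataset=features,
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
    trainer.train()
    trainer.save_model(output_dir)
```

After `finetune()` finishes, point the same `text2text-generation` pipeline at the saved directory instead of the hub checkpoint and it should generate answers in the style of your targets. Two toy examples obviously won't teach the model anything useful — you'd want thousands of pairs in practice.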