Tags: Question Answering · Transformers · PyTorch · TensorBoard · English · t5 · text2text-generation · text-generation-inference
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("grasgor/flan-t5-small-wikitablequestions")
model = AutoModelForSeq2SeqLM.from_pretrained("grasgor/flan-t5-small-wikitablequestions")
```
Downloads last month: 15
Model tree for grasgor/flan-t5-small-wikitablequestions

Base model: google/flan-t5-small
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

# This is a seq2seq T5 checkpoint, so the matching pipeline task is
# "text2text-generation" (as tagged above), not "question-answering".
pipe = pipeline("text2text-generation", model="grasgor/flan-t5-small-wikitablequestions")
```
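The snippets above take a flat text prompt, but a WikiTableQuestions-style input pairs a question with a table, so the table must first be linearized into a string. A minimal sketch of one such serialization is below; note that the `linearize_table` helper and its `col: ... row 1: ...` layout are assumptions for illustration, since the exact format this checkpoint was fine-tuned on is not documented here.

```python
def linearize_table(question, headers, rows):
    """Flatten a question and table into a single prompt string.

    NOTE: the "col: ... row 1: ..." layout is a hypothetical serialization;
    the actual format used to fine-tune this checkpoint is not documented.
    """
    table = "col: " + " | ".join(headers)
    for i, row in enumerate(rows, start=1):
        table += f" row {i}: " + " | ".join(str(cell) for cell in row)
    return f"question: {question} table: {table}"

prompt = linearize_table(
    "Which city has the largest population?",
    ["City", "Population"],
    [["Tokyo", "37M"], ["Delhi", "32M"]],
)
# The resulting string can then be passed to pipe(prompt) or tokenizer(prompt, ...).
```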