How to use google/gemma-4-31B-it-assistant with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-4-31B-it-assistant")
model = AutoModelForCausalLM.from_pretrained("google/gemma-4-31B-it-assistant")
```
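For completeness, here is a sketch of how the loaded model could be prompted through its chat template. It assumes this checkpoint follows the usual instruction-tuned Gemma chat interface; the dtype, `device_map`, prompt text, and generation settings below are illustrative choices, not taken from the model card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/gemma-4-31B-it-assistant"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory use vs. float32 (assumed to be supported)
    device_map="auto",           # let accelerate place layers on available devices
)

# Build a chat-formatted prompt; the example user message is arbitrary.
messages = [{"role": "user", "content": "Explain what an instruction-tuned model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Note that a 31B-parameter model at bfloat16 needs roughly 62 GB of accelerator memory just for the weights, so `device_map="auto"` (or quantized loading) is effectively required on single-GPU machines.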
Can this be fine-tuned somehow to reduce refusals on specific tasks?