How to use google/gemma-4-26B-A4B-it-assistant with Transformers:
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-4-26B-A4B-it-assistant")
model = AutoModelForCausalLM.from_pretrained("google/gemma-4-26B-A4B-it-assistant")
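Since this is an instruction-tuned ("it") checkpoint, prompts should go through the tokenizer's chat template rather than raw text. A minimal generation sketch, assuming the checkpoint above is available locally or on the Hub (the example prompt and `max_new_tokens` value are illustrative, not taken from the model card):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/gemma-4-26B-A4B-it-assistant"  # model id from this page
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Chat-style input: a list of {"role", "content"} messages.
messages = [
    {"role": "user", "content": "Explain mixture-of-experts in one sentence."},
]

# apply_chat_template formats the messages with the model's expected
# turn markers and appends the assistant-turn prefix.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For a model of this size, loading with `torch_dtype="auto"` (and, with `accelerate` installed, `device_map="auto"`) is commonly used to reduce memory pressure; both are optional keyword arguments to `from_pretrained`.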