Duplicated from google/gemma-4-26B-A4B-it-assistant
How to use BaybeDragon/gemma-4-26B-A4B-it-assistant with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BaybeDragon/gemma-4-26B-A4B-it-assistant")
model = AutoModelForCausalLM.from_pretrained("BaybeDragon/gemma-4-26B-A4B-it-assistant")
```
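Once the tokenizer and model are loaded, a prompt can be run through the standard Transformers generation API. The sketch below assumes this is an instruction-tuned chat model whose tokenizer ships a chat template; the prompt text and generation parameters are illustrative, not part of the original card.

```python
# Minimal generation sketch (assumes the repo provides a chat template)
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "BaybeDragon/gemma-4-26B-A4B-it-assistant"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat-formatted prompt from a single user message
messages = [{"role": "user", "content": "Explain what a tokenizer does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate a short completion and decode only the new tokens
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For a 26B-parameter model, loading with `device_map="auto"` and a reduced-precision `torch_dtype` is usually necessary to fit the weights in memory.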