How to use google/gemma-4-E2B-it-assistant with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-4-E2B-it-assistant")
model = AutoModelForCausalLM.from_pretrained("google/gemma-4-E2B-it-assistant")
```
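Once the tokenizer and model are loaded, an instruction-tuned ("it") checkpoint is usually prompted through the tokenizer's chat template rather than raw text. A minimal sketch of that flow, assuming the standard `apply_chat_template`/`generate` API from Transformers (the helper function names and generation settings below are illustrative, not part of the model card):

```python
# Chat-style inference sketch for an instruction-tuned checkpoint.
# Model id taken from the snippet above; max_new_tokens is an illustrative choice.

def build_messages(prompt: str) -> list:
    """Wrap a user prompt in the chat-message format expected by apply_chat_template."""
    return [{"role": "user", "content": prompt}]

def generate_reply(prompt: str, model_id: str = "google/gemma-4-E2B-it-assistant") -> str:
    # Imported lazily so build_messages stays usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Instruction-tuned checkpoints ship a chat template; apply it instead of raw text.
    inputs = tokenizer.apply_chat_template(
        build_messages(prompt), add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=64)
    # Decode only the newly generated tokens, skipping the echoed prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
```

For example, `generate_reply("Summarize what a tokenizer does.")` would download the checkpoint (if not cached), format the prompt with the model's chat template, and return the decoded completion.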