raazkumar/gemma-4-31B-it-mlx-2Bit
The model raazkumar/gemma-4-31B-it-mlx-2Bit was converted to MLX format from google/gemma-4-31B-it using mlx-lm version 0.31.2.
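For reference, MLX conversions like this one are typically produced with mlx-lm's convert utility. The command below is a minimal sketch using the standard mlx_lm.convert flags; the exact invocation used for this repo (group size, upload step) is not recorded here and is an assumption:

# Sketch: convert google/gemma-4-31B-it to MLX with 2-bit quantization
python -m mlx_lm.convert \
    --hf-path google/gemma-4-31B-it \
    -q --q-bits 2 \
    --upload-repo raazkumar/gemma-4-31B-it-mlx-2Bit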
Use with mlx
pip install mlx-lm
from mlx_lm import load, generate
model, tokenizer = load("raazkumar/gemma-4-31B-it-mlx-2Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
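mlx-lm can also stream tokens as they are generated. A minimal sketch, assuming a recent mlx-lm where stream_generate yields response objects with a .text field (older versions yield plain strings):

from mlx_lm import load, stream_generate

model, tokenizer = load("raazkumar/gemma-4-31B-it-mlx-2Bit")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "hello"}],
    tokenize=False, add_generation_prompt=True,
)

# Print each chunk as soon as it is decoded
for response in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(response.text, end="", flush=True)
print()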
Generated by ML Intern
This model repository was generated by ML Intern, an agent for machine learning research and development on the Hugging Face Hub.
- Try ML Intern: https://smolagents-ml-intern.hf.space
- Source code: https://github.com/huggingface/ml-intern
Usage
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = 'raazkumar/gemma-4-31B-it-mlx-2Bit'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
For non-causal architectures, replace AutoModelForCausalLM with the appropriate AutoModel class.
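For a quick end-to-end check with transformers, a minimal generation sketch follows; the prompt and generation settings are placeholders, and whether an MLX 2-bit export loads directly in transformers depends on the checkpoint contents:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = 'raazkumar/gemma-4-31B-it-mlx-2Bit'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt, generate a short continuation, and decode it
inputs = tokenizer("hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))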
Use with mlx-vlm
# Make sure mlx-vlm is installed
# pip install --upgrade mlx-vlm

from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

# Load the model
model, processor = load("raazkumar/gemma-4-31B-it-mlx-2Bit")
config = load_config("raazkumar/gemma-4-31B-it-mlx-2Bit")

# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."

# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=1
)

# Generate output
output = generate(model, processor, formatted_prompt, image)
print(output)
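The same generation can be run from the command line; a sketch assuming mlx-vlm's mlx_vlm.generate entry point, with flag names following the mlx-vlm README (they may differ across versions):

python -m mlx_vlm.generate \
    --model raazkumar/gemma-4-31B-it-mlx-2Bit \
    --max-tokens 100 \
    --prompt "Describe this image." \
    --image http://images.cocodataset.org/val2017/000000039769.jpg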