# mlx-community/granite-34b-code-base-4bit
The model mlx-community/granite-34b-code-base-4bit was converted to MLX format from ibm-granite/granite-34b-code-base using mlx-lm version 0.13.0.
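The conversion can be reproduced with mlx-lm's `convert` helper. A minimal sketch, assuming a recent mlx-lm release where `convert` is importable from the package; the output directory name is illustrative, and quantization defaults (4-bit, group size 64) may differ between mlx-lm versions:

```python
# Sketch: convert and 4-bit-quantize the original checkpoint with mlx-lm.
# Output path is a hypothetical name; quantization defaults may vary by mlx-lm version.
from mlx_lm import convert

convert(
    "ibm-granite/granite-34b-code-base",    # source Hugging Face repo
    mlx_path="granite-34b-code-base-4bit",  # local output directory (illustrative)
    quantize=True,                          # 4-bit quantization is the mlx-lm default
)
```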
## Use with mlx
```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/granite-34b-code-base-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
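Because granite-34b-code-base is a base (non-instruct) code model, it is best used for plain completion rather than chat. A small sketch along those lines; the prompt text and `max_tokens` value are illustrative choices, not part of the original card:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/granite-34b-code-base-4bit")

# Plain code completion: the base model simply continues the prompt.
prompt = "def quicksort(arr):\n"
completion = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(completion)
```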
## Evaluation results

All values below are self-reported pass@1 scores (a sketch of how pass@1 is typically estimated follows the list).
- pass@1 on MBPP: 47.2
- pass@1 on MBPP+: 53.1
- pass@1 on HumanEvalSynthesis (Python): 48.2
- pass@1 on HumanEvalSynthesis (Python): 54.9
- pass@1 on HumanEvalSynthesis (Python): 61.6
- pass@1 on HumanEvalSynthesis (Python): 40.2
- pass@1 on HumanEvalSynthesis (Python): 50.0
- pass@1 on HumanEvalSynthesis (Python): 39.6
- pass@1 on HumanEvalSynthesis (Python): 42.7
- pass@1 on HumanEvalSynthesis (Python): 26.2
- pass@1 on HumanEvalSynthesis (Python): 47.0
- pass@1 on HumanEvalSynthesis (Python): 26.8
- pass@1 on HumanEvalSynthesis (Python): 36.6
- pass@1 on HumanEvalSynthesis (Python): 25.0
- pass@1 on HumanEvalSynthesis (Python): 20.1
- pass@1 on HumanEvalSynthesis (Python): 30.5
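For reference, pass@1 is the functional-correctness metric used by the HumanEval/MBPP family of benchmarks: the estimated probability that a single sampled solution passes all unit tests. A minimal sketch of the standard unbiased pass@k estimator, assuming you already have per-problem sample counts (the numbers in the example are made up):

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for one problem.

    n: total samples generated, c: samples that pass all tests, k: sample budget.
    Returns the probability that at least one of k sampled solutions is correct.
    """
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# Hypothetical example: 200 samples per problem, 96 correct -> pass@1 = 0.48
print(pass_at_k(200, 96, 1))
```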
A complete end-to-end snippet, including installation notes:

```python
# Make sure mlx-lm is installed:
#   pip install --upgrade mlx-lm
#   (if on a CUDA device, also: pip install mlx[cuda])

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/granite-34b-code-base-4bit")
prompt = "Once upon a time in"
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```