Use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Blazed-Forge/Ateron_Symphony")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
pipe(text=messages)
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Blazed-Forge/Ateron_Symphony")
model = AutoModelForImageTextToText.from_pretrained("Blazed-Forge/Ateron_Symphony")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping special tokens
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
Symphony

This is an experimental merge of Gemma 4 models, made with the simple linear method. TIES showed some issues, so we went with linear instead.

Models Merged

The following models were included in the merge:

  • AuriAetherwiing/G4-31B-Musica-v1
  • ConicCat/Gemma4-GarnetV2-31B

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: ./GarnetV2-31B
    parameters:
      weight: 0.75
  - model: ./G4-Musica-v1
    parameters:
      weight: 0.25
merge_method: linear
dtype: float32
out_dtype: bfloat16
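Conceptually, the linear merge method computes a weighted average of each matching parameter tensor across the source models. A minimal sketch of that operation, using plain Python lists in place of real model tensors (the `linear_merge` helper is illustrative, not mergekit's actual implementation):

```python
# Weights taken from the YAML above: 0.75 for GarnetV2-31B, 0.25 for G4-Musica-v1.
weights = [0.75, 0.25]

def linear_merge(tensors, weights):
    """Element-wise weighted average of matching parameter values
    (flat lists standing in for model tensors)."""
    return [sum(w * t[i] for t, w in zip(tensors, weights))
            for i in range(len(tensors[0]))]

# Toy values standing in for one parameter from each source model.
a = [1.0, 2.0]
b = [3.0, 6.0]
merged = linear_merge([a, b], weights)  # -> [1.5, 3.0]
```

The `dtype: float32` / `out_dtype: bfloat16` pair in the config means the averaging is accumulated in float32 for precision and the merged weights are then saved in bfloat16.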
Downloads last month: 372
Safetensors · Model size: 31B params · Tensor type: BF16