Tags: Text Generation · Transformers · Safetensors · English · gemma4 · image-text-to-text · mergekit · Merge · roleplay · conversational
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Blazed-Forge/Ateron_Symphony")
model = AutoModelForImageTextToText.from_pretrained("Blazed-Forge/Ateron_Symphony")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens (skip the prompt)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
# Symphony

This is an experimental merge of Gemma 4 models, made with the simple linear method. TIES showed some issues, so we went with linear instead.
## Models Merged
The following models were included in the merge:
- AuriAetherwiing/G4-31B-Musica-v1
- ConicCat/Gemma4-GarnetV2-31B
## Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ./GarnetV2-31B
    parameters:
      weight: 0.75
  - model: ./G4-Musica-v1
    parameters:
      weight: 0.25
merge_method: linear
dtype: float32
out_dtype: bfloat16
```
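The linear method in the config above amounts to a per-parameter weighted average: each tensor in the merged model is 0.75 of the GarnetV2 tensor plus 0.25 of the Musica tensor, computed in float32 and written out as bfloat16. The following is a minimal sketch of that idea, not mergekit's actual implementation; plain Python lists stand in for model weight tensors.

```python
def linear_merge(tensors, weights):
    """Weighted average of same-shaped flat tensors (lists of floats)."""
    assert len(tensors) == len(weights)
    total = sum(weights)
    merged = []
    # Walk the tensors element-wise and blend each position.
    for values in zip(*tensors):
        merged.append(sum(w * v for w, v in zip(weights, values)) / total)
    return merged

garnet = [1.0, 2.0, 3.0]   # stand-in for a GarnetV2-31B parameter
musica = [3.0, 4.0, 5.0]   # stand-in for a G4-Musica-v1 parameter
merged = linear_merge([garnet, musica], weights=[0.75, 0.25])
# Since the weights sum to 1, this is simply 0.75*garnet + 0.25*musica.
```

Because the two weights already sum to 1.0, the normalization by `total` is a no-op here; mergekit likewise normalizes when weights do not sum to 1.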
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Blazed-Forge/Ateron_Symphony")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
pipe(text=messages)
```