Tags: Image-Text-to-Text · Transformers · Safetensors · Korean · English · qwen3_5 · korean · reasoning · multimodal · mix · conversational
Instructions for using ginigen-ai/Rogue-28B-MIX with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use ginigen-ai/Rogue-28B-MIX with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="ginigen-ai/Rogue-28B-MIX")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("ginigen-ai/Rogue-28B-MIX")
model = AutoModelForImageTextToText.from_pretrained("ginigen-ai/Rogue-28B-MIX")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ginigen-ai/Rogue-28B-MIX with vLLM:
Install from pip and serve the model:

```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ginigen-ai/Rogue-28B-MIX"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ginigen-ai/Rogue-28B-MIX",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

Use Docker:

```bash
docker model run hf.co/ginigen-ai/Rogue-28B-MIX
```
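The same OpenAI-compatible endpoint can also be called from Python. A minimal sketch using the official `openai` client, assuming the `vllm serve` command above is running on localhost:8000 (the `api_key` value is a placeholder; vLLM does not check it by default):

```python
# Sketch: query the vLLM OpenAI-compatible endpoint with the openai client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="ginigen-ai/Rogue-28B-MIX",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```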
- SGLang
How to use ginigen-ai/Rogue-28B-MIX with SGLang:
Install from pip and serve the model:

```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ginigen-ai/Rogue-28B-MIX" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ginigen-ai/Rogue-28B-MIX",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

Use Docker images:

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "ginigen-ai/Rogue-28B-MIX" \
  --host 0.0.0.0 \
  --port 30000
```

Then call the server with the same curl command shown above.

- Docker Model Runner
How to use ginigen-ai/Rogue-28B-MIX with Docker Model Runner:
```bash
docker model run hf.co/ginigen-ai/Rogue-28B-MIX
```
Rogue-28B-MIX
A Korean reasoning + multimodal mix model.
🏛️ Family Lineage

```
┌──────────────────────────────────────────┐
│ Great-Grandfather                        │
│ Qwen-3.6-27B                             │
└──────────────────────────────────────────┘
                     │
                     ▼
┌──────────────────────────────────────────┐
│ Grandfather                              │
│ Darwin-3.6-28B                           │
└──────────────────────────────────────────┘
                     │
                     ▼
┌──────────────────────────────────────────┐
│ Father                                   │
│ FINAL-Bench/Darwin-28B-KR                │
│ - Korean-specialized reasoning model     │
└──────────────────────────────────────────┘
                     │
              ×× crossbred ××
                     │
┌──────────────────────────────────────────┐
│ Mother                                   │
│ NewenAI/QuettaLLMs-27B-Koreasoner-V3     │
│ - #1 on the K-AI Leaderboard             │
└──────────────────────────────────────────┘
                     │
                     ▼
╔══════════════════════════════════════════╗
║ Child (this model)                       ║
║ ginigen-ai/Rogue-28B-MIX                 ║
║                                          ║
║ - Inherits the paternal line's reasoning ║
║ - Inherits the maternal line's           ║
║   Korean K-AI knowledge                  ║
║ - <think> reasoning traces preserved     ║
║ - Multimodal head preserved              ║
╚══════════════════════════════════════════╝
```
🎓 Training Overview
- Weight merge of the paternal × maternal models (see the sketch after this list)
- Additional SFT on K-AI domain data
- Repackaged into a K-AI Leaderboard Docker-compatible format
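The card does not disclose the exact merge recipe, so the following is only a minimal sketch of the kind of linear weight merge described above. The parent repo names come from the lineage diagram; the blend ratio `alpha`, the shape-mismatch handling, and the output path are assumptions for illustration:

```python
# Hypothetical linear interpolation of the two parents' weights.
import torch
from transformers import AutoModelForCausalLM

father = AutoModelForCausalLM.from_pretrained(
    "FINAL-Bench/Darwin-28B-KR", torch_dtype=torch.bfloat16, trust_remote_code=True
)
mother = AutoModelForCausalLM.from_pretrained(
    "NewenAI/QuettaLLMs-27B-Koreasoner-V3", torch_dtype=torch.bfloat16, trust_remote_code=True
)

alpha = 0.5  # assumed blend ratio; the real value is not published
mother_sd = mother.state_dict()

merged = {}
for name, w_father in father.state_dict().items():
    w_mother = mother_sd.get(name)
    # The parents differ in size (28B vs 27B), so only interpolate tensors
    # present in both with matching shapes; keep everything else (e.g. the
    # multimodal head) from the father, which the card says is preserved.
    if w_mother is not None and w_mother.shape == w_father.shape:
        merged[name] = alpha * w_father + (1.0 - alpha) * w_mother
    else:
        merged[name] = w_father

father.load_state_dict(merged)
father.save_pretrained("Rogue-28B-MIX-merged")  # hypothetical output path
```

In practice a merge like this is usually done with a dedicated tool (e.g. mergekit), followed by the SFT stage listed above.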
📊 Evaluation
Evaluated on 10 public Korean datasets, 100 questions each × 1 seed.
| Dataset | Rogue-28B-MIX | Mother (Quetta) |
|---|---|---|
| CLIcK | 84% | 85% |
| KMMLU History | 48% 🏆 | 45% |
| KMMLU Law | 25% | 26% |
| KMMLU Health | 81% 🏆 | 80% |
| HAERAE GK | 63% | 66% |
| HAERAE History | 89% | 90% |
| HAERAE Linguistics | 90% | 95% |
| KoBEST Hellaswag | 95% | 97% |
| KoBEST COPA | 98% | 99% |
| KoBEST BoolQ | 97% | 97% |
| Macro Avg | 77.0% | 78.0% |
Surpasses the mother model in the K-AI Leaderboard's core categories (health and history).
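The Macro Avg row is the unweighted mean of the ten dataset scores; a quick sanity check over the table above:

```python
# Unweighted macro average over the ten dataset scores from the table.
rogue  = [84, 48, 25, 81, 63, 89, 90, 95, 98, 97]
quetta = [85, 45, 26, 80, 66, 90, 95, 97, 99, 97]

print(sum(rogue) / len(rogue))    # 77.0
print(sum(quetta) / len(quetta))  # 78.0
```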
🎯 Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "ginigen-ai/Rogue-28B-MIX"

tokenizer = AutoTokenizer.from_pretrained(
    model_id, trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = "한국의 추석에 대해 설명해주세요."
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
)

out = model.generate(
    inputs.to(model.device),
    max_new_tokens=512,
    do_sample=False,
)
print(tokenizer.decode(out[0], skip_special_tokens=False))
```
🛠️ Specifications
- Parameters: 28B (multimodal)
- Precision: bf16
- Context: 8K (extendable)
- Languages: Korean + English
- Reasoning: `<think>` reasoning traces (see the parsing sketch after this list)
- License: Apache 2.0
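Since generations include the `<think>` reasoning trace (note `skip_special_tokens=False` in the usage example above), downstream code typically wants to separate the trace from the final answer. A minimal sketch, assuming the trace is delimited by literal `<think>...</think>` tags; the exact tag format is an assumption:

```python
# Hypothetical helper: split a raw generation into (reasoning, answer).
import re

def split_think(text: str) -> tuple[str, str]:
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No trace found: treat the whole output as the answer.
        return "", text.strip()
    return match.group(1).strip(), text[match.end():].strip()

reasoning, answer = split_think("<think>Chuseok is a harvest festival...</think>Chuseok is...")
print(answer)
```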
🤝 Credits
- FINAL-Bench/Darwin-28B-KR (paternal line)
- NewenAI/QuettaLLMs-27B-Koreasoner-V3 (maternal line)