Instructions to use ginigen-ai/Rogue-28B-MIX with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ginigen-ai/Rogue-28B-MIX with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="ginigen-ai/Rogue-28B-MIX")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("ginigen-ai/Rogue-28B-MIX")
model = AutoModelForImageTextToText.from_pretrained("ginigen-ai/Rogue-28B-MIX")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ginigen-ai/Rogue-28B-MIX with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ginigen-ai/Rogue-28B-MIX"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ginigen-ai/Rogue-28B-MIX",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/ginigen-ai/Rogue-28B-MIX
```
- SGLang
How to use ginigen-ai/Rogue-28B-MIX with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ginigen-ai/Rogue-28B-MIX" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ginigen-ai/Rogue-28B-MIX",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "ginigen-ai/Rogue-28B-MIX" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ginigen-ai/Rogue-28B-MIX",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use ginigen-ai/Rogue-28B-MIX with Docker Model Runner:
```shell
docker model run hf.co/ginigen-ai/Rogue-28B-MIX
```
---
license: apache-2.0
language:
- ko
- en
library_name: transformers
tags:
- korean
- reasoning
- multimodal
- mix
base_model:
- FINAL-Bench/Darwin-28B-KR
- NewenAI/QuettaLLMs-27B-Koreasoner-V3
---
# Rogue-28B-MIX

A Korean reasoning + multimodal mix model.

---
## 🏛️ Family Lineage

```
┌──────────────────────────────────────────┐
│ Great-Grandfather                        │
│ Qwen-3.6-27B                             │
└──────────────────────────────────────────┘
                     │
                     ▼
┌──────────────────────────────────────────┐
│ Grandfather                              │
│ Darwin-3.6-28B                           │
└──────────────────────────────────────────┘
                     │
                     ▼
┌──────────────────────────────────────────┐
│ Father                                   │
│ FINAL-Bench/Darwin-28B-KR                │
│ - Korean-specialized reasoning model     │
└──────────────────────────────────────────┘
                     │
                ×× cross ××
                     │
┌──────────────────────────────────────────┐
│ Mother                                   │
│ NewenAI/QuettaLLMs-27B-Koreasoner-V3     │
│ - #1 on the K-AI Leaderboard             │
└──────────────────────────────────────────┘
                     │
                     ▼
┌──────────────────────────────────────────┐
│ Child ← this model                       │
│ ginigen-ai/Rogue-28B-MIX                 │
│                                          │
│ - Inherits the father's reasoning        │
│ - Inherits the mother's Korean K-AI      │
│   knowledge                              │
│ - Preserves <think> reasoning traces     │
│ - Preserves the multimodal head          │
└──────────────────────────────────────────┘
```
---

## 📚 Training Overview

1. Weight merge of the paternal × maternal parent models
2. Additional SFT on K-AI domain data
3. Packaging into a K-AI Leaderboard Docker-compatible format
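Step 1, the parent-weight merge, can be sketched as a linear interpolation over matching state-dict keys. This is a minimal illustration, not the published recipe: the `merge_state_dicts` helper, the `alpha` value, and the plain-float stand-ins for real tensors are all assumptions.

```python
# Sketch of a linear weight merge between two parent checkpoints.
# In practice the values would be torch tensors loaded from each
# model's state_dict; floats are used here to keep the sketch minimal.

def merge_state_dicts(father, mother, alpha=0.5):
    """Interpolate two state dicts that share the same keys.

    alpha=1.0 keeps only the father's weights, alpha=0.0 only the mother's.
    """
    assert father.keys() == mother.keys(), "parents must share an architecture"
    return {k: alpha * father[k] + (1 - alpha) * mother[k] for k in father}

# Toy example with scalar "weights" standing in for tensors:
father = {"layer.0.weight": 1.0, "layer.1.weight": 3.0}
mother = {"layer.0.weight": 2.0, "layer.1.weight": 1.0}
merged = merge_state_dicts(father, mother, alpha=0.5)
print(merged)  # {'layer.0.weight': 1.5, 'layer.1.weight': 2.0}
```

Tools such as mergekit implement this (and more elaborate schemes) for real checkpoints; the point here is only the shape of the operation.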
---

## 📊 Evaluation

10 public Korean datasets, 100 questions each × 1 seed.

| Dataset | Rogue-28B-MIX | Mother (Quetta) |
|:---|---:|---:|
| CLIcK | 84% | 85% |
| KMMLU History | **48%** 🏆 | 45% |
| KMMLU Law | 25% | 26% |
| KMMLU Health | **81%** 🏆 | 80% |
| HAERAE GK | 63% | 66% |
| HAERAE History | 89% | 90% |
| HAERAE Linguistics | 90% | 95% |
| KoBEST Hellaswag | 95% | 97% |
| KoBEST COPA | 98% | 99% |
| KoBEST BoolQ | 97% | 97% |
| **Macro Avg** | **77.0%** | **78.0%** |

Surpasses the mother model in the K-AI Leaderboard's key categories (health and history).
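The Macro Avg row is the unweighted mean of the ten per-dataset scores and can be verified directly:

```python
# Recompute the Macro Avg row from the per-dataset scores in the table above.
rogue = [84, 48, 25, 81, 63, 89, 90, 95, 98, 97]    # Rogue-28B-MIX
quetta = [85, 45, 26, 80, 66, 90, 95, 97, 99, 97]   # Mother (Quetta)

def macro_avg(scores):
    """Unweighted mean over datasets: each dataset counts equally."""
    return sum(scores) / len(scores)

print(macro_avg(rogue), macro_avg(quetta))  # 77.0 78.0
```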
---

## 🎯 Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "ginigen-ai/Rogue-28B-MIX"

tokenizer = AutoTokenizer.from_pretrained(
    model_id, trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# "Please explain Korea's Chuseok (harvest festival)."
prompt = "한국의 추석에 대해 설명해주세요."
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
)
out = model.generate(
    inputs.to(model.device),
    max_new_tokens=512,
    do_sample=False,
)
# skip_special_tokens=False keeps the <think> trace visible in the output
print(tokenizer.decode(out[0], skip_special_tokens=False))
```
---

## 🛠️ Specifications

- Parameters: 28B (multimodal)
- Precision: bf16
- Context: 8K (extendable)
- Languages: Korean + English
- Reasoning: `<think>` reasoning traces
- License: Apache 2.0
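Because generations carry a `<think>` reasoning trace, callers that want only the final answer need to strip it. A minimal sketch, assuming the trace is wrapped in literal `<think>…</think>` tags; the `split_think` helper is illustrative, not part of the model's API:

```python
import re

def split_think(text):
    """Split a completion into (reasoning trace, final answer).

    Assumes the trace, when present, is wrapped in <think>...</think>.
    Returns (None, text) when no trace is found.
    """
    m = re.match(r"\s*<think>(.*?)</think>\s*(.*)", text, re.DOTALL)
    if m:
        return m.group(1).strip(), m.group(2).strip()
    return None, text.strip()

trace, answer = split_think(
    "<think>Recall: Chuseok falls in autumn.</think>"
    "Chuseok is the Korean harvest festival."
)
print(answer)  # Chuseok is the Korean harvest festival.
```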
---

## 🤝 Credits

- Father: [FINAL-Bench/Darwin-28B-KR](https://huggingface.co/FINAL-Bench/Darwin-28B-KR)
- Mother: [NewenAI/QuettaLLMs-27B-Koreasoner-V3](https://huggingface.co/NewenAI/QuettaLLMs-27B-Koreasoner-V3)