Instructions to use Warecube/Warecube-KO-27B-v3 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

- Libraries
- Transformers

How to use Warecube/Warecube-KO-27B-v3 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Warecube/Warecube-KO-27B-v3")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Warecube/Warecube-KO-27B-v3")
model = AutoModelForImageTextToText.from_pretrained("Warecube/Warecube-KO-27B-v3")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Warecube/Warecube-KO-27B-v3 with vLLM:

Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Warecube/Warecube-KO-27B-v3"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Warecube/Warecube-KO-27B-v3",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker:

```shell
docker model run hf.co/Warecube/Warecube-KO-27B-v3
```
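Since the server exposes an OpenAI-compatible API, the curl payload above can also be built and sent from Python. A minimal sketch using only the standard library; the endpoint, model ID, and image URL are simply the values from the curl example, and the request itself is commented out so the snippet runs without a live server:

```python
import json

# Same OpenAI-compatible chat-completions payload as the curl example above.
payload = {
    "model": "Warecube/Warecube-KO-27B-v3",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
}
body = json.dumps(payload).encode("utf-8")

# POST to a running vLLM server (uncomment once `vllm serve` is up):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

The same payload works against the SGLang server below, only the port changes.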
- SGLang
How to use Warecube/Warecube-KO-27B-v3 with SGLang:

Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Warecube/Warecube-KO-27B-v3" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Warecube/Warecube-KO-27B-v3",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Warecube/Warecube-KO-27B-v3" \
    --host 0.0.0.0 \
    --port 30000

# Then call the server with the same curl request as above.
```

- Docker Model Runner

How to use Warecube/Warecube-KO-27B-v3 with Docker Model Runner:

```shell
docker model run hf.co/Warecube/Warecube-KO-27B-v3
```
---
license: apache-2.0
language:
  - ko
  - en
library_name: transformers
tags:
  - korean
  - reasoning
  - darwin
  - evolutionary-merge
  - sft
base_model:
  - ginigen-ai/Rogue-28B-MIX
---

# Warecube-KO-27B-v2

A Korean reasoning model: a Darwin evolutionary-merge variant refined with additional SFT.

---
## 🧬 Darwin Evolution Concept

This model is a child model built by applying additional **Korean K-AI-domain SFT** training on top of a parent model produced by **Darwin V7 evolutionary model merging**.

```
Natural evolution            Darwin merge + SFT
─────────────────            ──────────────────
Gene crossover           →   per-module weight-ratio combination (parents)
Generational evolution   →   additional SFT refinement of the parent model
Survival of the fittest  →   strongest K-AI-domain offspring retained
```
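The "gene crossover" row above corresponds to interpolating parent weights with per-module ratios. The exact Darwin V7 recipe is not published; the following is an illustrative sketch only, with plain Python lists standing in for weight tensors and hypothetical module names and ratios:

```python
def module_of(param_name):
    # Hypothetical grouping rule: the prefix before the first dot,
    # e.g. "attn.q_proj" -> "attn".
    return param_name.split(".", 1)[0]

def evolutionary_merge(parent_a, parent_b, ratios, default=0.5):
    """Interpolate two parent state dicts per module: child = r*A + (1-r)*B."""
    child = {}
    for name, wa in parent_a.items():
        r = ratios.get(module_of(name), default)
        wb = parent_b[name]
        child[name] = [r * a + (1 - r) * b for a, b in zip(wa, wb)]
    return child

# Toy parents with two "modules": attention leans toward parent A,
# the MLP toward parent B.
parent_a = {"attn.w": [1.0, 1.0], "mlp.w": [1.0, 1.0]}
parent_b = {"attn.w": [0.0, 0.0], "mlp.w": [0.0, 0.0]}
child = evolutionary_merge(parent_a, parent_b, {"attn": 0.8, "mlp": 0.2})
print(child)  # {'attn.w': [0.8, 0.8], 'mlp.w': [0.2, 0.2]}
```

In the actual Darwin process, many such ratio combinations are generated and only the offspring that score best on the target domain survive to the next generation.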
---
## 🏛️ Family Lineage

```
┌────────────────────────────────────────┐
│        Base (Parent)                   │
│        ginigen-ai/Rogue-28B-MIX        │
│                                        │
│  - K-AI Leaderboard 2nd (avg 0.559)    │
│  - Darwin + Quetta evolutionary merge  │
│  - <think> reasoning trace             │
└────────────────────────────────────────┘
                   │
                   ▼  additional K-AI-domain SFT
╔════════════════════════════════════════╗
║        Child (this model)              ║
║        Warecube/Warecube-KO-27B-v2     ║
║                                        ║
║  ✦ Inherits all base capabilities      ║
║  ✦ Strengthened Com2-main domain       ║
║  ✦ K-AI Leaderboard Docker-compatible  ║
║    format                              ║
╚════════════════════════════════════════╝
```
## 🎓 Training Overview

| Stage | Summary |
|:---|:---|
| **Base** | ginigen-ai/Rogue-28B-MIX (Darwin family × Quetta family evolutionary merge) |
| **SFT** | Additional refinement with Korean K-AI-domain instruction data |
| **Compatibility** | Packaged in a K-AI Leaderboard Docker-compatible format |
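The SFT data format itself is not published. As an illustration only, instruction-style records are commonly converted into chat messages and then rendered with the model's chat template before tokenization; the field names below are assumptions, not the actual dataset schema:

```python
def to_chat_example(record):
    # Convert one instruction/response record into the message format
    # accepted by tokenizer.apply_chat_template (hypothetical field names).
    return [
        {"role": "user", "content": record["instruction"]},
        {"role": "assistant", "content": record["response"]},
    ]

record = {
    "instruction": "서울은 어느 나라의 수도인가요?",   # "Which country's capital is Seoul?"
    "response": "서울은 대한민국의 수도입니다.",        # "Seoul is the capital of South Korea."
}
messages = to_chat_example(record)
print([m["role"] for m in messages])  # ['user', 'assistant']
```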
---
## 🎯 Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Warecube/Warecube-KO-27B-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# "Please explain Korea's Chuseok holiday."
prompt = "한국의 추석에 대해 설명해주세요."
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
)
out = model.generate(
    inputs.to(model.device),
    max_new_tokens=512,
    do_sample=False,
)
print(tokenizer.decode(out[0], skip_special_tokens=False))
```
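The model emits a `<think>` reasoning trace before the final answer (see the specifications below). When only the answer is needed, the trace can be stripped from the decoded text; a minimal sketch assuming the trace is delimited by `<think>...</think>` tags, with a made-up sample string:

```python
import re

def strip_think(text):
    # Remove any <think>...</think> spans (non-greedy, matching across newlines).
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

# Hypothetical decoded output, not an actual model response.
sample = "<think>추석은 음력 8월 15일로...</think>\n추석은 한국의 대표적인 명절입니다."
print(strip_think(sample))  # 추석은 한국의 대표적인 명절입니다.
```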
---
## 🛠️ Specifications

- Parameters: 28B (multimodal)
- Precision: bf16
- Context: 8K (extendable)
- Languages: Korean + English
- Reasoning: `<think>` reasoning trace
- License: Apache 2.0

---
## 🤝 Credits

- Base: [ginigen-ai/Rogue-28B-MIX](https://huggingface.co/ginigen-ai/Rogue-28B-MIX) (2nd place on the K-AI Leaderboard)
- Family: Darwin family (Darwin V7 evolutionary merge series)