Jonna Marie Matthiesen and Claude Opus 4.6 (1M context) committed on
Commit fcac823 · 1 Parent(s): e824fce

Update README: migrate workflow from embedl-models to flash-head


Replace embedl-models / Docker container instructions with the
flash-head vLLM plugin workflow (pip install flash-head). Update
code examples to use standard vLLM imports and add GitHub badge.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Files changed (1)
  1. README.md +119 -0
README.md ADDED

---
base_model:
- nvidia/Cosmos-Reason2-8B
tags:
- nvidia
- cosmos
- cosmos-reason2
- multimodal
- vlm
- quantized
- flashhead
- qwen3_vl
pipeline_tag: image-text-to-text
license: other
license_name: embedl-models-community-licence-1.0
license_link: https://github.com/embedl/embedl-models/blob/main/LICENSE
---

# Cosmos-Reason2-8B-W4A16-FlashHead

[![GitHub](https://img.shields.io/badge/GitHub-flash--head-black?logo=github)](https://github.com/embedl/flash-head)

**Optimized version of [nvidia/Cosmos-Reason2-8B](https://huggingface.co/nvidia/Cosmos-Reason2-8B) using quantization and FlashHead, Embedl's efficient replacement for the language model head.**

Designed for **low-latency inference** on **NVIDIA GPUs**, leveraging:

- FlashHead
- Quantization (W4A16)
- vLLM plugin via [`flash-head`](https://github.com/embedl/flash-head)

---

## Model Details

| **Field** | **Value** |
|---|---|
| **Base Model** | [nvidia/Cosmos-Reason2-8B](https://huggingface.co/nvidia/Cosmos-Reason2-8B) |
| **Input / Output** | Text + Image / Video -> Text |
| **Optimizations** | FlashHead LM Head + Quantization (W4A16) |
| **Developers** | Embedl |
| **Licenses** | Upstream: [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). <br>Optimized components: Embedl Models Community Licence v1.0 *(no redistribution)* |

---

## Installation

```bash
pip install flash-head
```

The [`flash-head`](https://github.com/embedl/flash-head) vLLM plugin is required. It activates automatically at startup.

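As a quick sanity check, the minimal sketch below confirms the package is installed and lists anything registered under vLLM's plugin entry-point groups. It assumes `flash-head` registers itself through standard setuptools entry points; the group names are an assumption, not documented behaviour of the plugin.

```python
# Sketch: confirm flash-head is installed and discoverable as a vLLM plugin.
# Assumption: the plugin registers a setuptools entry point in one of vLLM's
# plugin groups; adjust the group names if the project documents others.
from importlib.metadata import entry_points, version

print("flash-head version:", version("flash-head"))

for group in ("vllm.general_plugins", "vllm.platform_plugins"):
    names = [ep.name for ep in entry_points(group=group)]  # Python 3.10+
    print(f"{group}: {names}")
```

If the plugin appears in one of the groups, no further configuration is needed; vLLM loads it automatically at startup as noted above.
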
---

## Usage Examples

### vLLM Serve

```bash
vllm serve embedl/Cosmos-Reason2-8B-W4A16-FlashHead \
  --max-model-len 8192 \
  --gpu-memory-utilization 0.75
```

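Once the server is running, it exposes an OpenAI-compatible API (by default at `http://localhost:8000/v1`). The snippet below is a minimal client sketch using the `openai` Python package; the image URL and prompt are placeholders, not assets associated with this model.

```python
# Sketch: query the OpenAI-compatible server started with `vllm serve` above.
# Assumes the default host/port; the image URL is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="embedl/Cosmos-Reason2-8B-W4A16-FlashHead",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/frame.jpg"}},
                {"type": "text", "text": "Describe what is happening in this image."},
            ],
        }
    ],
    temperature=0.0,
    max_tokens=256,
)
print(response.choices[0].message.content)
```

For video prompts against the server, vLLM's chat endpoint also accepts `video_url` content parts analogous to the offline example below.
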
### vLLM Video Inference

```python
from vllm import LLM, SamplingParams

if __name__ == "__main__":
    model = "embedl/Cosmos-Reason2-8B-W4A16-FlashHead"
    video_url = "https://nvidia-cosmos.github.io/cosmos-cookbook/gallery/vs_assets/clip_1_short.mp4"

    # OpenAI-style chat messages: the user turn combines a video URL and a text prompt.
    messages = [
        {
            "role": "system",
            "content": [{"type": "text", "text": "You are a helpful assistant."}],
        },
        {
            "role": "user",
            "content": [
                {"type": "video_url", "video_url": {"url": video_url, "fps": 4}},
                {"type": "text", "text": "Describe this video in detail."},
            ],
        },
    ]

    # Engine settings mirror the serve example above (context length, GPU memory budget).
    llm = LLM(
        model=model,
        limit_mm_per_prompt={
            "video": {"count": 1, "num_frames": 12, "width": 1280, "height": 720},
            "image": 0,
            "audio": 0,
        },
        media_io_kwargs={"video": {"num_frames": -1}},
        max_model_len=8192,
        mm_processor_kwargs={"truncation": False},
        gpu_memory_utilization=0.75,
        trust_remote_code=True,
    )

    # Greedy decoding for a deterministic description.
    output = llm.chat(messages, sampling_params=SamplingParams(temperature=0.0, max_tokens=256))
    print(output[0].outputs[0].text)
```

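Single-image inference follows the same pattern. The sketch below is illustrative: the image URL is a placeholder and the multimodal limits are assumptions, not settings documented for this checkpoint.

```python
# Sketch: offline single-image inference with the same checkpoint.
# The image URL is a placeholder; the per-prompt limits are illustrative.
from vllm import LLM, SamplingParams

if __name__ == "__main__":
    model = "embedl/Cosmos-Reason2-8B-W4A16-FlashHead"
    image_url = "https://example.com/street_scene.jpg"

    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": "Describe this image and any notable objects."},
            ],
        },
    ]

    llm = LLM(
        model=model,
        limit_mm_per_prompt={"image": 1, "video": 0},
        max_model_len=8192,
        gpu_memory_utilization=0.75,
        trust_remote_code=True,
    )

    output = llm.chat(messages, sampling_params=SamplingParams(temperature=0.0, max_tokens=256))
    print(output[0].outputs[0].text)
```
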
---

## License

- **Upstream:** [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license)
- **Optimized Components:** Embedl Models Community Licence v1.0 *(no redistribution)*

---

## Contact

- Enterprise and Commercial Inquiries: `models@embedl.com`
- Technical Issues and Early Access: [`https://github.com/embedl/flash-head`](https://github.com/embedl/flash-head)
- More Information and Model Releases: `https://embedl.com`