---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- mlx
base_model: inclusionAI/Ling-2.6-flash
library_name: mlx
---

# mlx-community/Ling-2.6-flash-mlx-8bit

This model [mlx-community/Ling-2.6-flash-mlx-8bit](https://huggingface.co/mlx-community/Ling-2.6-flash-mlx-8bit) was
converted to MLX format from [inclusionAI/Ling-2.6-flash](https://huggingface.co/inclusionAI/Ling-2.6-flash)
using mlx-lm version **0.31.3**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Ling-2.6-flash-mlx-8bit")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
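
For incremental output, recent mlx-lm releases also expose `stream_generate`; a minimal sketch (double-check the API against your installed mlx-lm version):

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/Ling-2.6-flash-mlx-8bit")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk as it is generated instead of waiting for the full response.
for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```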

## Ling-2.6-flash: Faster Responses, Stronger Execution, Higher Token Efficiency
### Introduction
Today, we announce the official open-source release of **Ling-2.6-flash**, an **instruct model** with **104B total parameters** and **7.4B active parameters**.

As agent capabilities mature, skyrocketing **token consumption** has become a primary barrier to deployment. Unlike standard chat, agent workflows involve massive inputs and complex, multi-step execution, driving up both compute demand and user costs. While the industry is pivoting toward "long reasoning" to push performance ceilings, a critical question remains: are these excessive reasoning tokens truly necessary for high-frequency, everyday agent use cases?

Faced with mounting token pressure, Ling-2.6-flash takes a different path. Rather than relying on longer outputs to chase higher scores, it is systematically optimized for **inference efficiency, token efficiency, and agent performance**, aiming to stay highly competitive while being **faster, leaner, and better suited to real production workloads**.

At a high level, Ling-2.6-flash is built around three core strengths:

+ **Hybrid linear architecture for higher inference efficiency.**
By introducing a hybrid linear architecture, we improve computational efficiency at the foundation level. On a 4× H20 setup, Ling-2.6-flash reaches inference speeds of up to **340 tokens/s**. In other words, it completes tasks with significantly better cost-performance efficiency.
+ **Token-efficiency optimization for a better intelligence-efficiency tradeoff.**
During training, we specifically optimized for token efficiency, with the goal of accomplishing tasks using more concise outputs. On the full **Artificial Analysis** evaluation suite, Ling-2.6-flash uses only **15M tokens** while still delivering competitive performance. This translates into a meaningfully stronger intelligence-efficiency profile.
+ **Targeted improvements for agent scenarios.**
For the agent use cases seeing the strongest demand today, we continuously refined Ling-2.6-flash in tool use, multi-step planning, and task execution. As a result, the model achieves performance that is competitive with, and in some cases reaches **SOTA level** against, models with larger active parameter counts on benchmarks including **BFCL-V4, TAU2-bench, SWE-bench Verified, Claw-Eval, and PinchBench**.

### Evaluation
We have conducted a comprehensive evaluation of Ling-2.6-flash across multiple authoritative benchmarks. **Ling-2.6-flash** performs strongly on representative agent benchmarks such as **BFCL-V4**, **TAU2-bench**, **SWE-bench Verified**, and **PinchBench**. In practice, Ling-2.6-flash delivers a strong user experience across frameworks including **Claude Code**, **Kilo Code**, **Qwen Code**, **Hermes Agent**, and **OpenClaw**.

Beyond agent tasks, Ling-2.6-flash also delivers strong performance across **general knowledge**, **mathematical reasoning**, **instruction following**, and **long-context understanding**, and remains well aligned with SOTA models in the same size class.
<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/KhFxSrxyF5IAAAAAgCAAAAgADryCAQFr/original" width="800">
</div>

<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/4bI1SK8pNM8AAAAAgBAAAAgADryCAQFr/original" width="800">
</div>

> + **PinchBench**: Comparative scores are retrieved directly from the official PinchBench leaderboard (as of April 20, 2026), adhering to their evaluation modes (potentially Reasoning Mode).
> + **Claw-Eval**: Comparative scores are sourced from the official Claw-Eval leaderboard (version dated 2026-03-25), adhering to their evaluation modes (potentially Reasoning Mode). Official scores for GPT-OSS-120B and GPT-5.4-mini are currently unavailable and have been omitted.
> + **TAU2-Bench**: Evaluations are conducted using the official v1.0.0 code and datasets. Following the GLM-5 evaluation protocol, we applied minor prompt adjustments in the Retail and Telecom domains to ensure users express requests clearly and to prevent premature session termination. Additionally, GPT-5.2 was used as the User Agent across all evaluated domains.
> + **IFBench**: Scores for GPT-OSS-120B (low) and GPT-5.4-mini (Non-Reasoning) are sourced from the Artificial Analysis (AA) leaderboard. All other model performance data are based on internal evaluation results.

### Architecture
Ling-2.6-flash continues the architectural direction introduced in Ling 2.5. Building on the Ling 2.0 foundation, we incorporate a **hybrid linear attention mechanism**, upgrading the original **GQA attention** design into a **1:7 MLA + Lightning Linear** hybrid architecture through incremental training.
<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/dZ9VS4RPjzAAAAAAgBAAAAgADryCAQFr/fmt.webp" width="650">
</div>

This combination of **hybrid attention** and a **highly sparse MoE architecture** gives Ling-2.6-flash a clear advantage in inference efficiency. Compared with mainstream SOTA models in a similar size class, Ling-2.6-flash not only delivers a faster time to first token, but also achieves substantially higher generation throughput in long-output scenarios. At peak, both **prefill throughput** and **decode throughput** can improve by up to around **4×**.
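
To make the 1:7 ratio concrete, here is a purely illustrative sketch of how such a hybrid stack could be laid out; the layer names below are hypothetical placeholders, not the actual Ling-2.6-flash implementation:

```python
# Illustrative sketch only: one MLA block per seven linear-attention blocks.
def build_hybrid_stack(num_layers: int, mla_every: int = 8) -> list[str]:
    """Return a layer plan with a 1:(mla_every - 1) MLA-to-linear ratio."""
    return [
        "MLA" if (i + 1) % mla_every == 0 else "LightningLinear"
        for i in range(num_layers)
    ]

print(build_hybrid_stack(16))
# ['LightningLinear'] * 7 + ['MLA'], repeated twice
```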

As shown in the figures below, Ling-2.6-flash's throughput advantage becomes more pronounced as both context length and generation length increase. More importantly, this is not just a benchmark-side gain on static metrics: in real deployment settings, the model continues to unlock stronger speed benefits as task complexity grows.

Whether the workload involves **long-context understanding** or **extended text generation**, Ling-2.6-flash preserves model capability while delivering **faster responses, higher throughput, and better real-world deployment efficiency**.
<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/Fa_fQrVD3hcAAAAAX7AAAAgADryCAQFr/original" width="600" alt="Decode Throughput Comparison">
<p><em>Decode Throughput Comparison, 4× H20-3e, TP=4, Batch Size = 32</em></p>
</div>

<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/LRDBTILYEooAAAAAXdAAAAgADryCAQFr/original" width="600" alt="Prefill Throughput Comparison">
<p><em>Prefill Throughput Comparison, 4× H20-3e, TP=4, Batch Size = 32</em></p>
</div>

### Quickstart
#### SGLang (Recommended)
##### Environment Preparation
```bash
pip install uv

uv venv ~/my_ling_env

source ~/my_ling_env/bin/activate

# uv pip install "sglang-kernel>=0.4.1"
uv pip install "sglang[all]>=0.5.10.post1" --prerelease=allow
```

##### Run Inference
SGLang now supports both BF16 and FP8 checkpoints; which one runs is determined by the dtype of the model in `${MODEL_PATH}`. The example below runs Ling-2.6-flash on 4 GPUs, where the master node IP is `${MASTER_IP}` and the server port is `${PORT}`:

**Server**

**1. Standard Inference (Without MTP)**
```bash
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --tp-size 4 \
    --pp-size 1 \
    --dp-size 1 \
    --trust-remote-code \
    --context-length 262144 \
    --tool-call-parser qwen25 \
    --json-model-override-args '{"rope_scaling": {"rope_type": "yarn", "factor": 2.0, "rope_theta": 6000000, "partial_rotary_factor": 0.5, "original_max_position_embeddings": 131072}}' \
    --dist-init-addr $MASTER_IP:2345 \
    --port $PORT \
    --nnodes 1
```

**2. Inference with MTP (Multi-Token Prediction)**
_The current official SGLang implementation of MTP contains a bug. For better inference performance, we recommend installing our patched version. Our fix is currently under review and is expected to be merged into the official SGLang library shortly._

**Install our SGLang**
```bash
git clone -b ling_2_6 git@github.com:antgroup/sglang.git
cd sglang

pip install --upgrade pip
pip install -e "python"
```

**Start server**
```bash
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --tp-size 4 \
    --pp-size 1 \
    --dp-size 1 \
    --context-length 262144 \
    --mamba-scheduler-strategy extra_buffer \
    --speculative-algorithm NEXTN \
    --speculative-num-steps 3 \
    --speculative-eagle-topk 1 \
    --speculative-num-draft-tokens 4 \
    --mem-fraction-static 0.75 \
    --max-running-requests 64 \
    --max-mamba-cache-size 256 \
    --tool-call-parser qwen25 \
    --json-model-override-args '{"rope_scaling": {"rope_type": "yarn", "factor": 2.0, "rope_theta": 6000000, "partial_rotary_factor": 0.5, "original_max_position_embeddings": 131072}}' \
    --trust-remote-code \
    --dist-init-addr $MASTER_IP:2345 \
    --port $PORT \
    --nnodes 1
```

**Client**

```bash
curl -s http://${MASTER_IP}:${PORT}/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```
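
The server exposes an OpenAI-compatible API, so the same request can also be issued with the OpenAI Python client. A minimal sketch, reusing the `MASTER_IP` and `PORT` variables from the commands above:

```python
# Minimal sketch: query the OpenAI-compatible endpoint started above.
import os

from openai import OpenAI

client = OpenAI(
    base_url=f"http://{os.environ['MASTER_IP']}:{os.environ['PORT']}/v1",
    api_key="EMPTY",  # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```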

#### vLLM
##### Environment Preparation
```bash
pip install uv

uv venv ~/my_ling_env

source ~/my_ling_env/bin/activate

git clone https://github.com/vllm-project/vllm.git

cd vllm

VLLM_USE_PRECOMPILED=1 uv pip install --editable . --torch-backend=auto
```

##### Run Inference

**Server**
```bash
vllm serve $MODEL_PATH \
    --port $PORT \
    --served-model-name my_model \
    --trust-remote-code --tensor-parallel-size 4 \
    --gpu-memory-utilization 0.85
```

**Client**

The model name in the request must match the `--served-model-name` set above:

```bash
curl -s http://${MASTER_IP}:${PORT}/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "my_model", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```

### Limitations & Future Plans
Ling-2.6-flash has already made meaningful progress in our pursuit of an extreme intelligence-efficiency tradeoff. The model has improved substantially in key areas such as **tool use, multi-step planning, and long-horizon task execution**. Combined with systematic optimizations in inference efficiency and interaction experience, Ling-2.6-flash is now better equipped to handle **large-scale, high-frequency automated workloads**, delivering stronger real-world value in production settings.

At the same time, we are fully aware that pushing intelligence efficiency to the limit comes with tradeoffs. In some highly complex scenarios, the model can still exhibit **tool hallucinations** due to limited reasoning depth. In addition, there is still room for improvement in areas such as **natural bilingual switching between Chinese and English** and **compliance with highly complex instructions**.

Looking ahead, we will continue exploring the frontier of intelligence efficiency. While preserving the model's high-efficiency inference characteristics, we aim to further improve the balance between **output quality** and **token efficiency**, and to continuously strengthen the model's **stability, usability, and interaction experience across a wider range of real-world scenarios**.