---
license: mit
language:
- en
---
## Ling-2.6-flash: Faster Responses, Stronger Execution, Higher Token Efficiency
### Introduction
Today, we announce the official open-source release of **Ling-2.6-flash**, an **instruct model** with **104B total parameters** and **7.4B active parameters**.

As agent capabilities mature, skyrocketing **token consumption** has become a primary barrier to deployment. Unlike standard chat, agent workflows involve massive inputs and complex, multi-step execution, driving up both compute demand and user costs. While the industry is pivoting toward "long reasoning" to push performance ceilings, a critical question remains: are these excessive reasoning tokens truly necessary for high-frequency, everyday agent use cases?

Faced with mounting token pressure, Ling-2.6-flash takes a different path. Rather than relying on longer outputs to chase higher scores, it is systematically optimized for **inference efficiency, token efficiency, and agent performance**, aiming to stay highly competitive while being **faster, leaner, and better suited for real production workloads**.

At a high level, Ling-2.6-flash is built around three core strengths:

+ **Hybrid linear architecture for higher inference efficiency.**
By introducing a hybrid linear architecture, we improve computational efficiency at the foundation level. On a 4× H20 setup, Ling-2.6-flash reaches inference speeds of up to **340 tokens/s**. In other words, it completes tasks with significantly better cost-performance efficiency.
+ **Token-efficiency optimization for a better intelligence-efficiency tradeoff.**
During training, we specifically optimized for token efficiency, with the goal of accomplishing tasks using more concise outputs. On the full **Artificial Analysis** evaluation suite, Ling-2.6-flash uses only **15M tokens** while still delivering competitive performance. This translates into a meaningfully stronger intelligence-efficiency profile.
+ **Targeted improvements for agent scenarios.**
For the agent use cases seeing the strongest demand today, we continuously refined Ling-2.6-flash in tool use, multi-step planning, and task execution. As a result, the model achieves performance that is competitive with, and in some cases reaches **SOTA level** against, models with larger active parameter counts on benchmarks including **BFCL-V4, TAU2-bench, SWE-bench Verified, Claw-Eval, and PinchBench**.

### Evaluation
We have conducted a comprehensive evaluation of Ling-2.6-flash across multiple authoritative benchmarks. **Ling-2.6-flash** performs strongly on representative agent benchmarks such as **BFCL-V4**, **TAU2-bench**, **SWE-bench Verified**, and **PinchBench**. In practice, Ling-2.6-flash delivers a strong user experience across frameworks including **Claude Code**, **Kilo Code**, **Qwen Code**, **Hermes Agent**, and **OpenClaw**.

Beyond agent tasks, Ling-2.6-flash also delivers strong performance across **general knowledge**, **mathematical reasoning**, **instruction following**, and **long-context understanding**, remaining well aligned with SOTA models in the same size class.

<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/KhFxSrxyF5IAAAAAgCAAAAgADryCAQFr/original" width="800">
</div>

<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/4bI1SK8pNM8AAAAAgBAAAAgADryCAQFr/original" width="800">
</div>

> + **PinchBench**: Comparative scores are retrieved directly from the official PinchBench leaderboard (as of April 20, 2026), adhering to their evaluation modes (potentially Reasoning Mode).
> + **Claw-Eval**: Comparative scores are sourced from the official Claw-Eval leaderboard (version dated 2026-03-25), adhering to their evaluation modes (potentially Reasoning Mode). Official scores for GPT-OSS-120B and GPT-5.4-mini are currently unavailable and have been omitted.
> + **TAU2-Bench**: Evaluations are conducted using official v1.0.0 code and datasets. Following the GLM-5 evaluation protocol, we applied minor prompt adjustments in the Retail and Telecom domains to ensure users express requests clearly and to prevent premature session termination. Additionally, GPT-5.2 was utilized as the User Agent across all evaluated domains.
> + **IFBench**: Scores for GPT-OSS-120B (low) and GPT-5.4-mini (Non-Reasoning) are sourced from the AA (Artificial Analysis) Leaderboard. All other model performance data are based on internal evaluation results.

#### Quantization Robustness: FP8 and INT4
We evaluate the FP8 and INT4 quantized models on several datasets. FP8 quantization is applied blockwise, and INT4 quantization is applied groupwise.

<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/8QEoRqtZhAcAAAAAQaAAAAgADryCAQFr/original" width="800">
</div>

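For intuition, here is a minimal NumPy sketch of the two schemes named above: one scale per 128×128 weight tile for FP8 (matching the `weight_block_size` of `[128, 128]` in the Quickstart overrides below) and one scale per group of 128 weights for INT4. The helper names are ours, and this illustrates the general technique only, not the code used to produce the released checkpoints.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest magnitude representable in float8 e4m3

def fp8_blockwise(w: np.ndarray, block: int = 128):
    """Blockwise FP8: one scale per (block x block) tile of the weight matrix.

    Assumes both dimensions of `w` are divisible by `block`.
    """
    q = np.empty_like(w, dtype=np.float32)
    scales = {}
    for i in range(0, w.shape[0], block):
        for j in range(0, w.shape[1], block):
            tile = w[i:i + block, j:j + block]
            s = max(np.abs(tile).max() / FP8_E4M3_MAX, 1e-12)  # per-tile scale
            scales[(i // block, j // block)] = s
            # On real hardware this would be cast to float8_e4m3;
            # here we only simulate the value range.
            q[i:i + block, j:j + block] = np.clip(tile / s, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q, scales

def int4_groupwise(w: np.ndarray, group: int = 128):
    """Groupwise INT4: one scale per `group` consecutive weights in each row."""
    q = np.empty_like(w, dtype=np.int8)  # int8 container for 4-bit values
    scales = np.empty((w.shape[0], w.shape[1] // group), dtype=np.float32)
    for g in range(0, w.shape[1], group):
        grp = w[:, g:g + group]
        s = np.maximum(np.abs(grp).max(axis=1, keepdims=True), 1e-12) / 7.0
        scales[:, g // group] = s[:, 0]
        q[:, g:g + group] = np.clip(np.round(grp / s), -8, 7).astype(np.int8)
    return q, scales

w = np.random.randn(256, 256).astype(np.float32)
q_fp8, s_fp8 = fp8_blockwise(w)
q_int4, s_int4 = int4_groupwise(w)
```
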
### Architecture
Ling-2.6-flash continues the architectural direction introduced in Ling 2.5. Building on the Ling 2.0 foundation, we incorporate a **hybrid linear attention mechanism**, upgrading the original **GQA attention** design into a **1:7 MLA + Lightning Linear** hybrid architecture through incremental training.

<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/dZ9VS4RPjzAAAAAAgBAAAAgADryCAQFr/fmt.webp" width="650">
</div>

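As a rough illustration of what a 1:7 layout means in practice, the sketch below assigns one MLA (full-attention) layer to every block of eight transformer layers and linear attention to the other seven. The ratio comes from the description above; the function and constant names are our own, not the released implementation. The point of such a layout is that KV-cache growth is driven mainly by the few full-attention layers, while the linear-attention layers keep per-token cost roughly constant as context grows.

```python
# Illustrative only: how a 1:7 hybrid stack could assign attention types.
FULL_ATTN_PERIOD = 8  # 1 MLA layer : 7 linear-attention layers

def attention_layout(num_layers: int) -> list[str]:
    """Return the attention type ('mla' or 'linear') for each layer."""
    return [
        "mla" if (i + 1) % FULL_ATTN_PERIOD == 0 else "linear"
        for i in range(num_layers)
    ]

if __name__ == "__main__":
    layout = attention_layout(32)
    print(layout.count("mla"), "MLA layers,", layout.count("linear"), "linear layers")
    # -> 4 MLA layers, 28 linear layers
```
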
This combination of **hybrid attention** and a **highly sparse MoE architecture** gives Ling-2.6-flash a clear advantage in inference efficiency. Compared with mainstream SOTA models in a similar size class, Ling-2.6-flash not only delivers faster time-to-first-token but also achieves substantially higher generation throughput in long-output scenarios. At peak, both **prefill throughput** and **decode throughput** can improve by up to **around 4×**.

As shown in the figures below, Ling-2.6-flash’s throughput advantage becomes more pronounced as both context length and generation length increase. More importantly, this is not just a benchmark-side gain on static metrics: in real deployment settings, the model continues to unlock stronger speed benefits as task complexity grows.

Whether the workload involves **long-context understanding** or **extended text generation**, Ling-2.6-flash preserves model capability while delivering **faster responses, higher throughput, and better real-world deployment efficiency**.

<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/Fa_fQrVD3hcAAAAAX7AAAAgADryCAQFr/original" width="600" alt="Decode Throughput Comparison">
<p><em>Decode Throughput Comparison, 4× H20-3e, TP=4, Batch Size = 32</em></p>
</div>

<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/LRDBTILYEooAAAAAXdAAAAgADryCAQFr/original" width="600" alt="Prefill Throughput Comparison">
<p><em>Prefill Throughput Comparison, 4× H20-3e, TP=4, Batch Size = 32</em></p>
</div>

### Quickstart
#### SGLang (Recommended)
##### Environment Preparation
```bash
pip install uv

uv venv ~/my_ling_env

source ~/my_ling_env/bin/activate

# uv pip install "sglang-kernel>=0.4.1"
uv pip install "sglang[all]>=0.5.10.post1" --prerelease=allow
```

##### Run Inference
SGLang now supports both the BF16 and FP8 models; which one is loaded depends on the dtype of the checkpoint in `${MODEL_PATH}`. Here is an example of running Ling-2.6-flash with 4 GPUs, where the master node IP is `${MASTER_IP}` and the server port is `${PORT}`:

**Server**

**1. Standard Inference (Without MTP)**
```bash
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --tp-size 4 \
    --pp-size 1 \
    --dp-size 1 \
    --trust-remote-code \
    --tool-call-parser qwen25 \
    --json-model-override-args '{"quantization_config": {"activation_scheme": "dynamic", "fmt": "e4m3", "quant_method": "fp8", "modules_to_not_convert": ["q_a_proj", "q_b_proj", "kv_a_proj_with_mqa", "kv_b_proj", "lm_head"], "weight_block_size": [128, 128]}, "linear_backend": "seg_la", "torch_dtype": "bfloat16", "architectures": ["BailingMoeV2_5ForCausalLM"], "model_type": "bailing_hybrid", "rope_scaling": {"rope_type": "yarn", "factor": 2.0, "rope_theta": 6000000, "partial_rotary_factor": 0.5, "original_max_position_embeddings": 131072}}' \
    --dist-init-addr $MASTER_IP:2345 \
    --port $PORT \
    --nnodes 1
```
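
The one-line `--json-model-override-args` payload above is hard to read, so here it is expanded purely for reference: the same keys and values, written out as a Python dict.

```python
# Same contents as the --json-model-override-args one-liner above.
override_args = {
    "quantization_config": {
        "activation_scheme": "dynamic",
        "fmt": "e4m3",
        "quant_method": "fp8",
        # Sensitive projections and the output head stay unquantized.
        "modules_to_not_convert": [
            "q_a_proj", "q_b_proj", "kv_a_proj_with_mqa", "kv_b_proj", "lm_head",
        ],
        "weight_block_size": [128, 128],  # blockwise FP8 tiles
    },
    "linear_backend": "seg_la",
    "torch_dtype": "bfloat16",
    "architectures": ["BailingMoeV2_5ForCausalLM"],
    "model_type": "bailing_hybrid",
    # YaRN with factor 2.0 stretches the 131072-token base window,
    # roughly doubling the usable context length.
    "rope_scaling": {
        "rope_type": "yarn",
        "factor": 2.0,
        "rope_theta": 6000000,
        "partial_rotary_factor": 0.5,
        "original_max_position_embeddings": 131072,
    },
}
```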

**2. Inference with MTP (Multi-Token Prediction)**
_The current official SGLang implementation of MTP contains a bug. For better inference performance, we recommend installing our patched version. Our fix is currently under review and is expected to be merged into the official SGLang library shortly._

**Install our SGLang**
```bash
git clone -b ling_2_6 git@github.com:antgroup/sglang.git
cd sglang

pip install --upgrade pip
pip install -e "python"
```
**Start server**
```bash
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --tp-size 4 \
    --pp-size 1 \
    --dp-size 1 \
    --speculative-algorithm NEXTN \
    --speculative-num-steps 1 \
    --speculative-eagle-topk 1 \
    --speculative-num-draft-tokens 4 \
    --mem-fraction-static 0.75 \
    --max-running-requests 64 \
    --max-mamba-cache-size 256 \
    --tool-call-parser qwen25 \
    --json-model-override-args '{"quantization_config": {"activation_scheme": "dynamic", "fmt": "e4m3", "quant_method": "fp8", "modules_to_not_convert": ["q_a_proj", "q_b_proj", "kv_a_proj_with_mqa", "kv_b_proj", "lm_head"], "weight_block_size": [128, 128]}, "linear_backend": "seg_la", "torch_dtype": "bfloat16", "architectures": ["BailingMoeV2_5ForCausalLM"], "model_type": "bailing_hybrid", "rope_scaling": {"rope_type": "yarn", "factor": 2.0, "rope_theta": 6000000, "partial_rotary_factor": 0.5, "original_max_position_embeddings": 131072}}' \
    --trust-remote-code \
    --dist-init-addr $MASTER_IP:2345 \
    --port $PORT \
    --nnodes 1
```
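
The `--speculative-*` flags configure NEXTN-style multi-token prediction: a lightweight draft head proposes a few tokens (`--speculative-num-draft-tokens 4` above) and the main model verifies them, so several tokens can be emitted per target forward pass. The toy sketch below shows the draft-and-verify idea in isolation; it is a conceptual illustration with hypothetical function names, not SGLang's implementation (which verifies all draft positions in a single batched forward).

```python
from typing import Callable, List

def speculative_step(
    target_next: Callable[[List[int]], int],  # main model's greedy next token
    draft_next: Callable[[List[int]], int],   # cheap draft head (the MTP layer)
    prefix: List[int],
    num_draft_tokens: int = 4,                # cf. --speculative-num-draft-tokens
) -> List[int]:
    """One draft-and-verify round: keep drafted tokens while the target agrees."""
    # 1) Draft: propose tokens autoregressively with the cheap head.
    ctx = list(prefix)
    drafted: List[int] = []
    for _ in range(num_draft_tokens):
        t = draft_next(ctx)
        drafted.append(t)
        ctx.append(t)

    # 2) Verify: accept the longest prefix of the draft the target agrees with.
    ctx = list(prefix)
    accepted: List[int] = []
    for t in drafted:
        if target_next(ctx) != t:
            break
        accepted.append(t)
        ctx.append(t)

    # 3) Always emit one token from the target itself, so decoding makes
    #    progress even when the first drafted token is rejected.
    accepted.append(target_next(ctx))
    return accepted
```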

**Client**

```bash
curl -s http://${MASTER_IP}:${PORT}/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```
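
Since the server exposes an OpenAI-compatible API, the same request can also be issued from Python with the official `openai` client (`pip install openai`); `MASTER_IP` and `PORT` are the values used when launching the server:

```python
import os
from openai import OpenAI

# Point the OpenAI client at the local SGLang server (OpenAI-compatible API).
client = OpenAI(
    base_url=f"http://{os.environ['MASTER_IP']}:{os.environ['PORT']}/v1",
    api_key="EMPTY",  # a local server does not check the key
)

response = client.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```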

#### vLLM
##### Environment Preparation
```bash
pip install uv

uv venv ~/my_ling_env

source ~/my_ling_env/bin/activate

git clone https://github.com/vllm-project/vllm.git

cd vllm

VLLM_USE_PRECOMPILED=1 uv pip install --editable . --torch-backend=auto
```

##### Run Inference

**Server**
```bash
vllm serve $MODEL_PATH \
    --port $PORT \
    --served-model-name my_model \
    --trust-remote-code --tensor-parallel-size 4 \
    --gpu-memory-utilization 0.85
```

**Client**

```bash
curl -s http://${MASTER_IP}:${PORT}/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```

### Limitations & Future Plans
Ling-2.6-flash has already made meaningful progress in our pursuit of an extreme intelligence-efficiency tradeoff. The model has improved substantially in key areas such as **tool use, multi-step planning, and long-horizon task execution**. Combined with systematic optimizations in inference efficiency and interaction experience, Ling-2.6-flash is now better equipped to handle **large-scale, high-frequency automated workloads**, delivering stronger real-world value in production settings.

At the same time, we are fully aware that pushing intelligence efficiency to the limit comes with tradeoffs. In some highly complex scenarios, the model can still exhibit **tool hallucinations** due to limited reasoning depth. In addition, there is still room for improvement in areas such as **natural bilingual switching between Chinese and English** and **compliance with highly complex instructions**.

Looking ahead, we will continue exploring the frontier of intelligence efficiency. While preserving the model’s high-efficiency inference characteristics, we aim to further improve the balance between **output quality** and **token efficiency**, and to continuously strengthen the model’s **stability, usability, and interaction experience across a wider range of real-world scenarios**.