---
license: mit
language:
- en
pipeline_tag: text-generation
---
## Ling-2.6-flash: Faster Responses, Stronger Execution, Higher Token Efficiency
### Introduction
Today, we announce the official open-source release of **Ling-2.6-flash**, an **instruct model** with **104B total parameters** and **7.4B active parameters**.

As agent capabilities mature, skyrocketing **token consumption** has become a primary barrier to deployment. Unlike standard chat, agent workflows involve massive inputs and complex, multi-step execution, driving up both compute demand and user costs. While the industry is pivoting toward "long-reasoning" to push performance ceilings, a critical question remains: Are these excessive reasoning tokens truly necessary for high-frequency, everyday agent use cases?

Faced with mounting token pressure, Ling-2.6-flash takes a different path. Rather than relying on longer outputs to chase higher scores, it is systematically optimized for **inference efficiency, token efficiency, and agent performance**, aiming to stay highly competitive while being **faster, leaner, and better suited for real production workloads**.

At a high level, Ling-2.6-flash is built around three core strengths:

+ **Hybrid linear architecture for higher inference efficiency.**
By introducing a hybrid linear architecture, we improve computational efficiency at the foundation level. On a 4× H20 setup, Ling-2.6-flash reaches inference speeds of up to **340 tokens/s**. In other words, it completes tasks with significantly better cost-performance efficiency.
+ **Token-efficiency optimization for a better intelligence-efficiency tradeoff.**
During training, we specifically optimized for token efficiency, with the goal of accomplishing tasks using more concise outputs. On the full **Artificial Analysis** evaluation suite, Ling-2.6-flash uses only **15M tokens** while still delivering competitive performance. This translates into a meaningfully stronger intelligence-efficiency profile.
+ **Targeted improvements for agent scenarios.**
For the agent use cases seeing the strongest demand today, we continuously refined Ling-2.6-flash in tool use, multi-step planning, and task execution. As a result, the model achieves performance that is competitive with, and in some cases reaches **SOTA level** against, models with larger active parameter counts on benchmarks including **BFCL-V4, TAU2-bench, SWE-bench Verified, Claw-Eval, and PinchBench**.

### Evaluation
We have conducted a comprehensive evaluation of Ling-2.6-flash across multiple authoritative benchmarks. **Ling-2.6-flash** performs strongly on representative agent benchmarks such as **BFCL-V4**, **TAU2-bench**, **SWE-bench Verified**, and **PinchBench**. In practice, Ling-2.6-flash delivers a strong user experience across frameworks such as **Claude Code**, **Kilo Code**, **Qwen Code**, **Hermes Agent**, and **OpenClaw**.

Beyond agent tasks, Ling-2.6-flash also delivers strong performance across **general knowledge**, **mathematical reasoning**, **instruction following**, and **long-context understanding**, and remains well aligned with SOTA models in the same size class.

<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/FOkPQZDdKtkAAAAAgCAAAAgADryCAQFr/original" width="800">
</div>

<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/_rp_TKkkG4wAAAAAgBAAAAgADryCAQFr/original" width="800">
</div>

> + **PinchBench**: Comparative scores are retrieved directly from the official PinchBench leaderboard (as of April 20, 2026), adhering to their evaluation modes (potentially Reasoning Mode).
> + **Claw-Eval**: Comparative scores are sourced from the official Claw-Eval leaderboard (version dated 2026-03-25), adhering to their evaluation modes (potentially Reasoning Mode). Official scores for GPT-OSS-120B and GPT-5.4-mini are currently unavailable and have been omitted.
> + **TAU2-Bench**: Evaluations are conducted using the official v1.0.0 code and datasets. Following the GLM-5 evaluation protocol, we applied minor prompt adjustments in the Retail and Telecom domains to ensure users express requests clearly and to prevent premature session termination. Additionally, GPT-5.2 was used as the User Agent across all evaluated domains.
> + **IFBench**: Scores for GPT-OSS-120B (low) and GPT-5.4-mini (Non-Reasoning) are sourced from the AA (Artificial Analysis) leaderboard. All other model performance data are based on internal evaluation results.

### Architecture
Ling-2.6-flash continues the architectural direction introduced in Ling 2.5. Building on the Ling 2.0 foundation, we incorporate a **hybrid linear attention mechanism**, upgrading the original **GQA attention** design into a **1:7 MLA + Lightning Linear** hybrid architecture through incremental training.
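
As a minimal, illustrative sketch (not the released Ling-2.6-flash code), the snippet below shows what a 1:7 hybrid layout means in practice, under the assumption that the ratio denotes one full-attention (MLA) layer for every seven linear-attention (Lightning-style) layers in the decoder stack; the function and layer names are purely hypothetical:

```python
# Illustrative sketch only -- layer naming and grouping are assumptions,
# not the actual Ling-2.6-flash implementation.

def hybrid_layer_plan(num_layers: int, group_size: int = 8) -> list[str]:
    """Assign an attention type to each decoder layer.

    Assumption: "1:7 MLA + Lightning Linear" is read as one MLA (full,
    softmax) attention layer per group of eight layers, with the other
    seven layers in the group using linear (Lightning-style) attention.
    """
    plan = []
    for i in range(num_layers):
        if (i + 1) % group_size == 0:
            plan.append("MLA (full attention)")          # 1 per group
        else:
            plan.append("Lightning (linear attention)")  # 7 per group
    return plan


if __name__ == "__main__":
    for idx, kind in enumerate(hybrid_layer_plan(16)):
        print(f"layer {idx:02d}: {kind}")
```

Because linear-attention layers carry a constant-size recurrent state rather than a KV cache that grows with sequence length, this kind of hybrid layout is a key reason the long-context prefill and decode throughput gains described below are possible.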

<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/dZ9VS4RPjzAAAAAAgBAAAAgADryCAQFr/fmt.webp" width="650">
</div>

This combination of **hybrid attention** and a **highly sparse MoE architecture** gives Ling-2.6-flash a clear advantage in inference efficiency. Compared with mainstream SOTA models in a similar size class, Ling-2.6-flash not only delivers faster time-to-first-token, but also achieves substantially higher generation throughput in long-output scenarios. At peak, both **prefill throughput** and **decode throughput** can improve by up to **around 4×**.

As shown in the figures below, Ling-2.6-flash’s throughput advantage becomes more pronounced as both context length and generation length increase. More importantly, this is not just a benchmark-side gain on static metrics. In real deployment settings, the model continues to unlock stronger speed benefits as task complexity grows.

Whether the workload involves **long-context understanding** or **extended text generation**, Ling-2.6-flash preserves model capability while delivering **faster responses, higher throughput, and better real-world deployment efficiency**.

<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/Fa_fQrVD3hcAAAAAX7AAAAgADryCAQFr/original" width="600" alt="Decode Throughput Comparison">
<p><em>Decode Throughput Comparison, 4× H20-3e, TP=4, Batch Size = 32</em></p>
</div>

<div align="center">
<img src="https://mdn.alipayobjects.com/huamei_3p6pd0/afts/img/LRDBTILYEooAAAAAXdAAAAgADryCAQFr/original" width="600" alt="Prefill Throughput Comparison">
<p><em>Prefill Throughput Comparison, 4× H20-3e, TP=4, Batch Size = 32</em></p>
</div>

### Quickstart
#### SGLang
##### Environment Preparation
```bash
# Install the uv package manager
pip install uv

# Create and activate a virtual environment
uv venv ~/my_sglang_env
source ~/my_sglang_env/bin/activate

# Install SGLang
uv pip install sglang
```

##### Run Inference
SGLang supports both BF16 and FP8 checkpoints; the precision used is determined by the dtype of the model at `${MODEL_PATH}`. Below is an example of running Ling-2.6-flash on 4 GPUs, where the master node IP is `${MASTER_IP}` and the server port is `${PORT}`:

**Server**

**1. Standard Inference (Without MTP)**

For standard, auto-regressive generation, launch the server with the default configuration:

```bash
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --tp-size 4 \
    --pp-size 1 \
    --dp-size 1 \
    --trust-remote-code \
    --dist-init-addr $MASTER_IP:2345 \
    --port $PORT \
    --nnodes 1
```

**2. Inference with MTP (Multi-Token Prediction)**

To further accelerate text generation, the model supports Multi-Token Prediction (MTP). You can enable it by passing the speculative-decoding flags when launching the server:

```bash
# Launch the server with MTP speculative decoding enabled
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --tp-size 4 \
    --pp-size 1 \
    --dp-size 1 \
    --speculative-algorithm NEXTN \
    --speculative-num-steps 1 \
    --speculative-eagle-topk 1 \
    --speculative-num-draft-tokens 4 \
    --mem-fraction-static 0.75 \
    --max-running-requests 64 \
    --max-mamba-cache-size 256 \
    --trust-remote-code \
    --dist-init-addr $MASTER_IP:2345 \
    --port $PORT \
    --nnodes 1
```

**Client**

```bash
curl -s http://${MASTER_IP}:${PORT}/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```
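
Since the server exposes an OpenAI-compatible endpoint (as the curl example above shows), the same request can also be sent from Python. The sketch below assumes the `openai` package (v1 or later) is installed and that `MASTER_IP` and `PORT` are set as environment variables:

```python
# Minimal Python client sketch for the OpenAI-compatible endpoint exposed by
# the SGLang server launched above. Assumes `pip install openai` (>= 1.0).
import os

from openai import OpenAI

# MASTER_IP and PORT must match the values used when launching the server.
client = OpenAI(
    base_url=f"http://{os.environ['MASTER_IP']}:{os.environ['PORT']}/v1",
    api_key="EMPTY",  # the local server does not validate the key
)

response = client.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```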

### Limitations & Future Plans
Ling-2.6-flash has already made meaningful progress in our pursuit of an extreme intelligence-efficiency tradeoff. The model has improved substantially in key areas such as **tool use, multi-step planning, and long-horizon task execution**. Combined with systematic optimizations in inference efficiency and interaction experience, Ling-2.6-flash is now better equipped to handle **large-scale, high-frequency automated workloads**, delivering stronger real-world value in production settings.

At the same time, we are fully aware that pushing intelligence efficiency to the limit comes with tradeoffs. In some highly complex scenarios, the model can still exhibit **tool hallucinations** due to limited reasoning depth. In addition, there is still room for improvement in areas such as **natural bilingual switching between Chinese and English** and **compliance with highly complex instructions**.

Looking ahead, we will continue exploring the frontier of intelligence efficiency. While preserving the model’s high-efficiency inference characteristics, we aim to further improve the balance between **output quality** and **token efficiency**, and to continuously strengthen the model’s **stability, usability, and interaction experience across a wider range of real-world scenarios**.