# QwenSeek-2B

Introducing QwenSeek-2B: a distillation of DeepSeek V4's thinking into Qwen3.5-2B!

We distilled the thinking behavior (mainly the thinking blocks) of the DeepSeek-V4 model into Qwen3.5-2B. This is now your mini DeepSeek!
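To illustrate what "distilling thinking" means at the objective level, here is a minimal, dependency-free sketch of standard logit distillation (forward KL between temperature-softened teacher and student distributions, in the style of Hinton et al.). This is an illustrative sketch of the general technique, not the actual training code used for QwenSeek-2B.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """Forward KL(teacher || student) on softened distributions.

    The student is trained to match the teacher's full output
    distribution over the vocabulary, not just the argmax token.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Identical logits give zero loss; diverging logits give a positive loss.
assert distill_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) < 1e-9
assert distill_loss([1.0, 2.0, 3.0], [3.0, 2.0, 1.0]) > 0
```

In practice this KL term is computed per token over the teacher's generated reasoning traces, which is how the "thinking" style transfers to the smaller student.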
## Training

| | |
| --- | --- |
| Teacher model | deepseek-ai/DeepSeek-V4-Flash |
| Student model | Qwen/Qwen3.5-2B |
| Dataset | Jackrong/DeepSeek-V4-Distill-8000x |
| Number of examples | ~8K |
| Epochs completed | 1 |
| Training time | ~9–10 hours |
| Training steps | 250 |
| Hardware | 1× T4 (Kaggle) |
## Benchmarks

We have not been able to run benchmarks ourselves. If you evaluate this model, please share the results in the Community tab — we will definitely add them here!
## Usage

### GGUF

This is the GGUF version of the model. You can use it, for example, in LM Studio (search for QwenSeek-2B-GGUF).
### Quickstart

QwenSeek models support both non-thinking and thinking modes. QwenSeek-2B operates in non-thinking mode by default; to enable thinking, refer to the examples below.

For streamlined integration, we recommend using QwenSeek via APIs. Below is a guide to using QwenSeek via an OpenAI-compatible API.
### Serving QwenSeek
QwenSeek can be served via APIs with popular inference frameworks. In the following, we show example commands to launch OpenAI-Compatible API servers for QwenSeek models.
Inference efficiency and throughput vary significantly across frameworks. We recommend using the latest framework versions to ensure optimal performance and compatibility. For production workloads or high-throughput scenarios, dedicated serving engines such as SGLang, KTransformers or vLLM are strongly recommended.
The model has a default context length of 262,144 tokens. If you encounter out-of-memory (OOM) errors, consider reducing the context window.
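To see why a 262,144-token window can exhaust memory on a small GPU, here is a rough per-sequence KV-cache estimate. The formula is the standard one (2 tensors, K and V, per layer); the layer/head numbers below are hypothetical placeholders for a 2B-scale model, not the real QwenSeek-2B configuration.

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, context_len, dtype_bytes=2):
    """Per-sequence KV-cache size: K and V tensors for every layer."""
    return 2 * num_layers * num_kv_heads * head_dim * context_len * dtype_bytes

# Hypothetical config (NOT the real QwenSeek-2B numbers):
# 28 layers, 8 KV heads, head dim 128, FP16 cache.
full = kv_cache_bytes(28, 8, 128, 262_144)
print(f"{full / 2**30:.1f} GiB")  # -> 28.0 GiB for the full window
```

The estimate scales linearly with context length, so halving `--context-length` (or `--max-model-len`) halves the KV-cache footprint — usually the quickest fix for OOM errors.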
#### SGLang
SGLang is a fast serving framework for large language models and vision language models. SGLang from the main branch of the open-source repository is required for QwenSeek, which can be installed using the following command in a fresh environment:
```shell
uv pip install 'git+https://github.com/sgl-project/sglang.git#subdirectory=python&egg=sglang[all]'
```
See its documentation for more details.
The following will create API endpoints at http://localhost:8000/v1:
Standard Version: The following command creates an API endpoint with a maximum context length of 262,144 tokens on a single GPU:

```shell
python -m sglang.launch_server --model-path faunix/QwenSeek-2B --port 8000 --tp-size 1 --mem-fraction-static 0.8 --context-length 262144
```

Tool Use: To support tool use, you can use the following command:

```shell
python -m sglang.launch_server --model-path faunix/QwenSeek-2B --port 8000 --tp-size 1 --mem-fraction-static 0.8 --context-length 262144 --tool-call-parser qwen3_coder
```

Multi-Token Prediction (MTP): The following command is recommended for MTP:

```shell
python -m sglang.launch_server --model-path faunix/QwenSeek-2B --port 8000 --tp-size 1 --mem-fraction-static 0.8 --context-length 262144 --speculative-algo NEXTN --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4
```
#### vLLM
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. vLLM from the main branch of the open-source repository is required for QwenSeek, which can be installed using the following command in a fresh environment:
```shell
uv pip install vllm --torch-backend=auto --extra-index-url https://wheels.vllm.ai/nightly
```
See its documentation for more details.
For a detailed QwenSeek usage guide, see the vLLM QwenSeek recipe.
The following will create API endpoints at http://localhost:8000/v1:
Standard Version: The following command creates an API endpoint with a maximum context length of 262,144 tokens on a single GPU:

```shell
vllm serve faunix/QwenSeek-2B --port 8000 --tensor-parallel-size 1 --max-model-len 262144
```

Tool Call: To support tool use, you can use the following command:

```shell
vllm serve faunix/QwenSeek-2B --port 8000 --tensor-parallel-size 1 --max-model-len 262144 --enable-auto-tool-choice --tool-call-parser qwen3_coder
```

Multi-Token Prediction (MTP): The following command is recommended for MTP:

```shell
vllm serve faunix/QwenSeek-2B --port 8000 --tensor-parallel-size 1 --max-model-len 262144 --speculative-config '{"method":"qwen3_next_mtp","num_speculative_tokens":2}'
```

Text-Only: The following command skips the vision encoder and multimodal profiling to free up memory for additional KV cache:

```shell
vllm serve faunix/QwenSeek-2B --port 8000 --tensor-parallel-size 1 --max-model-len 262144 --language-model-only
```
#### KTransformers
KTransformers is a flexible framework for experiencing cutting-edge LLM inference optimizations with CPU-GPU heterogeneous computing. For running QwenSeek with KTransformers, see the KTransformers Deployment Guide.
#### Hugging Face Transformers
Hugging Face Transformers contains a lightweight server which can be used for quick testing and moderate load deployment.
The latest transformers is required for QwenSeek:
```shell
pip install "transformers[serving] @ git+https://github.com/huggingface/transformers.git@main"
```
See its documentation for more details. Please also make sure torchvision and pillow are installed.
Then, run transformers serve to launch a server with API endpoints at http://localhost:8000/v1; it will place the model on accelerators if available:
```shell
transformers serve --force-model faunix/QwenSeek-2B --port 8000 --continuous-batching
```
### Using QwenSeek via the Chat Completions API
The chat completions API is accessible via standard HTTP requests or OpenAI SDKs. Here, we show examples using the OpenAI Python SDK.
Before starting, make sure the SDK is installed and the API key and base URL are configured, e.g.:

```shell
pip install -U openai

# Set the following accordingly
export OPENAI_BASE_URL="http://localhost:8000/v1"
export OPENAI_API_KEY="EMPTY"
```
We recommend the following sampling parameters for generation:

- Non-thinking mode for text tasks: `temperature=1.0`, `top_p=1.0`, `top_k=20`, `min_p=0.0`, `presence_penalty=2.0`, `repetition_penalty=1.0`
- Thinking mode for text tasks: `temperature=1.0`, `top_p=0.95`, `top_k=20`, `min_p=0.0`, `presence_penalty=1.5`, `repetition_penalty=1.0`
- Thinking mode for precise coding (e.g. WebDev) tasks: `temperature=0.6`, `top_p=0.95`, `top_k=20`, `min_p=0.0`, `presence_penalty=0.0`, `repetition_penalty=1.0`

Please note that support for sampling parameters varies across inference frameworks.
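The recommended parameter sets above can be collected into a small helper so you pick a preset by mode instead of retyping the values. The mode names here are our own labels for this sketch, not an official API:

```python
# Recommended sampling presets from this model card, keyed by mode.
SAMPLING_PRESETS = {
    "non_thinking": dict(temperature=1.0, top_p=1.0, top_k=20,
                         min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0),
    "thinking": dict(temperature=1.0, top_p=0.95, top_k=20,
                     min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0),
    "thinking_coding": dict(temperature=0.6, top_p=0.95, top_k=20,
                            min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0),
}

def sampling_kwargs(mode):
    """Return a fresh copy of the preset so callers can tweak it safely."""
    return dict(SAMPLING_PRESETS[mode])

assert sampling_kwargs("thinking")["top_p"] == 0.95
```

Note that `top_k`, `min_p`, and `repetition_penalty` are not standard OpenAI SDK arguments, so when calling a server they usually go through `extra_body`, as in the examples below.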
#### Text-Only Input

```python
from openai import OpenAI

# Configured by environment variables
client = OpenAI()

messages = [
    {"role": "user", "content": "Give me a short introduction to large language models."},
]

chat_response = client.chat.completions.create(
    model="faunix/QwenSeek-2B",
    messages=messages,
    max_tokens=32768,
    temperature=1.0,
    top_p=1.0,
    presence_penalty=2.0,
    extra_body={
        "top_k": 20,
    },
)
print("Chat response:", chat_response)
```
#### Thinking Mode

QwenSeek does not officially support the soft switches of Qwen3, i.e., `/think` and `/no_think`.

You can make the model think before responding by configuring the API parameters. For example:
```python
from openai import OpenAI

# Configured by environment variables
client = OpenAI()

messages = [
    {"role": "user", "content": "Type \"I love QwenSeek\" backwards"},
]

chat_response = client.chat.completions.create(
    model="faunix/QwenSeek-2B",
    messages=messages,
    max_tokens=81920,
    temperature=1.0,
    top_p=0.95,
    presence_penalty=1.5,
    extra_body={
        "top_k": 20,
        "enable_thinking": True,
    },
)
print("Chat response:", chat_response)
```
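In thinking mode, the reply often contains a reasoning block before the final answer. Here is a small helper that separates the two, assuming the common Qwen-style `<think>...</think>` tag format; note that some serving frameworks instead return the reasoning in a separate `reasoning_content` field, in which case no parsing is needed:

```python
import re

def split_thinking(text):
    """Split a response into (thinking, answer).

    If no <think>...</think> block is present, thinking is empty
    and the whole text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    thinking = match.group(1).strip()
    answer = (text[:match.start()] + text[match.end():]).strip()
    return thinking, answer

# Hypothetical model output for the prompt above:
thinking, answer = split_thinking("<think>Reverse the string.</think>keeSnewQ evol I")
assert thinking == "Reverse the string."
assert answer == "keeSnewQ evol I"
```

You would typically apply this to `chat_response.choices[0].message.content` before showing the answer to the user.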
## About us...

Honestly, this model — like Faunix itself, in principle — was created by just two 13-year-old schoolchildren from Russia. We truly adore and research AI, not just figuring it out but creating our own! At the moment we work on a Kaggle T4, and we... lack resources. We trained this model in FP32 at about 0.01 it/s, because FP16 is not supported by Qwen3.5, and BF16 is not supported by the T4 🤣... Community, please suggest where we could, for example, get grants for GPUs or something like that. This is very important to us, as we have so many models planned... but we simply do not have good GPUs... :)
## Citation

```bibtex
@misc{qwenseek2026,
  title={QwenSeek-2B: Distillation of thinking DeepSeek V4 into Qwen3.5-2B},
  author={faunix},
  year={2026},
  url={https://huggingface.co/faunix/QwenSeek-2B}
}
```
- Creator: Faunix
- Release date: 03.05.26
- Model name: QwenSeek-2B

:)