EdgeRazor-Nbit
| Mixed-Precision Recipe | Bit-Width | This Repo |
|---|---|---|
| 100% 4-bit + 0% 1.58-bit | 4 | ✔️ |
| 50% 4-bit + 50% 1.58-bit | 2.79 | |
| 12.5% 4-bit + 87.5% 1.58-bit | 1.88 | |
| 0% 4-bit + 100% 1.58-bit | 1.58 | |
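
The bit-width column is simply the parameter-weighted average of the two precisions in each recipe. A minimal sketch of that arithmetic (the helper name is illustrative, not part of EdgeRazor):

```python
# Average bit-width of a mixed-precision recipe:
# fraction of 4-bit weights at 4 bits, the rest at 1.58 bits.
def avg_bitwidth(frac_4bit: float) -> float:
    return frac_4bit * 4 + (1 - frac_4bit) * 1.58

for frac in (1.0, 0.5, 0.125, 0.0):
    print(f"{frac:5.3f} -> {avg_bitwidth(frac):.2f} bits")
# 1.000 -> 4.00, 0.500 -> 2.79, 0.125 -> 1.88, 0.000 -> 1.58
```
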
| Methods | W-A-KV (bits) | Video-MME | MLVU | Average (↑) |
|---|---|---|---|---|
| Qwen2.5-Omni-7B | 16-16-16 | 62.81 | 48.01 | 55.41 |
| AWQ | 4-16-16 | 61.78 | 47.40 | 54.59 |
| GPTQ | 4-16-16 | 60.51 | 48.06 | 54.29 |
| EdgeRazor | 4-16-16 | 62.22 | 48.82 | 55.52 |
For weight-activation quantization, make sure EdgeRazor is installed in advance. The provided weights are already quantized (stored as `quantized_weights * scaling_bf16`); to enable activation and KV-cache quantization, set `trust_remote_code=True` when loading the model.
```python
import soundfile as sf
from transformers import AutoModelForCausalLM, Qwen2_5OmniProcessor
from qwen_omni_utils import process_mm_info

# default: Load the model on the available device(s)
model = AutoModelForCausalLM.from_pretrained(
    "zhangsq-nju/Qwen2.5-Omni-7B-EdgeRazor-4bit",
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = AutoModelForCausalLM.from_pretrained(
#     "zhangsq-nju/Qwen2.5-Omni-7B-EdgeRazor-4bit",
#     torch_dtype="auto",
#     device_map="auto",
#     attn_implementation="flash_attention_2",
#     trust_remote_code=True,
# )

processor = Qwen2_5OmniProcessor.from_pretrained("zhangsq-nju/Qwen2.5-Omni-7B-EdgeRazor-4bit")

conversation = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are Qwen-EdgeRazor, a virtual human, capable of perceiving auditory and visual inputs, as well as generating text and speech."}
        ],
    },
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen2.5-Omni/draw.mp4"},
        ],
    },
]

# Whether to use the audio track of the video input
USE_AUDIO_IN_VIDEO = True

# Preparation for inference
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audios, images, videos = process_mm_info(conversation, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = processor(text=text, audio=audios, images=images, videos=videos, return_tensors="pt", padding=True, use_audio_in_video=USE_AUDIO_IN_VIDEO)
inputs = inputs.to(model.device).to(model.dtype)

# Inference: generate the output text and audio
text_ids, audio = model.generate(**inputs, use_audio_in_video=USE_AUDIO_IN_VIDEO)

text = processor.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(text)
sf.write(
    "output.wav",
    audio.reshape(-1).detach().cpu().numpy(),
    samplerate=24000,
)
```
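
As noted above, the checkpoint ships its weights in already-quantized form (`quantized_weights * scaling_bf16`). The sketch below only illustrates how such a layout can be mapped back to bf16 for inspection; the tensor names, shapes, and group size are assumptions for illustration and may not match the repo's actual parameter layout.

```python
import torch

# Hypothetical per-group layout: integer codes plus one bf16 scale per group.
# This illustrates the `quantized_weights * scaling_bf16` idea only; the real
# checkpoint keys and grouping may differ.
def dequantize(quantized_weights: torch.Tensor,  # int codes, shape (out, in)
               scaling_bf16: torch.Tensor,       # scales, shape (out, in // group_size)
               group_size: int = 128) -> torch.Tensor:
    out_features, in_features = quantized_weights.shape
    q = quantized_weights.to(torch.bfloat16).view(out_features, -1, group_size)
    scales = scaling_bf16.to(torch.bfloat16).unsqueeze(-1)  # (out, groups, 1)
    return (q * scales).view(out_features, in_features)
```
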
If you find our project useful in your research, please consider citing our paper ✏️:
```bibtex
@article{zhangsh-edgerazor,
  title={{EdgeRazor}: A Lightweight Framework for Large Language Models via Mixed-Precision Quantization-Aware Distillation},
  author={Shu-Hao Zhang and Le-Tong Huang and Xiang-Sheng Deng and Xin-Yi Zou and Chen Wu and Nan Li and Shao-Qun Zhang},
  year={2026},
}
```
Base model: Qwen/Qwen2.5-Omni-7B