new version

#1
by yqchen-sci - opened

I believe this small 4B model is currently the best local assistant for running on a laptop, well suited to tasks such as polishing text, processing meeting transcripts, and translation. The latest version of the model has just been released. Are there any plans to fine-tune the next version?

Can you add a merged version? I'd like to load this as a CLIP/text encoder for Z-Image to experiment with, and I can't reliably use torch on my AMD rig to merge the weights myself. Apologies for my noob status here.


# Load the existing abliterated checkpoint and re-save it under a new name.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

OLD_MODEL_ID = "huihui-ai/Huihui-Qwen3-4B-abliterated-v2"
NEW_MODEL_ID = "huihui-ai/Huihui-Qwen3-4B-abliterated-New"

# Load the weights in bfloat16 and let transformers place them
# automatically across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    OLD_MODEL_ID,
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(OLD_MODEL_ID, trust_remote_code=True)

# Write the model and tokenizer to a local directory, splitting the
# state dict into shards of at most 10 GB each.
model.save_pretrained(NEW_MODEL_ID, max_shard_size="10GB")
tokenizer.save_pretrained(NEW_MODEL_ID)
