What is this?

This is a model fine-tuned (SFT) from sbintuitions/sarashina2.2-3b-instruct-v0.1 on the DataPilot/Zero_SFT_Ja_v3_Reasoning dataset. The model is tuned to output its reasoning process between <think> and </think> tags before giving its answer. The context length has been extended from the original 8192 to 16384.

How to run inference

Run inference with unsloth as shown below. Inference with plain transformers suffers from degraded output quality (no known fix as of 2025/9/20).

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "OsakanaTeishoku/sarashina2.2-3b-cot-sft-v0.1",
    max_seq_length = 16384,
    load_in_4bit = False,
    # token = HF_TOKEN, 
)


from transformers import TextStreamer
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# modified from https://huggingface.co/unsloth/SmolLM3-3B/blob/main/chat_template.jinja
# do not change the system prompt!!!
SYSTEM_PROMPT = "You are a helpful AI assistant. Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracking, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format: <think> Thought section </think> Solution section. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion."

messages = [{"role": "system", "content": SYSTEM_PROMPT},{"role": "user", "content": "what is 13 + 24 ?"},{"role": "assistant", "content": "<think>"}]

input_text = tokenizer.apply_chat_template(messages, tokenize=False, continue_final_message=True)

inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
FastLanguageModel.for_inference(model)  # switch the model into inference mode

output = model.generate(**inputs, max_new_tokens=10000, use_cache=True, do_sample=True, temperature=0.6, top_p=0.95, streamer=streamer)
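Because the prompt pre-fills the assistant turn with <think>, the generated text begins inside the thought section, and a closing </think> tag separates the reasoning from the final solution. Below is a minimal sketch of a helper for splitting the two parts; the function name `split_cot` is illustrative, and it assumes the model does emit a closing </think> tag (which is not guaranteed if generation is cut off by max_new_tokens).

```python
def split_cot(generated: str) -> tuple[str, str]:
    """Split generated text into (thought, solution).

    Assumes the prompt was pre-filled with "<think>", so generation
    starts inside the thought section and a closing "</think>" tag
    separates it from the final answer.
    """
    marker = "</think>"
    if marker in generated:
        thought, solution = generated.split(marker, 1)
        return thought.strip(), solution.strip()
    # No closing tag yet: treat everything as thought, with no solution.
    return generated.strip(), ""

thought, solution = split_cot(
    "Let me compute 13 + 24. 13 + 24 = 37. </think> The answer is 37."
)
```

To obtain the string to split, decode only the newly generated tokens, e.g. `tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)`.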

Uploaded finetuned model

  • Developed by: OsakanaTeishoku
  • License: MIT
  • Finetuned from model: sbintuitions/sarashina2.2-3b-instruct-v0.1

This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.

