# Divij/Llama-3.2-3B-Instruct-sft-without-thoughts

Supervised fine-tune of meta-llama/Llama-3.2-3B-Instruct on a scientific-methodology instruction dataset, where each assistant response is a plain step-by-step research methodology. The project goal is to compare whether including explicit `<Thought_i>` reasoning traces alongside each `<Step_i>` action during SFT produces stronger scientific-methodology generators than training on step-only plans.
## Variant

This checkpoint is the **without-thoughts** variant: the assistant target contains only `<Step_1>...</Step_1> ... <Step_n>...</Step_n>`; the reasoning traces are excluded. Trained with `max_seq_length=2048`.
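To make the two variants concrete, the without-thoughts target can be thought of as a with-thoughts response with every `<Thought_i>` block removed. The helper below is a hypothetical illustration of that relationship, not part of the released pipeline:

```python
import re

def strip_thoughts(response: str) -> str:
    """Remove <Thought_i>...</Thought_i> blocks, keeping only the <Step_i> tags.

    Hypothetical helper illustrating the without-thoughts target format.
    """
    # Drop every numbered thought block, including its contents and trailing whitespace.
    stripped = re.sub(r"<Thought_\d+>.*?</Thought_\d+>\s*", "", response, flags=re.DOTALL)
    return stripped.strip()

with_thoughts = (
    "<Thought_1>We need a baseline first.</Thought_1>\n"
    "<Step_1>Collect a baseline dataset.</Step_1>\n"
    "<Thought_2>Then compare.</Thought_2>\n"
    "<Step_2>Run the comparison experiment.</Step_2>"
)
print(strip_thoughts(with_thoughts))
# <Step_1>Collect a baseline dataset.</Step_1>
# <Step_2>Run the comparison experiment.</Step_2>
```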
## Training data

- Source: `sft_without_thoughts.jsonl` from the `verl_scientific_discovery` repeated-sampling pipeline.
- 4,990 messages-format examples (system + user + assistant).
- Each assistant response is a step-by-step research methodology for a given Research Goal + Constraints prompt.
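For readers unfamiliar with the messages format, each JSONL line carries a `"messages"` list of role/content turns. The record below is a sketch consistent with the description above; any schema beyond the top-level `"messages"` list is an assumption:

```python
import json

# Sketch of one messages-format JSONL record; field contents are illustrative.
sample_line = json.dumps({
    "messages": [
        {"role": "system", "content": "Given a research goal and constraints, provide a step-by-step methodology."},
        {"role": "user", "content": "Research Goal:\n...\n\nConstraints:\n1) ..."},
        {"role": "assistant", "content": "<Step_1>...</Step_1>\n<Step_2>...</Step_2>"},
    ]
})

record = json.loads(sample_line)
roles = [m["role"] for m in record["messages"]]
print(roles)  # ['system', 'user', 'assistant']
```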
## Training setup

- Framework: open-instruct `finetune.py` (accelerate + FSDP2).
- Hardware: 2× NVIDIA H100 NVL (96 GB).
- Precision: bf16 mixed precision.
- Attention: FlashAttention-2.
- Memory: gradient checkpointing enabled.
## Hyperparameters

| Hyperparameter | Value |
|---|---|
| `max_seq_length` | 2048 |
| `num_train_epochs` | 3 |
| `per_device_train_batch_size` | 1 |
| `gradient_accumulation_steps` | 8 |
| Effective batch size | 16 (1 × 2 GPUs × 8 accum) |
| `learning_rate` | 2e-5 |
| `lr_scheduler_type` | linear |
| `warmup_ratio` | 0.03 |
| `weight_decay` | 0.0 |
| `seed` | 42 |
| Optimizer | fused AdamW |
| Total optimization steps | 936 |
| Final training loss | 1.988 |
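The reported step count is consistent with the dataset size and effective batch size. A quick sanity check, assuming the trainer rounds the last partial batch of each epoch up:

```python
import math

examples = 4_990
per_device_bs, n_gpus, grad_accum = 1, 2, 8
epochs = 3

effective_batch = per_device_bs * n_gpus * grad_accum
# Assumption: steps per epoch = ceil(examples / effective batch).
steps_per_epoch = math.ceil(examples / effective_batch)
total_steps = steps_per_epoch * epochs

print(effective_batch, total_steps)  # 16 936
```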
The chat template is inherited from the base model (`meta-llama/Llama-3.2-3B-Instruct`). Labels are masked on the system and user turns so only the assistant response contributes to the loss (open-instruct's `sft_tulu_tokenize_and_truncate_v1` transform).
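The masking idea can be sketched in a few lines. This is a minimal illustration of assistant-only loss masking, not the actual open-instruct transform:

```python
# Positions labeled -100 are ignored by PyTorch's cross-entropy loss.
IGNORE_INDEX = -100

def mask_labels(token_ids, is_assistant):
    """Keep labels only where the token belongs to an assistant turn."""
    return [tok if asst else IGNORE_INDEX
            for tok, asst in zip(token_ids, is_assistant)]

# Toy example: three prompt tokens (system + user), two assistant tokens.
tokens = [11, 22, 33, 44, 55]
assistant_mask = [False, False, False, True, True]
print(mask_labels(tokens, assistant_mask))  # [-100, -100, -100, 44, 55]
```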
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "Divij/Llama-3.2-3B-Instruct-sft-without-thoughts"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "Given a research goal and constraints, provide a step-by-step methodology.\n\nFormat:\n<Step_1>...</Step_1>\n<Step_2>...</Step_2>"},
    {"role": "user", "content": (
        "You are given a scientific research problem.\n\n"
        "Research Goal:\n<your research goal here>\n\n"
        "Constraints:\n1) <constraint 1>\n2) <constraint 2>"
    )},
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```
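Since the model emits `<Step_i>`-tagged plans, downstream code typically wants the step texts in order. The parser below is a hypothetical post-processing helper (not shipped with the model):

```python
import re

def parse_steps(generation: str) -> list:
    """Extract the ordered step texts from a <Step_i>-tagged generation.

    Hypothetical helper for post-processing model output.
    """
    # The backreference \1 requires the closing tag's index to match the opening tag's.
    steps = re.findall(r"<Step_(\d+)>(.*?)</Step_\1>", generation, flags=re.DOTALL)
    # Sort numerically by step index in case tags ever appear out of order.
    return [text.strip() for _, text in sorted(steps, key=lambda s: int(s[0]))]

sample = "<Step_1>Define the hypothesis.</Step_1>\n<Step_2>Design the experiment.</Step_2>"
print(parse_steps(sample))  # ['Define the hypothesis.', 'Design the experiment.']
```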
## Notes

- **Context length.** Use `max_seq_length` ≥ 2048 at inference time to match the training regime; sequences longer than this were not seen during training.
- **Intended use.** Research artifact for generating structured scientific research plans. Not aligned for general-purpose chat or safety-critical use.
- **Compared to sibling.** A matching with-thoughts checkpoint at `Divij/Llama-3.2-3B-Instruct-sft-with-thoughts` is trained on the same data but with the opposite treatment of reasoning traces.