mistral-small-3.1-24b-harmoni-sft-dpo

Version: v1.0.0

Model Description

This model is a fine-tuned version of mistralai/Mistral-Small-3.1-24B-Instruct-2503 for manufacturing domain applications.

Training Method: Sequential SFT (Supervised Fine-Tuning) followed by DPO (Direct Preference Optimization)

Training Pipeline:

  1. SFT Phase: LoRA fine-tuning on domain-specific instruction data
  2. DPO Phase: Preference optimization for alignment
  3. Merge: Weighted merge of the SFT and DPO adapters with the base model (sketched below)
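
A minimal sketch of the three phases, assuming recent TRL (SFTTrainer/DPOTrainer) and PEFT's add_weighted_adapter. The datasets, LoRA settings, merge weights, and output paths below are illustrative stand-ins, not the configuration actually used to train this model:

from datasets import load_dataset
from peft import LoraConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer, SFTConfig, SFTTrainer

BASE = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
tokenizer = AutoTokenizer.from_pretrained(BASE)

# Hypothetical LoRA settings; the "aggressive" preset used for this model
# is not published.
lora = LoraConfig(r=64, lora_alpha=128, target_modules="all-linear",
                  task_type="CAUSAL_LM")

# Phase 1: SFT with a LoRA adapter on instruction data.
sft = SFTTrainer(
    model=AutoModelForCausalLM.from_pretrained(BASE, torch_dtype="auto"),
    args=SFTConfig(output_dir="sft-out"),
    train_dataset=load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft"),
    peft_config=lora,
)
sft.train()
sft.save_model("sft-adapter")

# Phase 2: DPO on preference pairs (prompt / chosen / rejected columns).
dpo = DPOTrainer(
    model=AutoModelForCausalLM.from_pretrained(BASE, torch_dtype="auto"),
    args=DPOConfig(output_dir="dpo-out", beta=0.1),
    train_dataset=load_dataset("HuggingFaceH4/ultrafeedback_binarized",
                               split="train_prefs"),
    processing_class=tokenizer,
    peft_config=lora,
)
dpo.train()
dpo.save_model("dpo-adapter")

# Phase 3: weighted merge of both adapters, folded into the base weights.
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype="auto")
merged = PeftModel.from_pretrained(base, "sft-adapter", adapter_name="sft")
merged.load_adapter("dpo-adapter", adapter_name="dpo")
merged.add_weighted_adapter(["sft", "dpo"], weights=[0.5, 0.5],
                            adapter_name="sft_dpo", combination_type="linear")
merged.set_adapter("sft_dpo")
merged.merge_and_unload().save_pretrained("mistral-small-3.1-24b-harmoni-sft-dpo")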

Training Configuration

  • PEFT Strategy: aggressive
  • SFT Configuration: aggressive
  • Dataset: UltraChat
  • Hardware: Multi-GPU (FSDP/DeepSpeed)
  • Context Length: 128K tokens
  • Training Framework: HuggingFace Transformers + PEFT

Usage

Using Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ssoni-harmoni/mistral-small-3.1-24b-harmoni-sft-dpo"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",   # shard across available GPUs
    torch_dtype="auto"   # keep the checkpoint's native F16 precision
)

messages = [
    {"role": "system", "content": "You are a helpful manufacturing assistant."},
    {"role": "user", "content": "What are the key steps in CNC machining?"}
]

# add_generation_prompt=True appends the assistant turn header so the
# model starts a reply instead of continuing the user turn.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
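
For interactive use, output can be streamed token by token with transformers' TextStreamer; a small sketch reusing the model, tokenizer, and inputs from above:

from transformers import TextStreamer

# skip_prompt suppresses the echoed input; decode kwargs pass through.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(inputs, max_new_tokens=512, streamer=streamer)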

Using vLLM

The checkpoint is published in F16 safetensors; pass a quantization argument to vLLM only when loading a quantized variant of the model.

from vllm import LLM, SamplingParams

llm = LLM(
    model="ssoni-harmoni/mistral-small-3.1-24b-harmoni-sft-dpo",
    tensor_parallel_size=1  # increase to shard the 24B weights across GPUs
)

sampling_params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=512)
outputs = llm.generate("What are the key steps in CNC machining?", sampling_params)
print(outputs[0].outputs[0].text)
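
With recent vLLM versions, chat-formatted prompts can go through the model's chat template via LLM.chat rather than being hand-formatted as a raw string; a short sketch reusing the llm and sampling_params above:

messages = [
    {"role": "system", "content": "You are a helpful manufacturing assistant."},
    {"role": "user", "content": "What are the key steps in CNC machining?"}
]
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)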

Model Details

  • Base Model: mistralai/Mistral-Small-3.1-24B-Instruct-2503
  • Organization: ssoni-harmoni
  • Training Date: 2026-02-09
  • Model Type: Causal Language Model
  • Parameters: 24B (Safetensors, F16)

Version History

Version   Date         Changes
v1.0.0    2026-02-09   Initial release

License

This model inherits the Apache 2.0 license from the base model.

Citation

@misc{harmoni-manufacturing-model,
  title={Harmoni Manufacturing Domain Fine-Tuned Model},
  author={Harmoni ML Team},
  year={2026},
  publisher={HuggingFace},
  url={https://huggingface.co/ssoni-harmoni/mistral-small-3.1-24b-harmoni-sft-dpo}
}

Contact

For questions or issues, please contact the Harmoni ML team.
