# kanana-1.5.2.1b-instruct-sft
This is a fine-tuned version of kakaocorp/kanana-1.5-2.1b-instruct-2505.
## Notice (Required)
Kanana is licensed in accordance with the Kanana License Agreement.
Copyright © KAKAO Corp. All Rights Reserved.
Powered by Kanana
## Derivative / Modification

This repository contains a fine-tuned derivative of:

- Base Model: kakaocorp/kanana-1.5-2.1b-instruct-2505
- Modified by: Jiwon Jeong
The model was fine-tuned using supervised fine-tuning (SFT) with custom instruction-following data.
## Responsible AI
Use of this model must comply with applicable laws and Kakao's Guidelines for Responsible AI:
https://www.kakaocorp.com/page/responsible/detail/guidelinesForResponsibleAI
## License
This model is distributed under the Kanana License Agreement.
Please refer to the LICENSE file in this repository for the full license text.
## Model Description
This model is an instruction-following fine-tuned version of Kanana-1.5-2.1B, trained with supervised fine-tuning to improve conversational ability and instruction adherence.
## Training Details

- Base Model: kakaocorp/kanana-1.5-2.1b-instruct-2505
- Method: Supervised Fine-Tuning (SFT)
- Framework: Axolotl
- Task Type: Instruction Following / Chat
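For reference, an Axolotl SFT run with this base model could be configured along these lines. This is a minimal sketch, not the actual training configuration: the dataset path, hyperparameters, and output directory below are illustrative assumptions.

```yaml
# Illustrative Axolotl SFT config sketch -- values are assumptions, not the actual run
base_model: kakaocorp/kanana-1.5-2.1b-instruct-2505

datasets:
  - path: ./data/custom_instructions.jsonl  # hypothetical instruction-following dataset
    type: chat_template

sequence_len: 4096
micro_batch_size: 1
gradient_accumulation_steps: 8
num_epochs: 3
learning_rate: 2e-5
optimizer: adamw_torch
lr_scheduler: cosine

output_dir: ./outputs/kanana-1.5.2.1b-instruct-sft
```

Such a config would be launched with `axolotl train config.yml` (or `accelerate launch -m axolotl.cli.train config.yml` on older versions).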
## Usage
You can load the model using Hugging Face Transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "jiwon9703/kanana-1.5.2.1b-instruct-sft",
    device_map="auto",
    torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained("jiwon9703/kanana-1.5.2.1b-instruct-sft")
```