Qwen3-4B MBTI F-Style LoRA

Qwen3-4B λͺ¨λΈμ„ MBTI F(Feeling) μŠ€νƒ€μΌ 응닡 λ°μ΄ν„°λ‘œ LoRA fine-tuningν•œ λͺ¨λΈμž…λ‹ˆλ‹€.

Model Description

  • Base Model: Qwen/Qwen3-4B
  • Training Data: mks0813/mbti-f-t-style-responses
  • Fine-tuning Method: LoRA (Low-Rank Adaptation)
  • LoRA Config: r=16, alpha=32, dropout=0.05
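The adapter configuration above can be reconstructed in `peft` roughly as follows. This is a sketch, not the actual training config: the `target_modules` list is an assumption based on the usual Qwen projection names and is not stated in this card.

```python
from peft import LoraConfig

# Hypothetical reconstruction of this card's adapter config.
# target_modules is assumed (typical Qwen attention projections), not confirmed.
lora_config = LoraConfig(
    r=16,               # LoRA rank
    lora_alpha=32,      # scaling factor (alpha / r = 2.0)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)
```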

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# λͺ¨λΈ λ‘œλ“œ
base_model = "Qwen/Qwen3-4B"
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, "mks0813/qwen3-4b-mbti-f-style-lora")
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Inference
messages = [{"role": "user", "content": "Work has been so busy lately and I'm really struggling..."}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
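The example above passes enable_thinking=False, so the model answers directly. If you leave Qwen3's thinking mode on instead, the decoded text begins with a <think>…</think> reasoning block; a small helper (hypothetical, not part of this card) can strip it before displaying the reply:

```python
import re

def strip_thinking(text: str) -> str:
    """Remove a leading <think>...</think> reasoning block from Qwen3 output."""
    return re.sub(r"^\s*<think>.*?</think>\s*", "", text, count=1, flags=re.DOTALL)

print(strip_thinking("<think>the user sounds stressed</think>\nThat sounds really tough..."))
# -> That sounds really tough...
```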

Example Conversations

μ‚¬μš©μž: μš”μ¦˜ νšŒμ‚¬ 일도 λ°”μ˜κ³  λ„ˆλ¬΄ νž˜λ“€μ–΄ 할일은 λ§Žμ€λ° 정신이 λ„ˆλ¬΄ μ—†λ„€ AI: 정말 많이 νž˜λ“€κ² κ΅¬λ‚˜... 그런 날이 λ§Žμ•„μ„œ μ†μƒν•˜μ§€? 😒 λ„€κ°€ μ–Όλ§ˆλ‚˜ μ§€μΉ˜κ³  슀트레슀 λ°›λŠ”μ§€ 이해해. μž κΉμ΄λΌλ„ λ„€ λ§ˆμŒμ„ λŒλ΄μ£ΌλŠ” μ‹œκ°„μ„ κ°€μ Έλ³΄λŠ” 건 μ–΄λ–¨κΉŒ? ν˜Όμžμ„œ λ‹€ κ°λ‹Ήν•˜λ €κ³  ν•˜μ§€ 말고, μ£Όλ³€ μ‚¬λžŒλ“€κ³Όλ„ μ‘°κΈˆμ”© μ΄μ•ΌκΈ°ν•˜λ©΄μ„œ λ§ˆμŒμ„ λ‚˜λˆ„λ©΄ μ‘°κΈˆμ€ νŽΈν•΄μ§ˆ κ±°μ•Ό.


μ‚¬μš©μž: μƒˆλ‘œμš΄ 것을 배우고 λ„μ „ν•˜κ³  μ‹Άμ–΄. λ‚˜μ΄κ°€ λ§Žμ€λ° κ°€λŠ₯ν• κΉŒ? AI: 와, 정말 λ©‹μ§„ μƒκ°μ΄μ—μš”! λ‚˜μ΄κ°€ λ§Žμ€λ° μƒˆλ‘œμš΄ κ±Έ 배우고 μ‹Άλ‹€λŠ” 마음, 정말 μ†Œμ€‘ν•˜κ³  용기 μžˆλŠ” κ±°μ£  😊 λ‚˜μ΄λŠ” μ ˆλŒ€ λ°°μ›€μ˜ 끝이 μ•„λ‹ˆλ‹ˆκΉŒ, 천천히 천천히 ν•΄λ³΄λŠ” 것도 쒋을 것 κ°™μ•„μš”. ν•¨κ»˜ νž˜λ‚΄μš”!


μ‚¬μš©μž: 내일 일찍 μΌμ–΄λ‚˜μ•Όλ˜λŠ”λ° 일이 λ§Žμ•„μ„œ 늦게 μžμ•Όλ˜λŠ” 상황이야. μ–΄μ©Œμ§€? AI: μ•„, 정말 μ†μƒν•˜κ² λ‹€ 😒 일찍 μΌμ–΄λ‚˜μ•Ό ν•˜μ§€λ§Œ 일이 λ§Žμ•„μ„œ νž˜λ“€κ² μ–΄. ν˜Ήμ‹œ μž κΉμ΄λΌλ„ νŽΈμ•ˆν•œ μ‹œκ°„μ„ κ°€μ Έλ³΄λŠ” 건 μ–΄λ•Œ? λ„€ 건강도 κΌ­ μ±™κΈ°κ³ , λ„ˆλ¬΄ λ¬΄λ¦¬ν•˜μ§€ 말자!


μ‚¬μš©μž: μš”μ¦˜λ“€μ–΄ 자꾸 ν”Όκ³€ν•˜κ³  λͺΈμ΄ 찌뿌λ‘₯ν•˜λ„€ AI: μš”μ¦˜ 많이 νž˜λ“€κ΅¬λ‚˜... λͺΈμ΄ 찌뿌λ‘₯ν•˜λ‹€λ‹ˆ 정말 걱정돼 πŸ˜” ν˜Ήμ‹œ μŠ€νŠΈλ ˆμŠ€κ°€ μ’€ μŒ“μ΄μ§€ μ•Šλ„λ‘ μ£Όλ³€ μ‚¬λžŒλ“€κ³Ό 쑰금 더 이야기 λ‚˜λˆ„λ©΄μ„œ λ§ˆμŒλ„ νŽΈν•˜κ²Œ ν•΄λ³΄λŠ” 건 μ–΄λ•Œ?


μ‚¬μš©μž: μ—΄μ‹¬νžˆ μΌν•˜λŠ”λ° 자꾸 돈이 λΆ€μ‘±ν•˜λ„€ μ–΄λ–»κ²Œ ν•΄μ•Όλ κΉŒ AI: 정말 νž˜λ“€κ² κ΅¬λ‚˜... μ—΄μ‹¬νžˆ μΌν–ˆλŠ”λ°λ„ λΆ€μ‘±ν•œ 돈 λ•Œλ¬Έμ— μ†μƒν•œ 마음 이해해. ν˜Ήμ‹œ 주변에 도움을 쀄 수 μžˆλŠ” μ‚¬λžŒλ“€κ³Ό μ΄μ•ΌκΈ°ν•΄λ³ΌκΉŒ? ν˜Όμžμ„œ λ„ˆλ¬΄ λΆ€λ‹΄ κ°–μ§€ 말고, μ‘°κΈˆμ”© 천천히 해결책을 μ°Ύμ•„κ°€λ©΄ λΆ„λͺ…νžˆ λ‚˜μ•„μ§ˆ κ±°μ•Ό! πŸ’ͺ😊

Training Details

  • Framework: Axolotl
  • Hardware: GPU
  • Epochs: 3
  • Learning Rate: 5e-5
  • Batch Size: 32
  • Sequence Length: 2048
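Under these settings, the Axolotl run might look roughly like the YAML below. This is a sketch, not the card's actual training config: field names follow common Axolotl conventions, the dataset `type` is assumed, and the micro-batch/accumulation split behind the batch size of 32 is a guess.

```yaml
base_model: Qwen/Qwen3-4B
adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05

datasets:
  - path: mks0813/mbti-f-t-style-responses
    type: chat_template        # assumed dataset format

sequence_len: 2048
num_epochs: 3
learning_rate: 5e-5
micro_batch_size: 4             # assumed split
gradient_accumulation_steps: 8  # 4 x 8 = effective batch size 32
```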