Gemma 3 4B IT LoRA Adapter

Overview

This repository contains a LoRA adapter for google/gemma-3-4b-it, fine-tuned on RunPod.

ํ•™์Šต ๋ฐ์ดํ„ฐ

  • ๊ณต๊ฐœ ์—ฌ๋ถ€: ๋น„๊ณต๊ฐœ
  • ์„ค๋ช…: ์ œ๊ณต๋˜์ง€ ์•Š์Œ

Training Environment

  • Platform: RunPod
  • Method: LoRA (PEFT)
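As a refresher on what an adapter like this stores: LoRA keeps the base weight matrix W frozen and learns a low-rank update, so the effective weight is W + (alpha/r) · B · A. A minimal NumPy sketch of that idea (all shapes and values below are illustrative, not taken from this adapter):

```python
import numpy as np

# Illustrative shapes only: a frozen base weight and a rank-r LoRA update.
d_out, d_in, r, alpha = 8, 8, 2, 16

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))  # frozen base weight (not trained)
A = rng.normal(size=(r, d_in))      # LoRA "down" projection (trained)
B = np.zeros((d_out, r))            # LoRA "up" projection (initialized to zero)

# Effective weight at inference time: base plus scaled low-rank update.
W_eff = W + (alpha / r) * B @ A

# With B initialized to zero, the update starts as a no-op,
# so training begins from the base model's behavior.
assert np.allclose(W_eff, W)
```

Only A and B (plus a little metadata) live in the adapter repository, which is why it is so much smaller than the base checkpoint.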

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "google/gemma-3-4b-it"
adapter_id = "your-username/your-lora-repo"  # replace with this adapter's repo id

# Load the base model first, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)
model = PeftModel.from_pretrained(model, adapter_id)
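After loading, generation follows the standard Gemma chat flow via the tokenizer's chat template. A sketch continuing from the snippet above (the prompt is illustrative; for a 4B checkpoint you will likely also want `torch_dtype` and `device_map` arguments to `from_pretrained`):

```python
# Continues from the loading snippet above (`model` and `tokenizer` already set).
messages = [{"role": "user", "content": "Hello!"}]  # illustrative prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```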

Notes

  • This repository contains only the adapter; the base model must be downloaded separately.