Gemma3-1B-LOMO-Shipbuilding-Marine

Model Description

This model is a fine-tuned version of the Gemma3-1B model. It has been specifically adapted for the shipbuilding and marine domain using the LOMO (LOw-Memory Optimization) methodology. It was trained on a dataset of 5,000 specialized terms.

์ด ๋ชจ๋ธ์€ Gemma3-1B๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ, ์กฐ์„ ํ•ด์–‘ ๋ถ„์•ผ์˜ ์ „๋ฌธ ์šฉ์–ด์— ํŠนํ™”ํ•˜์—ฌ ํŒŒ์ธํŠœ๋‹ํ•œ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. LOMO (LOw-Memory Optimization) ๋ฐฉ๋ฒ•๋ก ์„ ์‚ฌ์šฉํ•˜์—ฌ ํ•™์Šต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. 5,000๊ฐœ์˜ ์ „๋ฌธ ์šฉ์–ด ๋ฐ์ดํ„ฐ์…‹์œผ๋กœ ํ•™์Šต๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

How to Use

You can use this model with the transformers library:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "naisksh32/Gemma3-1B-LOMO-Shipbuilding-Marine"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "ํ•ด์–‘ํ”Œ๋žœํŠธ์˜ ์ฃผ์š” ์„ค๋น„๋Š” ๋ฌด์—‡์ธ๊ฐ€์š”?"  # "What are the main facilities of an offshore plant?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Note: Please replace "naisksh32/Gemma3-1B-LOMO-Shipbuilding-Marine" with the actual repository name after you upload the model.

Training Details

  • Base Model: Gemma3-1B
  • Fine-tuning Method: LOMO (LOw-Memory Optimization)
  • Dataset: A custom dataset containing 5,000 terms and related texts from the shipbuilding and marine industry.
  • Domain: Shipbuilding and Marine
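
The memory saving in LOMO comes from fusing the gradient computation with the parameter update: each parameter is updated the moment its gradient is produced during the backward pass, so a full set of gradients is never held in memory at once. The sketch below illustrates that core idea with PyTorch per-tensor gradient hooks; it is a simplified assumption-laden illustration, not the official LOMO implementation (which also handles gradient norm clipping and mixed precision), and `attach_lomo_style_hooks` is a name invented here.

```python
import torch
import torch.nn as nn

def attach_lomo_style_hooks(model, lr=1e-2):
    """Sketch of LOMO's fused backward/update idea (simplified)."""
    def make_hook(param):
        def hook(grad):
            # Apply the SGD step as soon as this parameter's gradient
            # is produced, instead of storing it for a later optimizer.
            with torch.no_grad():
                param.add_(grad, alpha=-lr)
            # Return zeros so no full-size gradient is accumulated
            # into param.grad after the backward pass.
            return torch.zeros_like(grad)
        return hook

    for p in model.parameters():
        if p.requires_grad:
            p.register_hook(make_hook(p))

# Toy demonstration: the weights change during backward() itself.
model = nn.Linear(4, 1)
attach_lomo_style_hooks(model)
before = model.weight.detach().clone()
loss = model(torch.randn(8, 4)).pow(2).mean()
loss.backward()  # parameters are updated inside the backward pass
updated = not torch.allclose(before, model.weight)
```

Because the update happens inside the hook, only one parameter's gradient tensor needs to exist at any moment, which is what lets LOMO fine-tune all parameters on limited GPU memory.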

Disclaimer

This model is a research artifact and may have limitations. It is optimized for specialized terminology and its performance on general-purpose tasks may vary.

Model Details

  • Model size: 1.0B params
  • Tensor type: BF16 (Safetensors)