Qwen3-1.7B-LOMO-Shipbuilding-Marine

Model Description

This model is a fine-tuned version of the Qwen3-1.7B model. It has been specifically adapted for the shipbuilding and marine domain using the LOMO (LOw-Memory Optimization) methodology. It was trained on a dataset of 5,000 specialized terms for 2 epochs.

이 λͺ¨λΈμ€ Qwen3-1.7Bλ₯Ό 기반으둜, μ‘°μ„ ν•΄μ–‘ λΆ„μ•Όμ˜ μ „λ¬Έ μš©μ–΄μ— νŠΉν™”ν•˜μ—¬ νŒŒμΈνŠœλ‹ν•œ λͺ¨λΈμž…λ‹ˆλ‹€. LOMO(LOw-Memory Optimization) 방법둠을 μ‚¬μš©ν•˜μ—¬ ν•™μŠ΅λ˜μ—ˆμŠ΅λ‹ˆλ‹€. 5,000개의 μ „λ¬Έ μš©μ–΄ λ°μ΄ν„°μ…‹μœΌλ‘œ 2 에포크 λ™μ•ˆ ν•™μŠ΅λ˜μ—ˆμŠ΅λ‹ˆλ‹€.

How to Use

You can use this model with the transformers library:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "naisksh32/Qwen3-1.7B-LOMO-Shipbuilding-Marine"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "해양플랜트의 주요 설비는 무엇인가요?"  # "What are the main facilities of an offshore plant?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)  # cap new tokens, not total length

print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Note: Replace "naisksh32/Qwen3-1.7B-LOMO-Shipbuilding-Marine" with your own repository name if you load the model from a different location.

Training Details

  • Base Model: Qwen3-1.7B
  • Fine-tuning Method: LOMO (LOw-Memory Optimization)
  • Dataset: A custom dataset containing 5,000 terms and related texts from the shipbuilding and marine industry.
  • Epochs: 2
  • Domain: Shipbuilding and Marine
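The 5,000-term dataset itself is not published. The sketch below shows one hypothetical way such a glossary could be turned into instruction-style training pairs; the field names and prompt wording are illustrative assumptions, not the actual dataset format:

```python
# Hypothetical glossary entries; the real 5,000-term dataset is not published.
glossary = [
    {"term": "ballast water",
     "definition": "Water carried in a ship's tanks to improve stability and trim."},
    {"term": "keel laying",
     "definition": "The formal start of a ship's construction, when the keel is laid down."},
]

def to_instruction_pair(entry):
    """Turn one glossary entry into a prompt/response training pair."""
    return {
        "prompt": f"Define the shipbuilding term: {entry['term']}",
        "response": entry["definition"],
    }

pairs = [to_instruction_pair(e) for e in glossary]
```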

Disclaimer

This model is a research artifact and may have limitations. It is optimized for specialized terminology and its performance on general-purpose tasks may vary.

Model Format

  • Weights: Safetensors
  • Model size: 2B params
  • Tensor type: BF16