# Qwen3-1.7B-LOMO-Shipbuilding-Marine

## Model Description
This model is a fine-tuned version of Qwen3-1.7B, adapted to the shipbuilding and marine domain using the LOMO (LOw-Memory Optimization) method. It was trained for 2 epochs on a dataset of 5,000 specialized terms.
## How to Use

You can use this model with the `transformers` library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "naisksh32/Qwen3-1.7B-LOMO-Shipbuilding-Marine"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "해양플랜트의 주요 설비는 무엇인가요?"  # "What are the main facilities of an offshore plant?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Note: replace `naisksh32/Qwen3-1.7B-LOMO-Shipbuilding-Marine` with the actual repository name after you upload the model.
## Training Details
- Base Model: Qwen3-1.7B
- Fine-tuning Method: LOMO (LOw-Memory Optimization)
- Dataset: A custom dataset containing 5,000 terms and related texts from the shipbuilding and marine industry.
- Epochs: 2
- Domain: Shipbuilding and Marine
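The key idea behind LOMO is to fuse the gradient computation with the SGD update, so each parameter is updated the moment its gradient becomes available and the gradient is discarded immediately, rather than keeping a full gradient for every parameter before an optimizer step. The sketch below is a toy illustration of that idea in plain Python (function names and the toy loss `L(p) = 0.5 * Σ p_i²` are illustrative assumptions, not the LOMO library API):

```python
def sgd_two_phase(params, lr):
    """Standard SGD: materialise every gradient, then update."""
    grads = [p for p in params]            # dL/dp_i = p_i; all grads held at once
    return [p - lr * g for p, g in zip(params, grads)]

def sgd_lomo_fused(params, lr):
    """LOMO-style: update each parameter as its gradient appears."""
    out = list(params)
    for i in range(len(out)):
        g = out[i]                         # gradient of 0.5 * p**2 w.r.t. p
        out[i] -= lr * g                   # immediate in-place update
        # g goes out of scope here; a full gradient list never exists
    return out
```

Both variants yield identical parameters; the memory saving only matters at billion-parameter scale, where storing gradients roughly doubles the training footprint.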
## Disclaimer
This model is a research artifact and has limitations. It is optimized for specialized terminology, so its performance on general-purpose tasks may vary.