🌊 LFM2-1.2B-KoEn-MT-v4-100k-GGUF

이 λ¦¬ν¬μ§€ν† λ¦¬λŠ” gyung/lfm2-1.2b-koen-mt-v4-100k λͺ¨λΈμ˜ GGUF(Gemma/Llama.cpp Compatible) μ–‘μžν™” 버전을 ν¬ν•¨ν•˜κ³  μžˆμŠ΅λ‹ˆλ‹€.

ℹ️ λͺ¨λΈ μ„€λͺ… (Model Description)

LFM2-1.2B-KoEn-MT-v4-100k is a model based on LiquidAI's LFM2-1.2B architecture, fine-tuned on 100,000 high-quality parallel sentence pairs to maximize Korean-English translation performance.

  • Base Model: LiquidAI/LFM2-1.2B
  • Finetuned by: Gyung
  • Parameters: 1.2B
  • Purpose: Korean-English translation

πŸ“¦ Provided GGUF Files

Choose and download the quantization that fits your environment and needs (recommended: Q4_K_M or Q5_K_M). A download sketch follows the table below.

| Filename (example) | Quant | Size | Description |
| --- | --- | --- | --- |
| lfm2-1.2b-koen-mt-v4-100k-f16.gguf | F16 | ~2.34 GB | Original quality preserved; largest file |
| lfm2-1.2b-koen-mt-v4-100k-q8_0.gguf | Q8_0 | ~1.25 GB | Virtually no quality loss |
| lfm2-1.2b-koen-mt-v4-100k-q6_k.gguf | Q6_K | ~963 MB | High quality, well-balanced performance |
| lfm2-1.2b-koen-mt-v4-100k-q5_k_m.gguf | Q5_K_M | ~843 MB | Recommended: best balance of quality and speed/size |
| lfm2-1.2b-koen-mt-v4-100k-q4_k_m.gguf | Q4_K_M | ~731 MB | Recommended: low memory use with solid quality |
| lfm2-1.2b-koen-mt-v4-100k-q4_0.gguf | Q4_0 | ~696 MB | Lightest; some quality degradation possible |
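
To fetch a single quantization programmatically, you can use the huggingface_hub client. A minimal sketch, assuming the files live in this repo (gyung/lfm2-1.2b-koen-mt-v4-100k-GGUF) under the filenames listed above:

```python
from huggingface_hub import hf_hub_download

# Download the Q5_K_M file into the local Hugging Face cache and return its path.
model_path = hf_hub_download(
    repo_id="gyung/lfm2-1.2b-koen-mt-v4-100k-GGUF",
    filename="lfm2-1.2b-koen-mt-v4-100k-q5_k_m.gguf",
)
print(model_path)  # pass this path to llama.cpp or llama-cpp-python
```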

πŸš€ Usage

llama.cpp

You can run the model with a recent build of llama.cpp (check that your build supports the LFM2 architecture).

```bash
./llama-cli -m lfm2-1.2b-koen-mt-v4-100k-q5_k_m.gguf \
  -p "Translate to Korean: The model is working correctly now." \
  -n 256
```

Python (llama-cpp-python)

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./lfm2-1.2b-koen-mt-v4-100k-q5_k_m.gguf",
    n_ctx=2048,      # context window size in tokens
    verbose=False,
)

prompt = "Translate to Korean: The model is working correctly now."
output = llm(
    f"User: {prompt}\nAssistant:",
    max_tokens=256,
    stop=["User:"],  # stop before the model begins a new turn
    echo=False,      # return only the completion, not the prompt
)

print(output["choices"][0]["text"].strip())
```
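
As an alternative to raw "User:/Assistant:" prompting, llama-cpp-python can apply the chat template embedded in the GGUF metadata, assuming these files include one. A minimal sketch, with a hypothetical system prompt (not part of the documented training setup):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./lfm2-1.2b-koen-mt-v4-100k-q5_k_m.gguf",
    n_ctx=2048,
    verbose=False,
)

# create_chat_completion() formats the messages with the GGUF's chat template, if present.
result = llm.create_chat_completion(
    messages=[
        # Hypothetical system prompt for illustration only.
        {"role": "system", "content": "You are a Korean-English translation assistant."},
        {"role": "user", "content": "Translate to Korean: The model is working correctly now."},
    ],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```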

πŸ“Š Benchmarks

Flores-200 evaluation results for the original (F16) model; scores may drop slightly under GGUF quantization. A scoring sketch follows the list below.

  • LFM2-1.2B-KoEn-MT-v4-100k: chrF++ 30.98 / BLEU 11.09
  • Google Translate: chrF++ 39.27
  • NLLB-200-Distilled-600M: chrF++ 31.97
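
If you want to score your own outputs with the same metrics, the sketch below uses sacrebleu, which implements both. The one-sentence lists are placeholders; the exact Flores-200 setup used for the numbers above is not specified here:

```python
from sacrebleu.metrics import BLEU, CHRF

# Placeholder data: in practice, hypotheses are the model's translations of the
# Flores-200 source sentences and references are the dataset's gold translations.
hypotheses = ["The model is now working correctly."]
references = [["The model is working correctly now."]]

bleu = BLEU()
chrf_pp = CHRF(word_order=2)  # word_order=2 yields chrF++ rather than plain chrF

print(bleu.corpus_score(hypotheses, references))     # e.g. "BLEU = ..."
print(chrf_pp.corpus_score(hypotheses, references))  # e.g. "chrF2++ = ..."
```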

πŸ“œ λΌμ΄μ„ μŠ€ (License)

이 λͺ¨λΈμ€ Liquid AI LFM Open License v1.0을 λ”°λ¦…λ‹ˆλ‹€.

  • Academic and personal research: no restrictions
  • Commercial use: free for organizations with annual revenue under $10M (a separate agreement is required above that threshold)
  • See LICENSE for details.

Citation

```bibtex
@misc{lfm2-1.2b-koen-mt-v4-100k,
  author = {Gyung},
  title = {LFM2-1.2B Korean-English Machine Translation Model v4},
  year = {2025},
  publisher = {Hugging Face},
  journal = {Hugging Face Model Hub},
  howpublished = {\url{https://huggingface.co/gyung/lfm2-1.2b-koen-mt-v4-100k}}
}
```