# LFM2-1.2B-KoEn-MT-v4-100k-GGUF
This repository contains GGUF (llama.cpp-compatible) quantized versions of the gyung/lfm2-1.2b-koen-mt-v4-100k model.
## Model Description
LFM2-1.2B-KoEn-MT-v4-100k is based on LiquidAI's LFM2-1.2B architecture and fine-tuned on 100,000 high-quality parallel sentence pairs to maximize Korean-English translation performance.
- Base Model: LiquidAI/LFM2-1.2B
- Finetuned by: Gyung
- Parameters: 1.2B
- Purpose: Korean-English Translation
## Provided GGUF Files (Quantization Methods)

Choose the quantization that fits your environment and needs. (Recommended: Q4_K_M or Q5_K_M)
| Filename | Quant | Size | Description |
|---|---|---|---|
| lfm2-1.2b-koen-mt-v4-100k-f16.gguf | F16 | ~2.34 GB | Preserves original quality; largest file |
| lfm2-1.2b-koen-mt-v4-100k-q8_0.gguf | Q8_0 | ~1.25 GB | Almost no quality loss |
| lfm2-1.2b-koen-mt-v4-100k-q6_k.gguf | Q6_K | ~963 MB | High quality, well balanced |
| lfm2-1.2b-koen-mt-v4-100k-q5_k_m.gguf | Q5_K_M | ~843 MB | Recommended: best balance of quality and speed/size |
| lfm2-1.2b-koen-mt-v4-100k-q4_k_m.gguf | Q4_K_M | ~731 MB | Recommended: low memory use with practical quality |
| lfm2-1.2b-koen-mt-v4-100k-q4_0.gguf | Q4_0 | ~696 MB | Smallest; some quality degradation possible |
## Usage

### llama.cpp

You can run the model with a recent build of llama.cpp (make sure your build supports the LFM2 architecture):

```bash
./llama-cli -m lfm2-1.2b-koen-mt-v4-100k-q5_k_m.gguf \
  -p "Translate to Korean: The model is working correctly now." \
  -n 256
```
### Python (llama-cpp-python)

```python
from llama_cpp import Llama

# Load the quantized model (download the .gguf file first).
llm = Llama(
    model_path="./lfm2-1.2b-koen-mt-v4-100k-q5_k_m.gguf",
    n_ctx=2048,
    verbose=False,
)

prompt = "Translate to Korean: The model is working correctly now."
output = llm(
    f"User: {prompt}\nAssistant:",
    max_tokens=256,
    stop=["User:", "\n"],
    echo=False,  # return only the completion, not the prompt
)
print(output["choices"][0]["text"].strip())
```
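The `User:`/`Assistant:` template above is simply the plain-text prompt used in this example, not an official chat format published for the model. If you translate many sentences, a small helper (hypothetical; `build_prompt` is our name, not part of any library) keeps the template in one place:

```python
def build_prompt(text: str, target: str = "Korean") -> str:
    """Build the plain 'User:/Assistant:' translation prompt used above.

    Note: this mirrors the template in this README; it is an assumption,
    not an official chat format for the model.
    """
    return f"User: Translate to {target}: {text}\nAssistant:"

# Each prompt can then be passed to llm(...) exactly as in the snippet above.
sentences = ["The model is working correctly now.", "See you tomorrow."]
prompts = [build_prompt(s) for s in sentences]
print(prompts[0])
```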
## Benchmarks

Flores-200 evaluation results for the original (F16) model. Scores may drop slightly under GGUF quantization.
- LFM2-1.2B-KoEn-MT-v4-100k: chrF++ 30.98 / BLEU 11.09
- Google Translate: chrF++ 39.27
- NLLB-200-Distilled-600M: chrF++ 31.97
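chrF++ is a character n-gram F-score (with word n-grams added in the "++" variant), conventionally computed with the sacrebleu package. Purely to illustrate what the metric measures, here is a minimal pure-Python sketch of plain chrF (character n-grams only, β = 2); it is not the reference implementation and will not match sacrebleu's scores exactly:

```python
from collections import Counter


def ngrams(seq, n):
    """Count all n-grams of length n in a sequence."""
    return Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))


def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF: average character n-gram F-beta score, 0-100."""
    # chrF is computed on characters with whitespace removed.
    hyp = hypothesis.replace(" ", "")
    ref = reference.replace(" ", "")
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        if sum(h.values()) == 0 or sum(r.values()) == 0:
            continue  # skip orders longer than either string
        overlap = sum((h & r).values())  # clipped n-gram matches
        precisions.append(overlap / sum(h.values()))
        recalls.append(overlap / sum(r.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return 100 * (1 + beta**2) * p * r / (beta**2 * p + r)
```

An identical hypothesis and reference score 100; disjoint strings score 0; real translations fall in between, with recall weighted more heavily than precision (β = 2).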
## License

This model is distributed under the LFM Open License v1.0.

- Academic and personal research: unrestricted
- Commercial use: free for organizations with under $10M annual revenue (a separate agreement is required above that)
- See the LICENSE file for details.
## Citation

```bibtex
@misc{lfm2-1.2b-koen-mt-v4-100k,
  author       = {Gyung},
  title        = {LFM2-1.2B Korean-English Machine Translation Model v4},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Model Hub},
  howpublished = {\url{https://huggingface.co/gyung/lfm2-1.2b-koen-mt-v4-100k}}
}
```