Llama-3.1-8B-Ko-Coder-DARE

Overview

Llama-3.1-8B-Ko-Coder-DARE is an 8B merged model combining:

  • Korean instruction-tuned alignment
  • Enhanced code generation capability

The merge was performed with the DARE-TIES method: each source model's task vector is sparsified by randomly dropping parameter deltas and rescaling the survivors (DARE), and sign conflicts between models are resolved before averaging (TIES).
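To illustrate the idea (this is a didactic sketch, not the actual merge script, which is typically produced with a tool such as mergekit; function names and the drop rate are made up for illustration):

```python
import random

def dare_sparsify(task_vector, drop_rate=0.9, seed=0):
    """Drop-And-REscale (DARE): randomly zero a fraction of the parameter
    deltas and rescale the survivors by 1 / (1 - drop_rate), so the
    expected magnitude of the task vector is preserved."""
    rng = random.Random(seed)
    keep_scale = 1.0 / (1.0 - drop_rate)
    return [0.0 if rng.random() < drop_rate else delta * keep_scale
            for delta in task_vector]

def ties_merge(base, sparsified_vectors):
    """TIES-style sign election: for each parameter, keep only the deltas
    that agree with the dominant sign, then average them onto the base."""
    merged = []
    for i, b in enumerate(base):
        deltas = [tv[i] for tv in sparsified_vectors]
        sign = 1.0 if sum(deltas) >= 0 else -1.0
        agreeing = [d for d in deltas if d * sign > 0]
        merged.append(b + sum(agreeing) / len(agreeing) if agreeing else b)
    return merged
```

In the real merge, the "task vectors" are the differences between each fine-tuned model's weights and the shared Llama 3.1 base weights.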


Core Characteristics

  • Natural Korean language understanding and generation
  • Strong code generation and debugging support
  • Clear technical explanation capability
  • Bilingual (Korean / English) developer workflow support

Recommended Use Cases

  • Korean coding assistant
  • Code generation and refactoring
  • Technical Q&A
  • Developer support workflows
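A minimal usage sketch with the Hugging Face transformers library, assuming the repository id `muzerai/Llama-3.1-KoEn-8b-Coder-DARE` and the standard Llama 3.1 chat format (the system/user messages are placeholders):

```python
def build_llama31_prompt(system: str, user: str) -> str:
    """Assemble a Llama 3.1-style chat prompt by hand. In practice, prefer
    tokenizer.apply_chat_template(), which uses the model's own template."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

def generate(user_message: str) -> str:
    """Load the merged model and generate one reply.
    Note: this downloads the full 8B checkpoint (~16 GB in F16)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    model_id = "muzerai/Llama-3.1-KoEn-8b-Coder-DARE"
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    messages = [
        {"role": "system", "content": "You are a helpful Korean coding assistant."},
        {"role": "user", "content": user_message},
    ]
    inputs = tok.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=256)
    return tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

For quantized inference (e.g. GGUF builds), the equivalent prompt format applies, but loading is done through the respective runtime rather than transformers.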


Disclaimer & License

This is a community-merged model and is not officially affiliated with Meta.

The model is released under the Llama 3.1 Community License.

Please use with caution. Always review outputs carefully and perform sufficient verification before any real-world or production use.

Reasoning performance may vary depending on quantization level and inference configuration.


Citations

@misc{llama-3.1-ko-coder-dare-2026,
  author = {GrooveJ},
  title  = {Llama-3.1-8B-Ko-Coder-DARE (DARE-TIES Merge)},
  year   = {2026},
  publisher = {HuggingFace}
}
Repository: muzerai/Llama-3.1-KoEn-8b-Coder-DARE