EXAONE-4.0-1.2B-OptAI

EXAONE-4.0 is a large language model series developed by LG AI Research, consisting of a mid-size 32B model optimized for high performance and a small 1.2B model designed for on-device environments.

We have quantized the weights of this model to INT4 (excluding the embeddings) to optimize it for on-device deployment.
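As a rough illustration of the w4a16 scheme, the sketch below quantizes a weight tensor to symmetric INT4 with per-group FP16 scales. This is a simplified stand-in; the actual QNN/Genie packing, group size, and scale layout differ.

```python
import numpy as np

def quantize_w4a16(w, group_size=64):
    """Symmetric INT4 weight quantization with per-group FP16 scales.

    Illustrative only: the real QNN/Genie w4a16 format packs two 4-bit
    values per byte and uses its own scale/offset layout.
    """
    groups = w.reshape(-1, group_size)
    # INT4 symmetric range is [-8, 7]; scale maps the group max to 7.
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale.astype(np.float16)

def dequantize(q, scale):
    """Recover approximate FP16 weights from INT4 values and scales."""
    return (q.astype(np.float16) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
q, s = quantize_w4a16(w)
w_hat = dequantize(q, s)
err = np.abs(w - w_hat).max()  # worst-case per-weight rounding error
```

Per-group scaling keeps the rounding error proportional to each group's own magnitude, which is why grouped INT4 loses much less accuracy than a single per-tensor scale.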

Model Conversion Contributor: Juneyoung Park (OptAI Inc.)

Model Stats:

  • Input sequence length for Prompt Processor: 128
  • Maximum context length: 4096
  • Quantization Type: w4a16 (4-bit weights with 16-bit activations); KV cache quantized to INT8
  • Supported languages: English, Korean, and others
  • TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
  • Response Rate: Rate of response generation after the first response token. Measured on a short prompt with a long response; may slow down when using longer context lengths.
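Combining the two metrics above gives a back-of-the-envelope latency estimate: time to first token, then the remaining tokens at the steady decode rate. The helper below is a hypothetical sketch (not part of any runtime API), using the figures reported in the performance table.

```python
def estimated_latency_s(ttft_s, tps, response_tokens):
    """Rough end-to-end generation time: TTFT + steady-state decode.

    ttft_s: time to first token (seconds)
    tps: response rate in tokens per second after the first token
    response_tokens: total tokens in the generated response
    """
    return ttft_s + max(response_tokens - 1, 0) / tps

# Reported figures for Snapdragon 8 Elite: 55.6 TPS, TTFT 0.04 - 0.9 s.
short_prompt = estimated_latency_s(0.04, 55.6, 256)  # prompt <= 128 tokens
long_prompt = estimated_latency_s(0.9, 55.6, 256)    # prompt near 4096 tokens
```

Note the estimate assumes the decode rate stays constant; as the definition above warns, it may drop at longer context lengths.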

Model Details

  • Type: Causal Language Models
  • Training Stage: Pretraining & Post-training
  • Architecture: Transformers with RoPE, QK-Reorder-Norm, and GQA (Grouped Query Attention)
  • Number of Parameters: 1.2B
  • Number of Parameters (Non-Embedding): 1.07B
  • Context Length Support: Up to 4096 tokens (optimized for on-device)

For more details, please refer to the official EXAONE 4.0 blog, GitHub repository, and documentation.

Model Performance

  • Model: EXAONE-4.0-1.2B
  • Chipset: Snapdragon 8 Elite Mobile
  • Target Runtime: QNN (2.42) - GENIE
  • Precision: W4A16
  • Primary Compute Unit: NPU
  • Context Length: 4096
  • Response Rate: 55.6 TPS
  • Time to First Token: 0.04 - 0.9 sec

Model Conversion & Inference

If you need faster inference or conversion support for a wider variety of models, feel free to reach out anytime. When running inference with the uploaded model, we recommend using non-reasoning mode; please use the template below.

[|system|]
{SYSTEM_PROMPT}[|endofturn|]
[|user|]
{USER_PROMPT}[|endofturn|]
[|assistant|]
<think>

</think>
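As a minimal sketch, the template above can be assembled programmatically. The helper below is hypothetical (not part of any SDK); the empty `<think>` block pre-fills the assistant turn so the model responds in non-reasoning mode.

```python
def build_prompt(system_prompt, user_prompt):
    """Assemble a non-reasoning EXAONE 4.0 chat prompt per the template above.

    The pre-filled empty <think></think> block signals the model to skip
    reasoning output and answer directly.
    """
    return (
        f"[|system|]\n{system_prompt}[|endofturn|]\n"
        f"[|user|]\n{user_prompt}[|endofturn|]\n"
        "[|assistant|]\n<think>\n\n</think>\n\n"
    )

prompt = build_prompt("You are a helpful assistant.", "What is EXAONE?")
```

The resulting string is what you feed to the runtime as the raw prompt; the model's generation then continues directly after the closed `</think>` block.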

Repository Structure

EXAONE-4.0-1.2B-OptAI/
β”œβ”€β”€ LICENSE
β”œβ”€β”€ README.md
β”œβ”€β”€ .gitattributes
└── EXAONE4.0-1.2B-genie-w4a16-qualcomm_snapdragon_8_elite_OptAI.zip

Internal structure

EXAONE4.0-1.2B-genie-w4a16-qualcomm_snapdragon_8_elite_OptAI/
β”œβ”€β”€ config.json
β”œβ”€β”€ exaone4_part_1_of_5.bin
β”œβ”€β”€ ...
β”œβ”€β”€ exaone4_part_5_of_5.bin
β”œβ”€β”€ genie_config.json
β”œβ”€β”€ tokenizer.json
β”œβ”€β”€ tokenizer_config.json
└── ...
