Apertus-SEA-LION-v4-8B-IT-GGUF
[Last update: 2026-02-05]
SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
Apertus-SEA-LION-v4-8B-IT is an 8-billion-parameter model built upon the Apertus-8B-Instruct architecture. To adapt the model to the region, it underwent rigorous post-training on a curated dataset of approximately 6.4 million instruction-text pairs.
This extensive post-training instills multilingual and multicultural fluency across key SEA languages such as Indonesian, Vietnamese, Thai, Filipino, Tamil, Burmese, and Malay. The curated dataset also includes a filtered, open-source set of tool-calling instruction-text pairs to impart tool-calling capabilities alongside linguistic fluency.
Apertus-SEA-LION-v4-8B-IT is designed as a fully open model. In line with this core philosophy, we have released the datasets used for post-training, as well as the evaluation code and datasets used to evaluate the model.
These resources can be accessed via the link below.
SEA-LION stands for Southeast Asian Languages In One Network.
We performed post-training in English and SEA languages on Apertus-8B-Instruct-2509, a decoder model using the Apertus architecture, to create Apertus-SEA-LION-v4-8B-IT.
For tokenization, the model employs the default tokenizer used in Apertus-8B-Instruct-2509.
- Developed by: AI Products Pillar, AI Singapore
- Funded by: Singapore NRF
- Shared by: AI Products Pillar, AI Singapore
- Model type: Decoder
- Context length: 65k tokens
- Language(s): Burmese, Filipino, Indonesian, Malay, Vietnamese, Thai, Tamil
- License: Apache-2.0
- Finetuned from model: Apertus-8B-Instruct
Model Sources
Collection: 🤗aisingapore/sea-lion-v4
The collection includes the following compact (quantized) models:
- Text Generation: aisingapore/Apertus-SEA-LION-v4-8B-IT-GGUF
- Image-Text-to-Text: aisingapore/Gemma-SEA-LION-v4-27B-IT-GGUF
- Text Generation: aisingapore/Gemma-SEA-LION-v4-27B-IT-FP8-Dynamic
- Text Generation: aisingapore/Gemma-SEA-LION-v4-27B-IT-NVFP4
- Text Generation: aisingapore/Qwen-SEA-LION-v4-32B-IT-8BIT
- Text Generation: aisingapore/Qwen-SEA-LION-v4-32B-IT-4BIT
- Image-Text-to-Text: aisingapore/Gemma-SEA-LION-v4-4B-VL-GGUF
Repository: 🤗aisingapore/Apertus-SEA-LION-v4-8B-IT-GGUF
Model Weights included in this repository:
- Apertus-SEA-LION-v4-8B-IT-F16
- Apertus-SEA-LION-v4-8B-IT-Q4_K_M
- Apertus-SEA-LION-v4-8B-IT-Q6_K
- Apertus-SEA-LION-v4-8B-IT-Q8_0
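If you only need one quantization rather than the entire repository, you can filter the download by filename. A minimal sketch using huggingface-cli (the Q4_K_M pattern and local directory below are illustrative; adjust to the quant you need):

```
# Download only the Q4_K_M weight files from this repository.
# The --include pattern is illustrative; match the quant you need.
huggingface-cli download aisingapore/Apertus-SEA-LION-v4-8B-IT-GGUF \
  --include "*Q4_K_M*" \
  --local-dir ./Apertus-SEA-LION-v4-8B-IT-GGUF
```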
Note that some GGUFs may be split into multiple parts. Most tools, such as llama.cpp and those built on it, support split GGUFs; pointing the tool at the first split is sufficient for the model to load. If a merge is necessary, it can be done using llama.cpp's gguf-split:
./gguf-split --merge ./path/to/first-split ./path/to/output-gguf
More details: gguf-split guide & README
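For instance, assuming the F16 weights arrive in two parts named with the usual split convention (the filenames below are illustrative), the merge would look like this:

```
# Merge a two-part split GGUF back into a single file.
# The input is the FIRST split; filenames here are illustrative.
./gguf-split --merge \
  ./Apertus-SEA-LION-v4-8B-IT-F16-00001-of-00002.gguf \
  ./Apertus-SEA-LION-v4-8B-IT-F16-merged.gguf
```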
Uses
How to Get Started with the Model
Use the code below to get started with the model using llama.cpp.
llama.cpp (text-only)
./llama-cli -hf aisingapore/Apertus-SEA-LION-v4-8B-IT-GGUF --jinja -p "Hello, please introduce yourself."
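For interactive or programmatic use, you can also serve the model locally with llama.cpp's OpenAI-compatible HTTP server and query it with any HTTP client. A minimal sketch, assuming a llama.cpp build that includes llama-server; the port and the Q4_K_M quant selection are illustrative:

```
# Serve the model via llama.cpp's OpenAI-compatible HTTP server.
# The :Q4_K_M suffix picks one quant from the repo (illustrative choice).
./llama-server -hf aisingapore/Apertus-SEA-LION-v4-8B-IT-GGUF:Q4_K_M --jinja --port 8080

# In another terminal: query the chat completions endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello, please introduce yourself."}]}'
```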
Training Details
Training Data
The dataset comprises data in Burmese, English, Filipino, Indonesian, Malay, Vietnamese, Thai, and Tamil, collected from a mixture of sources including web data, code, open-source datasets, and synthetically generated datasets. It amounts to a total of 500 billion tokens sampled from our bucket of 1 trillion tokens.
Out-of-Scope Use
The model has not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and code.
Bias, Risks, and Limitations
The model has not been tested for robustness against adversarial prompting. Like many LLMs, it can hallucinate and occasionally generate irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should exercise caution in interpreting and validating the model's responses due to potential inconsistencies.
More Information
This is the repository for the commercial instruction-tuned model.
For more info, please contact us at sealion@aisingapore.org
Team
Ahmed Dabeer, Ahn Jeongmi, Antonyrex Sajeban, Chan Hok Teng Adwin, Cheng Zi Yi Nicholas, Choa Hsueh Mei Esther, Heng Jonathan, Huang Yuli, Jann Railey Estrada Montalan, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Liew Rachel, Limkonchotiwat Peerat, Muhammad Ridzuan Bin Mokhtar, Nagarajan Karthik, Ng Boon Cheong Raymond, Ngee Chia Tai, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Tat-Wee David, Ong Zhi Hao, Pereira Mark, Poon Joseph, Rengarajan Hamsawardhini, Siow Wei Kang Bryan, Susanto Yosephine, Sutaveephamochanon Anocha, Tan Choon Meng, Tan Chor Phin Evelyn, Tan Siao Wei Jessica, Tan Yixian, Tee Jun Yun, Teng Kok Wai Walter, Teo Eng Sipp Leslie, Tjhi William, Wu Donghang, Yeo Yeow Tong, Yong Xianbin, Zhang Zhou, Imanol Schlag (Swiss AI), Antoine Bosselut (Swiss AI) and Martin Jaggi (Swiss AI)
Acknowledgement
This project is supported by the National Research Foundation Singapore and Infocomm Media Development Authority (IMDA), Singapore under its National Large Language Model Funding Initiative.
Contact
sealion@aisingapore.org
Base model: swiss-ai/Apertus-8B-2509