Gemma-SEA-LION-v4-4B-VL-GGUF
[Last update: 2026-02-05]
SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
Gemma-SEA-LION-v4-4B-VL is a 4-billion parameter Vision-Language Model (VLM) built upon the gemma-3-4b-it architecture. To ensure domain adaptation for the region, the model underwent rigorous post-training on a curated dataset of approximately 6.7 million instruction-text pairs.
This extensive post-training instills multilingual and multicultural fluency, covering key SEA languages such as Indonesian, Vietnamese, Thai, Filipino, Tamil, Burmese, and Malay. The curated dataset also includes a filtered, open-source set of tool-calling instruction-text pairs to impart tool-calling capabilities alongside linguistic fluency.
Gemma-SEA-LION-v4-4B-VL inherits the image and text capabilities from gemma-3-4b-it alongside its large context length of 128K tokens. Additionally, beyond extending the multilingual capabilities of the original gemma model for SEA languages, we experimented with:
- Adding function calling, so that the model can be used in tool-calling applications (see the sketch after this list).
- Improving visual parsing capabilities in Thai, Chinese, and English.
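As a brief illustration of the tool-calling capability, the sketch below sends an OpenAI-style `tools` request to a local llama.cpp server (started as shown in the How to Get Started section, with `--jinja` so the chat template can emit tool calls). The `get_weather` function and its schema are hypothetical, and tool-call support depends on the llama.cpp version:

```bash
# Hypothetical tool-calling request against a local llama.cpp server.
# Assumes the server was started with:
#   ./llama-server -hf aisingapore/Gemma-SEA-LION-v4-4B-VL-GGUF --jinja
# The get_weather tool below is illustrative, not part of the model.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "What is the weather in Jakarta right now?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```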
SEA-LION stands for Southeast Asian Languages In One Network.
We performed post-training in English and SEA languages on gemma-3-4b-it, a decoder model built on the gemma-3 architecture, to create Gemma-SEA-LION-v4-4B-VL.
For tokenization, the model employs the default tokenizer used in gemma-3-4b-it.
- Developed by: AI Products Pillar, AI Singapore
- Funded by: Singapore NRF
- Shared by: AI Products Pillar, AI Singapore
- Model type: Decoder
- Context length: 128k
- Language(s): Indonesian, Vietnamese, Thai, Filipino, Tamil, Burmese, Malay
- License: Gemma
- Finetuned from model: gemma-3-4b-it
Model Sources
Collection: 🤗 aisingapore/sea-lion-v4. The collection includes the quantized models listed below:
- Image-Text-to-Text: aisingapore/Gemma-SEA-LION-v4-4B-VL-GGUF
- Image-Text-to-Text: aisingapore/Gemma-SEA-LION-v4-27B-IT-GGUF
- Text Generation: aisingapore/Gemma-SEA-LION-v4-27B-IT-FP8-Dynamic
- Text Generation: aisingapore/Gemma-SEA-LION-v4-27B-IT-NVFP4
- Text Generation: aisingapore/Apertus-SEA-LION-v4-8B-IT-GGUF
- Text Generation: aisingapore/Qwen-SEA-LION-v4-32B-IT-8BIT
- Text Generation: aisingapore/Qwen-SEA-LION-v4-32B-IT-4BIT
Repository: 🤗 aisingapore/Gemma-SEA-LION-v4-4B-VL-GGUF. This repo contains GGUF-format model files for aisingapore/Gemma-SEA-LION-v4-4B-VL.
Model Weights included in this repository:
- Gemma-SEA-LION-v4-4B-VL-Q4_K_M
- Gemma-SEA-LION-v4-4B-VL-Q6_K
- Gemma-SEA-LION-v4-4B-VL-Q8_0
- Gemma-SEA-LION-v4-4B-VL-F16
- Gemma-SEA-LION-v4-4B-VL-mmproj
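To pick one of the quantizations above, llama.cpp's `-hf` flag accepts the quant tag after a colon, and the files can also be fetched directly with `huggingface-cli`; a minimal sketch (local paths are illustrative):

```bash
# Run a specific quant straight from the Hub (the :Q4_K_M tag selects the file)
./llama-cli -hf aisingapore/Gemma-SEA-LION-v4-4B-VL-GGUF:Q4_K_M -p "Halo!"

# Or download the files locally first
huggingface-cli download aisingapore/Gemma-SEA-LION-v4-4B-VL-GGUF \
  --include "*Q4_K_M*" --local-dir ./models
```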
Take note that some GGUFs may be split into parts. Most tools, such as llama.cpp and those built on it, support split GGUFs; pointing the tool at the first split is sufficient for it to work. If a merge is necessary, it can be done using llama.cpp's gguf-split:

```bash
./gguf-split --merge ./path/to/first-split ./path/to/output-gguf
```

More details: gguf-split guide & README
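As a concrete (hypothetical) example, if the F16 weights arrived in two parts, the merge would look like this; note that in recent llama.cpp builds the binary is named `llama-gguf-split`:

```bash
# Hypothetical split filenames; substitute the actual part names in the repo
./llama-gguf-split --merge \
  ./Gemma-SEA-LION-v4-4B-VL-F16-00001-of-00002.gguf \
  ./Gemma-SEA-LION-v4-4B-VL-F16.gguf
```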
Uses
How to Get Started with the Model
Use the code below to get started with the model using llama.cpp.
llama.cpp (text-only)
```bash
./llama-cli -hf aisingapore/Gemma-SEA-LION-v4-4B-VL-GGUF -p "Hello, tell me what you are capable of."
```
llama.cpp (image input)
```bash
wget https://github.com/bebechien/gemma/blob/main/surprise.png?raw=true -O ~/Downloads/surprise.png
./llama-mtmd-cli -hf aisingapore/Gemma-SEA-LION-v4-4B-VL-GGUF -p "What is in the image?" --image ~/Downloads/surprise.png
```
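Beyond one-shot CLI runs, the same GGUF can be served behind llama.cpp's OpenAI-compatible llama-server; a minimal sketch (8080 is the default port, and the prompt is illustrative):

```bash
# Start an OpenAI-compatible server (fetches the GGUF from the Hub on first run)
./llama-server -hf aisingapore/Gemma-SEA-LION-v4-4B-VL-GGUF

# Query it from another shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Terjemahkan ke bahasa Inggris: Selamat pagi, apa kabar?"}],
    "temperature": 0.7
  }'
```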
Training Details
Training Data
The dataset comprises Burmese, English, Indonesian, Khmer, Lao, Malay, Mandarin, Tagalog, Tamil, Thai, and Vietnamese, collected from a mixture of sources including web data, code, open-source datasets, and synthetically generated datasets, amounting to 500 billion tokens sampled from our pool of 1 trillion tokens.
Out-of-Scope Use
The model has not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
Bias, Risks, and Limitations
The model has not been tested for robustness against adversarial prompting, and users should be aware that it exhibits certain limitations. Like many LLMs, it can hallucinate and occasionally generate irrelevant content, introducing fictional elements not grounded in the provided context. Users should also exercise caution when interpreting and validating the model's responses due to potential inconsistencies.
More Information
This is the repository for the commercial instruction-tuned model. For more information, please contact us at sealion@aisingapore.org.
Team
Ahmed Dabeer, Ahn Jeongmi, Antonyrex Sajeban, Chan Hok Teng Adwin, Cheng Zi Yi Nicholas, Choa Hsueh Mei Esther, Heng Jonathan, Huang Yuli, Jann Railey Estrada Montalan, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Liew Rachel, Limkonchotiwat Peerat, Muhammad Ridzuan Bin Mokhtar, Nagarajan Karthik, Ng Boon Cheong Raymond, Ngee Chia Tai, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Tat-Wee David, Ong Zhi Hao, Pereira Mark, Poon Joseph, Rengarajan Hamsawardhini, Siow Wei Kang Bryan, Susanto Yosephine, Sutaveephamochanon Anocha, Tan Choon Meng, Tan Chor Phin Evelyn, Tan Siao Wei Jessica, Tan Yixian, Tee Jun Yun, Teng Kok Wai Walter, Teo Eng Sipp Leslie, Tjhi William, Wu Donghang, Yeo Yeow Tong, Yong Xianbin, Zhang Zhou, Elliott Chris (Google), Mohseni Mohammadreza (Google), Sharan Mayank (Google), Wei Fanny (Google), Tang Jiuqiang (Google), Xu Xiang (Google), Yu Ting (Google), Loh Michelle (Google), Mangal Saurabh (Google), Mukherjee Pratyusha (Google), Sim Stephanie (Google)
Acknowledgement
This project is supported by the National Research Foundation Singapore and Infocomm Media Development Authority (IMDA), Singapore under its National Large Language Model Funding Initiative.