SARC-Taigi-LLM-27b-GGUF (Taiwanese LLM)
This repository contains GGUF format model files for SARC-Taigi-LLM-27b. This model is a specialized version of google/gemma-3-27b-it, fine-tuned by the Speech AI Research Center (SARC) using IMA's 'Taiwan Tongues' Taigi Datasets and QLoRA.
The GGUF versions are optimized for inference on consumer-grade hardware (CPUs or GPUs with limited VRAM) via llama.cpp, Ollama, or other compatible backends.
1. Main Capabilities
- Taigi Dialogue and Consultation: Understands and responds to daily and professional inquiries in Taigi, using both Taiwanese Chinese characters (Tâi-bûn Hàn-jī) and Romanization (Tâi-lô).
- Linguistic Knowledge Retrieval: Supports queries regarding the meaning, usage, and cultural context of Taigi vocabulary.
- Logical Reasoning: Capable of complex reasoning and problem-solving within a Taiwanese linguistic and cultural framework.
2. Quantization Versions (GGUF)
For a 27B model, we recommend Q4_K_M for the best balance between speed and linguistic precision.
| File Name | Method | RAM/VRAM Required | Description |
|---|---|---|---|
| SARC-Taigi-LLM-27b-Q6_K.gguf | Q6_K | ~25-27 GB | High precision for specialized linguistic research. |
| SARC-Taigi-LLM-27b-Q4_K_M.gguf | Q4_K_M | ~19-21 GB | Highly Recommended. Optimal quality/performance ratio. |
| SARC-Taigi-LLM-27b-Q3_K_L.gguf | Q3_K_L | ~15-17 GB | Lightweight version for memory-constrained devices. |
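The RAM/VRAM figures above can be sanity-checked with a rule of thumb: weight memory ≈ parameter count × bits-per-weight / 8, plus overhead for the KV cache and compute buffers. A minimal sketch; the bits-per-weight and overhead values below are rough approximations, not exact GGUF figures:

```python
# Rough memory estimate for a quantized GGUF model.
# Bits-per-weight values are approximations for k-quant formats (assumption),
# and overhead_gb stands in for KV cache + compute buffers.
BITS_PER_WEIGHT = {"Q6_K": 6.6, "Q4_K_M": 4.8, "Q3_K_L": 3.9}

def estimate_gb(n_params: float, quant: str, overhead_gb: float = 3.0) -> float:
    """Approximate total memory in GiB: quantized weights plus runtime overhead."""
    weights_gb = n_params * BITS_PER_WEIGHT[quant] / 8 / 2**30
    return weights_gb + overhead_gb

for quant in BITS_PER_WEIGHT:
    print(f"{quant}: ~{estimate_gb(27e9, quant):.0f} GB")
```

The estimates land near the lower end of the table's ranges; real usage grows with context length, since the KV cache scales with the number of tokens kept in context.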
3. Demonstration
QA Examples
4. Training Pipeline
The model underwent a two-stage training process designed to build a robust linguistic foundation, followed by instruction alignment:
- Phase 1: Continual Pre-Training (CPT)
  - Ministry of Education Dictionary of Frequently-Used Taiwanese Taigi.
  - Taigi Literature Collection (taigi-literature): A diverse corpus of classical and modern Taigi literary works.
- Phase 2: Supervised Fine-Tuning (SFT)
  - Taigi-version Alpaca Dataset: Instruction-following data optimized for Taigi dialogue.
  - Grand Challenge Training Set: Multiple-choice questions from the training text of the 1st "Grand Challenge" (科技大擂台) competition.
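Alpaca-style records are typically converted into the base model's chat template before SFT. A minimal sketch, assuming the standard Alpaca field names (`instruction`, `input`, `output`) and Gemma 3's turn markers; the exact preprocessing used for this model is not specified here:

```python
# Convert an Alpaca-style record into a Gemma-style training string.
# Field names and the turn-marker template follow common conventions;
# SARC's actual preprocessing pipeline may differ (assumption).
def alpaca_to_gemma(example: dict) -> str:
    user = example["instruction"]
    if example.get("input"):
        user += "\n" + example["input"]
    return (
        f"<start_of_turn>user\n{user}<end_of_turn>\n"
        f"<start_of_turn>model\n{example['output']}<end_of_turn>\n"
    )

sample = {"instruction": "用台語共我紹介一下你自己。", "input": "", "output": "你好!"}
print(alpaca_to_gemma(sample))
```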
5. Evaluation on the "2020 Grand Challenge, Talk to AI" Final-Test Dataset
We evaluated the models on the "2020 Grand Challenge, Talk to AI" (科技大擂台,與AI對話) Final-Test Dataset, which consists of 1,000 multiple-choice reading comprehension questions and serves as a benchmark for Taigi language understanding:
- Question Example
- Experimental Results
| Stage | Gemma-3-12b-it | Gemma-3-27b-it | Note |
|---|---|---|---|
| Original | 0.80320 | 0.86214 | Baseline performance |
| After CPT | 0.88312 | 0.92296 | Knowledge internalization |
| After SFT | 0.89610 | 0.92582 | Instruction alignment |
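Accuracy on a multiple-choice set like this reduces to exact-match scoring of the predicted option letter against the gold answer. A minimal sketch (the list-of-letters representation is illustrative, not the benchmark's actual file format):

```python
# Exact-match scoring for multiple-choice QA: accuracy = correct / total.
def accuracy(predictions: list[str], answers: list[str]) -> float:
    assert len(predictions) == len(answers), "one prediction per question"
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# e.g. 924 correct out of 1,000 questions:
print(accuracy(["A"] * 924 + ["B"] * 76, ["A"] * 1000))  # 0.924
```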
6. Model Usage
via Ollama
# Ensure Ollama is installed and your Ollama SSH public key (typically found at ~/.ollama/id_ed25519.pub) is registered in your Hugging Face account.
ollama pull huggingface.co/Speech-AI-Research-Center/SARC-Taigi-LLM-27b-GGUF:Q4_K_M
ollama run huggingface.co/Speech-AI-Research-Center/SARC-Taigi-LLM-27b-GGUF:Q4_K_M
# Verify the model is loaded
ollama ps
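Once the model is running, Ollama also exposes a local REST API (default port 11434), so the same model can be queried programmatically. A minimal standard-library sketch; the model tag matches the pull command above, and a running Ollama server is assumed:

```python
import json
import urllib.request

# Model tag as pulled from Hugging Face via Ollama (see commands above).
MODEL = "huggingface.co/Speech-AI-Research-Center/SARC-Taigi-LLM-27b-GGUF:Q4_K_M"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_request("用台語共我紹介一下你自己。")
# To actually send it (requires a running server):
#   resp = json.load(urllib.request.urlopen(req))
#   print(resp["response"])
print(req.full_url)
```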
via llama.cpp
# Ensure you are in the llama.cpp directory
cd /path/to/llama.cpp
# Run the Taigi model with optional optimization flags
./build/bin/llama-cli \
-hf Speech-AI-Research-Center/SARC-Taigi-LLM-27b-GGUF:Q4_K_M \
-p "<start_of_turn>user\n用台語共我紹介一下你自己。<end_of_turn>\n<start_of_turn>model\n" \
-n 512 \
-ngl 99 \
--temp 0.7
Optional Arguments:
- `-n, --n-predict` (Default: 128): Specifies the maximum number of tokens to generate. For a 27B model, we recommend 512 or higher to ensure Taigi responses are not cut off mid-sentence.
- `-ngl, --n-gpu-layers` (Default: 0): Crucial for performance; determines how many model layers are offloaded to the GPU.
  - Setting it to 99 (or any number higher than the actual layer count) forces the entire model into VRAM for maximum speed.
  - Note: If your VRAM is insufficient (e.g., less than 20 GB for Q4_K_M), decrease this number to perform partial offloading to the CPU. If omitted, the model runs entirely on the CPU, which is significantly slower.
- `--temp` (Default: 0.8): Adjusts the randomness of the output. 0.7 provides a good balance between creativity and coherence for Taigi dialogue; use a lower value (e.g., 0.2) for factual tasks.
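When VRAM cannot hold the whole model, a reasonable starting value for `-ngl` is however many layers fit in the memory you can spare. A minimal sketch of that arithmetic; the default layer count below is an assumption, and llama.cpp prints the real layer count and sizes when it loads the model:

```python
# Estimate a starting -ngl value from available VRAM.
# n_layers=62 is an assumed layer count for a 27B model; check llama.cpp's
# startup log for the actual value. reserve_gb leaves room for the KV cache.
def suggest_ngl(model_size_gb: float, vram_gb: float, n_layers: int = 62,
                reserve_gb: float = 2.0) -> int:
    per_layer_gb = model_size_gb / n_layers
    usable = max(vram_gb - reserve_gb, 0.0)
    return min(n_layers, int(usable / per_layer_gb))

# e.g. a ~16 GB Q4_K_M file on a 12 GB GPU:
print(suggest_ngl(16.0, 12.0))  # 38
```

With enough VRAM the function saturates at the full layer count, matching the advice above to just pass a large value like 99.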
7. Roadmap: Beyond SFT
While the current release is the result of CPT and SFT, this is only the beginning. Our multi-stage alignment strategy includes:
- Phase I (CPT): Building linguistic foundation (Completed).
- Phase II (SFT): Instruction and dialogue alignment (Current Release).
- Phase III (GRPO): Future reinforcement learning using Group Relative Policy Optimization (GRPO) to further enhance self-correction and complex reasoning chains.
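In GRPO, several responses are sampled per prompt and each response's advantage is its reward normalized against that group, with no learned value function. A minimal sketch of the group-relative advantage computation at the core of the method:

```python
import statistics

# Group-relative advantages as in GRPO: normalize each sampled response's
# reward by the mean and (population) standard deviation of its group.
def group_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Four sampled answers to one prompt, scored by a reward model:
print(group_advantages([1.0, 0.0, 0.0, 1.0]))  # approximately [1, -1, -1, 1]
```

These advantages then weight the policy-gradient update for each response, so above-average answers within a group are reinforced and below-average ones suppressed.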
8. Training Resources
Learn how to perform this multi-stage fine-tuning (CPT + SFT), with our custom callbacks for loss minimization and gap stability, on GitHub:
[GitHub: SARC-Taigi-LLM Training Pipeline]
Citation
If you find this project useful, please cite IMA's Taiwan Tongues resource page and the Speech AI Research Center organization pages on Hugging Face and GitHub.
@misc{ima_taiwan_2026,
title = {IMA-Taiwan},
author = {Information Management Association of R.O.C. (IMA)},
year = {2026},
howpublished = {https://huggingface.co/IMA-Taiwan},
note = {Hugging Face organization page for Taiwan Tongues resources}
}
@misc{sarc_hf_2026,
title = {Speech-AI-Research-Center},
author = {Speech AI Research Center (SARC)},
year = {2026},
howpublished = {https://huggingface.co/Speech-AI-Research-Center},
note = {Hugging Face organization page for released Taigi model adapters}
}
@misc{sarctaigillm_repo_2026,
title = {Speech-AI-Research-Center},
author = {Speech AI Research Center (SARC)},
year = {2026},
howpublished = {https://github.com/Speech-AI-Research-Center},
note = {GitHub organization page for released Taigi-LLM training project}
}
License
This model is subject to the Gemma Terms of Use. By using this model, you agree to comply with Google’s licensing requirements.