+
+
+👋 Join our [WeChat group](assets/wechat.jpg), [NPU user group](assets/wechat_npu.jpg) or [Alaya NeW user group](assets/wechat_alaya.png).
+
+\[ English | [中文](README_zh.md) \]
+
+**Fine-tuning a large language model can be as easy as...**
+
+https://github.com/user-attachments/assets/3991a3a8-4276-4d30-9cab-4cb0c4b9b99e
+
+Choose your path:
+
+- **Documentation (WIP)**: https://llamafactory.readthedocs.io/en/latest/
+- **Documentation (AMD GPU)**: https://rocm.docs.amd.com/projects/ai-developer-hub/en/latest/notebooks/fine_tune/llama_factory_llama3.html
+- **Colab (free)**: https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing
+- **Local machine**: Please refer to [usage](#getting-started)
+- **PAI-DSW (free trial)**: https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory
+- **Alaya NeW (cloud GPU deal)**: https://docs.alayanew.com/docs/documents/useGuide/LLaMAFactory/mutiple/?utm_source=LLaMA-Factory
+
+> [!NOTE]
+> Except for the links above, all other websites are unauthorized third-party sites. Please use them with caution.
+
+## Table of Contents
+
+- [Features](#features)
+- [Blogs](#blogs)
+- [Changelog](#changelog)
+- [Supported Models](#supported-models)
+- [Supported Training Approaches](#supported-training-approaches)
+- [Provided Datasets](#provided-datasets)
+- [Requirement](#requirement)
+- [Getting Started](#getting-started)
+ - [Installation](#installation)
+ - [Data Preparation](#data-preparation)
+ - [Quickstart](#quickstart)
+ - [Fine-Tuning with LLaMA Board GUI](#fine-tuning-with-llama-board-gui-powered-by-gradio)
+ - [Build Docker](#build-docker)
+ - [Deploy with OpenAI-style API and vLLM](#deploy-with-openai-style-api-and-vllm)
+ - [Download from ModelScope Hub](#download-from-modelscope-hub)
+ - [Download from Modelers Hub](#download-from-modelers-hub)
+ - [Use W&B Logger](#use-wb-logger)
+ - [Use SwanLab Logger](#use-swanlab-logger)
+- [Projects using LLaMA Factory](#projects-using-llama-factory)
+- [License](#license)
+- [Citation](#citation)
+- [Acknowledgement](#acknowledgement)
+
+## Features
+
+- **Various models**: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Qwen2-VL, DeepSeek, Yi, Gemma, ChatGLM, Phi, etc.
+- **Integrated methods**: (Continuous) pre-training, (multimodal) supervised fine-tuning, reward modeling, PPO, DPO, KTO, ORPO, etc.
+- **Scalable resources**: 16-bit full-tuning, freeze-tuning, LoRA and 2/3/4/5/6/8-bit QLoRA via AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ.
+- **Advanced algorithms**: [GaLore](https://github.com/jiaweizzhao/GaLore), [BAdam](https://github.com/Ledzy/BAdam), [APOLLO](https://github.com/zhuhanqing/APOLLO), [Adam-mini](https://github.com/zyushun/Adam-mini), [Muon](https://github.com/KellerJordan/Muon), DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ and PiSSA.
+- **Practical tricks**: [FlashAttention-2](https://github.com/Dao-AILab/flash-attention), [Unsloth](https://github.com/unslothai/unsloth), [Liger Kernel](https://github.com/linkedin/Liger-Kernel), RoPE scaling, NEFTune and rsLoRA.
+- **Wide tasks**: Multi-turn dialogue, tool use, image understanding, visual grounding, video recognition, audio understanding, etc.
+- **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, [SwanLab](https://github.com/SwanHubX/SwanLab), etc.
+- **Faster inference**: OpenAI-style API, Gradio UI and CLI with [vLLM worker](https://github.com/vllm-project/vllm) or [SGLang worker](https://github.com/sgl-project/sglang).
+
+### Day-N Support for Fine-Tuning Cutting-Edge Models
+
+| Support Date | Model Name |
+| ------------ | -------------------------------------------------------------------- |
+| Day 0 | Qwen3 / Qwen2.5-VL / Gemma 3 / GLM-4.1V / InternLM 3 / MiniCPM-o-2.6 |
+| Day 1 | Llama 3 / GLM-4 / Mistral Small / PaliGemma2 / Llama 4 |
+
+## Blogs
+
+- [Fine-tune GPT-OSS for Role-Playing using LLaMA-Factory](https://docs.llamafactory.com.cn/docs/documents/best-practice/gptoss/?utm_source=LLaMA-Factory) (Chinese)
+- [Fine-tune Llama3.1-70B for Medical Diagnosis using LLaMA-Factory](https://docs.alayanew.com/docs/documents/bestPractice/bigModel/llama70B/?utm_source=LLaMA-Factory) (Chinese)
+- [A One-Stop Code-Free Model Reinforcement Learning and Deployment Platform based on LLaMA-Factory and EasyR1](https://aws.amazon.com/cn/blogs/china/building-llm-model-hub-based-on-llamafactory-and-easyr1/) (Chinese)
+- [How Apoidea Group enhances visual information extraction from banking documents with multimodal models using LLaMA-Factory on Amazon SageMaker HyperPod](https://aws.amazon.com/cn/blogs/machine-learning/how-apoidea-group-enhances-visual-information-extraction-from-banking-documents-with-multimodal-models-using-llama-factory-on-amazon-sagemaker-hyperpod/) (English)
+- [Easy Dataset × LLaMA Factory: Enabling LLMs to Efficiently Learn Domain Knowledge](https://buaa-act.feishu.cn/wiki/GVzlwYcRFiR8OLkHbL6cQpYin7g) (English)
+
+
+**All Blogs**
+
+- [Fine-tune Qwen2.5-VL for Autonomous Driving using LLaMA-Factory](https://docs.alayanew.com/docs/documents/useGuide/LLaMAFactory/mutiple/?utm_source=LLaMA-Factory) (Chinese)
+- [LLaMA Factory: Fine-tuning the DeepSeek-R1-Distill-Qwen-7B Model for News Classifier](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_deepseek_r1_distill_7b) (Chinese)
+- [A One-Stop Code-Free Model Fine-Tuning \& Deployment Platform based on SageMaker and LLaMA-Factory](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/) (Chinese)
+- [LLaMA Factory Multi-Modal Fine-Tuning Practice: Fine-Tuning Qwen2-VL for Personal Tourist Guide](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl) (Chinese)
+- [LLaMA Factory: Fine-tuning Llama3 for Role-Playing](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) (Chinese)
+
+
+
+## Changelog
+
+[25/08/06] We supported fine-tuning the **[GPT-OSS](https://github.com/openai/gpt-oss)** models. See [PR #8826](https://github.com/hiyouga/LLaMA-Factory/pull/8826) to get started.
+
+[25/07/02] We supported fine-tuning the **[GLM-4.1V-9B-Thinking](https://github.com/THUDM/GLM-4.1V-Thinking)** model.
+
+[25/04/28] We supported fine-tuning the **[Qwen3](https://qwenlm.github.io/blog/qwen3/)** model family.
+
+
+**Full Changelog**
+
+[25/04/21] We supported the **[Muon](https://github.com/KellerJordan/Muon)** optimizer. See [examples](examples/README.md) for usage. Thanks to [@tianshijing](https://github.com/tianshijing)'s PR.
+
+[25/04/16] We supported fine-tuning the **[InternVL3](https://huggingface.co/OpenGVLab/InternVL3-8B)** model. See [PR #7258](https://github.com/hiyouga/LLaMA-Factory/pull/7258) to get started.
+
+[25/04/14] We supported fine-tuning the **[GLM-Z1](https://huggingface.co/THUDM/GLM-Z1-9B-0414)** and **[Kimi-VL](https://huggingface.co/moonshotai/Kimi-VL-A3B-Instruct)** models.
+
+[25/04/06] We supported fine-tuning the **[Llama 4](https://ai.meta.com/blog/llama-4-multimodal-intelligence/)** model. See [PR #7611](https://github.com/hiyouga/LLaMA-Factory/pull/7611) to get started.
+
+[25/03/31] We supported fine-tuning the **[Qwen2.5 Omni](https://qwenlm.github.io/blog/qwen2.5-omni/)** model. See [PR #7537](https://github.com/hiyouga/LLaMA-Factory/pull/7537) to get started.
+
+[25/03/15] We supported **[SGLang](https://github.com/sgl-project/sglang)** as an inference backend. Try `infer_backend: sglang` to accelerate inference.
+
+[25/03/12] We supported fine-tuning the **[Gemma 3](https://huggingface.co/blog/gemma3)** model.
+
+[25/02/24] Announcing **[EasyR1](https://github.com/hiyouga/EasyR1)**, an efficient, scalable and multi-modality RL training framework for GRPO training.
+
+[25/02/11] We supported saving the **[Ollama](https://github.com/ollama/ollama)** modelfile when exporting the model checkpoints. See [examples](examples/README.md) for usage.
+
+[25/02/05] We supported fine-tuning the **[Qwen2-Audio](https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct)** and **[MiniCPM-o-2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6)** models on audio understanding tasks.
+
+[25/01/31] We supported fine-tuning the **[DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1)** and **[Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)** models.
+
+[25/01/15] We supported the **[APOLLO](https://arxiv.org/abs/2412.05270)** optimizer. See [examples](examples/README.md) for usage.
+
+[25/01/14] We supported fine-tuning the **[MiniCPM-o-2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6)** and **[MiniCPM-V-2.6](https://huggingface.co/openbmb/MiniCPM-V-2_6)** models. Thanks to [@BUAADreamer](https://github.com/BUAADreamer)'s PR.
+
+[25/01/14] We supported fine-tuning the **[InternLM 3](https://huggingface.co/collections/internlm/)** models. Thanks to [@hhaAndroid](https://github.com/hhaAndroid)'s PR.
+
+[25/01/10] We supported fine-tuning the **[Phi-4](https://huggingface.co/microsoft/phi-4)** model.
+
+[24/12/21] We supported using **[SwanLab](https://github.com/SwanHubX/SwanLab)** for experiment tracking and visualization. See [this section](#use-swanlab-logger) for details.
+
+[24/11/27] We supported fine-tuning the **[Skywork-o1](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)** model and the **[OpenO1](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)** dataset.
+
+[24/10/09] We supported downloading pre-trained models and datasets from the **[Modelers Hub](https://modelers.cn/models)**. See [this tutorial](#download-from-modelers-hub) for usage.
+
+[24/09/19] We supported fine-tuning the **[Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/)** models.
+
+[24/08/30] We supported fine-tuning the **[Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/)** models. Thanks to [@simonJJJ](https://github.com/simonJJJ)'s PR.
+
+[24/08/27] We supported **[Liger Kernel](https://github.com/linkedin/Liger-Kernel)**. Try `enable_liger_kernel: true` for efficient training.
+
+[24/08/09] We supported the **[Adam-mini](https://github.com/zyushun/Adam-mini)** optimizer. See [examples](examples/README.md) for usage. Thanks to [@relic-yuexi](https://github.com/relic-yuexi)'s PR.
+
+[24/07/04] We supported [contamination-free packed training](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing). Use `neat_packing: true` to activate it. Thanks to [@chuan298](https://github.com/chuan298)'s PR.
+
+[24/06/16] We supported the **[PiSSA](https://arxiv.org/abs/2404.02948)** algorithm. See [examples](examples/README.md) for usage.
+
+[24/06/07] We supported fine-tuning the **[Qwen2](https://qwenlm.github.io/blog/qwen2/)** and **[GLM-4](https://github.com/THUDM/GLM-4)** models.
+
+[24/05/26] We supported the **[SimPO](https://arxiv.org/abs/2405.14734)** algorithm for preference learning. See [examples](examples/README.md) for usage.
+
+[24/05/20] We supported fine-tuning the **PaliGemma** series models. Note that the PaliGemma models are pre-trained models; you need to fine-tune them with the `paligemma` template for chat completion.
+
+[24/05/18] We supported the **[KTO](https://arxiv.org/abs/2402.01306)** algorithm for preference learning. See [examples](examples/README.md) for usage.
+
+[24/05/14] We supported training and inference on the Ascend NPU devices. Check [installation](#installation) section for details.
+
+[24/04/26] We supported fine-tuning the **LLaVA-1.5** multimodal LLMs. See [examples](examples/README.md) for usage.
+
+[24/04/22] We provided a **[Colab notebook](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)** for fine-tuning the Llama-3 model on a free T4 GPU. Two Llama-3-derived models fine-tuned with LLaMA Factory are available on Hugging Face; see [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) and [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese) for details.
+
+[24/04/21] We supported **[Mixture-of-Depths](https://arxiv.org/abs/2404.02258)** according to [AstraMindAI's implementation](https://github.com/astramind-ai/Mixture-of-depths). See [examples](examples/README.md) for usage.
+
+[24/04/16] We supported the **[BAdam](https://arxiv.org/abs/2404.02827)** optimizer. See [examples](examples/README.md) for usage.
+
+[24/04/16] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s long-sequence training (Llama-2-7B-56k within 24GB). It achieves **117%** of the speed and **50%** of the memory usage of FlashAttention-2; more benchmarks can be found on [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison).
+
+[24/03/31] We supported **[ORPO](https://arxiv.org/abs/2403.07691)**. See [examples](examples/README.md) for usage.
+
+[24/03/21] Our paper "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" is available at arXiv!
+
+[24/03/20] We supported **FSDP+QLoRA**, which fine-tunes a 70B model on 2x24GB GPUs. See [examples](examples/README.md) for usage.
+
+[24/03/13] We supported **[LoRA+](https://arxiv.org/abs/2402.12354)**. See [examples](examples/README.md) for usage.
+
+[24/03/07] We supported the **[GaLore](https://arxiv.org/abs/2403.03507)** optimizer. See [examples](examples/README.md) for usage.
+
+[24/03/07] We integrated **[vLLM](https://github.com/vllm-project/vllm)** for faster and concurrent inference. Try `infer_backend: vllm` to enjoy **270%** inference speed.
+
+[24/02/28] We supported weight-decomposed LoRA (**[DoRA](https://arxiv.org/abs/2402.09353)**). Try `use_dora: true` to activate DoRA training.
+
+[24/02/15] We supported **block expansion** proposed by [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro). See [examples](examples/README.md) for usage.
+
+[24/02/05] Qwen1.5 (Qwen2 beta version) series models are supported in LLaMA-Factory. Check this [blog post](https://qwenlm.github.io/blog/qwen1.5/) for details.
+
+[24/01/18] We supported **agent tuning** for most models, equipping models with tool-using abilities by fine-tuning with `dataset: glaive_toolcall_en`.
+
+[23/12/23] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s implementation to boost LoRA tuning for the LLaMA, Mistral and Yi models. Try the `use_unsloth: true` argument to activate the unsloth patch. It achieves **170%** speed in our benchmark; check [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison) for details.
+
+[23/12/12] We supported fine-tuning the latest MoE model **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)** in our framework. See hardware requirement [here](#hardware-requirement).
+
+[23/12/01] We supported downloading pre-trained models and datasets from the **[ModelScope Hub](https://modelscope.cn/models)**. See [this tutorial](#download-from-modelscope-hub) for usage.
+
+[23/10/21] We supported the **[NEFTune](https://arxiv.org/abs/2310.05914)** trick for fine-tuning. Try the `neftune_noise_alpha: 5` argument to activate NEFTune.
+
+[23/09/27] We supported **$S^2$-Attn** proposed by [LongLoRA](https://github.com/dvlab-research/LongLoRA) for the LLaMA models. Try the `shift_attn: true` argument to enable shift short attention.
+
+[23/09/23] We integrated the MMLU, C-Eval and CMMLU benchmarks in this repo. See [examples](examples/README.md) for usage.
+
+[23/09/10] We supported **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**. Try the `flash_attn: fa2` argument to enable FlashAttention-2 if you are using RTX 4090, A100 or H100 GPUs.
+
+[23/08/12] We supported **RoPE scaling** to extend the context length of the LLaMA models. Try the `rope_scaling: linear` argument in training and the `rope_scaling: dynamic` argument at inference to extrapolate the position embeddings.
+
+[23/08/11] We supported **[DPO training](https://arxiv.org/abs/2305.18290)** for instruction-tuned models. See [examples](examples/README.md) for usage.
+
+[23/07/31] We supported **dataset streaming**. Try the `streaming: true` and `max_steps: 10000` arguments to load your dataset in streaming mode.
+
+[23/07/29] We released two instruction-tuned 13B models at Hugging Face. See these Hugging Face Repos ([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/Baichuan-13B-sft)) for details.
+
+[23/07/18] We developed an **all-in-one Web UI** for training, evaluation and inference. Try `train_web.py` to fine-tune models in your Web browser. Thanks to [@KanadeSiina](https://github.com/KanadeSiina) and [@codemayq](https://github.com/codemayq) for their efforts in the development.
+
+[23/07/09] We released **[FastEdit](https://github.com/hiyouga/FastEdit)** ⚡🩹, an easy-to-use package for editing the factual knowledge of large language models efficiently. Please follow [FastEdit](https://github.com/hiyouga/FastEdit) if you are interested.
+
+[23/06/29] We provided a **reproducible example** of training a chat model on instruction-following datasets; see [Baichuan-7B-sft](https://huggingface.co/hiyouga/Baichuan-7B-sft) for details.
+
+[23/06/22] We aligned the [demo API](src/api_demo.py) with [OpenAI's](https://platform.openai.com/docs/api-reference/chat) format, so you can plug the fine-tuned model into **arbitrary ChatGPT-based applications**.
+
+[23/06/03] We supported quantized training and inference (aka **[QLoRA](https://github.com/artidoro/qlora)**). See [examples](examples/README.md) for usage.
+
+
+
+> [!TIP]
+> If you cannot use the latest feature, please pull the latest code and install LLaMA-Factory again.
+
+## Supported Models
+
+| Model | Model size | Template |
+| ----------------------------------------------------------------- | -------------------------------- | ------------------- |
+| [Baichuan 2](https://huggingface.co/baichuan-inc) | 7B/13B | baichuan2 |
+| [BLOOM/BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | - |
+| [ChatGLM3](https://huggingface.co/THUDM) | 6B | chatglm3 |
+| [Command R](https://huggingface.co/CohereForAI) | 35B/104B | cohere |
+| [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | deepseek |
+| [DeepSeek 2.5/3](https://huggingface.co/deepseek-ai) | 236B/671B | deepseek3 |
+| [DeepSeek R1 (Distill)](https://huggingface.co/deepseek-ai) | 1.5B/7B/8B/14B/32B/70B/671B | deepseekr1 |
+| [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | falcon |
+| [Falcon-H1](https://huggingface.co/tiiuae) | 0.5B/1.5B/3B/7B/34B | falcon_h1 |
+| [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google) | 2B/7B/9B/27B | gemma/gemma2 |
+| [Gemma 3/Gemma 3n](https://huggingface.co/google) | 1B/4B/6B/8B/12B/27B | gemma3/gemma3n |
+| [GLM-4/GLM-4-0414/GLM-Z1](https://huggingface.co/zai-org) | 9B/32B | glm4/glmz1 |
+| [GLM-4.1V](https://huggingface.co/zai-org)* | 9B | glm4v |
+| [GLM-4.5](https://huggingface.co/zai-org)* | 106B/355B | glm4_moe |
+| [GPT-2](https://huggingface.co/openai-community) | 0.1B/0.4B/0.8B/1.5B | - |
+| [GPT-OSS](https://huggingface.co/openai) | 20B/120B | gpt |
+| [Granite 3.0-3.3](https://huggingface.co/ibm-granite) | 1B/2B/3B/8B | granite3 |
+| [Granite 4](https://huggingface.co/ibm-granite) | 7B | granite4 |
+| [Hunyuan](https://huggingface.co/tencent/) | 7B | hunyuan |
+| [Index](https://huggingface.co/IndexTeam) | 1.9B | index |
+| [InternLM 2-3](https://huggingface.co/internlm) | 7B/8B/20B | intern2 |
+| [InternVL 2.5-3](https://huggingface.co/OpenGVLab) | 1B/2B/8B/14B/38B/78B | intern_vl |
+| [Kimi-VL](https://huggingface.co/moonshotai) | 16B | kimi_vl |
+| [Llama](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | - |
+| [Llama 2](https://huggingface.co/meta-llama) | 7B/13B/70B | llama2 |
+| [Llama 3-3.3](https://huggingface.co/meta-llama) | 1B/3B/8B/70B | llama3 |
+| [Llama 4](https://huggingface.co/meta-llama) | 109B/402B | llama4 |
+| [Llama 3.2 Vision](https://huggingface.co/meta-llama) | 11B/90B | mllama |
+| [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | llava |
+| [LLaVA-NeXT](https://huggingface.co/llava-hf) | 7B/8B/13B/34B/72B/110B | llava_next |
+| [LLaVA-NeXT-Video](https://huggingface.co/llava-hf) | 7B/34B | llava_next_video |
+| [MiMo](https://huggingface.co/XiaomiMiMo) | 7B | mimo |
+| [MiniCPM](https://huggingface.co/openbmb) | 0.5B/1B/2B/4B/8B | cpm/cpm3/cpm4 |
+| [MiniCPM-o-2.6/MiniCPM-V-2.6](https://huggingface.co/openbmb) | 8B | minicpm_o/minicpm_v |
+| [Ministral/Mistral-Nemo](https://huggingface.co/mistralai) | 8B/12B | ministral |
+| [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | mistral |
+| [Mistral Small](https://huggingface.co/mistralai) | 24B | mistral_small |
+| [OLMo](https://huggingface.co/allenai) | 1B/7B | - |
+| [PaliGemma/PaliGemma2](https://huggingface.co/google) | 3B/10B/28B | paligemma |
+| [Phi-1.5/Phi-2](https://huggingface.co/microsoft) | 1.3B/2.7B | - |
+| [Phi-3/Phi-3.5](https://huggingface.co/microsoft) | 4B/14B | phi |
+| [Phi-3-small](https://huggingface.co/microsoft) | 7B | phi_small |
+| [Phi-4](https://huggingface.co/microsoft) | 14B | phi4 |
+| [Pixtral](https://huggingface.co/mistralai) | 12B | pixtral |
+| [Qwen (1-2.5) (Code/Math/MoE/QwQ)](https://huggingface.co/Qwen) | 0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen |
+| [Qwen3 (MoE)](https://huggingface.co/Qwen) | 0.6B/1.7B/4B/8B/14B/32B/235B | qwen3 |
+| [Qwen2-Audio](https://huggingface.co/Qwen) | 7B | qwen2_audio |
+| [Qwen2.5-Omni](https://huggingface.co/Qwen) | 3B/7B | qwen2_omni |
+| [Qwen2-VL/Qwen2.5-VL/QVQ](https://huggingface.co/Qwen) | 2B/3B/7B/32B/72B | qwen2_vl |
+| [Seed Coder](https://huggingface.co/ByteDance-Seed) | 8B | seed_coder |
+| [Skywork o1](https://huggingface.co/Skywork) | 8B | skywork_o1 |
+| [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - |
+| [TeleChat2](https://huggingface.co/Tele-AI) | 3B/7B/35B/115B | telechat2 |
+| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse |
+| [Yi/Yi-1.5 (Code)](https://huggingface.co/01-ai) | 1.5B/6B/9B/34B | yi |
+| [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | yi_vl |
+| [Yuan 2](https://huggingface.co/IEITYuan) | 2B/51B/102B | yuan |
+
+> [!NOTE]
+> For the "base" models, the `template` argument can be chosen from `default`, `alpaca`, `vicuna`, etc. But make sure to use the **corresponding template** for the "instruct/chat" models.
+>
+> Remember to use the **SAME** template in training and inference.
+>
+> \*: You should install `transformers` from the main branch and set `DISABLE_VERSION_CHECK=1` to skip the version check.
+>
+> \*\*: You need to install a specific version of `transformers` to use the corresponding model.
+
+Please refer to [constants.py](src/llamafactory/extras/constants.py) for a full list of models we supported.
+
+You can also add a custom chat template to [template.py](src/llamafactory/data/template.py).
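+
+To make the template rule concrete, here is a minimal pair of config sketches (the model, dataset and output paths are illustrative placeholders; field names follow the configs under [examples](examples/README.md)):
+
+```yaml
+# sft.yaml -- hypothetical training config
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+stage: sft
+do_train: true
+finetuning_type: lora
+template: llama3                     # must match the model family
+dataset: identity
+output_dir: saves/llama3-8b/lora/sft
+---
+# chat.yaml -- hypothetical inference config
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+adapter_name_or_path: saves/llama3-8b/lora/sft
+template: llama3                     # the SAME template as in training
+```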
+
+## Supported Training Approaches
+
+| Approach | Full-tuning | Freeze-tuning | LoRA | QLoRA |
+| ---------------------- | ------------------ | ------------------ | ------------------ | ------------------ |
+| Pre-Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Supervised Fine-Tuning | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| Reward Modeling | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| PPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| DPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| KTO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| ORPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| SimPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+
+> [!TIP]
+> The implementation details of PPO can be found in [this blog](https://newfacade.github.io/notes-on-reinforcement-learning/17-ppo-trl.html).
+
+## Provided Datasets
+
+
+**Pre-training datasets**
+
+- [Wiki Demo (en)](data/wiki_demo.txt)
+- [RefinedWeb (en)](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
+- [RedPajama V2 (en)](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2)
+- [Wikipedia (en)](https://huggingface.co/datasets/olm/olm-wikipedia-20221220)
+- [Wikipedia (zh)](https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered)
+- [Pile (en)](https://huggingface.co/datasets/EleutherAI/pile)
+- [SkyPile (zh)](https://huggingface.co/datasets/Skywork/SkyPile-150B)
+- [FineWeb (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
+- [FineWeb-Edu (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)
+- [The Stack (en)](https://huggingface.co/datasets/bigcode/the-stack)
+- [StarCoder (en)](https://huggingface.co/datasets/bigcode/starcoderdata)
+
+
+
+
+**Supervised fine-tuning datasets**
+
+- [Identity (en&zh)](data/identity.json)
+- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
+- [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca-3)
+- [Alpaca GPT4 (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
+- [Glaive Function Calling V2 (en&zh)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
+- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
+- [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)
+- [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN)
+- [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
+- [BELLE 0.5M (zh)](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)
+- [BELLE Dialogue 0.4M (zh)](https://huggingface.co/datasets/BelleGroup/generated_chat_0.4M)
+- [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M)
+- [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M)
+- [UltraChat (en)](https://github.com/thunlp/UltraChat)
+- [OpenPlatypus (en)](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
+- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
+- [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
+- [OpenOrca (en)](https://huggingface.co/datasets/Open-Orca/OpenOrca)
+- [SlimOrca (en)](https://huggingface.co/datasets/Open-Orca/SlimOrca)
+- [MathInstruct (en)](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
+- [Firefly 1.1M (zh)](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M)
+- [Wiki QA (en)](https://huggingface.co/datasets/wiki_qa)
+- [Web QA (zh)](https://huggingface.co/datasets/suolyer/webqa)
+- [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn)
+- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
+- [deepctrl (en&zh)](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data)
+- [Advertise Generating (zh)](https://huggingface.co/datasets/HasturOfficial/adgen)
+- [ShareGPT Hyperfiltered (en)](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k)
+- [ShareGPT4 (en&zh)](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)
+- [UltraChat 200k (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
+- [AgentInstruct (en)](https://huggingface.co/datasets/THUDM/AgentInstruct)
+- [LMSYS Chat 1M (en)](https://huggingface.co/datasets/lmsys/lmsys-chat-1m)
+- [Evol Instruct V2 (en)](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)
+- [Cosmopedia (en)](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)
+- [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction)
+- [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo)
+- [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2)
+- [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered)
+- [Magpie-ultra-v0.1 (en)](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1)
+- [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub)
+- [OpenO1-SFT (en&zh)](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)
+- [Open-Thoughts (en)](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
+- [Open-R1-Math (en)](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k)
+- [Chinese-DeepSeek-R1-Distill (zh)](https://huggingface.co/datasets/Congliu/Chinese-DeepSeek-R1-Distill-data-110k-SFT)
+- [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k)
+- [Pokemon-gpt4o-captions (en&zh)](https://huggingface.co/datasets/jugg1024/pokemon-gpt4o-captions)
+- [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de)
+- [Dolly 15k (de)](https://huggingface.co/datasets/mayflowergmbh/dolly-15k_de)
+- [Alpaca GPT4 (de)](https://huggingface.co/datasets/mayflowergmbh/alpaca-gpt4_de)
+- [OpenSchnabeltier (de)](https://huggingface.co/datasets/mayflowergmbh/openschnabeltier_de)
+- [Evol Instruct (de)](https://huggingface.co/datasets/mayflowergmbh/evol-instruct_de)
+- [Dolphin (de)](https://huggingface.co/datasets/mayflowergmbh/dolphin_de)
+- [Booksum (de)](https://huggingface.co/datasets/mayflowergmbh/booksum_de)
+- [Airoboros (de)](https://huggingface.co/datasets/mayflowergmbh/airoboros-3.0_de)
+- [Ultrachat (de)](https://huggingface.co/datasets/mayflowergmbh/ultra-chat_de)
+
+
+
+
+**Preference datasets**
+
+- [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k)
+- [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
+- [COIG-P (zh)](https://huggingface.co/datasets/m-a-p/COIG-P)
+- [RLHF-V (en)](https://huggingface.co/datasets/openbmb/RLHF-V-Dataset)
+- [VLFeedback (en)](https://huggingface.co/datasets/Zhihui/VLFeedback)
+- [RLAIF-V (en)](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset)
+- [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
+- [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
+- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
+- [Orca DPO (de)](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de)
+- [KTO mixed (en)](https://huggingface.co/datasets/argilla/kto-mix-15k)
+
+
+
+Some datasets require approval before use, so we recommend logging in to your Hugging Face account with the following commands.
+
+```bash
+pip install --upgrade huggingface_hub
+huggingface-cli login
+```
+
+## Requirement
+
+| Mandatory    | Minimum | Recommended |
+| ------------ | ------- | --------- |
+| python | 3.9 | 3.10 |
+| torch | 2.0.0 | 2.6.0 |
+| torchvision | 0.15.0 | 0.21.0 |
+| transformers | 4.49.0 | 4.50.0 |
+| datasets | 2.16.0 | 3.2.0 |
+| accelerate | 0.34.0 | 1.2.1 |
+| peft | 0.14.0 | 0.15.1 |
+| trl | 0.8.6 | 0.9.6 |
+
+| Optional     | Minimum | Recommended |
+| ------------ | ------- | --------- |
+| CUDA | 11.6 | 12.2 |
+| deepspeed | 0.10.0 | 0.16.4 |
+| bitsandbytes | 0.39.0 | 0.43.1 |
+| vllm | 0.4.3 | 0.8.2 |
+| flash-attn | 2.5.6 | 2.7.2 |
+
+### Hardware Requirement
+
+\* *estimated*
+
+| Method | Bits | 7B | 14B | 30B | 70B | `x`B |
+| ------------------------------- | ---- | ----- | ----- | ----- | ------ | ------- |
+| Full (`bf16` or `fp16`) | 32 | 120GB | 240GB | 600GB | 1200GB | `18x`GB |
+| Full (`pure_bf16`) | 16 | 60GB | 120GB | 300GB | 600GB | `8x`GB |
+| Freeze/LoRA/GaLore/APOLLO/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | `2x`GB |
+| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | `x`GB |
+| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | `x/2`GB |
+| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | `x/4`GB |
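+
+The rule-of-thumb formulas in the last column can be expressed as a small helper. This is only a sketch of the estimate: the per-size figures in the table are rounded and include runtime overhead, so they deviate slightly from the raw formulas, and real requirements vary with sequence length and batch size.
+
+```bash
+# Rough VRAM estimate (GB) for an x-billion-parameter model, per the `x`B column above.
+estimate_vram_gb() {
+  local x=$1 method=$2
+  case "$method" in
+    full_fp32) echo $((18 * x)) ;;  # Full (bf16/fp16 with fp32 master weights)
+    full_bf16) echo $((8 * x)) ;;   # Full (pure_bf16)
+    lora)      echo $((2 * x)) ;;   # Freeze/LoRA/GaLore/APOLLO/BAdam
+    qlora8)    echo "$x" ;;         # QLoRA 8-bit
+    qlora4)    echo $((x / 2)) ;;   # QLoRA 4-bit
+    qlora2)    echo $((x / 4)) ;;   # QLoRA 2-bit
+    *)         return 1 ;;
+  esac
+}
+
+estimate_vram_gb 70 lora  # prints 140; the table rounds this class up to 160GB
+```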
+
+## Getting Started
+
+### Installation
+
+> [!IMPORTANT]
+> Installation is mandatory.
+
+#### Install from Source
+
+```bash
+git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
+cd LLaMA-Factory
+pip install -e ".[torch,metrics]" --no-build-isolation
+```
+
+Extra dependencies available: torch, torch-npu, metrics, deepspeed, liger-kernel, bitsandbytes, hqq, eetq, gptq, aqlm, vllm, sglang, galore, apollo, badam, adam-mini, qwen, minicpm_v, openmind, swanlab, dev
+
+#### Install from Docker Image
+
+```bash
+docker run -it --rm --gpus=all --ipc=host hiyouga/llamafactory:latest
+```
+
+This image is built on Ubuntu 22.04 (x86\_64), CUDA 12.4, Python 3.11, PyTorch 2.6.0, and Flash-attn 2.7.4.
+
+Find the pre-built images: https://hub.docker.com/r/hiyouga/llamafactory/tags
+
+Please refer to [build docker](#build-docker) to build the image yourself.
+
+
+**Setting up a virtual environment with uv**
+
+Create an isolated Python environment with [uv](https://github.com/astral-sh/uv):
+
+```bash
+uv sync --extra torch --extra metrics --prerelease=allow
+```
+
+Run LLaMA-Factory in the isolated environment:
+
+```bash
+uv run --prerelease=allow llamafactory-cli train examples/train_lora/llama3_lora_pretrain.yaml
+```
+
+
+
+
+**For Windows users**
+
+#### Install PyTorch
+
+You need to manually install the GPU version of PyTorch on the Windows platform. Please refer to the [official website](https://pytorch.org/get-started/locally/) and the following command to install PyTorch with CUDA support:
+
+```bash
+pip uninstall torch torchvision torchaudio
+pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
+python -c "import torch; print(torch.cuda.is_available())"
+```
+
+If you see `True` then you have successfully installed PyTorch with CUDA support.
+
+Try `dataloader_num_workers: 0` if you encounter a `Can't pickle local object` error.
+
+#### Install BitsAndBytes
+
+To enable quantized LoRA (QLoRA) on the Windows platform, you need to install a pre-built version of the `bitsandbytes` library that supports CUDA 11.1 to 12.2. Please select the appropriate [release version](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) for your CUDA version.
+
+```bash
+pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl
+```
+
+#### Install Flash Attention-2
+
+To enable FlashAttention-2 on the Windows platform, please use the script from [flash-attention-windows-wheel](https://huggingface.co/lldacing/flash-attention-windows-wheel) to compile and install it by yourself.
+
+
+
+
+**For Ascend NPU users**
+
+To install LLaMA Factory on Ascend NPU devices, please upgrade Python to version 3.10 or higher and specify extra dependencies: `pip install -e ".[torch-npu,metrics]"`. Additionally, you need to install the **[Ascend CANN Toolkit and Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**. Please follow the [installation tutorial](https://www.hiascend.com/document/detail/en/CANNCommunityEdition/600alphaX/softwareinstall/instg/atlasdeploy_03_0031.html) or use the following commands:
+
+```bash
+# replace the url according to your CANN version and devices
+# install CANN Toolkit
+wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C20SPC702/Ascend-cann-toolkit_8.0.0.alpha002_linux-"$(uname -i)".run
+bash Ascend-cann-toolkit_8.0.0.alpha002_linux-"$(uname -i)".run --install
+
+# install CANN Kernels
+wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C20SPC702/Ascend-cann-kernels-910b_8.0.0.alpha002_linux-"$(uname -i)".run
+bash Ascend-cann-kernels-910b_8.0.0.alpha002_linux-"$(uname -i)".run --install
+
+# set env variables
+source /usr/local/Ascend/ascend-toolkit/set_env.sh
+```
+
+| Requirement | Minimum | Recommend |
+| ------------ | ------- | -------------- |
+| CANN | 8.0.RC1 | 8.0.0.alpha002 |
+| torch | 2.1.0 | 2.4.0 |
+| torch-npu | 2.1.0 | 2.4.0.post2 |
+| deepspeed | 0.13.2 | 0.13.2 |
+| vllm-ascend | - | 0.7.3 |
+
+Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use.
+
+If you cannot run inference on NPU devices, try setting `do_sample: false` in the configuration.
+
+Download the pre-built Docker images: [32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html)
+
+#### Install BitsAndBytes
+
+To use QLoRA based on bitsandbytes on Ascend NPU, please follow these 3 steps:
+
+1. Manually compile bitsandbytes: Refer to [the installation documentation](https://huggingface.co/docs/bitsandbytes/installation?backend=Ascend+NPU&platform=Ascend+NPU) for the NPU version of bitsandbytes to complete the compilation and installation. The compilation requires a cmake version of at least 3.22.1 and a g++ version of at least 12.x.
+
+```bash
+# Install bitsandbytes from source
+# Clone the bitsandbytes repo; the Ascend NPU backend currently lives on the multi-backend-refactor branch
+git clone -b multi-backend-refactor https://github.com/bitsandbytes-foundation/bitsandbytes.git
+cd bitsandbytes/
+
+# Install dependencies
+pip install -r requirements-dev.txt
+
+# Install the compilation toolchain; the command varies by OS, the following is for Debian-based systems
+apt-get install -y build-essential cmake
+
+# Compile & install
+cmake -DCOMPUTE_BACKEND=npu -S .
+make
+pip install .
+```
+
+2. Install transformers from the main branch.
+
+```bash
+git clone -b main https://github.com/huggingface/transformers.git
+cd transformers
+pip install .
+```
+
+3. Set `double_quantization: false` in the configuration. You can refer to the [example](examples/train_qlora/llama3_lora_sft_bnb_npu.yaml).
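+
+For reference, the two quantization-related lines in such a config might look like this sketch (other required options omitted; see the linked example for a complete file):
+
+```yaml
+quantization_bit: 4         # QLoRA via bitsandbytes
+double_quantization: false  # disable double quantization on Ascend NPU
+```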
+
+
+
+### Data Preparation
+
+Please refer to [data/README.md](data/README.md) for details on the dataset file format. You can use datasets from the Hugging Face / ModelScope / Modelers hubs, load datasets from local disk, or specify a path to S3/GCS cloud storage.
+
+> [!NOTE]
+> Please update `data/dataset_info.json` to use your custom dataset.
+
+You can also use **[Easy Dataset](https://github.com/ConardLi/easy-dataset)**, **[DataFlow](https://github.com/OpenDCAI/DataFlow)** and **[GraphGen](https://github.com/open-sciencelab/GraphGen)** to create synthetic data for fine-tuning.
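+
+As a minimal sketch of the registration step, an alpaca-style dataset could be added to `data/dataset_info.json` like this (the dataset name and file name here are placeholders):
+
+```json
+{
+  "my_dataset": {
+    "file_name": "my_dataset.json",
+    "columns": {
+      "prompt": "instruction",
+      "query": "input",
+      "response": "output"
+    }
+  }
+}
+```
+
+You can then reference it via `dataset: my_dataset` in a training config; [data/README.md](data/README.md) documents the full set of supported fields.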
+
+### Quickstart
+
+Use the following 3 commands to run LoRA **fine-tuning**, **inference** and **merging** of the Llama3-8B-Instruct model, respectively.
+
+```bash
+llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
+llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
+llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
+```
+
+See [examples/README.md](examples/README.md) for advanced usage (including distributed training).
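+
+To give a flavor of what such a config contains, here is an abridged LoRA SFT sketch (values are illustrative; consult the shipped examples for complete, tested files):
+
+```yaml
+### model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+
+### method
+stage: sft
+do_train: true
+finetuning_type: lora
+lora_target: all
+
+### dataset
+dataset: identity,alpaca_en_demo
+template: llama3
+cutoff_len: 2048
+
+### output
+output_dir: saves/llama3-8b/lora/sft
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 1.0e-4
+num_train_epochs: 3.0
+```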
+
+> [!TIP]
+> Use `llamafactory-cli help` to show help information.
+>
+> Read [FAQs](https://github.com/hiyouga/LLaMA-Factory/issues/4614) first if you encounter any problems.
+
+### Fine-Tuning with LLaMA Board GUI (powered by [Gradio](https://github.com/gradio-app/gradio))
+
+```bash
+llamafactory-cli webui
+```
+
+### Build Docker
+
+For CUDA users:
+
+```bash
+cd docker/docker-cuda/
+docker compose up -d
+docker compose exec llamafactory bash
+```
+
+For Ascend NPU users:
+
+```bash
+cd docker/docker-npu/
+docker compose up -d
+docker compose exec llamafactory bash
+```
+
+For AMD ROCm users:
+
+```bash
+cd docker/docker-rocm/
+docker compose up -d
+docker compose exec llamafactory bash
+```
+
+
+**Build without Docker Compose**
+
+For CUDA users:
+
+```bash
+docker build -f ./docker/docker-cuda/Dockerfile \
+ --build-arg PIP_INDEX=https://pypi.org/simple \
+ --build-arg EXTRAS=metrics \
+ -t llamafactory:latest .
+
+docker run -dit --ipc=host --gpus=all \
+ -p 7860:7860 \
+ -p 8000:8000 \
+ --name llamafactory \
+ llamafactory:latest
+
+docker exec -it llamafactory bash
+```
+
+For Ascend NPU users:
+
+```bash
+docker build -f ./docker/docker-npu/Dockerfile \
+ --build-arg PIP_INDEX=https://pypi.org/simple \
+ --build-arg EXTRAS=torch-npu,metrics \
+ -t llamafactory:latest .
+
+docker run -dit --ipc=host \
+ -v /usr/local/dcmi:/usr/local/dcmi \
+ -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
+ -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
+ -v /etc/ascend_install.info:/etc/ascend_install.info \
+ -p 7860:7860 \
+ -p 8000:8000 \
+ --device /dev/davinci0 \
+ --device /dev/davinci_manager \
+ --device /dev/devmm_svm \
+ --device /dev/hisi_hdc \
+ --name llamafactory \
+ llamafactory:latest
+
+docker exec -it llamafactory bash
+```
+
+For AMD ROCm users:
+
+```bash
+docker build -f ./docker/docker-rocm/Dockerfile \
+ --build-arg PIP_INDEX=https://pypi.org/simple \
+ --build-arg EXTRAS=metrics \
+ -t llamafactory:latest .
+
+docker run -dit --ipc=host \
+ -p 7860:7860 \
+ -p 8000:8000 \
+ --device /dev/kfd \
+ --device /dev/dri \
+ --name llamafactory \
+ llamafactory:latest
+
+docker exec -it llamafactory bash
+```
+
+
+
+
+**Use Docker volumes**
+
+You can uncomment `VOLUME [ "/root/.cache/huggingface", "/app/shared_data", "/app/output" ]` in the Dockerfile to use data volumes.
+
+When starting the container, use the `-v ./hf_cache:/root/.cache/huggingface` argument to mount a local directory into the container. The following data volumes are available.
+
+- `hf_cache`: Utilize Hugging Face cache on the host machine.
+- `shared_data`: The directory to store datasets on the host machine.
+- `output`: Set export dir to this location so that the merged result can be accessed directly on the host machine.
+
+
+
+### Deploy with OpenAI-style API and vLLM
+
+```bash
+API_PORT=8000 llamafactory-cli api examples/inference/llama3.yaml infer_backend=vllm vllm_enforce_eager=true
+```
+
+> [!TIP]
+> Visit [this page](https://platform.openai.com/docs/api-reference/chat/create) for the API documentation.
+>
+> Examples: [Image understanding](scripts/api_example/test_image.py) | [Function calling](scripts/api_example/test_toolcall.py)
+
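+
+Once the server is running, any OpenAI-compatible client can call it. A minimal standard-library sketch (the URL assumes the `API_PORT=8000` used above; the model name is a placeholder):

```python
import json
from urllib import request

API_URL = "http://localhost:8000/v1/chat/completions"  # matches API_PORT=8000 above

def build_chat_request(prompt: str, model: str = "llama3") -> request.Request:
    """Build an OpenAI-style chat completion request (model name is illustrative)."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# To send it against a running server:
#   reply = json.load(request.urlopen(build_chat_request("Hello!")))
#   text = reply["choices"][0]["message"]["content"]
```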
+### Download from ModelScope Hub
+
+If you have trouble downloading models and datasets from Hugging Face, you can use ModelScope.
+
+```bash
+export USE_MODELSCOPE_HUB=1 # `set USE_MODELSCOPE_HUB=1` for Windows
+```
+
+Train the model by specifying a model ID of the ModelScope Hub as the `model_name_or_path`. You can find a full list of model IDs at [ModelScope Hub](https://modelscope.cn/models), e.g., `LLM-Research/Meta-Llama-3-8B-Instruct`.
+
+### Download from Modelers Hub
+
+You can also use Modelers Hub to download models and datasets.
+
+```bash
+export USE_OPENMIND_HUB=1 # `set USE_OPENMIND_HUB=1` for Windows
+```
+
+Train the model by specifying a model ID of the Modelers Hub as the `model_name_or_path`. You can find a full list of model IDs at [Modelers Hub](https://modelers.cn/models), e.g., `TeleAI/TeleChat-7B-pt`.
+
+### Use W&B Logger
+
+To use [Weights & Biases](https://wandb.ai) for logging experiment results, add the following arguments to the YAML file.
+
+```yaml
+report_to: wandb
+run_name: test_run # optional
+```
+
+Set `WANDB_API_KEY` to [your key](https://wandb.ai/authorize) when launching training tasks to log in with your W&B account.
+
+### Use SwanLab Logger
+
+To use [SwanLab](https://github.com/SwanHubX/SwanLab) for logging experiment results, add the following arguments to the YAML file.
+
+```yaml
+use_swanlab: true
+swanlab_run_name: test_run # optional
+```
+
+When launching training tasks, you can log in to SwanLab in three ways:
+
+1. Add `swanlab_api_key=<your_api_key>` to the yaml file and set it to your [API key](https://swanlab.cn/settings).
+2. Set the environment variable `SWANLAB_API_KEY` to your [API key](https://swanlab.cn/settings).
+3. Use the `swanlab login` command to complete the login.
+
+## Projects using LLaMA Factory
+
+If you have a project that should be incorporated, please contact us via email or create a pull request.
+
+
+1. Wang et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. 2023. [[arxiv]](https://arxiv.org/abs/2308.02223)
+1. Yu et al. Open, Closed, or Small Language Models for Text Classification? 2023. [[arxiv]](https://arxiv.org/abs/2308.10092)
+1. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [[arxiv]](https://arxiv.org/abs/2308.10526)
+1. Luceri et al. Leveraging Large Language Models to Detect Influence Campaigns in Social Media. 2023. [[arxiv]](https://arxiv.org/abs/2311.07816)
+1. Zhang et al. Alleviating Hallucinations of Large Language Models through Induced Hallucinations. 2023. [[arxiv]](https://arxiv.org/abs/2312.15710)
+1. Wang et al. Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs. KDD 2024. [[arxiv]](https://arxiv.org/abs/2401.04319)
+1. Wang et al. CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2401.07286)
+1. Choi et al. FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2402.05904)
+1. Zhang et al. AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts. 2024. [[arxiv]](https://arxiv.org/abs/2402.07625)
+1. Lyu et al. KnowTuning: Knowledge-aware Fine-tuning for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11176)
+1. Yang et al. LaCo: Large Language Model Pruning via Layer Collapse. 2024. [[arxiv]](https://arxiv.org/abs/2402.11187)
+1. Bhardwaj et al. Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic. 2024. [[arxiv]](https://arxiv.org/abs/2402.11746)
+1. Yang et al. Enhancing Empathetic Response Generation by Augmenting LLMs with Small-scale Empathetic Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11801)
+1. Yi et al. Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2402.11809)
+1. Cao et al. Head-wise Shareable Attention for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11819)
+1. Zhang et al. Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages. 2024. [[arxiv]](https://arxiv.org/abs/2402.12204)
+1. Kim et al. Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.14714)
+1. Yu et al. KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models. ACL 2024. [[arxiv]](https://arxiv.org/abs/2402.15043)
+1. Huang et al. Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2403.02333)
+1. Duan et al. Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization. 2024. [[arxiv]](https://arxiv.org/abs/2403.03419)
+1. Xie and Schwertfeger. Empowering Robotics with Large Language Models: osmAG Map Comprehension with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2403.08228)
+1. Wu et al. Large Language Models are Parallel Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2403.09073)
+1. Zhang et al. EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling. 2024. [[arxiv]](https://arxiv.org/abs/2403.14541)
+1. Weller et al. FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2403.15246)
+1. Hongbin Na. CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. COLING 2024. [[arxiv]](https://arxiv.org/abs/2403.16008)
+1. Zan et al. CodeS: Natural Language to Code Repository via Multi-Layer Sketch. 2024. [[arxiv]](https://arxiv.org/abs/2403.16443)
+1. Liu et al. Extensive Self-Contrast Enables Feedback-Free Language Model Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2404.00604)
+1. Luo et al. BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.02827)
+1. Du et al. Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2404.04167)
+1. Ma et al. Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. ICML 2024. [[arxiv]](https://arxiv.org/abs/2404.04316)
+1. Liu et al. Dynamic Generation of Personalities with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.07084)
+1. Shang et al. How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.09836)
+1. Huang et al. LLMTune: Accelerate Database Knob Tuning with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.11581)
+1. Deng et al. Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction. 2024. [[arxiv]](https://arxiv.org/abs/2404.14215)
+1. Acikgoz et al. Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2404.16621)
+1. Zhang et al. Small Language Models Need Strong Verifiers to Self-Correct Reasoning. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2404.17140)
+1. Zhou et al. FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering. NAACL 2024. [[arxiv]](https://arxiv.org/abs/2404.18585)
+1. Xu et al. Large Language Models for Cyber Security: A Systematic Literature Review. 2024. [[arxiv]](https://arxiv.org/abs/2405.04760)
+1. Dammu et al. "They are uncultured": Unveiling Covert Harms and Social Threats in LLM Generated Conversations. 2024. [[arxiv]](https://arxiv.org/abs/2405.05378)
+1. Yi et al. A safety realignment framework via subspace-oriented model fusion for large language models. 2024. [[arxiv]](https://arxiv.org/abs/2405.09055)
+1. Lou et al. SPO: Multi-Dimensional Preference Sequential Alignment With Implicit Reward Modeling. 2024. [[arxiv]](https://arxiv.org/abs/2405.12739)
+1. Zhang et al. Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2405.13816)
+1. Zhang et al. TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2405.20215)
+1. Zihong Chen. Sentence Segmentation and Sentence Punctuation Based on XunziALLM. 2024. [[paper]](https://aclanthology.org/2024.lt4hala-1.30)
+1. Gao et al. The Best of Both Worlds: Toward an Honest and Helpful Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2406.00380)
+1. Wang and Song. MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset. 2024. [[arxiv]](https://arxiv.org/abs/2406.02106)
+1. Hu et al. Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models. 2024. [[arxiv]](https://arxiv.org/abs/2406.03136)
+1. Ge et al. Time Sensitive Knowledge Editing through Efficient Finetuning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2406.04496)
+1. Tan et al. Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions. 2024. [[arxiv]](https://arxiv.org/abs/2406.05688)
+1. Song et al. Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters. 2024. [[arxiv]](https://arxiv.org/abs/2406.05955)
+1. Gu et al. RWKV-CLIP: A Robust Vision-Language Representation Learner. 2024. [[arxiv]](https://arxiv.org/abs/2406.06973)
+1. Chen et al. Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees. 2024. [[arxiv]](https://arxiv.org/abs/2406.07115)
+1. Zhu et al. Are Large Language Models Good Statisticians?. 2024. [[arxiv]](https://arxiv.org/abs/2406.07815)
+1. Li et al. Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2406.10099)
+1. Ding et al. IntentionQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce. 2024. [[arxiv]](https://arxiv.org/abs/2406.10173)
+1. He et al. COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities. 2024. [[arxiv]](https://arxiv.org/abs/2406.12074)
+1. Lin et al. FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving. 2024. [[arxiv]](https://arxiv.org/abs/2406.14408)
+1. Treutlein et al. Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data. 2024. [[arxiv]](https://arxiv.org/abs/2406.14546)
+1. Feng et al. SS-Bench: A Benchmark for Social Story Generation and Evaluation. 2024. [[arxiv]](https://arxiv.org/abs/2406.15695)
+1. Feng et al. Self-Constructed Context Decompilation with Fined-grained Alignment Enhancement. 2024. [[arxiv]](https://arxiv.org/abs/2406.17233)
+1. Liu et al. Large Language Models for Cuffless Blood Pressure Measurement From Wearable Biosignals. 2024. [[arxiv]](https://arxiv.org/abs/2406.18069)
+1. Iyer et al. Exploring Very Low-Resource Translation with LLMs: The University of Edinburgh's Submission to AmericasNLP 2024 Translation Task. AmericasNLP 2024. [[paper]](https://aclanthology.org/2024.americasnlp-1.25)
+1. Li et al. Calibrating LLMs with Preference Optimization on Thought Trees for Generating Rationale in Science Question Scoring. 2024. [[arxiv]](https://arxiv.org/abs/2406.19949)
+1. Yang et al. Financial Knowledge Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2407.00365)
+1. Lin et al. DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging. 2024. [[arxiv]](https://arxiv.org/abs/2407.01470)
+1. Bako et al. Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization. 2024. [[arxiv]](https://arxiv.org/abs/2407.06129)
+1. Huang et al. RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization. 2024. [[arxiv]](https://arxiv.org/abs/2407.08044)
+1. Jiang et al. LLM-Collaboration on Automatic Science Journalism for the General Audience. 2024. [[arxiv]](https://arxiv.org/abs/2407.09756)
+1. Inouye et al. Applied Auto-tuning on LoRA Hyperparameters. 2024. [[paper]](https://scholarcommons.scu.edu/cseng_senior/272/)
+1. Qi et al. Research on Tibetan Tourism Viewpoints information generation system based on LLM. 2024. [[arxiv]](https://arxiv.org/abs/2407.13561)
+1. Xu et al. Course-Correction: Safety Alignment Using Synthetic Preferences. 2024. [[arxiv]](https://arxiv.org/abs/2407.16637)
+1. Sun et al. LAMBDA: A Large Model Based Data Agent. 2024. [[arxiv]](https://arxiv.org/abs/2407.17535)
+1. Zhu et al. CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2407.19705)
+1. Yu et al. Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2408.00137)
+1. Xie et al. The Power of Personalized Datasets: Advancing Chinese Composition Writing for Elementary School through Targeted Model Fine-Tuning. IALP 2024. [[paper]](https://www.asianlp.sg/conferences/ialp2024/proceedings/papers/IALP2024_P055.pdf)
+1. Liu et al. Instruct-Code-Llama: Improving Capabilities of Language Model in Competition Level Code Generation by Online Judge Feedback. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_11)
+1. Wang et al. Cybernetic Sentinels: Unveiling the Impact of Safety Data Selection on Model Security in Supervised Fine-Tuning. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_23)
+1. Xia et al. Understanding the Performance and Estimating the Cost of LLM Fine-Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2408.04693)
+1. Zeng et al. Perceive, Reflect, and Plan: Designing LLM Agent for Goal-Directed City Navigation without Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2408.04168)
+1. Xia et al. Using Pre-trained Language Model for Accurate ESG Prediction. FinNLP 2024. [[paper]](https://aclanthology.org/2024.finnlp-2.1/)
+1. Liang et al. I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm. 2024. [[arxiv]](https://arxiv.org/abs/2408.08072)
+1. Bai et al. Aligning Large Language Model with Direct Multi-Preference Optimization for Recommendation. CIKM 2024. [[paper]](https://dl.acm.org/doi/10.1145/3627673.3679611)
+1. Zhang et al. CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework for Chinese Psychological Counseling. ACL 2024. [[paper]](https://aclanthology.org/2024.findings-acl.830.pdf)
+1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: A large language model for Astronomy, based on ChatGLM2-6B and Qwen-14B.
+1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: A large language model specialized in Chinese legal domain, based on Baichuan-13B, is capable of retrieving and reasoning on legal knowledge.
+1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: A large language model specialized in Chinese medical domain, based on Baichuan-7B and ChatGLM-6B.
+1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: A series of large language models for Chinese medical domain, based on LLaMA2-7B and Baichuan-13B.
+1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**: A series of MBTI Personality large language models, capable of giving any LLM 16 different personality types based on different datasets and training methods.
+1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**: A large language model specialized in generate metadata for stable diffusion. [[demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
+1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**: A multimodal large language model specialized in Chinese medical domain, based on LLaVA-1.5-7B.
+1. **[AutoRE](https://github.com/THUDM/AutoRE)**: A document-level relation extraction system based on large language models.
+1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**: SDKs for fine-tuning LLMs on Windows PC for NVIDIA RTX.
+1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**: An easy and lazy way for building multi-agent LLMs applications and supports model fine-tuning via LLaMA Factory.
+1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**: A full pipeline for RAG retrieval model fine-tuning, inference, and distillation. [[blog]](https://zhuanlan.zhihu.com/p/987727357)
+1. **[360-LLaMA-Factory](https://github.com/Qihoo360/360-LLaMA-Factory)**: A modified library that supports long sequence SFT & DPO using ring attention.
+1. **[Sky-T1](https://novasky-ai.github.io/posts/sky-t1/)**: An o1-like model fine-tuned by NovaSky AI with very small cost.
+1. **[WeClone](https://github.com/xming521/WeClone)**: One-stop solution for creating your digital avatar from chat logs.
+1. **[EmoLLM](https://github.com/SmartFlowAI/EmoLLM)**: A project about large language models (LLMs) and mental health.
+
+
+## License
+
+This repository is licensed under the [Apache-2.0 License](LICENSE).
+
+Please follow the model licenses to use the corresponding model weights: [Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [GPT-2](https://github.com/openai/gpt-2/blob/master/LICENSE) / [Granite](LICENSE) / [Index](https://huggingface.co/IndexTeam/Index-1.9B/blob/main/LICENSE) / [InternLM](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [Llama 4](https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE) / [MiniCPM](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md) / [Mistral/Mixtral/Pixtral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3/Phi-4](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [Skywork](https://huggingface.co/Skywork/Skywork-13B-base/blob/main/Skywork%20Community%20License.pdf) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [TeleChat2](https://huggingface.co/Tele-AI/telechat-7B/blob/main/TeleChat%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)
+
+## Citation
+
+If this work is helpful, please kindly cite as:
+
+```bibtex
+@inproceedings{zheng2024llamafactory,
+ title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models},
+ author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma},
+ booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
+ address={Bangkok, Thailand},
+ publisher={Association for Computational Linguistics},
+ year={2024},
+ url={http://arxiv.org/abs/2403.13372}
+}
+```
+
+## Acknowledgement
+
+This repo benefits from [PEFT](https://github.com/huggingface/peft), [TRL](https://github.com/huggingface/trl), [QLoRA](https://github.com/artidoro/qlora) and [FastChat](https://github.com/lm-sys/FastChat). Thanks for their wonderful works.
+
+## Star History
+
+
diff --git a/src/train_sft/README_zh.md b/src/train_sft/README_zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..e970255c2336c6c002bf2af6bd2ff2754b3c1934
--- /dev/null
+++ b/src/train_sft/README_zh.md
@@ -0,0 +1,951 @@
+
+
+[GitHub stars](https://github.com/hiyouga/LLaMA-Factory/stargazers)
+[Last commit](https://github.com/hiyouga/LLaMA-Factory/commits/main)
+[Contributors](https://github.com/hiyouga/LLaMA-Factory/graphs/contributors)
+[Tests](https://github.com/hiyouga/LLaMA-Factory/actions/workflows/tests.yml)
+[PyPI](https://pypi.org/project/llamafactory/)
+[Citations](https://scholar.google.com/scholar?cites=12620864006390196564)
+[Docker pulls](https://hub.docker.com/r/hiyouga/llamafactory/tags)
+
+[Twitter](https://twitter.com/llamafactory_ai)
+[Discord](https://discord.gg/rKfvV9r9FK)
+[GitCode](https://gitcode.com/zhengyaowei/LLaMA-Factory)
+
+[Open in Colab](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)
+[Open in PAI-DSW](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)
+[Open in Alaya NeW](https://docs.alayanew.com/docs/documents/newActivities/llamafactory/?utm_source=LLaMA-Factory)
+[Open in Spaces](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
+[Open in Studios](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
+[Open in Novita](https://novita.ai/templates-library/105981?sharer=88115474-394e-4bda-968e-b88e123d0c47)
+
+### Used by [Amazon](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/), [NVIDIA](https://developer.nvidia.cn/rtx/ai-toolkit), [Aliyun](https://help.aliyun.com/zh/pai/use-cases/fine-tune-a-llama-3-model-with-llama-factory), etc.
+
+
+
+### Supporters ❤️
+
+| Warp, the intelligent terminal for developers. Available for macOS, Linux and Windows. | |
+| ---- | ---- |
+
+----
+
+### Easily fine-tune 100+ large language models with zero-code [CLI](#快速开始) and [Web UI](#llama-board-可视化微调由-gradio-驱动)
+
+
+
+
+
+👋 Join our [WeChat group](assets/wechat.jpg), [NPU user group](assets/wechat_npu.jpg) or [Alaya NeW user group](assets/wechat_alaya.png).
+
+\[ [English](README.md) | 中文 \]
+
+**微调大模型可以像这样轻松…**
+
+https://github.com/user-attachments/assets/43b700c6-a178-41db-b1f8-8190a5d3fcfc
+
+选择你的打开方式:
+
+- **入门教程**:https://zhuanlan.zhihu.com/p/695287607
+- **微调视频教程**:https://www.bilibili.com/video/BV1djgRzxEts/
+- **框架文档**:https://llamafactory.readthedocs.io/zh-cn/latest/
+- **框架文档(昇腾 NPU)**:https://ascend.github.io/docs/sources/llamafactory/
+- **Colab(免费)**:https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing
+- **本地机器**:请见[如何使用](#如何使用)
+- **PAI-DSW(免费试用)**:https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory
+- **九章智算云(算力优惠活动)**:https://docs.alayanew.com/docs/documents/useGuide/LLaMAFactory/mutiple/?utm_source=LLaMA-Factory
+
+> [!NOTE]
+> 除上述链接以外的其他网站均为未经许可的第三方网站,请小心甄别。
+
+## 目录
+
+- [项目特色](#项目特色)
+- [官方博客](#官方博客)
+- [更新日志](#更新日志)
+- [模型](#模型)
+- [训练方法](#训练方法)
+- [数据集](#数据集)
+- [软硬件依赖](#软硬件依赖)
+- [如何使用](#如何使用)
+ - [安装 LLaMA Factory](#安装-llama-factory)
+ - [数据准备](#数据准备)
+ - [快速开始](#快速开始)
+ - [LLaMA Board 可视化微调](#llama-board-可视化微调由-gradio-驱动)
+ - [构建 Docker](#构建-docker)
+ - [利用 vLLM 部署 OpenAI API](#利用-vllm-部署-openai-api)
+ - [从魔搭社区下载](#从魔搭社区下载)
+ - [从魔乐社区下载](#从魔乐社区下载)
+ - [使用 W&B 面板](#使用-wb-面板)
+ - [使用 SwanLab 面板](#使用-swanlab-面板)
+- [使用了 LLaMA Factory 的项目](#使用了-llama-factory-的项目)
+- [协议](#协议)
+- [引用](#引用)
+- [致谢](#致谢)
+
+## 项目特色
+
+- **多种模型**:LLaMA、LLaVA、Mistral、Mixtral-MoE、Qwen、Qwen2-VL、DeepSeek、Yi、Gemma、ChatGLM、Phi 等等。
+- **集成方法**:(增量)预训练、(多模态)指令监督微调、奖励模型训练、PPO 训练、DPO 训练、KTO 训练、ORPO 训练等等。
+- **多种精度**:16 比特全参数微调、冻结微调、LoRA 微调和基于 AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ 的 2/3/4/5/6/8 比特 QLoRA 微调。
+- **先进算法**:[GaLore](https://github.com/jiaweizzhao/GaLore)、[BAdam](https://github.com/Ledzy/BAdam)、[APOLLO](https://github.com/zhuhanqing/APOLLO)、[Adam-mini](https://github.com/zyushun/Adam-mini)、[Muon](https://github.com/KellerJordan/Muon)、DoRA、LongLoRA、LLaMA Pro、Mixture-of-Depths、LoRA+、LoftQ 和 PiSSA。
+- **实用技巧**:[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)、[Unsloth](https://github.com/unslothai/unsloth)、[Liger Kernel](https://github.com/linkedin/Liger-Kernel)、RoPE scaling、NEFTune 和 rsLoRA。
+- **广泛任务**:多轮对话、工具调用、图像理解、视觉定位、视频识别和语音理解等等。
+- **实验监控**:LlamaBoard、TensorBoard、Wandb、MLflow、[SwanLab](https://github.com/SwanHubX/SwanLab) 等等。
+- **极速推理**:基于 [vLLM](https://github.com/vllm-project/vllm) 或 [SGLang](https://github.com/sgl-project/sglang) 的 OpenAI 风格 API、浏览器界面和命令行接口。
+
+### 最新模型的 Day-N 微调适配
+
+| 适配时间 | 模型名称 |
+| ------------ | -------------------------------------------------------------------- |
+| Day 0 | Qwen3 / Qwen2.5-VL / Gemma 3 / GLM-4.1V / InternLM 3 / MiniCPM-o-2.6 |
+| Day 1 | Llama 3 / GLM-4 / Mistral Small / PaliGemma2 / Llama 4 |
+
+## 官方博客
+
+- [使用 LLaMA-Factory 构建 GPT-OSS 角色扮演模型](https://docs.llamafactory.com.cn/docs/documents/best-practice/gptoss/?utm_source=LLaMA-Factory)(中文)
+- [使用 LLaMA-Factory 微调 Llama3.1-70B 医学诊断模型](https://docs.alayanew.com/docs/documents/bestPractice/bigModel/llama70B/?utm_source=LLaMA-Factory)(中文)
+- [基于 LLaMA-Factory 和 EasyR1 打造一站式无代码大模型强化学习和部署平台 LLM Model Hub](https://aws.amazon.com/cn/blogs/china/building-llm-model-hub-based-on-llamafactory-and-easyr1/)(中文)
+- [通过亚马逊 SageMaker HyperPod 上的 LLaMA-Factory 增强多模态模型银行文档的视觉信息提取](https://aws.amazon.com/cn/blogs/machine-learning/how-apoidea-group-enhances-visual-information-extraction-from-banking-documents-with-multimodal-models-using-llama-factory-on-amazon-sagemaker-hyperpod/)(英文)
+- [Easy Dataset × LLaMA Factory: 让大模型高效学习领域知识](https://buaa-act.feishu.cn/wiki/KY9xwTGs1iqHrRkjXBwcZP9WnL9)(中文)
+
+<details><summary>全部博客</summary>
+
+- [使用 LLaMA-Factory 微调 Qwen2.5-VL 实现自动驾驶场景微调](https://docs.alayanew.com/docs/documents/useGuide/LLaMAFactory/mutiple/?utm_source=LLaMA-Factory)（中文）
+- [LLaMA Factory:微调 DeepSeek-R1-Distill-Qwen-7B 模型实现新闻标题分类器](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_deepseek_r1_distill_7b)（中文）
+- [基于 Amazon SageMaker 和 LLaMA-Factory 打造一站式无代码模型微调部署平台 Model Hub](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/)（中文）
+- [LLaMA Factory 多模态微调实践:微调 Qwen2-VL 构建文旅大模型](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl)（中文）
+- [LLaMA Factory:微调 Llama3 模型实现角色扮演](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)（中文）
+
+</details>
+
+## 更新日志
+
+[25/08/06] 我们支持了 **[GPT-OSS](https://github.com/openai/gpt-oss)** 模型的微调。查看 [PR #8826](https://github.com/hiyouga/LLaMA-Factory/pull/8826) 以使用。
+
+[25/07/02] 我们支持了 **[GLM-4.1V-9B-Thinking](https://github.com/THUDM/GLM-4.1V-Thinking)** 模型的微调。
+
+[25/04/28] 我们支持了 **[Qwen3](https://qwenlm.github.io/blog/qwen3/)** 系列模型的微调。
+
+
+
+[25/04/21] 我们支持了 **[Muon](https://github.com/KellerJordan/Muon)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。感谢 [@tianshijing](https://github.com/tianshijing) 的 PR。
+
+[25/04/16] 我们支持了 **[InternVL3](https://huggingface.co/OpenGVLab/InternVL3-8B)** 模型的微调。查看 [PR #7258](https://github.com/hiyouga/LLaMA-Factory/pull/7258) 以使用。
+
+[25/04/14] 我们支持了 **[GLM-Z1](https://huggingface.co/THUDM/GLM-Z1-9B-0414)** 和 **[Kimi-VL](https://huggingface.co/moonshotai/Kimi-VL-A3B-Instruct)** 模型的微调。
+
+[25/04/06] 我们支持了 **[Llama 4](https://ai.meta.com/blog/llama-4-multimodal-intelligence/)** 模型的微调。查看 [PR #7611](https://github.com/hiyouga/LLaMA-Factory/pull/7611) 以使用。
+
+[25/03/31] 我们支持了 **[Qwen2.5 Omni](https://qwenlm.github.io/blog/qwen2.5-omni/)** 模型的微调。查看 [PR #7537](https://github.com/hiyouga/LLaMA-Factory/pull/7537) 以使用。
+
+[25/03/15] 我们支持了 **[SGLang](https://github.com/sgl-project/sglang)** 推理后端,请使用 `infer_backend: sglang` 启用。
+
+[25/03/12] 我们支持了 **[Gemma 3](https://huggingface.co/blog/gemma3)** 模型的微调。
+
+[25/02/24] 我们宣布开源 **[EasyR1](https://github.com/hiyouga/EasyR1)**,一个高效可扩展的多模态强化学习框架,支持高效的 GRPO 训练。
+
+[25/02/11] 我们支持了在导出模型时保存 **[Ollama](https://github.com/ollama/ollama)** 配置文件。详细用法请参照 [examples](examples/README_zh.md)。
+
+[25/02/05] 我们支持了在语音理解任务上微调 **[Qwen2-Audio](https://huggingface.co/Qwen/Qwen2-Audio-7B-Instruct)** 和 **[MiniCPM-o-2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6)** 模型。
+
+[25/01/31] 我们支持了 **[DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1)** 和 **[Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct)** 模型的微调。
+
+[25/01/15] 我们支持了 **[APOLLO](https://arxiv.org/abs/2412.05270)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。
+
+[25/01/14] 我们支持了 **[MiniCPM-o-2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6)** 和 **[MiniCPM-V-2.6](https://huggingface.co/openbmb/MiniCPM-V-2_6)** 模型的微调。感谢 [@BUAADreamer](https://github.com/BUAADreamer) 的 PR。
+
+[25/01/14] 我们支持了 **[InternLM 3](https://huggingface.co/collections/internlm/)** 模型的微调。感谢 [@hhaAndroid](https://github.com/hhaAndroid) 的 PR。
+
+[25/01/10] 我们支持了 **[Phi-4](https://huggingface.co/microsoft/phi-4)** 模型的微调。
+
+[24/12/21] 我们支持了使用 **[SwanLab](https://github.com/SwanHubX/SwanLab)** 跟踪与可视化实验。详细用法请参考 [此部分](#使用-swanlab-面板)。
+
+[24/11/27] 我们支持了 **[Skywork-o1](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)** 模型的微调和 **[OpenO1](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)** 数据集。
+
+[24/10/09] 我们支持了从 **[魔乐社区](https://modelers.cn/models)** 下载预训练模型和数据集。详细用法请参照 [此教程](#从魔乐社区下载)。
+
+[24/09/19] 我们支持了 **[Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/)** 模型的微调。
+
+[24/08/30] 我们支持了 **[Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/)** 模型的微调。感谢 [@simonJJJ](https://github.com/simonJJJ) 的 PR。
+
+[24/08/27] 我们支持了 **[Liger Kernel](https://github.com/linkedin/Liger-Kernel)**。请使用 `enable_liger_kernel: true` 来加速训练。
+
+[24/08/09] 我们支持了 **[Adam-mini](https://github.com/zyushun/Adam-mini)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。感谢 [@relic-yuexi](https://github.com/relic-yuexi) 的 PR。
+
+[24/07/04] 我们支持了[无污染打包训练](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing)。请使用 `neat_packing: true` 参数。感谢 [@chuan298](https://github.com/chuan298) 的 PR。
+
+[24/06/16] 我们支持了 **[PiSSA](https://arxiv.org/abs/2404.02948)** 算法。详细用法请参照 [examples](examples/README_zh.md)。
+
+[24/06/07] 我们支持了 **[Qwen2](https://qwenlm.github.io/blog/qwen2/)** 和 **[GLM-4](https://github.com/THUDM/GLM-4)** 模型的微调。
+
+[24/05/26] 我们支持了 **[SimPO](https://arxiv.org/abs/2405.14734)** 偏好对齐算法。详细用法请参照 [examples](examples/README_zh.md)。
+
+[24/05/20] 我们支持了 **PaliGemma** 系列模型的微调。注意 PaliGemma 是预训练模型,你需要使用 `paligemma` 模板进行微调使其获得对话能力。
+
+[24/05/18] 我们支持了 **[KTO](https://arxiv.org/abs/2402.01306)** 偏好对齐算法。详细用法请参照 [examples](examples/README_zh.md)。
+
+[24/05/14] 我们支持了昇腾 NPU 设备的训练和推理。详情请查阅[安装](#安装-llama-factory)部分。
+
+[24/04/26] 我们支持了多模态模型 **LLaVA-1.5** 的微调。详细用法请参照 [examples](examples/README_zh.md)。
+
+[24/04/22] 我们提供了在免费 T4 GPU 上微调 Llama-3 模型的 **[Colab 笔记本](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)**。Hugging Face 社区公开了两个利用 LLaMA Factory 微调的 Llama-3 模型,详情请见 [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) 和 [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese)。
+
+[24/04/21] 我们基于 [AstraMindAI 的仓库](https://github.com/astramind-ai/Mixture-of-depths)支持了 **[混合深度训练](https://arxiv.org/abs/2404.02258)**。详细用法请参照 [examples](examples/README_zh.md)。
+
+[24/04/16] 我们支持了 **[BAdam](https://arxiv.org/abs/2404.02827)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。
+
+[24/04/16] 我们支持了 **[unsloth](https://github.com/unslothai/unsloth)** 的长序列训练(24GB 可训练 Llama-2-7B-56k)。该方法相比 FlashAttention-2 提供了 **117%** 的训练速度和 **50%** 的显存节约。更多数据请见[此页面](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison)。
+
+[24/03/31] 我们支持了 **[ORPO](https://arxiv.org/abs/2403.07691)**。详细用法请参照 [examples](examples/README_zh.md)。
+
+[24/03/21] 我们的论文 "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" 可在 arXiv 上查看!
+
+[24/03/20] 我们支持了能在 2x24GB GPU 上微调 70B 模型的 **FSDP+QLoRA**。详细用法请参照 [examples](examples/README_zh.md)。
+
+[24/03/13] 我们支持了 **[LoRA+](https://arxiv.org/abs/2402.12354)**。详细用法请参照 [examples](examples/README_zh.md)。
+
+[24/03/07] 我们支持了 **[GaLore](https://arxiv.org/abs/2403.03507)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。
+
+[24/03/07] 我们集成了 **[vLLM](https://github.com/vllm-project/vllm)** 以实现极速并发推理。请使用 `infer_backend: vllm` 来获得 **270%** 的推理速度。
+
+[24/02/28] 我们支持了 **[DoRA](https://arxiv.org/abs/2402.09353)** 微调。请使用 `use_dora: true` 参数进行 DoRA 微调。
+
+[24/02/15] 我们支持了 [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro) 提出的**块扩展**方法。详细用法请参照 [examples](examples/README_zh.md)。
+
+[24/02/05] Qwen1.5(Qwen2 测试版)系列模型已在 LLaMA-Factory 中实现微调支持。详情请查阅该[博客页面](https://qwenlm.github.io/zh/blog/qwen1.5/)。
+
+[24/01/18] 我们针对绝大多数模型实现了 **Agent 微调**,微调时指定 `dataset: glaive_toolcall_zh` 即可使模型获得工具调用能力。
+
+[23/12/23] 我们针对 LLaMA, Mistral 和 Yi 模型支持了 **[unsloth](https://github.com/unslothai/unsloth)** 的 LoRA 训练加速。请使用 `use_unsloth: true` 参数启用 unsloth 优化。该方法可提供 **170%** 的训练速度,详情请查阅[此页面](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison)。
+
+[23/12/12] 我们支持了微调最新的混合专家模型 **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)**。硬件需求请查阅[此处](#硬件依赖)。
+
+[23/12/01] 我们支持了从 **[魔搭社区](https://modelscope.cn/models)** 下载预训练模型和数据集。详细用法请参照 [此教程](#从魔搭社区下载)。
+
+[23/10/21] 我们支持了 **[NEFTune](https://arxiv.org/abs/2310.05914)** 训练技巧。请使用 `neftune_noise_alpha: 5` 参数启用 NEFTune。
+
+[23/09/27] 我们针对 LLaMA 模型支持了 [LongLoRA](https://github.com/dvlab-research/LongLoRA) 提出的 **$S^2$-Attn**。请使用 `shift_attn: true` 参数以启用该功能。
+
+[23/09/23] 我们在项目中集成了 MMLU、C-Eval 和 CMMLU 评估集。详细用法请参照 [examples](examples/README_zh.md)。
+
+[23/09/10] 我们支持了 **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**。如果您使用的是 RTX4090、A100 或 H100 GPU,请使用 `flash_attn: fa2` 参数以启用 FlashAttention-2。
+
+[23/08/12] 我们支持了 **RoPE 插值**来扩展 LLaMA 模型的上下文长度。请使用 `rope_scaling: linear` 参数训练模型或使用 `rope_scaling: dynamic` 参数评估模型。
+
+[23/08/11] 我们支持了指令模型的 **[DPO 训练](https://arxiv.org/abs/2305.18290)**。详细用法请参照 [examples](examples/README_zh.md)。
+
+[23/07/31] 我们支持了**数据流式加载**。请使用 `streaming: true` 和 `max_steps: 10000` 参数来流式加载数据集。
+
+[23/07/29] 我们在 Hugging Face 发布了两个 13B 指令微调模型。详细内容请查阅我们的 Hugging Face 项目([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/Baichuan-13B-sft))。
+
+[23/07/18] 我们开发了支持训练和测试的**浏览器一体化界面**。请使用 `train_web.py` 在您的浏览器中微调模型。感谢 [@KanadeSiina](https://github.com/KanadeSiina) 和 [@codemayq](https://github.com/codemayq) 在该功能开发中付出的努力。
+
+[23/07/09] 我们开源了 **[FastEdit](https://github.com/hiyouga/FastEdit)** ⚡🩹,一个简单易用的、能迅速编辑大模型事实记忆的工具包。如果您感兴趣请关注我们的 [FastEdit](https://github.com/hiyouga/FastEdit) 项目。
+
+[23/06/29] 我们提供了一个**可复现的**指令模型微调示例,详细内容请查阅 [Baichuan-7B-sft](https://huggingface.co/hiyouga/Baichuan-7B-sft)。
+
+[23/06/22] 我们对齐了[示例 API](src/api_demo.py) 与 [OpenAI API](https://platform.openai.com/docs/api-reference/chat) 的格式,您可以将微调模型接入**任意基于 ChatGPT 的应用**中。
+
+[23/06/03] 我们实现了 4 比特的 LoRA 训练(也称 **[QLoRA](https://github.com/artidoro/qlora)**)。详细用法请参照 [examples](examples/README_zh.md)。
+
+
+
+> [!TIP]
+> 如果您无法使用最新的功能,请尝试重新拉取代码并再次安装 LLaMA-Factory。
+
+## 模型
+
+| 模型名 | 参数量 | Template |
+| ----------------------------------------------------------------- | -------------------------------- | ------------------- |
+| [Baichuan 2](https://huggingface.co/baichuan-inc) | 7B/13B | baichuan2 |
+| [BLOOM/BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | - |
+| [ChatGLM3](https://huggingface.co/THUDM) | 6B | chatglm3 |
+| [Command R](https://huggingface.co/CohereForAI) | 35B/104B | cohere |
+| [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | deepseek |
+| [DeepSeek 2.5/3](https://huggingface.co/deepseek-ai) | 236B/671B | deepseek3 |
+| [DeepSeek R1 (Distill)](https://huggingface.co/deepseek-ai) | 1.5B/7B/8B/14B/32B/70B/671B | deepseekr1 |
+| [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | falcon |
+| [Falcon-H1](https://huggingface.co/tiiuae) | 0.5B/1.5B/3B/7B/34B | falcon_h1 |
+| [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google) | 2B/7B/9B/27B | gemma/gemma2 |
+| [Gemma 3/Gemma 3n](https://huggingface.co/google) | 1B/4B/6B/8B/12B/27B | gemma3/gemma3n |
+| [GLM-4/GLM-4-0414/GLM-Z1](https://huggingface.co/zai-org) | 9B/32B | glm4/glmz1 |
+| [GLM-4.1V](https://huggingface.co/zai-org)* | 9B | glm4v |
+| [GLM-4.5](https://huggingface.co/zai-org)* | 106B/355B | glm4_moe |
+| [GPT-2](https://huggingface.co/openai-community) | 0.1B/0.4B/0.8B/1.5B | - |
+| [GPT-OSS](https://huggingface.co/openai) | 20B/120B | gpt |
+| [Granite 3.0-3.3](https://huggingface.co/ibm-granite) | 1B/2B/3B/8B | granite3 |
+| [Granite 4](https://huggingface.co/ibm-granite) | 7B | granite4 |
+| [Hunyuan](https://huggingface.co/tencent/) | 7B | hunyuan |
+| [Index](https://huggingface.co/IndexTeam) | 1.9B | index |
+| [InternLM 2-3](https://huggingface.co/internlm) | 7B/8B/20B | intern2 |
+| [InternVL 2.5-3](https://huggingface.co/OpenGVLab) | 1B/2B/8B/14B/38B/78B | intern_vl |
+| [Kimi-VL](https://huggingface.co/moonshotai) | 16B | kimi_vl |
+| [Llama](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | - |
+| [Llama 2](https://huggingface.co/meta-llama) | 7B/13B/70B | llama2 |
+| [Llama 3-3.3](https://huggingface.co/meta-llama) | 1B/3B/8B/70B | llama3 |
+| [Llama 4](https://huggingface.co/meta-llama) | 109B/402B | llama4 |
+| [Llama 3.2 Vision](https://huggingface.co/meta-llama) | 11B/90B | mllama |
+| [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | llava |
+| [LLaVA-NeXT](https://huggingface.co/llava-hf) | 7B/8B/13B/34B/72B/110B | llava_next |
+| [LLaVA-NeXT-Video](https://huggingface.co/llava-hf) | 7B/34B | llava_next_video |
+| [MiMo](https://huggingface.co/XiaomiMiMo) | 7B | mimo |
+| [MiniCPM](https://huggingface.co/openbmb) | 0.5B/1B/2B/4B/8B | cpm/cpm3/cpm4 |
+| [MiniCPM-o-2.6/MiniCPM-V-2.6](https://huggingface.co/openbmb) | 8B | minicpm_o/minicpm_v |
+| [Ministral/Mistral-Nemo](https://huggingface.co/mistralai) | 8B/12B | ministral |
+| [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | mistral |
+| [Mistral Small](https://huggingface.co/mistralai) | 24B | mistral_small |
+| [OLMo](https://huggingface.co/allenai) | 1B/7B | - |
+| [PaliGemma/PaliGemma2](https://huggingface.co/google) | 3B/10B/28B | paligemma |
+| [Phi-1.5/Phi-2](https://huggingface.co/microsoft) | 1.3B/2.7B | - |
+| [Phi-3/Phi-3.5](https://huggingface.co/microsoft) | 4B/14B | phi |
+| [Phi-3-small](https://huggingface.co/microsoft) | 7B | phi_small |
+| [Phi-4](https://huggingface.co/microsoft) | 14B | phi4 |
+| [Pixtral](https://huggingface.co/mistralai) | 12B | pixtral |
+| [Qwen (1-2.5) (Code/Math/MoE/QwQ)](https://huggingface.co/Qwen) | 0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen |
+| [Qwen3 (MoE)](https://huggingface.co/Qwen) | 0.6B/1.7B/4B/8B/14B/32B/235B | qwen3 |
+| [Qwen2-Audio](https://huggingface.co/Qwen) | 7B | qwen2_audio |
+| [Qwen2.5-Omni](https://huggingface.co/Qwen) | 3B/7B | qwen2_omni |
+| [Qwen2-VL/Qwen2.5-VL/QVQ](https://huggingface.co/Qwen) | 2B/3B/7B/32B/72B | qwen2_vl |
+| [Seed Coder](https://huggingface.co/ByteDance-Seed) | 8B | seed_coder |
+| [Skywork o1](https://huggingface.co/Skywork) | 8B | skywork_o1 |
+| [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - |
+| [TeleChat2](https://huggingface.co/Tele-AI) | 3B/7B/35B/115B | telechat2 |
+| [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse |
+| [Yi/Yi-1.5 (Code)](https://huggingface.co/01-ai) | 1.5B/6B/9B/34B | yi |
+| [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | yi_vl |
+| [Yuan 2](https://huggingface.co/IEITYuan) | 2B/51B/102B | yuan |
+
+> [!NOTE]
+> 对于所有“基座”(Base)模型,`template` 参数可以是 `default`, `alpaca`, `vicuna` 等任意值。但“对话”(Instruct/Chat)模型请务必使用**对应的模板**。
+>
+> 请务必在训练和推理时采用**完全一致**的模板。
+>
+> \*:您需要从 main 分支安装 `transformers` 并使用 `DISABLE_VERSION_CHECK=1` 来跳过版本检查。
+>
+> \*\*:您需要安装特定版本的 `transformers` 以使用该模型。
+
+项目所支持模型的完整列表请参阅 [constants.py](src/llamafactory/extras/constants.py)。
+
+您也可以在 [template.py](src/llamafactory/data/template.py) 中添加自己的对话模板。
+
+## 训练方法
+
+| 方法 | 全参数训练 | 部分参数训练 | LoRA | QLoRA |
+| --------------------- | ------------------ | ------------------ | ------------------ | ------------------ |
+| 预训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| 指令监督微调 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| 奖励模型训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| PPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| DPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| KTO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| ORPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| SimPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+
+> [!TIP]
+> 有关 PPO 的实现细节,请参考[此博客](https://newfacade.github.io/notes-on-reinforcement-learning/17-ppo-trl.html)。
+
+## 数据集
+
+
+**预训练数据集**
+
+- [Wiki Demo (en)](data/wiki_demo.txt)
+- [RefinedWeb (en)](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
+- [RedPajama V2 (en)](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2)
+- [Wikipedia (en)](https://huggingface.co/datasets/olm/olm-wikipedia-20221220)
+- [Wikipedia (zh)](https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered)
+- [Pile (en)](https://huggingface.co/datasets/EleutherAI/pile)
+- [SkyPile (zh)](https://huggingface.co/datasets/Skywork/SkyPile-150B)
+- [FineWeb (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
+- [FineWeb-Edu (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)
+- [The Stack (en)](https://huggingface.co/datasets/bigcode/the-stack)
+- [StarCoder (en)](https://huggingface.co/datasets/bigcode/starcoderdata)
+
+
+
+
+**指令微调数据集**
+
+- [Identity (en&zh)](data/identity.json)
+- [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
+- [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca-3)
+- [Alpaca GPT4 (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
+- [Glaive Function Calling V2 (en&zh)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2)
+- [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
+- [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)
+- [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN)
+- [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
+- [BELLE 0.5M (zh)](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)
+- [BELLE Dialogue 0.4M (zh)](https://huggingface.co/datasets/BelleGroup/generated_chat_0.4M)
+- [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M)
+- [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M)
+- [UltraChat (en)](https://github.com/thunlp/UltraChat)
+- [OpenPlatypus (en)](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
+- [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
+- [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
+- [OpenOrca (en)](https://huggingface.co/datasets/Open-Orca/OpenOrca)
+- [SlimOrca (en)](https://huggingface.co/datasets/Open-Orca/SlimOrca)
+- [MathInstruct (en)](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
+- [Firefly 1.1M (zh)](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M)
+- [Wiki QA (en)](https://huggingface.co/datasets/wiki_qa)
+- [Web QA (zh)](https://huggingface.co/datasets/suolyer/webqa)
+- [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn)
+- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
+- [deepctrl (en&zh)](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data)
+- [Advertise Generating (zh)](https://huggingface.co/datasets/HasturOfficial/adgen)
+- [ShareGPT Hyperfiltered (en)](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k)
+- [ShareGPT4 (en&zh)](https://huggingface.co/datasets/shibing624/sharegpt_gpt4)
+- [UltraChat 200k (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
+- [AgentInstruct (en)](https://huggingface.co/datasets/THUDM/AgentInstruct)
+- [LMSYS Chat 1M (en)](https://huggingface.co/datasets/lmsys/lmsys-chat-1m)
+- [Evol Instruct V2 (en)](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k)
+- [Cosmopedia (en)](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia)
+- [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction)
+- [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo)
+- [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2)
+- [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered)
+- [Magpie-ultra-v0.1 (en)](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1)
+- [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub)
+- [OpenO1-SFT (en&zh)](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)
+- [Open-Thoughts (en)](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
+- [Open-R1-Math (en)](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k)
+- [Chinese-DeepSeek-R1-Distill (zh)](https://huggingface.co/datasets/Congliu/Chinese-DeepSeek-R1-Distill-data-110k-SFT)
+- [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k)
+- [Pokemon-gpt4o-captions (en&zh)](https://huggingface.co/datasets/jugg1024/pokemon-gpt4o-captions)
+- [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de)
+- [Dolly 15k (de)](https://huggingface.co/datasets/mayflowergmbh/dolly-15k_de)
+- [Alpaca GPT4 (de)](https://huggingface.co/datasets/mayflowergmbh/alpaca-gpt4_de)
+- [OpenSchnabeltier (de)](https://huggingface.co/datasets/mayflowergmbh/openschnabeltier_de)
+- [Evol Instruct (de)](https://huggingface.co/datasets/mayflowergmbh/evol-instruct_de)
+- [Dolphin (de)](https://huggingface.co/datasets/mayflowergmbh/dolphin_de)
+- [Booksum (de)](https://huggingface.co/datasets/mayflowergmbh/booksum_de)
+- [Airoboros (de)](https://huggingface.co/datasets/mayflowergmbh/airoboros-3.0_de)
+- [Ultrachat (de)](https://huggingface.co/datasets/mayflowergmbh/ultra-chat_de)
+
+
+
+
+**偏好数据集**
+
+- [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k)
+- [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
+- [COIG-P (zh)](https://huggingface.co/datasets/m-a-p/COIG-P)
+- [RLHF-V (en)](https://huggingface.co/datasets/openbmb/RLHF-V-Dataset)
+- [VLFeedback (en)](https://huggingface.co/datasets/Zhihui/VLFeedback)
+- [RLAIF-V (en)](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset)
+- [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
+- [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
+- [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar)
+- [Orca DPO (de)](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de)
+- [KTO mixed (en)](https://huggingface.co/datasets/argilla/kto-mix-15k)
+
+
+
+部分数据集的使用需要确认,我们推荐使用下述命令登录您的 Hugging Face 账户。
+
+```bash
+pip install --upgrade huggingface_hub
+huggingface-cli login
+```
+
+## 软硬件依赖
+
+| 必需项 | 至少 | 推荐 |
+| ------------ | ------- | --------- |
+| python | 3.9 | 3.10 |
+| torch | 2.0.0 | 2.6.0 |
+| torchvision | 0.15.0 | 0.21.0 |
+| transformers | 4.49.0 | 4.50.0 |
+| datasets | 2.16.0 | 3.2.0 |
+| accelerate | 0.34.0 | 1.2.1 |
+| peft | 0.14.0 | 0.15.1 |
+| trl | 0.8.6 | 0.9.6 |
+
+| 可选项 | 至少 | 推荐 |
+| ------------ | ------- | --------- |
+| CUDA | 11.6 | 12.2 |
+| deepspeed | 0.10.0 | 0.16.4 |
+| bitsandbytes | 0.39.0 | 0.43.1 |
+| vllm | 0.4.3 | 0.8.2 |
+| flash-attn | 2.5.6 | 2.7.2 |
+
+### 硬件依赖
+
+\* *估算值*
+
+| 方法 | 精度 | 7B | 14B | 30B | 70B | `x`B |
+| ------------------------------- | ---- | ----- | ----- | ----- | ------ | ------- |
+| Full (`bf16` or `fp16`) | 32 | 120GB | 240GB | 600GB | 1200GB | `18x`GB |
+| Full (`pure_bf16`) | 16 | 60GB | 120GB | 300GB | 600GB | `8x`GB |
+| Freeze/LoRA/GaLore/APOLLO/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | `2x`GB |
+| QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | `x`GB |
+| QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | `x/2`GB |
+| QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | `x/4`GB |
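上表中的 `x` 表示模型参数量(单位:十亿)。下面是一个按经验公式粗略估算显存的小例子(仅为示意,实际占用还与序列长度、批大小等因素有关):

```shell
# 以 QLoRA 4 比特微调 30B 模型为例，按上表经验公式 x/2 估算所需显存（GB）
PARAMS_B=30
echo "约 $(( PARAMS_B / 2 ))GB"   # → 约 15GB
```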
+
+## 如何使用
+
+### 安装 LLaMA Factory
+
+> [!IMPORTANT]
+> 此步骤为必需。
+
+#### 从源码安装
+
+```bash
+git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git
+cd LLaMA-Factory
+pip install -e ".[torch,metrics]" --no-build-isolation
+```
+
+可选的额外依赖项:torch、torch-npu、metrics、deepspeed、liger-kernel、bitsandbytes、hqq、eetq、gptq、aqlm、vllm、sglang、galore、apollo、badam、adam-mini、qwen、minicpm_v、openmind、swanlab、dev
+
+#### 从镜像安装
+
+```bash
+docker run -it --rm --gpus=all --ipc=host hiyouga/llamafactory:latest
+```
+
+该镜像基于 Ubuntu 22.04(x86\_64)、CUDA 12.4、Python 3.11、PyTorch 2.6.0 和 Flash-attn 2.7.4 构建。
+
+查看全部镜像:https://hub.docker.com/r/hiyouga/llamafactory/tags
+
+请参阅[构建 Docker](#构建-docker) 来重新构建镜像。
+
+
+**使用 uv 构建虚拟环境**
+
+使用 [uv](https://github.com/astral-sh/uv) 创建隔离的 Python 环境:
+
+```bash
+uv sync --extra torch --extra metrics --prerelease=allow
+```
+
+在环境中运行 LLaMA-Factory:
+
+```bash
+uv run --prerelease=allow llamafactory-cli train examples/train_lora/llama3_lora_pretrain.yaml
+```
+
+
+
+
+**Windows 用户指南**
+
+#### 安装 PyTorch
+
+Windows 平台需要额外手动安装 GPU 版本的 PyTorch 依赖包,您可以参考[官方网站](https://pytorch.org/get-started/locally/)和以下命令安装并测试 PyTorch 是否正确安装。
+
+```bash
+pip uninstall torch torchvision torchaudio
+pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
+python -c "import torch; print(torch.cuda.is_available())"
+```
+
+如果看到 `True` 则说明安装成功。
+
+若遇到类似 `Can't pickle local object` 的报错,请设置 `dataloader_num_workers: 0`。
+
+#### 安装 BitsAndBytes
+
+如果要在 Windows 平台上开启量化 LoRA(QLoRA),需要安装预编译的 `bitsandbytes` 库,该库支持 CUDA 11.1 到 12.2,请根据您的 CUDA 版本选择适合的[发布版本](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels)。
+
+```bash
+pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl
+```
+
+#### 安装 Flash Attention-2
+
+如果要在 Windows 平台上开启 FlashAttention-2,请使用 [flash-attention-windows-wheel](https://huggingface.co/lldacing/flash-attention-windows-wheel) 中的脚本自行编译与安装。
+
+
+
+
+**昇腾 NPU 用户指南**
+
+在昇腾 NPU 设备上安装 LLaMA Factory 时,请升级 Python 到 3.10 及以上,并需要指定额外依赖项,使用 `pip install -e ".[torch-npu,metrics]"` 命令安装。此外,还需要安装 **[Ascend CANN Toolkit 与 Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**,安装方法请参考[安装教程](https://www.hiascend.com/document/detail/zh/CANNCommunityEdition/80RC2alpha002/quickstart/quickstart/quickstart_18_0004.html)或使用以下命令:
+
+```bash
+# 请替换 URL 为 CANN 版本和设备型号对应的 URL
+# 安装 CANN Toolkit
+wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run
+bash Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run --install
+
+# 安装 CANN Kernels
+wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run
+bash Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run --install
+
+# 设置环境变量
+source /usr/local/Ascend/ascend-toolkit/set_env.sh
+```
+
+| 依赖项 | 至少 | 推荐 |
+| ------------ | ------- | -------------- |
+| CANN | 8.0.RC1 | 8.0.0.alpha002 |
+| torch | 2.1.0 | 2.4.0 |
+| torch-npu | 2.1.0 | 2.4.0.post2 |
+| deepspeed | 0.13.2 | 0.13.2 |
+| vllm-ascend | - | 0.7.3 |
+
+请使用 `ASCEND_RT_VISIBLE_DEVICES` 而非 `CUDA_VISIBLE_DEVICES` 来指定运算设备。
+
+如果遇到无法正常推理的情况,请尝试设置 `do_sample: false`。
+
+下载预构建 Docker 镜像:[32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html)
+
+#### 安装 BitsAndBytes
+
+如果要在 Ascend NPU 上进行基于 bitsandbytes 的 QLoRA 量化微调,请执行如下步骤:
+
+1. 手动编译 bitsandbytes:请参考[安装文档](https://huggingface.co/docs/bitsandbytes/installation?backend=Ascend+NPU&platform=Ascend+NPU)完成 NPU 版的 bitsandbytes 安装,编译要求环境 cmake 版本不低于 3.22.1,g++ 版本不低于 12.x。
+
+```bash
+# 从源码安装 bitsandbytes
+# 克隆 bitsandbytes 仓库,Ascend NPU 的支持目前位于 multi-backend-refactor 分支
+git clone -b multi-backend-refactor https://github.com/bitsandbytes-foundation/bitsandbytes.git
+cd bitsandbytes/
+
+# 安装依赖
+pip install -r requirements-dev.txt
+
+# 安装编译工具依赖,该步骤在不同系统上命令有所不同,供参考
+apt-get install -y build-essential cmake
+
+# 编译 & 安装
+cmake -DCOMPUTE_BACKEND=npu -S .
+make
+pip install .
+```
+
+2. 安装 transformers 的 main 分支版本。
+
+```bash
+git clone -b main https://github.com/huggingface/transformers.git
+cd transformers
+pip install .
+```
+
+3. 在训练参数中设置 `double_quantization: false`,可参考[示例](examples/train_qlora/llama3_lora_sft_bnb_npu.yaml)。
+
+
+
+### 数据准备
+
+关于数据集文件的格式,请参考 [data/README_zh.md](data/README_zh.md) 的内容。你可以使用 HuggingFace / ModelScope / Modelers 上的数据集或加载本地数据集。
+
+> [!NOTE]
+> 使用自定义数据集时,请更新 `data/dataset_info.json` 文件。
+
+您也可以使用 **[Easy Dataset](https://github.com/ConardLi/easy-dataset)**、**[DataFlow](https://github.com/OpenDCAI/DataFlow)** 和 **[GraphGen](https://github.com/open-sciencelab/GraphGen)** 构建用于微调的合成数据。
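例如,若要新增一个本地的 Alpaca 格式数据集,可以在 `data/dataset_info.json` 中添加类似下面的条目(其中 `my_dataset` 与文件名均为示意,字段含义请以 [data/README_zh.md](data/README_zh.md) 为准):

```json
{
  "my_dataset": {
    "file_name": "my_dataset.json",
    "columns": {
      "prompt": "instruction",
      "query": "input",
      "response": "output"
    }
  }
}
```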
+
+### 快速开始
+
+下面三行命令分别对 Llama3-8B-Instruct 模型进行 LoRA **微调**、**推理**和**合并**。
+
+```bash
+llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
+llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
+llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml
+```
+
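其中 `llama3_lora_sft.yaml` 的关键配置大致如下(仅为示意,完整字段请以仓库内 `examples/train_lora/llama3_lora_sft.yaml` 为准):

```yaml
# 示意配置，实际内容以 examples 目录中的文件为准
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
dataset: identity,alpaca_en_demo
template: llama3
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true
```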
+高级用法请参考 [examples/README_zh.md](examples/README_zh.md)(包括多 GPU 微调)。
+
+> [!TIP]
+> 使用 `llamafactory-cli help` 显示帮助信息。
+>
+> 遇到报错请先看[常见问题](https://github.com/hiyouga/LLaMA-Factory/issues/4614)。
+
+### LLaMA Board 可视化微调(由 [Gradio](https://github.com/gradio-app/gradio) 驱动)
+
+```bash
+llamafactory-cli webui
+```
+
+### 构建 Docker
+
+CUDA 用户:
+
+```bash
+cd docker/docker-cuda/
+docker compose up -d
+docker compose exec llamafactory bash
+```
+
+昇腾 NPU 用户:
+
+```bash
+cd docker/docker-npu/
+docker compose up -d
+docker compose exec llamafactory bash
+```
+
+AMD ROCm 用户:
+
+```bash
+cd docker/docker-rocm/
+docker compose up -d
+docker compose exec llamafactory bash
+```
+
+
Build without Docker Compose
+
+For CUDA users:
+
+```bash
+docker build -f ./docker/docker-cuda/Dockerfile \
+ --build-arg PIP_INDEX=https://pypi.org/simple \
+ --build-arg EXTRAS=metrics \
+ -t llamafactory:latest .
+
+docker run -dit --ipc=host --gpus=all \
+ -p 7860:7860 \
+ -p 8000:8000 \
+ --name llamafactory \
+ llamafactory:latest
+
+docker exec -it llamafactory bash
+```
+
+For Ascend NPU users:
+
+```bash
+docker build -f ./docker/docker-npu/Dockerfile \
+ --build-arg PIP_INDEX=https://pypi.org/simple \
+ --build-arg EXTRAS=torch-npu,metrics \
+ -t llamafactory:latest .
+
+docker run -dit --ipc=host \
+ -v /usr/local/dcmi:/usr/local/dcmi \
+ -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
+ -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
+ -v /etc/ascend_install.info:/etc/ascend_install.info \
+ -p 7860:7860 \
+ -p 8000:8000 \
+ --device /dev/davinci0 \
+ --device /dev/davinci_manager \
+ --device /dev/devmm_svm \
+ --device /dev/hisi_hdc \
+ --name llamafactory \
+ llamafactory:latest
+
+docker exec -it llamafactory bash
+```
+
+For AMD ROCm users:
+
+```bash
+docker build -f ./docker/docker-rocm/Dockerfile \
+ --build-arg PIP_INDEX=https://pypi.org/simple \
+ --build-arg EXTRAS=metrics \
+ -t llamafactory:latest .
+
+docker run -dit --ipc=host \
+ -p 7860:7860 \
+ -p 8000:8000 \
+ --device /dev/kfd \
+ --device /dev/dri \
+ --name llamafactory \
+ llamafactory:latest
+
+docker exec -it llamafactory bash
+```
+
+
+
+
Use Docker volumes
+
+You can uncomment `VOLUME [ "/root/.cache/huggingface", "/app/shared_data", "/app/output" ]` in the Dockerfile to use data volumes.
+
+When starting the container (not at image build time), mount the volumes with arguments such as `-v ./hf_cache:/root/.cache/huggingface`. The purpose of each volume is as follows.
+
+- `hf_cache`: reuses the Hugging Face cache on the host machine.
+- `shared_data`: the directory on the host machine where datasets are stored.
+- `output`: set the export directory to this path so that the exported model can be accessed on the host machine.
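+
+Putting the pieces together, a `docker run` invocation that mounts all three volumes might look like the following sketch (the host-side paths `./hf_cache`, `./shared_data`, and `./output` are illustrative):
+
+```bash
+docker run -dit --ipc=host --gpus=all \
+    -v ./hf_cache:/root/.cache/huggingface \
+    -v ./shared_data:/app/shared_data \
+    -v ./output:/app/output \
+    -p 7860:7860 \
+    -p 8000:8000 \
+    --name llamafactory \
+    llamafactory:latest
+```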
+
+
+
+### Deploy with OpenAI-style API and vLLM
+
+```bash
+API_PORT=8000 llamafactory-cli api examples/inference/llama3.yaml infer_backend=vllm vllm_enforce_eager=true
+```
+
+> [!TIP]
+> See [this page](https://platform.openai.com/docs/api-reference/chat/create) for the API documentation.
+>
+> Examples: [image understanding](scripts/api_example/test_image.py) | [tool calling](scripts/api_example/test_toolcall.py)
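+
+Once the server is running, any OpenAI-compatible client can call it. A minimal `curl` sketch against the local endpoint (the port follows the command above; the `model` field value is a placeholder, since the server serves the single loaded model):
+
+```bash
+curl http://localhost:8000/v1/chat/completions \
+    -H "Content-Type: application/json" \
+    -d '{
+        "model": "llama3",
+        "messages": [{"role": "user", "content": "Hello!"}]
+    }'
+```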
+
+### Download from ModelScope Hub
+
+If you have trouble downloading models and datasets from Hugging Face, you can use ModelScope as follows.
+
+```bash
+export USE_MODELSCOPE_HUB=1 # use `set USE_MODELSCOPE_HUB=1` on Windows
+```
+
+Set `model_name_or_path` to a model ID to load the corresponding model. You can browse all available models on the [ModelScope Hub](https://modelscope.cn/models), e.g., `LLM-Research/Meta-Llama-3-8B-Instruct`.
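+
+For example, the corresponding line in a training or inference yaml file:
+
+```yaml
+model_name_or_path: LLM-Research/Meta-Llama-3-8B-Instruct
+```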
+
+### Download from Modelers Hub
+
+You can also download datasets and models from the Modelers Hub as follows.
+
+```bash
+export USE_OPENMIND_HUB=1 # use `set USE_OPENMIND_HUB=1` on Windows
+```
+
+Set `model_name_or_path` to a model ID to load the corresponding model. You can browse all available models on the [Modelers Hub](https://modelers.cn/models), e.g., `TeleAI/TeleChat-7B-pt`.
+
+### Use W&B Logger
+
+To log experiment results with [Weights & Biases](https://wandb.ai), add the following arguments to the yaml file.
+
+```yaml
+report_to: wandb
+run_name: test_run # optional
+```
+
+Set `WANDB_API_KEY` to [your key](https://wandb.ai/authorize) when launching the training task to log in to your W&B account.
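+
+For example, a shell sketch of passing the key via the environment when launching a run (the key value is a placeholder):
+
+```bash
+export WANDB_API_KEY=<your_api_key>
+llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml
+```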
+
+### Use SwanLab Logger
+
+To log experiment results with [SwanLab](https://github.com/SwanHubX/SwanLab), add the following arguments to the yaml file.
+
+```yaml
+use_swanlab: true
+swanlab_run_name: test_run # optional
+```
+
+When launching the training task, you can log in to your SwanLab account in one of three ways:
+
+Option 1: add `swanlab_api_key=<your_api_key>` to the yaml file, setting it to your [API key](https://swanlab.cn/settings).
+Option 2: set the environment variable `SWANLAB_API_KEY` to your [API key](https://swanlab.cn/settings).
+Option 3: run `swanlab login` to log in before launching.
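+
+As a shell sketch of the latter two options (the key value is a placeholder):
+
+```bash
+# Option 2: set the environment variable before launching
+export SWANLAB_API_KEY=<your_api_key>
+
+# Option 3: log in interactively before launching
+swanlab login
+```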
+
+## Projects using LLaMA Factory
+
+If you have a project that you would like to add to the list below, please contact us via email or create a pull request.
+
Click to show
+
+1. Wang et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. 2023. [[arxiv]](https://arxiv.org/abs/2308.02223)
+1. Yu et al. Open, Closed, or Small Language Models for Text Classification? 2023. [[arxiv]](https://arxiv.org/abs/2308.10092)
+1. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [[arxiv]](https://arxiv.org/abs/2308.10526)
+1. Luceri et al. Leveraging Large Language Models to Detect Influence Campaigns in Social Media. 2023. [[arxiv]](https://arxiv.org/abs/2311.07816)
+1. Zhang et al. Alleviating Hallucinations of Large Language Models through Induced Hallucinations. 2023. [[arxiv]](https://arxiv.org/abs/2312.15710)
+1. Wang et al. Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs. KDD 2024. [[arxiv]](https://arxiv.org/abs/2401.04319)
+1. Wang et al. CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2401.07286)
+1. Choi et al. FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2402.05904)
+1. Zhang et al. AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts. 2024. [[arxiv]](https://arxiv.org/abs/2402.07625)
+1. Lyu et al. KnowTuning: Knowledge-aware Fine-tuning for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11176)
+1. Yang et al. LaCo: Large Language Model Pruning via Layer Collapse. 2024. [[arxiv]](https://arxiv.org/abs/2402.11187)
+1. Bhardwaj et al. Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic. 2024. [[arxiv]](https://arxiv.org/abs/2402.11746)
+1. Yang et al. Enhancing Empathetic Response Generation by Augmenting LLMs with Small-scale Empathetic Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11801)
+1. Yi et al. Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2402.11809)
+1. Cao et al. Head-wise Shareable Attention for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11819)
+1. Zhang et al. Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages. 2024. [[arxiv]](https://arxiv.org/abs/2402.12204)
+1. Kim et al. Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.14714)
+1. Yu et al. KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models. ACL 2024. [[arxiv]](https://arxiv.org/abs/2402.15043)
+1. Huang et al. Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2403.02333)
+1. Duan et al. Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization. 2024. [[arxiv]](https://arxiv.org/abs/2403.03419)
+1. Xie and Schwertfeger. Empowering Robotics with Large Language Models: osmAG Map Comprehension with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2403.08228)
+1. Wu et al. Large Language Models are Parallel Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2403.09073)
+1. Zhang et al. EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling. 2024. [[arxiv]](https://arxiv.org/abs/2403.14541)
+1. Weller et al. FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2403.15246)
+1. Hongbin Na. CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. COLING 2024. [[arxiv]](https://arxiv.org/abs/2403.16008)
+1. Zan et al. CodeS: Natural Language to Code Repository via Multi-Layer Sketch. 2024. [[arxiv]](https://arxiv.org/abs/2403.16443)
+1. Liu et al. Extensive Self-Contrast Enables Feedback-Free Language Model Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2404.00604)
+1. Luo et al. BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.02827)
+1. Du et al. Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2404.04167)
+1. Ma et al. Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. ICML 2024. [[arxiv]](https://arxiv.org/abs/2404.04316)
+1. Liu et al. Dynamic Generation of Personalities with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.07084)
+1. Shang et al. How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.09836)
+1. Huang et al. LLMTune: Accelerate Database Knob Tuning with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.11581)
+1. Deng et al. Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction. 2024. [[arxiv]](https://arxiv.org/abs/2404.14215)
+1. Acikgoz et al. Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2404.16621)
+1. Zhang et al. Small Language Models Need Strong Verifiers to Self-Correct Reasoning. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2404.17140)
+1. Zhou et al. FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering. NAACL 2024. [[arxiv]](https://arxiv.org/abs/2404.18585)
+1. Xu et al. Large Language Models for Cyber Security: A Systematic Literature Review. 2024. [[arxiv]](https://arxiv.org/abs/2405.04760)
+1. Dammu et al. "They are uncultured": Unveiling Covert Harms and Social Threats in LLM Generated Conversations. 2024. [[arxiv]](https://arxiv.org/abs/2405.05378)
+1. Yi et al. A safety realignment framework via subspace-oriented model fusion for large language models. 2024. [[arxiv]](https://arxiv.org/abs/2405.09055)
+1. Lou et al. SPO: Multi-Dimensional Preference Sequential Alignment With Implicit Reward Modeling. 2024. [[arxiv]](https://arxiv.org/abs/2405.12739)
+1. Zhang et al. Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2405.13816)
+1. Zhang et al. TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2405.20215)
+1. Zihong Chen. Sentence Segmentation and Sentence Punctuation Based on XunziALLM. 2024. [[paper]](https://aclanthology.org/2024.lt4hala-1.30)
+1. Gao et al. The Best of Both Worlds: Toward an Honest and Helpful Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2406.00380)
+1. Wang and Song. MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset. 2024. [[arxiv]](https://arxiv.org/abs/2406.02106)
+1. Hu et al. Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models. 2024. [[arxiv]](https://arxiv.org/abs/2406.03136)
+1. Ge et al. Time Sensitive Knowledge Editing through Efficient Finetuning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2406.04496)
+1. Tan et al. Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions. 2024. [[arxiv]](https://arxiv.org/abs/2406.05688)
+1. Song et al. Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters. 2024. [[arxiv]](https://arxiv.org/abs/2406.05955)
+1. Gu et al. RWKV-CLIP: A Robust Vision-Language Representation Learner. 2024. [[arxiv]](https://arxiv.org/abs/2406.06973)
+1. Chen et al. Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees. 2024. [[arxiv]](https://arxiv.org/abs/2406.07115)
+1. Zhu et al. Are Large Language Models Good Statisticians? 2024. [[arxiv]](https://arxiv.org/abs/2406.07815)
+1. Li et al. Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2406.10099)
+1. Ding et al. IntentionQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce. 2024. [[arxiv]](https://arxiv.org/abs/2406.10173)
+1. He et al. COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities. 2024. [[arxiv]](https://arxiv.org/abs/2406.12074)
+1. Lin et al. FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving. 2024. [[arxiv]](https://arxiv.org/abs/2406.14408)
+1. Treutlein et al. Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data. 2024. [[arxiv]](https://arxiv.org/abs/2406.14546)
+1. Feng et al. SS-Bench: A Benchmark for Social Story Generation and Evaluation. 2024. [[arxiv]](https://arxiv.org/abs/2406.15695)
+1. Feng et al. Self-Constructed Context Decompilation with Fined-grained Alignment Enhancement. 2024. [[arxiv]](https://arxiv.org/abs/2406.17233)
+1. Liu et al. Large Language Models for Cuffless Blood Pressure Measurement From Wearable Biosignals. 2024. [[arxiv]](https://arxiv.org/abs/2406.18069)
+1. Iyer et al. Exploring Very Low-Resource Translation with LLMs: The University of Edinburgh's Submission to AmericasNLP 2024 Translation Task. AmericasNLP 2024. [[paper]](https://aclanthology.org/2024.americasnlp-1.25)
+1. Li et al. Calibrating LLMs with Preference Optimization on Thought Trees for Generating Rationale in Science Question Scoring. 2024. [[arxiv]](https://arxiv.org/abs/2406.19949)
+1. Yang et al. Financial Knowledge Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2407.00365)
+1. Lin et al. DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging. 2024. [[arxiv]](https://arxiv.org/abs/2407.01470)
+1. Bako et al. Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization. 2024. [[arxiv]](https://arxiv.org/abs/2407.06129)
+1. Huang et al. RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization. 2024. [[arxiv]](https://arxiv.org/abs/2407.08044)
+1. Jiang et al. LLM-Collaboration on Automatic Science Journalism for the General Audience. 2024. [[arxiv]](https://arxiv.org/abs/2407.09756)
+1. Inouye et al. Applied Auto-tuning on LoRA Hyperparameters. 2024. [[paper]](https://scholarcommons.scu.edu/cseng_senior/272/)
+1. Qi et al. Research on Tibetan Tourism Viewpoints information generation system based on LLM. 2024. [[arxiv]](https://arxiv.org/abs/2407.13561)
+1. Xu et al. Course-Correction: Safety Alignment Using Synthetic Preferences. 2024. [[arxiv]](https://arxiv.org/abs/2407.16637)
+1. Sun et al. LAMBDA: A Large Model Based Data Agent. 2024. [[arxiv]](https://arxiv.org/abs/2407.17535)
+1. Zhu et al. CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2407.19705)
+1. Yu et al. Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2408.00137)
+1. Xie et al. The Power of Personalized Datasets: Advancing Chinese Composition Writing for Elementary School through Targeted Model Fine-Tuning. IALP 2024. [[paper]](https://www.asianlp.sg/conferences/ialp2024/proceedings/papers/IALP2024_P055.pdf)
+1. Liu et al. Instruct-Code-Llama: Improving Capabilities of Language Model in Competition Level Code Generation by Online Judge Feedback. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_11)
+1. Wang et al. Cybernetic Sentinels: Unveiling the Impact of Safety Data Selection on Model Security in Supervised Fine-Tuning. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_23)
+1. Xia et al. Understanding the Performance and Estimating the Cost of LLM Fine-Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2408.04693)
+1. Zeng et al. Perceive, Reflect, and Plan: Designing LLM Agent for Goal-Directed City Navigation without Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2408.04168)
+1. Xia et al. Using Pre-trained Language Model for Accurate ESG Prediction. FinNLP 2024. [[paper]](https://aclanthology.org/2024.finnlp-2.1/)
+1. Liang et al. I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm. 2024. [[arxiv]](https://arxiv.org/abs/2408.08072)
+1. Bai et al. Aligning Large Language Model with Direct Multi-Preference Optimization for Recommendation. CIKM 2024. [[paper]](https://dl.acm.org/doi/10.1145/3627673.3679611)
+1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: A large language model for astronomy, fine-tuned from ChatGLM2-6B and Qwen-14B on astronomical data.
+1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: A large language model specialized in the Chinese legal domain, fine-tuned from Baichuan-13B, capable of legal reasoning and knowledge retrieval.
+1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: A Chinese medical large language model, fine-tuned from Baichuan-7B and ChatGLM-6B on Chinese medical data.
+1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: A medical large language model project, fine-tuned from LLaMA2-7B and Baichuan-13B on Chinese medical data.
+1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**: A series of MBTI personality large models, capable of giving any LLM one of the 16 personality types through tailored datasets and training methods.
+1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**: A large language model for generating Stable Diffusion prompts. [[demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt)
+1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**: A Chinese multimodal medical large model, fine-tuned from LLaVA-1.5-7B on Chinese multimodal medical data.
+1. **[AutoRE](https://github.com/THUDM/AutoRE)**: A document-level relation extraction system based on large language models.
+1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**: A development kit for fine-tuning large language models on Windows PCs with NVIDIA RTX devices.
+1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**: A low-code development tool for building multi-agent LLM applications, supporting model fine-tuning based on LLaMA Factory.
+1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**: A full-pipeline codebase for fine-tuning, inference, and distillation of RAG retrieval models. [[blog]](https://zhuanlan.zhihu.com/p/987727357)
+1. **[360-LLaMA-Factory](https://github.com/Qihoo360/360-LLaMA-Factory)**: A modified fork that supports long-sequence SFT and DPO training via Ring Attention.
+1. **[Sky-T1](https://novasky-ai.github.io/posts/sky-t1/)**: A low-cost o1-like long-reasoning model fine-tuned by NovaSky AI.
+1. **[WeClone](https://github.com/xming521/WeClone)**: A one-stop solution for creating a digital avatar from chat history.
+
+
+
+## License
+
+The code in this repository is licensed under the [Apache-2.0](LICENSE) License.
+
+Please follow the corresponding model licenses when using the model weights: [Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [GPT-2](https://github.com/openai/gpt-2/blob/master/LICENSE) / [Granite](LICENSE) / [Index](https://huggingface.co/IndexTeam/Index-1.9B/blob/main/LICENSE) / [InternLM](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [Llama 4](https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE) / [MiniCPM](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md) / [Mistral/Mixtral/Pixtral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3/Phi-4](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [Skywork](https://huggingface.co/Skywork/Skywork-13B-base/blob/main/Skywork%20Community%20License.pdf) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [TeleChat2](https://huggingface.co/Tele-AI/telechat-7B/blob/main/TeleChat%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan)
+
+## Citation
+
+If this work is helpful, please kindly cite as:
+
+```bibtex
+@inproceedings{zheng2024llamafactory,
+ title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models},
+ author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma},
+ booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
+ address={Bangkok, Thailand},
+ publisher={Association for Computational Linguistics},
+ year={2024},
+ url={http://arxiv.org/abs/2403.13372}
+}
+```
+
+## Acknowledgement
+
+This repo benefits from [PEFT](https://github.com/huggingface/peft), [TRL](https://github.com/huggingface/trl), [QLoRA](https://github.com/artidoro/qlora) and [FastChat](https://github.com/lm-sys/FastChat). Thanks for their wonderful works.
+
+## Star History
+
+
diff --git a/src/train_sft/assets/alaya_new.svg b/src/train_sft/assets/alaya_new.svg
new file mode 100644
index 0000000000000000000000000000000000000000..3568e1511fb758b46cfc84122554ea23c8a0b1c7
--- /dev/null
+++ b/src/train_sft/assets/alaya_new.svg
@@ -0,0 +1,38 @@
+<!-- SVG markup stripped during extraction; the file renders an "Open in Alaya NeW" badge (elements: background, "Layer 1" group, "Open in Alaya NeW" label) -->
diff --git a/src/train_sft/assets/serpapi.svg b/src/train_sft/assets/serpapi.svg
new file mode 100644
index 0000000000000000000000000000000000000000..79bdf4001382b368148e9c3b611507bb7c0494f9
--- /dev/null
+++ b/src/train_sft/assets/serpapi.svg
@@ -0,0 +1 @@
+<!-- SVG markup stripped during extraction (serpapi.svg) -->
diff --git a/src/train_sft/docker/docker-cuda/Dockerfile b/src/train_sft/docker/docker-cuda/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..afca20871029757494bf9543d5fd44fd9afbc1a1
--- /dev/null
+++ b/src/train_sft/docker/docker-cuda/Dockerfile
@@ -0,0 +1,66 @@
+# https://hub.docker.com/r/hiyouga/pytorch/tags
+ARG BASE_IMAGE=hiyouga/pytorch:th2.6.0-cu124-flashattn2.7.4-cxx11abi0-devel
+FROM ${BASE_IMAGE}
+
+# Installation arguments
+ARG PIP_INDEX=https://pypi.org/simple
+ARG EXTRAS=metrics
+ARG INSTALL_FLASHATTN=false
+ARG HTTP_PROXY=""
+
+# Define environments
+ENV MAX_JOBS=16
+ENV FLASH_ATTENTION_FORCE_BUILD=TRUE
+ENV VLLM_WORKER_MULTIPROC_METHOD=spawn
+ENV DEBIAN_FRONTEND=noninteractive
+ENV NODE_OPTIONS=""
+ENV PIP_ROOT_USER_ACTION=ignore
+ENV http_proxy="${HTTP_PROXY}"
+ENV https_proxy="${HTTP_PROXY}"
+
+# Use Bash instead of default /bin/sh
+SHELL ["/bin/bash", "-c"]
+
+# Set the working directory
+WORKDIR /app
+
+# Change pip source
+RUN pip config set global.index-url "${PIP_INDEX}" && \
+ pip config set global.extra-index-url "${PIP_INDEX}" && \
+ pip install --no-cache-dir --upgrade pip packaging wheel setuptools
+
+# Install the requirements
+COPY requirements.txt /app
+RUN pip install --no-cache-dir -r requirements.txt
+
+# Copy the rest of the application into the image
+COPY . /app
+
+# Install LLaMA Factory
+RUN pip install --no-cache-dir -e ".[${EXTRAS}]" --no-build-isolation
+
+# Rebuild flash attention
+RUN if [ "${INSTALL_FLASHATTN}" == "true" ]; then \
+ pip uninstall -y ninja && \
+ pip install --no-cache-dir ninja && \
+ pip install --no-cache-dir flash-attn --no-build-isolation; \
+ fi
+
+# Set up volumes
+# VOLUME [ "/root/.cache/huggingface", "/app/shared_data", "/app/output" ]
+
+# Expose port 7860 for LLaMA Board
+ENV GRADIO_SERVER_PORT=7860
+EXPOSE 7860
+
+# Expose port 8000 for API service
+ENV API_PORT=8000
+EXPOSE 8000
+
+# unset proxy
+ENV http_proxy=
+ENV https_proxy=
+
+# Reset pip config
+RUN pip config unset global.index-url && \
+ pip config unset global.extra-index-url
diff --git a/src/train_sft/docker/docker-cuda/Dockerfile.base b/src/train_sft/docker/docker-cuda/Dockerfile.base
new file mode 100644
index 0000000000000000000000000000000000000000..542eb8dcced27998875cbd5dc5ee08b61e776db2
--- /dev/null
+++ b/src/train_sft/docker/docker-cuda/Dockerfile.base
@@ -0,0 +1,55 @@
+# Start from the pytorch official image (ubuntu-22.04 + cuda-12.4.1 + python-3.11)
+# https://hub.docker.com/r/pytorch/pytorch/tags
+FROM pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel
+
+# Define environments
+ENV MAX_JOBS=16
+ENV VLLM_WORKER_MULTIPROC_METHOD=spawn
+ENV DEBIAN_FRONTEND=noninteractive
+ENV NODE_OPTIONS=""
+ENV PIP_ROOT_USER_ACTION=ignore
+
+# Define installation arguments
+ARG APT_SOURCE=https://mirrors.tuna.tsinghua.edu.cn/ubuntu/
+ARG PIP_INDEX=https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
+
+# Set apt source
+RUN cp /etc/apt/sources.list /etc/apt/sources.list.bak && \
+ { \
+ echo "deb ${APT_SOURCE} jammy main restricted universe multiverse"; \
+ echo "deb ${APT_SOURCE} jammy-updates main restricted universe multiverse"; \
+ echo "deb ${APT_SOURCE} jammy-backports main restricted universe multiverse"; \
+ echo "deb ${APT_SOURCE} jammy-security main restricted universe multiverse"; \
+ } > /etc/apt/sources.list
+
+# Install systemctl and wget
+RUN apt-get update && \
+ apt-get install -y -o Dpkg::Options::="--force-confdef" systemd wget && \
+ apt-get clean
+
+# Install git and vim
+RUN apt-get update && \
+ apt-get install -y git vim && \
+ apt-get clean
+
+# Install gcc and g++
+RUN apt-get update && \
+ apt-get install -y gcc g++ && \
+ apt-get clean
+
+# Change pip source
+RUN pip config set global.index-url "${PIP_INDEX}" && \
+ pip config set global.extra-index-url "${PIP_INDEX}" && \
+ pip install --no-cache-dir --upgrade pip packaging wheel setuptools
+
+# Install flash-attn-2.7.4.post1 (cxx11abi=False)
+RUN wget -nv https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp311-cp311-linux_x86_64.whl && \
+ pip install --no-cache-dir flash_attn-2.7.4.post1+cu12torch2.6cxx11abiFALSE-cp311-cp311-linux_x86_64.whl
+
+# Install flashinfer-0.2.2.post1+cu124 (cxx11abi=False)
+RUN wget -nv https://github.com/flashinfer-ai/flashinfer/releases/download/v0.2.2.post1/flashinfer_python-0.2.2.post1+cu124torch2.6-cp38-abi3-linux_x86_64.whl && \
+ pip install --no-cache-dir flashinfer_python-0.2.2.post1+cu124torch2.6-cp38-abi3-linux_x86_64.whl
+
+# Reset pip config
+RUN pip config unset global.index-url && \
+ pip config unset global.extra-index-url
diff --git a/src/train_sft/docker/docker-cuda/README.md b/src/train_sft/docker/docker-cuda/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..86707ce94852808d77d5f382ba5e1cff4c152a2b
--- /dev/null
+++ b/src/train_sft/docker/docker-cuda/README.md
@@ -0,0 +1,111 @@
+# Docker Setup for NVIDIA GPUs
+
+This directory contains Docker configuration files for running LLaMA Factory with NVIDIA GPU support.
+
+## Prerequisites
+
+### Linux-specific Requirements
+
+Before running the Docker container with GPU support, you need to install the following packages:
+
+1. **Docker**: The container runtime
+ ```bash
+ # Ubuntu/Debian
+ sudo apt-get update
+ sudo apt-get install docker.io
+
+ # Or install Docker Engine from the official repository:
+ # https://docs.docker.com/engine/install/
+ ```
+
+2. **Docker Compose** (if using the docker-compose method):
+ ```bash
+ # Ubuntu/Debian
+ sudo apt-get install docker-compose
+
+ # Or install the latest version:
+ # https://docs.docker.com/compose/install/
+ ```
+
+3. **NVIDIA Container Toolkit** (required for GPU support):
+ ```bash
+ # Add the NVIDIA GPG key and repository
+ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
+ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
+ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
+
+ # Install nvidia-container-toolkit
+ sudo apt-get update
+ sudo apt-get install -y nvidia-container-toolkit
+
+ # Restart Docker to apply changes
+ sudo systemctl restart docker
+ ```
+
+ **Note**: Without `nvidia-container-toolkit`, the Docker container will not be able to access your NVIDIA GPU.
+
+### Verify GPU Access
+
+After installation, verify that Docker can access your GPU:
+
+```bash
+sudo docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi
+```
+
+If successful, you should see your GPU information displayed.
+
+## Usage
+
+### Using Docker Compose (Recommended)
+
+```bash
+cd docker/docker-cuda/
+docker compose up -d
+docker compose exec llamafactory bash
+```
+
+### Using Docker Run
+
+```bash
+# Build the image
+docker build -f ./docker/docker-cuda/Dockerfile \
+ --build-arg PIP_INDEX=https://pypi.org/simple \
+ --build-arg EXTRAS=metrics \
+ -t llamafactory:latest .
+
+# Run the container
+docker run -dit --ipc=host --gpus=all \
+ -p 7860:7860 \
+ -p 8000:8000 \
+ --name llamafactory \
+ llamafactory:latest
+
+# Enter the container
+docker exec -it llamafactory bash
+```
+
+## Troubleshooting
+
+### GPU Not Detected
+
+If your GPU is not detected inside the container:
+
+1. Ensure `nvidia-container-toolkit` is installed
+2. Check that the Docker daemon has been restarted after installation
+3. Verify your NVIDIA drivers are properly installed: `nvidia-smi`
+4. Check Docker GPU support: `docker run --rm --gpus all ubuntu nvidia-smi`
+
+### Permission Denied
+
+If you get permission errors, ensure your user is in the docker group:
+
+```bash
+sudo usermod -aG docker $USER
+# Log out and back in for changes to take effect
+```
+
+## Additional Notes
+
+- The default image is built on Ubuntu 22.04 (x86_64), CUDA 12.4, Python 3.11, PyTorch 2.6.0, and Flash-attn 2.7.4
+- For different CUDA versions, you may need to adjust the base image in the Dockerfile
+- Make sure your NVIDIA driver version is compatible with the CUDA version used in the Docker image
diff --git a/src/train_sft/docker/docker-cuda/docker-compose.yml b/src/train_sft/docker/docker-cuda/docker-compose.yml
new file mode 100644
index 0000000000000000000000000000000000000000..ab0da4d8702ce3a2a160e4cc222e31444d6592da
--- /dev/null
+++ b/src/train_sft/docker/docker-cuda/docker-compose.yml
@@ -0,0 +1,25 @@
+services:
+ llamafactory:
+ build:
+ dockerfile: ./docker/docker-cuda/Dockerfile
+ context: ../..
+ args:
+ PIP_INDEX: https://pypi.org/simple
+ EXTRAS: metrics
+ container_name: llamafactory
+ ports:
+ - "7860:7860"
+ - "8000:8000"
+ ipc: host
+ tty: true
+ # shm_size: "16gb" # ipc: host is set
+ stdin_open: true
+ command: bash
+ deploy:
+ resources:
+ reservations:
+ devices:
+ - driver: nvidia
+ count: "all"
+ capabilities: [ gpu ]
+ restart: unless-stopped
diff --git a/src/train_sft/docker/docker-npu/Dockerfile b/src/train_sft/docker/docker-npu/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..7cd5e8f44ff74287c4407f483d5d9e5cb6b4c199
--- /dev/null
+++ b/src/train_sft/docker/docker-npu/Dockerfile
@@ -0,0 +1,63 @@
+# https://hub.docker.com/r/ascendai/cann/tags
+ARG BASE_IMAGE=ascendai/cann:8.1.rc1-910b-ubuntu22.04-py3.11
+FROM ${BASE_IMAGE}
+
+# Installation arguments
+ARG PIP_INDEX=https://pypi.org/simple
+ARG EXTRAS=torch-npu,metrics
+ARG HTTP_PROXY=""
+ARG PYTORCH_INDEX=https://download.pytorch.org/whl/cpu
+
+# Define environments
+ENV MAX_JOBS=16
+ENV FLASH_ATTENTION_FORCE_BUILD=TRUE
+ENV VLLM_WORKER_MULTIPROC_METHOD=spawn
+ENV DEBIAN_FRONTEND=noninteractive
+ENV NODE_OPTIONS=""
+ENV PIP_ROOT_USER_ACTION=ignore
+ENV http_proxy="${HTTP_PROXY}"
+ENV https_proxy="${HTTP_PROXY}"
+
+# Use Bash instead of default /bin/sh
+SHELL ["/bin/bash", "-c"]
+
+# Set the working directory
+WORKDIR /app
+
+# Change pip source
+RUN pip config set global.index-url "${PIP_INDEX}" && \
+ pip config set global.extra-index-url "${PIP_INDEX}" && \
+ pip install --no-cache-dir --upgrade pip packaging wheel setuptools
+
+# Install torch-npu
+RUN pip uninstall -y torch torchvision torchaudio && \
+ pip install --no-cache-dir "torch-npu==2.5.1" "torchvision==0.20.1" --index-url "${PYTORCH_INDEX}"
+
+# Install the requirements
+COPY requirements.txt /app
+RUN pip install --no-cache-dir -r requirements.txt
+
+# Copy the rest of the application into the image
+COPY . /app
+
+# Install LLaMA Factory
+RUN pip install --no-cache-dir -e ".[${EXTRAS}]" --no-build-isolation
+
+# Set up volumes
+# VOLUME [ "/root/.cache/huggingface", "/app/shared_data", "/app/output" ]
+
+# Expose port 7860 for LLaMA Board
+ENV GRADIO_SERVER_PORT=7860
+EXPOSE 7860
+
+# Expose port 8000 for API service
+ENV API_PORT=8000
+EXPOSE 8000
+
+# Unset proxy
+ENV http_proxy=
+ENV https_proxy=
+
+# Reset pip config
+RUN pip config unset global.index-url && \
+ pip config unset global.extra-index-url
diff --git a/src/train_sft/docker/docker-npu/docker-compose.yml b/src/train_sft/docker/docker-npu/docker-compose.yml
new file mode 100644
index 0000000000000000000000000000000000000000..659f8d1bb29f98023447b7395d9a77bc24e81224
--- /dev/null
+++ b/src/train_sft/docker/docker-npu/docker-compose.yml
@@ -0,0 +1,28 @@
+services:
+ llamafactory:
+ build:
+ dockerfile: ./docker/docker-npu/Dockerfile
+ context: ../..
+ args:
+ PIP_INDEX: https://pypi.org/simple
+ EXTRAS: torch-npu,metrics
+ container_name: llamafactory
+ volumes:
+ - /usr/local/dcmi:/usr/local/dcmi
+ - /usr/local/bin/npu-smi:/usr/local/bin/npu-smi
+ - /usr/local/Ascend/driver:/usr/local/Ascend/driver
+ - /etc/ascend_install.info:/etc/ascend_install.info
+ ports:
+ - "7860:7860"
+ - "8000:8000"
+ ipc: host
+ tty: true
+    # shm_size: "16gb"  # not needed because ipc: host is set
+ stdin_open: true
+ command: bash
+ devices:
+ - /dev/davinci0
+ - /dev/davinci_manager
+ - /dev/devmm_svm
+ - /dev/hisi_hdc
+ restart: unless-stopped
diff --git a/src/train_sft/docker/docker-rocm/Dockerfile b/src/train_sft/docker/docker-rocm/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..67f4f3669c130327606eac2047295c09c1fd2cd6
--- /dev/null
+++ b/src/train_sft/docker/docker-rocm/Dockerfile
@@ -0,0 +1,71 @@
+# https://hub.docker.com/r/rocm/pytorch/tags
+ARG BASE_IMAGE=rocm/pytorch:rocm6.4.1_ubuntu22.04_py3.10_pytorch_release_2.6.0
+FROM ${BASE_IMAGE}
+
+# Installation arguments
+ARG PIP_INDEX=https://pypi.org/simple
+ARG EXTRAS=metrics
+ARG INSTALL_FLASHATTN=false
+ARG HTTP_PROXY=""
+ARG PYTORCH_INDEX=https://download.pytorch.org/whl/rocm6.3
+
+# Define environments
+ENV MAX_JOBS=16
+ENV FLASH_ATTENTION_FORCE_BUILD=TRUE
+ENV VLLM_WORKER_MULTIPROC_METHOD=spawn
+ENV DEBIAN_FRONTEND=noninteractive
+ENV NODE_OPTIONS=""
+ENV PIP_ROOT_USER_ACTION=ignore
+ENV http_proxy="${HTTP_PROXY}"
+ENV https_proxy="${HTTP_PROXY}"
+
+# Use Bash instead of default /bin/sh
+SHELL ["/bin/bash", "-c"]
+
+# Set the working directory
+WORKDIR /app
+
+# Change pip source
+RUN pip config set global.index-url "${PIP_INDEX}" && \
+ pip config set global.extra-index-url "${PIP_INDEX}" && \
+ pip install --no-cache-dir --upgrade pip packaging wheel setuptools
+
+# Reinstall PyTorch for ROCm
+RUN pip uninstall -y torch torchvision torchaudio && \
+ pip install --no-cache-dir --pre torch torchvision torchaudio --index-url "${PYTORCH_INDEX}"
+
+# Install the requirements
+COPY requirements.txt /app
+RUN pip install --no-cache-dir -r requirements.txt
+
+# Copy the rest of the application into the image
+COPY . /app
+
+# Install LLaMA Factory
+RUN pip install --no-cache-dir -e ".[${EXTRAS}]" --no-build-isolation
+
+# Rebuild flash attention
+RUN if [ "${INSTALL_FLASHATTN}" == "true" ]; then \
+ pip uninstall -y ninja && \
+ pip install --no-cache-dir ninja && \
+ pip install --no-cache-dir flash-attn --no-build-isolation; \
+ fi
+
+# Set up volumes
+# VOLUME [ "/root/.cache/huggingface", "/app/shared_data", "/app/output" ]
+
+# Expose port 7860 for LLaMA Board
+ENV GRADIO_SERVER_PORT=7860
+EXPOSE 7860
+
+# Expose port 8000 for API service
+ENV API_PORT=8000
+EXPOSE 8000
+
+# Unset proxy
+ENV http_proxy=
+ENV https_proxy=
+
+# Reset pip config
+RUN pip config unset global.index-url && \
+ pip config unset global.extra-index-url
diff --git a/src/train_sft/docker/docker-rocm/docker-compose.yml b/src/train_sft/docker/docker-rocm/docker-compose.yml
new file mode 100644
index 0000000000000000000000000000000000000000..32cdf5633ef8a80863bf5b018d24612b2e923f7b
--- /dev/null
+++ b/src/train_sft/docker/docker-rocm/docker-compose.yml
@@ -0,0 +1,21 @@
+services:
+ llamafactory:
+ build:
+ dockerfile: ./docker/docker-rocm/Dockerfile
+ context: ../..
+ args:
+ PIP_INDEX: https://pypi.org/simple
+ EXTRAS: metrics
+ container_name: llamafactory
+ ports:
+ - "7860:7860"
+ - "8000:8000"
+ ipc: host
+ tty: true
+    # shm_size: "16gb"  # not needed because ipc: host is set
+ stdin_open: true
+ command: bash
+ devices:
+ - /dev/kfd:/dev/kfd
+ - /dev/dri:/dev/dri
+ restart: unless-stopped
diff --git a/src/train_sft/examples/extras/apollo/llama3_full_sft.yaml b/src/train_sft/examples/extras/apollo/llama3_full_sft.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..d9fb6c2002df08018b7110437471797bdfef777e
--- /dev/null
+++ b/src/train_sft/examples/extras/apollo/llama3_full_sft.yaml
@@ -0,0 +1,48 @@
+### model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+trust_remote_code: true
+
+### method
+stage: sft
+do_train: true
+finetuning_type: full
+use_apollo: true
+apollo_layerwise: true # choices: [true, false], use false for DDP training
+apollo_target: all
+apollo_rank: 128
+apollo_scale: 32.0
+apollo_scale_type: channel
+
+### dataset
+dataset: identity,alpaca_en_demo
+template: llama3
+cutoff_len: 2048
+max_samples: 1000
+overwrite_cache: true
+preprocessing_num_workers: 16
+dataloader_num_workers: 4
+
+### output
+output_dir: saves/llama3-8b/full/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+save_only_model: false
+report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
+
+### train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 1 # use 1 for layerwise apollo
+learning_rate: 1.0e-5
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+pure_bf16: true
+ddp_timeout: 180000000
+
+### eval
+# val_size: 0.1
+# per_device_eval_batch_size: 1
+# eval_strategy: steps
+# eval_steps: 500
diff --git a/src/train_sft/examples/extras/fsdp_qlora/llama3_lora_sft.yaml b/src/train_sft/examples/extras/fsdp_qlora/llama3_lora_sft.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..1a8d9743035e3c5848ba943eb0fc47eb7b1da6be
--- /dev/null
+++ b/src/train_sft/examples/extras/fsdp_qlora/llama3_lora_sft.yaml
@@ -0,0 +1,45 @@
+### model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+quantization_bit: 4
+trust_remote_code: true
+
+### method
+stage: sft
+do_train: true
+finetuning_type: lora
+lora_rank: 8
+lora_target: all
+
+### dataset
+dataset: identity,alpaca_en_demo
+template: llama3
+cutoff_len: 2048
+max_samples: 1000
+overwrite_cache: true
+preprocessing_num_workers: 16
+dataloader_num_workers: 4
+
+### output
+output_dir: saves/llama3-8b/lora/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+save_only_model: false
+report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
+
+### train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 1.0e-4
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+bf16: true
+ddp_timeout: 180000000
+
+### eval
+# val_size: 0.1
+# per_device_eval_batch_size: 1
+# eval_strategy: steps
+# eval_steps: 500
diff --git a/src/train_sft/examples/extras/fsdp_qlora/train.sh b/src/train_sft/examples/extras/fsdp_qlora/train.sh
new file mode 100644
index 0000000000000000000000000000000000000000..fac8cdee8781750d96e29999ab8a6b9b4f1bc322
--- /dev/null
+++ b/src/train_sft/examples/extras/fsdp_qlora/train.sh
@@ -0,0 +1,6 @@
+#!/bin/bash
+# DO NOT use GPTQ/AWQ quantized models with FSDP+QLoRA
+
+CUDA_VISIBLE_DEVICES=0,1 accelerate launch \
+ --config_file examples/accelerate/fsdp_config.yaml \
+ src/train.py examples/extras/fsdp_qlora/llama3_lora_sft.yaml
diff --git a/src/train_sft/examples/extras/galore/llama3_full_sft.yaml b/src/train_sft/examples/extras/galore/llama3_full_sft.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..99730932ae5150e1eacdd2c20ad9b9a7b0e51263
--- /dev/null
+++ b/src/train_sft/examples/extras/galore/llama3_full_sft.yaml
@@ -0,0 +1,47 @@
+### model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+trust_remote_code: true
+
+### method
+stage: sft
+do_train: true
+finetuning_type: full
+use_galore: true
+galore_layerwise: true # choices: [true, false], use false for DDP training
+galore_target: all
+galore_rank: 128
+galore_scale: 2.0
+
+### dataset
+dataset: identity,alpaca_en_demo
+template: llama3
+cutoff_len: 2048
+max_samples: 1000
+overwrite_cache: true
+preprocessing_num_workers: 16
+dataloader_num_workers: 4
+
+### output
+output_dir: saves/llama3-8b/full/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+save_only_model: false
+report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
+
+### train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 1 # use 1 for layerwise galore
+learning_rate: 1.0e-5
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+pure_bf16: true
+ddp_timeout: 180000000
+
+### eval
+# val_size: 0.1
+# per_device_eval_batch_size: 1
+# eval_strategy: steps
+# eval_steps: 500
diff --git a/src/train_sft/examples/extras/pissa/llama3_lora_sft.yaml b/src/train_sft/examples/extras/pissa/llama3_lora_sft.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..1668343bbec09711819d875a684ed646a54f8638
--- /dev/null
+++ b/src/train_sft/examples/extras/pissa/llama3_lora_sft.yaml
@@ -0,0 +1,47 @@
+### model
+model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
+trust_remote_code: true
+
+### method
+stage: sft
+do_train: true
+finetuning_type: lora
+lora_rank: 8
+lora_target: all
+pissa_init: true
+pissa_iter: 16
+pissa_convert: true
+
+### dataset
+dataset: identity,alpaca_en_demo
+template: llama3
+cutoff_len: 2048
+max_samples: 1000
+overwrite_cache: true
+preprocessing_num_workers: 16
+dataloader_num_workers: 4
+
+### output
+output_dir: saves/llama3-8b/lora/sft
+logging_steps: 10
+save_steps: 500
+plot_loss: true
+overwrite_output_dir: true
+save_only_model: false
+report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
+
+### train
+per_device_train_batch_size: 1
+gradient_accumulation_steps: 8
+learning_rate: 1.0e-4
+num_train_epochs: 3.0
+lr_scheduler_type: cosine
+warmup_ratio: 0.1
+bf16: true
+ddp_timeout: 180000000
+
+### eval
+# val_size: 0.1
+# per_device_eval_batch_size: 1
+# eval_strategy: steps
+# eval_steps: 500
diff --git a/src/train_sft/pyproject.toml b/src/train_sft/pyproject.toml
new file mode 100644
index 0000000000000000000000000000000000000000..c9d753cbb25b34c4cdfde4aa98263360555981e3
--- /dev/null
+++ b/src/train_sft/pyproject.toml
@@ -0,0 +1,95 @@
+[build-system]
+requires = ["setuptools>=61.0"]
+build-backend = "setuptools.build_meta"
+
+[project]
+name = "llamafactory"
+dynamic = [
+ "version",
+ "dependencies",
+ "optional-dependencies",
+ "requires-python",
+ "scripts",
+ "authors",
+ "description",
+ "readme",
+ "license",
+ "keywords",
+ "classifiers"
+]
+
+[tool.ruff]
+target-version = "py39"
+line-length = 119
+indent-width = 4
+
+[tool.ruff.lint]
+ignore = [
+ "C408", # collection
+ "C901", # complex
+ "E501", # line too long
+ "E731", # lambda function
+ "E741", # ambiguous var name
+ "D100", # no doc public module
+ "D101", # no doc public class
+ "D102", # no doc public method
+ "D103", # no doc public function
+ "D104", # no doc public package
+ "D105", # no doc magic method
+ "D107", # no doc __init__
+]
+extend-select = [
+ "C", # complexity
+ "E", # error
+ "F", # pyflakes
+ "I", # isort
+ "W", # warning
+ "UP", # pyupgrade
+ "D", # pydocstyle
+ "PT009", # pytest assert
+ "RUF022", # sort __all__
+]
+
+[tool.ruff.lint.isort]
+lines-after-imports = 2
+known-first-party = ["llamafactory"]
+known-third-party = [
+ "accelerate",
+ "datasets",
+ "gradio",
+ "numpy",
+ "peft",
+ "torch",
+ "transformers",
+ "trl",
+]
+
+[tool.ruff.lint.pydocstyle]
+convention = "google"
+
+[tool.ruff.format]
+quote-style = "double"
+indent-style = "space"
+docstring-code-format = true
+skip-magic-trailing-comma = false
+line-ending = "auto"
+
+[tool.uv]
+conflicts = [
+ [
+ { extra = "torch-npu" },
+ { extra = "aqlm" },
+ ],
+ [
+ { extra = "torch-npu" },
+ { extra = "vllm" },
+ ],
+ [
+ { extra = "torch-npu" },
+ { extra = "sglang" },
+ ],
+ [
+ { extra = "vllm" },
+ { extra = "sglang" },
+ ],
+]
diff --git a/src/train_sft/requirements.txt b/src/train_sft/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..650d08c30b08b11b7c3dc53021b44d17ae9d4dbf
--- /dev/null
+++ b/src/train_sft/requirements.txt
@@ -0,0 +1,37 @@
+# core deps
+transformers>=4.49.0,<=4.55.0,!=4.52.0
+datasets>=2.16.0,<=3.6.0
+accelerate>=1.3.0,<=1.7.0
+peft>=0.14.0,<=0.15.2
+trl>=0.8.6,<=0.9.6
+tokenizers>=0.19.0,<=0.21.1
+# gui
+gradio>=4.38.0,<=5.31.0
+matplotlib>=3.7.0
+tyro<0.9.0
+# ops
+einops
+numpy<2.0.0
+pandas>=2.0.0
+scipy
+# model and tokenizer
+sentencepiece
+tiktoken
+modelscope>=1.14.0
+hf-transfer
+safetensors<=0.5.3
+# python
+fire
+omegaconf
+packaging
+protobuf
+pyyaml
+pydantic<=2.10.6
+# api
+uvicorn
+fastapi
+sse-starlette
+# media
+av
+librosa
+wandb
diff --git a/src/train_sft/scripts/llama_pro.py b/src/train_sft/scripts/llama_pro.py
new file mode 100644
index 0000000000000000000000000000000000000000..7e4b9448505769104f5155ce7bc4c3ef9ec01bc6
--- /dev/null
+++ b/src/train_sft/scripts/llama_pro.py
@@ -0,0 +1,129 @@
+# Copyright 2025 Tencent Inc. and the LlamaFactory team.
+#
+# This code is inspired by Tencent's LLaMA-Pro library.
+# https://github.com/TencentARC/LLaMA-Pro/blob/main/scripts/block_expansion.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import os
+from collections import OrderedDict
+from typing import TYPE_CHECKING
+
+import fire
+import torch
+from huggingface_hub import split_torch_state_dict_into_shards
+from safetensors.torch import save_file
+from tqdm import tqdm
+from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, PreTrainedModel
+from transformers.modeling_utils import SAFE_WEIGHTS_INDEX_NAME, SAFE_WEIGHTS_NAME, WEIGHTS_INDEX_NAME, WEIGHTS_NAME
+
+
+if TYPE_CHECKING:
+ from transformers import PretrainedConfig
+
+
+def change_name(name: str, old_index: int, new_index: int) -> str:
+ return name.replace(f".{old_index:d}.", f".{new_index:d}.")
+
+
+def block_expansion(
+ model_name_or_path: str,
+ output_dir: str,
+ num_expand: int,
+ shard_size: str = "5GB",
+ save_safetensors: bool = True,
+):
+ r"""Perform block expansion for LLaMA, Mistral, Qwen2 or Yi models.
+
+ Usage: python llama_pro.py --model_name_or_path meta-llama/Llama-2-7b-hf --output_dir llama2_pro --num_expand 8
+ """
+ config: PretrainedConfig = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
+ num_layers = getattr(config, "num_hidden_layers")
+ if num_layers % num_expand != 0:
+ raise ValueError(f"`num_layers` {num_layers} should be divisible by `num_expand` {num_expand}.")
+
+ setattr(config, "num_hidden_layers", num_layers + num_expand)
+ config.save_pretrained(output_dir)
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
+ tokenizer.save_pretrained(output_dir)
+
+ print(f"Expanding model of {num_layers} layers to {num_layers + num_expand} layers.")
+ model = AutoModelForCausalLM.from_pretrained(
+ model_name_or_path, torch_dtype="auto", device_map="cpu", trust_remote_code=True, low_cpu_mem_usage=True
+ )
+ assert isinstance(model, PreTrainedModel) # type hint
+ if save_safetensors and getattr(model.config, "tie_word_embeddings", False):
+ del model.lm_head # safetensors does not allow shared weights
+
+ split = num_layers // num_expand
+ layer_cnt = 0
+ state_dict = model.state_dict()
+ output_state_dict: dict[str, torch.Tensor] = OrderedDict()
+ for i in range(num_layers):
+ for key, value in state_dict.items():
+ if f".{i:d}." in key:
+ output_state_dict[change_name(key, i, layer_cnt)] = value
+
+ print(f"Add layer {layer_cnt} copied from layer {i}.")
+ layer_cnt += 1
+ if (i + 1) % split == 0:
+ for key, value in state_dict.items():
+ if f".{i:d}." in key:
+ if "down_proj" in key or "o_proj" in key:
+ output_state_dict[change_name(key, i, layer_cnt)] = torch.zeros_like(value)
+ else:
+ output_state_dict[change_name(key, i, layer_cnt)] = torch.clone(value)
+
+ print(f"Add layer {layer_cnt} expanded from layer {i}.")
+ layer_cnt += 1
+
+ for key, value in state_dict.items():
+ if key not in output_state_dict:
+ output_state_dict[key] = value
+
+ weights_name = SAFE_WEIGHTS_NAME if save_safetensors else WEIGHTS_NAME
+ filename_pattern = weights_name.replace(".bin", "{suffix}.bin").replace(".safetensors", "{suffix}.safetensors")
+ state_dict_split = split_torch_state_dict_into_shards(
+ output_state_dict, filename_pattern=filename_pattern, max_shard_size=shard_size
+ )
+ for shard_file, tensors in tqdm(state_dict_split.filename_to_tensors.items(), desc="Save weights"):
+ shard = {tensor: output_state_dict[tensor].contiguous() for tensor in tensors}
+ if save_safetensors:
+ save_file(shard, os.path.join(output_dir, shard_file), metadata={"format": "pt"})
+ else:
+ torch.save(shard, os.path.join(output_dir, shard_file))
+
+ if not state_dict_split.is_sharded:
+ print(f"Model weights saved in {os.path.join(output_dir, weights_name)}.")
+ else:
+ index = {
+ "metadata": state_dict_split.metadata,
+ "weight_map": state_dict_split.tensor_to_filename,
+ }
+ index_name = SAFE_WEIGHTS_INDEX_NAME if save_safetensors else WEIGHTS_INDEX_NAME
+ with open(os.path.join(output_dir, index_name), "w", encoding="utf-8") as f:
+ json.dump(index, f, indent=2, sort_keys=True)
+
+ print(f"Model weights saved in {output_dir}.")
+
+ print("- Fine-tune this model with:")
+ print(f"model_name_or_path: {output_dir}")
+ print("finetuning_type: freeze")
+ print(f"freeze_trainable_layers: {num_expand}")
+ print("use_llama_pro: true")
+
+
+if __name__ == "__main__":
+ fire.Fire(block_expansion)
diff --git a/src/train_sft/scripts/loftq_init.py b/src/train_sft/scripts/loftq_init.py
new file mode 100644
index 0000000000000000000000000000000000000000..3a7933889be55a254b2417e9dca2ce2b7d691401
--- /dev/null
+++ b/src/train_sft/scripts/loftq_init.py
@@ -0,0 +1,88 @@
+# Copyright 2025 HuggingFace Inc. and the LlamaFactory team.
+#
+# This code is based on HuggingFace's PEFT library.
+# https://github.com/huggingface/peft/blob/v0.10.0/examples/loftq_finetuning/quantize_save_load.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+from typing import TYPE_CHECKING
+
+import fire
+from peft import LoftQConfig, LoraConfig, TaskType, get_peft_model
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+
+if TYPE_CHECKING:
+ from transformers import PreTrainedModel
+
+
+def quantize_loftq(
+ model_name_or_path: str,
+ output_dir: str,
+ loftq_bits: int = 4,
+ loftq_iter: int = 4,
+ lora_alpha: int = None,
+ lora_rank: int = 16,
+ lora_dropout: float = 0,
+ lora_target: tuple = ("q_proj", "v_proj"),
+ save_safetensors: bool = True,
+):
+ r"""Initialize LoRA weights with LoRA-fine-tuning-aware Quantization (LoftQ).
+
+ Usage: python loftq_init.py --model_name_or_path path_to_model --output_dir output_dir
+ """
+ if isinstance(lora_target, str):
+ lora_target = [name.strip() for name in lora_target.split(",")]
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(model_name_or_path, trust_remote_code=True, torch_dtype="auto")
+
+ loftq_config = LoftQConfig(loftq_bits=loftq_bits, loftq_iter=loftq_iter)
+ lora_config = LoraConfig(
+ task_type=TaskType.CAUSAL_LM,
+ inference_mode=True,
+ r=lora_rank,
+ lora_alpha=lora_alpha if lora_alpha is not None else lora_rank * 2,
+ lora_dropout=lora_dropout,
+ target_modules=lora_target,
+ init_lora_weights="loftq",
+ loftq_config=loftq_config,
+ )
+
+ # Init LoftQ model
+    print("Initializing LoftQ weights, this may take several minutes, please wait patiently.")
+ peft_model = get_peft_model(model, lora_config)
+ loftq_dir = os.path.join(output_dir, "loftq_init")
+
+ # Save LoftQ model
+ setattr(peft_model.peft_config["default"], "base_model_name_or_path", os.path.abspath(output_dir))
+ setattr(peft_model.peft_config["default"], "init_lora_weights", True) # don't apply loftq again
+ peft_model.save_pretrained(loftq_dir, safe_serialization=save_safetensors)
+ print(f"Adapter weights saved in {loftq_dir}")
+
+ # Save base model
+ base_model: PreTrainedModel = peft_model.unload()
+ base_model.save_pretrained(output_dir, safe_serialization=save_safetensors)
+ tokenizer.save_pretrained(output_dir)
+ print(f"Model weights saved in {output_dir}")
+
+ print("- Fine-tune this model with:")
+ print(f"model_name_or_path: {output_dir}")
+ print(f"adapter_name_or_path: {loftq_dir}")
+ print("finetuning_type: lora")
+ print(f"quantization_bit: {loftq_bits}")
+
+
+if __name__ == "__main__":
+ fire.Fire(quantize_loftq)
diff --git a/src/train_sft/setup.py b/src/train_sft/setup.py
new file mode 100644
index 0000000000000000000000000000000000000000..6a079ac85ba7103b4b8a1b0954eb7bfc2add6bbe
--- /dev/null
+++ b/src/train_sft/setup.py
@@ -0,0 +1,113 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import re
+
+from setuptools import find_packages, setup
+
+
+def get_version() -> str:
+ with open(os.path.join("src", "llamafactory", "extras", "env.py"), encoding="utf-8") as f:
+ file_content = f.read()
+ pattern = r"{}\W*=\W*\"([^\"]+)\"".format("VERSION")
+ (version,) = re.findall(pattern, file_content)
+ return version
+
+
+def get_requires() -> list[str]:
+ with open("requirements.txt", encoding="utf-8") as f:
+ file_content = f.read()
+ lines = [line.strip() for line in file_content.strip().split("\n") if not line.startswith("#")]
+ return lines
+
+
+def get_console_scripts() -> list[str]:
+ console_scripts = ["llamafactory-cli = llamafactory.cli:main"]
+ if os.getenv("ENABLE_SHORT_CONSOLE", "1").lower() in ["true", "y", "1"]:
+ console_scripts.append("lmf = llamafactory.cli:main")
+
+ return console_scripts
+
+
+extra_require = {
+ "torch": ["torch>=2.0.0", "torchvision>=0.15.0"],
+ "torch-npu": ["torch-npu==2.5.1", "torchvision==0.20.1", "decorator"],
+ "metrics": ["nltk", "jieba", "rouge-chinese"],
+ "deepspeed": ["deepspeed>=0.10.0,<=0.16.9"],
+ "liger-kernel": ["liger-kernel>=0.5.5"],
+ "bitsandbytes": ["bitsandbytes>=0.39.0"],
+ "hqq": ["hqq"],
+ "eetq": ["eetq"],
+ "gptq": ["optimum>=1.24.0", "gptqmodel>=2.0.0"],
+ "aqlm": ["aqlm[gpu]>=1.1.0"],
+ "vllm": ["vllm>=0.4.3,<=0.10.0"],
+ "sglang": ["sglang[srt]>=0.4.5", "transformers==4.51.1"],
+ "galore": ["galore-torch"],
+ "apollo": ["apollo-torch"],
+ "badam": ["badam>=1.2.1"],
+ "adam-mini": ["adam-mini"],
+ "minicpm_v": [
+ "soundfile",
+ "torchvision",
+ "torchaudio",
+ "vector_quantize_pytorch",
+ "vocos",
+ "msgpack",
+ "referencing",
+ "jsonschema_specifications",
+ ],
+ "openmind": ["openmind"],
+ "swanlab": ["swanlab"],
+ "dev": ["pre-commit", "ruff", "pytest", "build"],
+}
+
+
+def main():
+ setup(
+ name="llamafactory",
+ version=get_version(),
+ author="hiyouga",
+ author_email="hiyouga@buaa.edu.cn",
+ description="Unified Efficient Fine-Tuning of 100+ LLMs",
+ long_description=open("README.md", encoding="utf-8").read(),
+ long_description_content_type="text/markdown",
+ keywords=["AI", "LLM", "GPT", "ChatGPT", "Llama", "Transformer", "DeepSeek", "Pytorch"],
+ license="Apache 2.0 License",
+ url="https://github.com/hiyouga/LLaMA-Factory",
+ package_dir={"": "src"},
+ packages=find_packages("src"),
+ python_requires=">=3.9.0",
+ install_requires=get_requires(),
+ extras_require=extra_require,
+ entry_points={"console_scripts": get_console_scripts()},
+ classifiers=[
+ "Development Status :: 4 - Beta",
+ "Intended Audience :: Developers",
+ "Intended Audience :: Education",
+ "Intended Audience :: Science/Research",
+ "License :: OSI Approved :: Apache Software License",
+ "Operating System :: OS Independent",
+ "Programming Language :: Python :: 3",
+ "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
+ "Programming Language :: Python :: 3.12",
+ "Topic :: Scientific/Engineering :: Artificial Intelligence",
+ ],
+ )
+
+
+if __name__ == "__main__":
+ main()
diff --git a/src/train_sft/src/llamafactory/__init__.py b/src/train_sft/src/llamafactory/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..b1567ef572714881cc464db25d3da3d08a460963
--- /dev/null
+++ b/src/train_sft/src/llamafactory/__init__.py
@@ -0,0 +1,31 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+r"""Efficient fine-tuning of large language models.
+
+Level:
+ api, webui > chat, eval, train > data, model > hparams > extras
+
+Disable version checking: DISABLE_VERSION_CHECK=1
+Enable VRAM recording: RECORD_VRAM=1
+Force using torchrun: FORCE_TORCHRUN=1
+Set logging verbosity: LLAMAFACTORY_VERBOSITY=WARN
+Use modelscope: USE_MODELSCOPE_HUB=1
+Use openmind: USE_OPENMIND_HUB=1
+"""
+
+from .extras.env import VERSION
+
+
+__version__ = VERSION
diff --git a/src/train_sft/src/llamafactory/api/__init__.py b/src/train_sft/src/llamafactory/api/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/src/train_sft/src/llamafactory/api/app.py b/src/train_sft/src/llamafactory/api/app.py
new file mode 100644
index 0000000000000000000000000000000000000000..e0621d80b064f00970e8fb58909ec8656ba0fb6b
--- /dev/null
+++ b/src/train_sft/src/llamafactory/api/app.py
@@ -0,0 +1,133 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import asyncio
+import os
+from contextlib import asynccontextmanager
+from functools import partial
+from typing import Annotated, Optional
+
+from ..chat import ChatModel
+from ..extras.constants import EngineName
+from ..extras.misc import torch_gc
+from ..extras.packages import is_fastapi_available, is_starlette_available, is_uvicorn_available
+from .chat import (
+ create_chat_completion_response,
+ create_score_evaluation_response,
+ create_stream_chat_completion_response,
+)
+from .protocol import (
+ ChatCompletionRequest,
+ ChatCompletionResponse,
+ ModelCard,
+ ModelList,
+ ScoreEvaluationRequest,
+ ScoreEvaluationResponse,
+)
+
+
+if is_fastapi_available():
+ from fastapi import Depends, FastAPI, HTTPException, status
+ from fastapi.middleware.cors import CORSMiddleware
+ from fastapi.security.http import HTTPAuthorizationCredentials, HTTPBearer
+
+
+if is_starlette_available():
+ from sse_starlette import EventSourceResponse
+
+
+if is_uvicorn_available():
+ import uvicorn
+
+
+async def sweeper() -> None:
+ while True:
+ torch_gc()
+ await asyncio.sleep(300)
+
+
+@asynccontextmanager
+async def lifespan(app: "FastAPI", chat_model: "ChatModel"): # collects GPU memory
+ if chat_model.engine.name == EngineName.HF:
+ asyncio.create_task(sweeper())
+
+ yield
+ torch_gc()
+
+
+def create_app(chat_model: "ChatModel") -> "FastAPI":
+ root_path = os.getenv("FASTAPI_ROOT_PATH", "")
+ app = FastAPI(lifespan=partial(lifespan, chat_model=chat_model), root_path=root_path)
+ app.add_middleware(
+ CORSMiddleware,
+ allow_origins=["*"],
+ allow_credentials=True,
+ allow_methods=["*"],
+ allow_headers=["*"],
+ )
+ api_key = os.getenv("API_KEY")
+ security = HTTPBearer(auto_error=False)
+
+ async def verify_api_key(auth: Annotated[Optional[HTTPAuthorizationCredentials], Depends(security)]):
+ if api_key and (auth is None or auth.credentials != api_key):
+ raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key.")
+
+ @app.get(
+ "/v1/models",
+ response_model=ModelList,
+ status_code=status.HTTP_200_OK,
+ dependencies=[Depends(verify_api_key)],
+ )
+ async def list_models():
+ model_card = ModelCard(id=os.getenv("API_MODEL_NAME", "gpt-3.5-turbo"))
+ return ModelList(data=[model_card])
+
+ @app.post(
+ "/v1/chat/completions",
+ response_model=ChatCompletionResponse,
+ status_code=status.HTTP_200_OK,
+ dependencies=[Depends(verify_api_key)],
+ )
+ async def create_chat_completion(request: ChatCompletionRequest):
+ if not chat_model.engine.can_generate:
+ raise HTTPException(status_code=status.HTTP_405_METHOD_NOT_ALLOWED, detail="The current model does not support chat completion.")
+
+ if request.stream:
+ generate = create_stream_chat_completion_response(request, chat_model)
+ return EventSourceResponse(generate, media_type="text/event-stream", sep="\n")
+ else:
+ return await create_chat_completion_response(request, chat_model)
+
+ @app.post(
+ "/v1/score/evaluation",
+ response_model=ScoreEvaluationResponse,
+ status_code=status.HTTP_200_OK,
+ dependencies=[Depends(verify_api_key)],
+ )
+ async def create_score_evaluation(request: ScoreEvaluationRequest):
+ if chat_model.engine.can_generate:
+ raise HTTPException(status_code=status.HTTP_405_METHOD_NOT_ALLOWED, detail="Score evaluation requires a reward model, not a generative model.")
+
+ return await create_score_evaluation_response(request, chat_model)
+
+ return app
+
+
+def run_api() -> None:
+ chat_model = ChatModel()
+ app = create_app(chat_model)
+ api_host = os.getenv("API_HOST", "0.0.0.0")
+ api_port = int(os.getenv("API_PORT", "8000"))
+ print(f"Visit http://localhost:{api_port}/docs for the API documentation.")
+ uvicorn.run(app, host=api_host, port=api_port)
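The `verify_api_key` dependency above guards every route: when the `API_KEY` environment variable is unset, all requests pass; otherwise the bearer credentials must match it exactly. A minimal sketch of that decision as a plain function (the name `is_authorized` is illustrative, not part of the module):

```python
from typing import Optional

def is_authorized(configured_key: Optional[str], presented_key: Optional[str]) -> bool:
    """Mirror of the verify_api_key logic: no configured key means open access;
    otherwise the presented bearer credentials must match exactly."""
    if not configured_key:
        return True
    return presented_key == configured_key

# No key configured: every request is allowed through.
open_access = is_authorized(None, None)
# Key configured: missing or mismatched credentials are rejected.
rejected = is_authorized("secret", "wrong")
accepted = is_authorized("secret", "secret")
```

In the real app this check raises `HTTPException(401)` instead of returning `False`, so FastAPI short-circuits the request before the route handler runs.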
diff --git a/src/train_sft/src/llamafactory/api/chat.py b/src/train_sft/src/llamafactory/api/chat.py
new file mode 100644
index 0000000000000000000000000000000000000000..c0d87196c5d09de7b223c6d161da57405a526628
--- /dev/null
+++ b/src/train_sft/src/llamafactory/api/chat.py
@@ -0,0 +1,285 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import base64
+import io
+import json
+import os
+import re
+import uuid
+from collections.abc import AsyncGenerator
+from typing import TYPE_CHECKING, Optional
+
+from ..data import Role as DataRole
+from ..extras import logging
+from ..extras.constants import AUDIO_PLACEHOLDER, IMAGE_PLACEHOLDER, VIDEO_PLACEHOLDER
+from ..extras.misc import is_env_enabled
+from ..extras.packages import is_fastapi_available, is_pillow_available, is_requests_available
+from .common import dictify, jsonify
+from .protocol import (
+ ChatCompletionMessage,
+ ChatCompletionResponse,
+ ChatCompletionResponseChoice,
+ ChatCompletionResponseUsage,
+ ChatCompletionStreamResponse,
+ ChatCompletionStreamResponseChoice,
+ Finish,
+ Function,
+ FunctionCall,
+ Role,
+ ScoreEvaluationResponse,
+)
+
+
+if is_fastapi_available():
+ from fastapi import HTTPException, status
+
+
+if is_pillow_available():
+ from PIL import Image
+
+
+if is_requests_available():
+ import requests
+
+
+if TYPE_CHECKING:
+ from ..chat import ChatModel
+ from ..data.mm_plugin import AudioInput, ImageInput, VideoInput
+ from .protocol import ChatCompletionRequest, ScoreEvaluationRequest
+
+
+logger = logging.get_logger(__name__)
+ROLE_MAPPING = {
+ Role.USER: DataRole.USER.value,
+ Role.ASSISTANT: DataRole.ASSISTANT.value,
+ Role.SYSTEM: DataRole.SYSTEM.value,
+ Role.FUNCTION: DataRole.FUNCTION.value,
+ Role.TOOL: DataRole.OBSERVATION.value,
+}
+
+
+def _process_request(
+ request: "ChatCompletionRequest",
+) -> tuple[
+ list[dict[str, str]],
+ Optional[str],
+ Optional[str],
+ Optional[list["ImageInput"]],
+ Optional[list["VideoInput"]],
+ Optional[list["AudioInput"]],
+]:
+ if is_env_enabled("API_VERBOSE", "1"):
+ logger.info_rank0(f"==== request ====\n{json.dumps(dictify(request), indent=2, ensure_ascii=False)}")
+
+ if len(request.messages) == 0:
+ raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Messages cannot be empty.")
+
+ if request.messages[0].role == Role.SYSTEM:
+ content = request.messages.pop(0).content
+ system = content[0].text if isinstance(content, list) else content
+ else:
+ system = None
+
+ if len(request.messages) % 2 == 0:
+ raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Messages must alternate user/assistant roles (u/a/u/a/u...).")
+
+ input_messages = []
+ images, videos, audios = [], [], []
+ for i, message in enumerate(request.messages):
+ if i % 2 == 0 and message.role not in [Role.USER, Role.TOOL]:
+ raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid role")
+ elif i % 2 == 1 and message.role not in [Role.ASSISTANT, Role.FUNCTION]:
+ raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid role")
+
+ if message.role == Role.ASSISTANT and isinstance(message.tool_calls, list) and len(message.tool_calls):
+ tool_calls = [
+ {"name": tool_call.function.name, "arguments": tool_call.function.arguments}
+ for tool_call in message.tool_calls
+ ]
+ content = json.dumps(tool_calls, ensure_ascii=False)
+ input_messages.append({"role": ROLE_MAPPING[Role.FUNCTION], "content": content})
+ elif isinstance(message.content, list):
+ text_content = ""
+ for input_item in message.content:
+ if input_item.type == "text":
+ text_content += input_item.text
+ elif input_item.type == "image_url":
+ text_content += IMAGE_PLACEHOLDER
+ image_url = input_item.image_url.url
+ if re.match(r"^data:image\/(png|jpg|jpeg|gif|bmp);base64,(.+)$", image_url): # base64 image
+ image_stream = io.BytesIO(base64.b64decode(image_url.split(",", maxsplit=1)[1]))
+ elif os.path.isfile(image_url): # local file
+ image_stream = open(image_url, "rb")
+ else: # web uri
+ image_stream = requests.get(image_url, stream=True).raw
+
+ images.append(Image.open(image_stream).convert("RGB"))
+ elif input_item.type == "video_url":
+ text_content += VIDEO_PLACEHOLDER
+ video_url = input_item.video_url.url
+ if re.match(r"^data:video\/(mp4|mkv|avi|mov);base64,(.+)$", video_url): # base64 video
+ video_stream = io.BytesIO(base64.b64decode(video_url.split(",", maxsplit=1)[1]))
+ elif os.path.isfile(video_url): # local file
+ video_stream = video_url
+ else: # web uri
+ video_stream = requests.get(video_url, stream=True).raw
+
+ videos.append(video_stream)
+ elif input_item.type == "audio_url":
+ text_content += AUDIO_PLACEHOLDER
+ audio_url = input_item.audio_url.url
+ if re.match(r"^data:audio\/(mpeg|mp3|wav|ogg);base64,(.+)$", audio_url): # base64 audio
+ audio_stream = io.BytesIO(base64.b64decode(audio_url.split(",", maxsplit=1)[1]))
+ elif os.path.isfile(audio_url): # local file
+ audio_stream = audio_url
+ else: # web uri
+ audio_stream = requests.get(audio_url, stream=True).raw
+
+ audios.append(audio_stream)
+ else:
+ raise HTTPException(
+ status_code=status.HTTP_400_BAD_REQUEST, detail=f"Invalid input type {input_item.type}."
+ )
+
+ input_messages.append({"role": ROLE_MAPPING[message.role], "content": text_content})
+ else:
+ input_messages.append({"role": ROLE_MAPPING[message.role], "content": message.content})
+
+ tool_list = request.tools
+ if isinstance(tool_list, list) and len(tool_list):
+ try:
+ tools = json.dumps([dictify(tool.function) for tool in tool_list], ensure_ascii=False)
+ except (TypeError, ValueError): # json.dumps raises TypeError for non-serializable tools
+ raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid tools")
+ else:
+ tools = None
+
+ return input_messages, system, tools, images or None, videos or None, audios or None
+
+
+def _create_stream_chat_completion_chunk(
+ completion_id: str,
+ model: str,
+ delta: "ChatCompletionMessage",
+ index: int = 0,
+ finish_reason: Optional["Finish"] = None,
+) -> str:
+ choice_data = ChatCompletionStreamResponseChoice(index=index, delta=delta, finish_reason=finish_reason)
+ chunk = ChatCompletionStreamResponse(id=completion_id, model=model, choices=[choice_data])
+ return jsonify(chunk)
+
+
+async def create_chat_completion_response(
+ request: "ChatCompletionRequest", chat_model: "ChatModel"
+) -> "ChatCompletionResponse":
+ completion_id = f"chatcmpl-{uuid.uuid4().hex}"
+ input_messages, system, tools, images, videos, audios = _process_request(request)
+ responses = await chat_model.achat(
+ input_messages,
+ system,
+ tools,
+ images,
+ videos,
+ audios,
+ do_sample=request.do_sample,
+ temperature=request.temperature,
+ top_p=request.top_p,
+ max_new_tokens=request.max_tokens,
+ num_return_sequences=request.n,
+ repetition_penalty=request.presence_penalty,
+ stop=request.stop,
+ )
+
+ prompt_length, response_length = 0, 0
+ choices = []
+ for i, response in enumerate(responses):
+ if tools:
+ result = chat_model.engine.template.extract_tool(response.response_text)
+ else:
+ result = response.response_text
+
+ if isinstance(result, list):
+ tool_calls = []
+ for tool in result:
+ function = Function(name=tool.name, arguments=tool.arguments)
+ tool_calls.append(FunctionCall(id=f"call_{uuid.uuid4().hex}", function=function))
+
+ response_message = ChatCompletionMessage(role=Role.ASSISTANT, tool_calls=tool_calls)
+ finish_reason = Finish.TOOL
+ else:
+ response_message = ChatCompletionMessage(role=Role.ASSISTANT, content=result)
+ finish_reason = Finish.STOP if response.finish_reason == "stop" else Finish.LENGTH
+
+ choices.append(ChatCompletionResponseChoice(index=i, message=response_message, finish_reason=finish_reason))
+ prompt_length = response.prompt_length
+ response_length += response.response_length
+
+ usage = ChatCompletionResponseUsage(
+ prompt_tokens=prompt_length,
+ completion_tokens=response_length,
+ total_tokens=prompt_length + response_length,
+ )
+
+ return ChatCompletionResponse(id=completion_id, model=request.model, choices=choices, usage=usage)
+
+
+async def create_stream_chat_completion_response(
+ request: "ChatCompletionRequest", chat_model: "ChatModel"
+) -> AsyncGenerator[str, None]:
+ completion_id = f"chatcmpl-{uuid.uuid4().hex}"
+ input_messages, system, tools, images, videos, audios = _process_request(request)
+ if tools:
+ raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Cannot stream function calls.")
+
+ if request.n > 1:
+ raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Cannot stream multiple responses.")
+
+ yield _create_stream_chat_completion_chunk(
+ completion_id=completion_id, model=request.model, delta=ChatCompletionMessage(role=Role.ASSISTANT, content="")
+ )
+ async for new_token in chat_model.astream_chat(
+ input_messages,
+ system,
+ tools,
+ images,
+ videos,
+ audios,
+ do_sample=request.do_sample,
+ temperature=request.temperature,
+ top_p=request.top_p,
+ max_new_tokens=request.max_tokens,
+ repetition_penalty=request.presence_penalty,
+ stop=request.stop,
+ ):
+ if len(new_token) != 0:
+ yield _create_stream_chat_completion_chunk(
+ completion_id=completion_id, model=request.model, delta=ChatCompletionMessage(content=new_token)
+ )
+
+ yield _create_stream_chat_completion_chunk(
+ completion_id=completion_id, model=request.model, delta=ChatCompletionMessage(), finish_reason=Finish.STOP
+ )
+ yield "[DONE]"
+
+
+async def create_score_evaluation_response(
+ request: "ScoreEvaluationRequest", chat_model: "ChatModel"
+) -> "ScoreEvaluationResponse":
+ score_id = f"scoreval-{uuid.uuid4().hex}"
+ if len(request.messages) == 0:
+ raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="Invalid request")
+
+ scores = await chat_model.aget_scores(request.messages, max_length=request.max_length)
+ return ScoreEvaluationResponse(id=score_id, model=request.model, scores=scores)
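`_process_request` accepts images, videos, and audio either as base64 data URLs, local file paths, or web URIs, using a regex to detect the data-URL case before decoding. A self-contained sketch of the image branch (the helper name `decode_data_url` is illustrative):

```python
import base64
import re

def decode_data_url(url: str) -> bytes:
    """Decode a base64 image data URL of the kind _process_request accepts,
    e.g. "data:image/png;base64,....". Returns the raw payload bytes."""
    if re.match(r"^data:image/(png|jpg|jpeg|gif|bmp);base64,(.+)$", url) is None:
        raise ValueError("not a supported base64 image data URL")
    # Split once on the first comma: everything after it is the base64 payload.
    return base64.b64decode(url.split(",", maxsplit=1)[1])

payload = base64.b64encode(b"fake-image-bytes").decode()
raw = decode_data_url(f"data:image/png;base64,{payload}")
```

The real code wraps the decoded bytes in `io.BytesIO` and hands them to `PIL.Image.open`; local paths and web URIs take the other two branches instead.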
diff --git a/src/train_sft/src/llamafactory/api/common.py b/src/train_sft/src/llamafactory/api/common.py
new file mode 100644
index 0000000000000000000000000000000000000000..f4d0c2fb68da41b072e5d73340a88b4203398087
--- /dev/null
+++ b/src/train_sft/src/llamafactory/api/common.py
@@ -0,0 +1,34 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+from typing import TYPE_CHECKING, Any
+
+
+if TYPE_CHECKING:
+ from pydantic import BaseModel
+
+
+def dictify(data: "BaseModel") -> dict[str, Any]:
+ try: # pydantic v2
+ return data.model_dump(exclude_unset=True)
+ except AttributeError: # pydantic v1
+ return data.dict(exclude_unset=True)
+
+
+def jsonify(data: "BaseModel") -> str:
+ try: # pydantic v2
+ return json.dumps(data.model_dump(exclude_unset=True), ensure_ascii=False)
+ except AttributeError: # pydantic v1
+ return data.json(exclude_unset=True, ensure_ascii=False)
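The `try`/`except AttributeError` pattern in `dictify` and `jsonify` lets the module run against both pydantic v2 (`model_dump`) and v1 (`dict`). A sketch with stand-in classes (no pydantic dependency; `V1Style`/`V2Style`/`dictify_like` are illustrative names):

```python
from typing import Any

class V2Style:
    """Stand-in for a pydantic v2 model: exposes model_dump()."""
    def model_dump(self, exclude_unset: bool = False) -> dict[str, Any]:
        return {"a": 1}

class V1Style:
    """Stand-in for a pydantic v1 model: exposes dict() instead."""
    def dict(self, exclude_unset: bool = False) -> dict[str, Any]:
        return {"a": 1}

def dictify_like(data: Any) -> dict[str, Any]:
    # Same duck-typing fallback as dictify() above: try the v2 API first,
    # fall back to the v1 API when model_dump does not exist.
    try:
        return data.model_dump(exclude_unset=True)
    except AttributeError:
        return data.dict(exclude_unset=True)
```

This avoids pinning a pydantic major version at import time; the cost is that a genuine `AttributeError` raised *inside* `model_dump` would also trigger the fallback.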
diff --git a/src/train_sft/src/llamafactory/api/protocol.py b/src/train_sft/src/llamafactory/api/protocol.py
new file mode 100644
index 0000000000000000000000000000000000000000..889d938e0b727ef1fca63d95fa20926c31830c52
--- /dev/null
+++ b/src/train_sft/src/llamafactory/api/protocol.py
@@ -0,0 +1,157 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import time
+from enum import Enum, unique
+from typing import Any, Optional, Union
+
+from pydantic import BaseModel, Field
+from typing_extensions import Literal
+
+
+@unique
+class Role(str, Enum):
+ USER = "user"
+ ASSISTANT = "assistant"
+ SYSTEM = "system"
+ FUNCTION = "function"
+ TOOL = "tool"
+
+
+@unique
+class Finish(str, Enum):
+ STOP = "stop"
+ LENGTH = "length"
+ TOOL = "tool_calls"
+
+
+class ModelCard(BaseModel):
+ id: str
+ object: Literal["model"] = "model"
+ created: int = Field(default_factory=lambda: int(time.time()))
+ owned_by: Literal["owner"] = "owner"
+
+
+class ModelList(BaseModel):
+ object: Literal["list"] = "list"
+ data: list[ModelCard] = []
+
+
+class Function(BaseModel):
+ name: str
+ arguments: str
+
+
+class FunctionDefinition(BaseModel):
+ name: str
+ description: str
+ parameters: dict[str, Any]
+
+
+class FunctionAvailable(BaseModel):
+ type: Literal["function", "code_interpreter"] = "function"
+ function: Optional[FunctionDefinition] = None
+
+
+class FunctionCall(BaseModel):
+ id: str
+ type: Literal["function"] = "function"
+ function: Function
+
+
+class URL(BaseModel):
+ url: str
+ detail: Literal["auto", "low", "high"] = "auto"
+
+
+class MultimodalInputItem(BaseModel):
+ type: Literal["text", "image_url", "video_url", "audio_url"]
+ text: Optional[str] = None
+ image_url: Optional[URL] = None
+ video_url: Optional[URL] = None
+ audio_url: Optional[URL] = None
+
+
+class ChatMessage(BaseModel):
+ role: Role
+ content: Optional[Union[str, list[MultimodalInputItem]]] = None
+ tool_calls: Optional[list[FunctionCall]] = None
+
+
+class ChatCompletionMessage(BaseModel):
+ role: Optional[Role] = None
+ content: Optional[str] = None
+ tool_calls: Optional[list[FunctionCall]] = None
+
+
+class ChatCompletionRequest(BaseModel):
+ model: str
+ messages: list[ChatMessage]
+ tools: Optional[list[FunctionAvailable]] = None
+ do_sample: Optional[bool] = None
+ temperature: Optional[float] = None
+ top_p: Optional[float] = None
+ n: int = 1
+ presence_penalty: Optional[float] = None
+ max_tokens: Optional[int] = None
+ stop: Optional[Union[str, list[str]]] = None
+ stream: bool = False
+
+
+class ChatCompletionResponseChoice(BaseModel):
+ index: int
+ message: ChatCompletionMessage
+ finish_reason: Finish
+
+
+class ChatCompletionStreamResponseChoice(BaseModel):
+ index: int
+ delta: ChatCompletionMessage
+ finish_reason: Optional[Finish] = None
+
+
+class ChatCompletionResponseUsage(BaseModel):
+ prompt_tokens: int
+ completion_tokens: int
+ total_tokens: int
+
+
+class ChatCompletionResponse(BaseModel):
+ id: str
+ object: Literal["chat.completion"] = "chat.completion"
+ created: int = Field(default_factory=lambda: int(time.time()))
+ model: str
+ choices: list[ChatCompletionResponseChoice]
+ usage: ChatCompletionResponseUsage
+
+
+class ChatCompletionStreamResponse(BaseModel):
+ id: str
+ object: Literal["chat.completion.chunk"] = "chat.completion.chunk"
+ created: int = Field(default_factory=lambda: int(time.time()))
+ model: str
+ choices: list[ChatCompletionStreamResponseChoice]
+
+
+class ScoreEvaluationRequest(BaseModel):
+ model: str
+ messages: list[str]
+ max_length: Optional[int] = None
+
+
+class ScoreEvaluationResponse(BaseModel):
+ id: str
+ object: Literal["score.evaluation"] = "score.evaluation"
+ model: str
+ scores: list[float]
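The `Role` and `Finish` enums above subclass both `str` and `Enum`, so their members compare equal to the plain strings arriving in JSON payloads and serialize back without conversion. A minimal sketch of that behavior (the name `FinishSketch` is illustrative):

```python
from enum import Enum, unique

@unique
class FinishSketch(str, Enum):
    # Mirrors the Finish enum: each member *is* a str.
    STOP = "stop"
    LENGTH = "length"
    TOOL = "tool_calls"

# Constructing from a raw string and comparing against one both work,
# which is why JSON payloads round-trip without explicit casts.
reason = FinishSketch("stop")
```

Pydantic relies on the same property when validating these fields from incoming request bodies.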
diff --git a/src/train_sft/src/llamafactory/chat/__init__.py b/src/train_sft/src/llamafactory/chat/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..15d8b9ba2d77d6f300d59300da5a49abd3ed4e57
--- /dev/null
+++ b/src/train_sft/src/llamafactory/chat/__init__.py
@@ -0,0 +1,19 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from .base_engine import BaseEngine
+from .chat_model import ChatModel
+
+
+__all__ = ["BaseEngine", "ChatModel"]
diff --git a/src/train_sft/src/llamafactory/chat/base_engine.py b/src/train_sft/src/llamafactory/chat/base_engine.py
new file mode 100644
index 0000000000000000000000000000000000000000..6d497c1ae927f94f396c18833b18cdb894cbd59d
--- /dev/null
+++ b/src/train_sft/src/llamafactory/chat/base_engine.py
@@ -0,0 +1,98 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from abc import ABC, abstractmethod
+from collections.abc import AsyncGenerator
+from dataclasses import dataclass
+from typing import TYPE_CHECKING, Any, Literal, Optional, Union
+
+
+if TYPE_CHECKING:
+ from transformers import PreTrainedModel, PreTrainedTokenizer
+ from vllm import AsyncLLMEngine
+
+ from ..data import Template
+ from ..data.mm_plugin import AudioInput, ImageInput, VideoInput
+ from ..extras.constants import EngineName
+ from ..hparams import DataArguments, FinetuningArguments, GeneratingArguments, ModelArguments
+
+
+@dataclass
+class Response:
+ response_text: str
+ response_length: int
+ prompt_length: int
+ finish_reason: Literal["stop", "length"]
+
+
+class BaseEngine(ABC):
+ r"""Base class for inference engine of chat models.
+
+ Must implements async methods: chat(), stream_chat() and get_scores().
+ """
+
+ name: "EngineName"
+ model: Union["PreTrainedModel", "AsyncLLMEngine"]
+ tokenizer: "PreTrainedTokenizer"
+ can_generate: bool
+ template: "Template"
+ generating_args: dict[str, Any]
+
+ @abstractmethod
+ def __init__(
+ self,
+ model_args: "ModelArguments",
+ data_args: "DataArguments",
+ finetuning_args: "FinetuningArguments",
+ generating_args: "GeneratingArguments",
+ ) -> None:
+ r"""Initialize an inference engine."""
+ ...
+
+ @abstractmethod
+ async def chat(
+ self,
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ **input_kwargs,
+ ) -> list["Response"]:
+ r"""Get a list of responses of the chat model."""
+ ...
+
+ @abstractmethod
+ async def stream_chat(
+ self,
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ **input_kwargs,
+ ) -> AsyncGenerator[str, None]:
+ r"""Get the response token-by-token of the chat model."""
+ ...
+
+ @abstractmethod
+ async def get_scores(
+ self,
+ batch_input: list[str],
+ **input_kwargs,
+ ) -> list[float]:
+ r"""Get a list of scores of the reward model."""
+ ...
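`BaseEngine` is an ABC whose abstract coroutines force every backend (HF, vLLM, SGLang) to expose the same async surface. A toy sketch of the pattern (the classes `MiniEngine`/`EchoEngine` are illustrative, not part of the module):

```python
import asyncio
from abc import ABC, abstractmethod

class MiniEngine(ABC):
    """Toy analogue of BaseEngine: subclasses must implement the async chat()."""

    @abstractmethod
    async def chat(self, messages: list[dict[str, str]]) -> str: ...

class EchoEngine(MiniEngine):
    async def chat(self, messages: list[dict[str, str]]) -> str:
        # Echo the last user message instead of running a real model.
        return messages[-1]["content"]

reply = asyncio.run(EchoEngine().chat([{"role": "user", "content": "hi"}]))
```

Because the methods are abstract, instantiating `MiniEngine` directly raises `TypeError`, which is the same guarantee `BaseEngine` gives: a backend cannot be registered without a complete async interface.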
diff --git a/src/train_sft/src/llamafactory/chat/chat_model.py b/src/train_sft/src/llamafactory/chat/chat_model.py
new file mode 100644
index 0000000000000000000000000000000000000000..0022eed95d13f38f8d7f61153731b35737fd2e36
--- /dev/null
+++ b/src/train_sft/src/llamafactory/chat/chat_model.py
@@ -0,0 +1,184 @@
+# Copyright 2025 THUDM and the LlamaFactory team.
+#
+# This code is inspired by the THUDM's ChatGLM implementation.
+# https://github.com/THUDM/ChatGLM-6B/blob/main/cli_demo.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import asyncio
+import os
+from collections.abc import AsyncGenerator, Generator
+from threading import Thread
+from typing import TYPE_CHECKING, Any, Optional
+
+from ..extras.constants import EngineName
+from ..extras.misc import torch_gc
+from ..hparams import get_infer_args
+from .hf_engine import HuggingfaceEngine
+from .sglang_engine import SGLangEngine
+from .vllm_engine import VllmEngine
+
+
+if TYPE_CHECKING:
+ from ..data.mm_plugin import AudioInput, ImageInput, VideoInput
+ from .base_engine import BaseEngine, Response
+
+
+def _start_background_loop(loop: "asyncio.AbstractEventLoop") -> None:
+ asyncio.set_event_loop(loop)
+ loop.run_forever()
+
+
+class ChatModel:
+ r"""General class for chat models. Backed by huggingface or vllm engines.
+
+ Supports both sync and async methods.
+ Sync methods: chat(), stream_chat() and get_scores().
+ Async methods: achat(), astream_chat() and aget_scores().
+ """
+
+ def __init__(self, args: Optional[dict[str, Any]] = None) -> None:
+ model_args, data_args, finetuning_args, generating_args = get_infer_args(args)
+ if model_args.infer_backend == EngineName.HF:
+ self.engine: BaseEngine = HuggingfaceEngine(model_args, data_args, finetuning_args, generating_args)
+ elif model_args.infer_backend == EngineName.VLLM:
+ self.engine: BaseEngine = VllmEngine(model_args, data_args, finetuning_args, generating_args)
+ elif model_args.infer_backend == EngineName.SGLANG:
+ self.engine: BaseEngine = SGLangEngine(model_args, data_args, finetuning_args, generating_args)
+ else:
+ raise NotImplementedError(f"Unknown backend: {model_args.infer_backend}")
+
+ self._loop = asyncio.new_event_loop()
+ self._thread = Thread(target=_start_background_loop, args=(self._loop,), daemon=True)
+ self._thread.start()
+
+ def chat(
+ self,
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ **input_kwargs,
+ ) -> list["Response"]:
+ r"""Get a list of responses of the chat model."""
+ task = asyncio.run_coroutine_threadsafe(
+ self.achat(messages, system, tools, images, videos, audios, **input_kwargs), self._loop
+ )
+ return task.result()
+
+ async def achat(
+ self,
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ **input_kwargs,
+ ) -> list["Response"]:
+ r"""Asynchronously get a list of responses of the chat model."""
+ return await self.engine.chat(messages, system, tools, images, videos, audios, **input_kwargs)
+
+ def stream_chat(
+ self,
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ **input_kwargs,
+ ) -> Generator[str, None, None]:
+ r"""Get the response token-by-token of the chat model."""
+ generator = self.astream_chat(messages, system, tools, images, videos, audios, **input_kwargs)
+ while True:
+ try:
+ task = asyncio.run_coroutine_threadsafe(generator.__anext__(), self._loop)
+ yield task.result()
+ except StopAsyncIteration:
+ break
+
+ async def astream_chat(
+ self,
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ **input_kwargs,
+ ) -> AsyncGenerator[str, None]:
+ r"""Asynchronously get the response token-by-token of the chat model."""
+ async for new_token in self.engine.stream_chat(
+ messages, system, tools, images, videos, audios, **input_kwargs
+ ):
+ yield new_token
+
+ def get_scores(
+ self,
+ batch_input: list[str],
+ **input_kwargs,
+ ) -> list[float]:
+ r"""Get a list of scores of the reward model."""
+ task = asyncio.run_coroutine_threadsafe(self.aget_scores(batch_input, **input_kwargs), self._loop)
+ return task.result()
+
+ async def aget_scores(
+ self,
+ batch_input: list[str],
+ **input_kwargs,
+ ) -> list[float]:
+ r"""Asynchronously get a list of scores of the reward model."""
+ return await self.engine.get_scores(batch_input, **input_kwargs)
+
+
+def run_chat() -> None:
+ if os.name != "nt":
+ try:
+ import readline # noqa: F401
+ except ImportError:
+ print("Install `readline` for a better experience.")
+
+ chat_model = ChatModel()
+ messages = []
+ print("Welcome to the CLI application, use `clear` to remove the history, use `exit` to exit the application.")
+
+ while True:
+ try:
+ query = input("\nUser: ")
+ except UnicodeDecodeError:
+ print("Detected decoding error at the inputs, please set the terminal encoding to utf-8.")
+ continue
+ except Exception:
+ raise
+
+ if query.strip() == "exit":
+ break
+
+ if query.strip() == "clear":
+ messages = []
+ torch_gc()
+ print("History has been removed.")
+ continue
+
+ messages.append({"role": "user", "content": query})
+ print("Assistant: ", end="", flush=True)
+
+ response = ""
+ for new_text in chat_model.stream_chat(messages):
+ print(new_text, end="", flush=True)
+ response += new_text
+ print()
+ messages.append({"role": "assistant", "content": response})
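`ChatModel` bridges its sync API onto the async engines by running a dedicated event loop forever in a daemon thread and submitting coroutines to it with `asyncio.run_coroutine_threadsafe`. A self-contained sketch of that bridge (the coroutine `double` is a stand-in for real inference work):

```python
import asyncio
from threading import Thread

def _start_loop(loop: asyncio.AbstractEventLoop) -> None:
    # Same as _start_background_loop above: bind the loop to this thread
    # and run it until the process exits (daemon=True).
    asyncio.set_event_loop(loop)
    loop.run_forever()

loop = asyncio.new_event_loop()
Thread(target=_start_loop, args=(loop,), daemon=True).start()

async def double(x: int) -> int:
    await asyncio.sleep(0)  # stand-in for real async inference work
    return 2 * x

# run_coroutine_threadsafe returns a concurrent.futures.Future; .result()
# blocks the calling sync thread until the coroutine finishes on the loop.
future = asyncio.run_coroutine_threadsafe(double(21), loop)
result = future.result(timeout=5)
```

`ChatModel.stream_chat` uses the same mechanism per token: it repeatedly submits `generator.__anext__()` to the background loop and yields each result until `StopAsyncIteration`.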
diff --git a/src/train_sft/src/llamafactory/chat/hf_engine.py b/src/train_sft/src/llamafactory/chat/hf_engine.py
new file mode 100644
index 0000000000000000000000000000000000000000..adaaaa872786446fde2eba4a2a8f32f7ec4cc462
--- /dev/null
+++ b/src/train_sft/src/llamafactory/chat/hf_engine.py
@@ -0,0 +1,412 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import asyncio
+import os
+from collections.abc import AsyncGenerator
+from threading import Thread
+from typing import TYPE_CHECKING, Any, Callable, Optional, Union
+
+import torch
+from transformers import GenerationConfig, TextIteratorStreamer
+from typing_extensions import override
+
+from ..data import get_template_and_fix_tokenizer
+from ..extras import logging
+from ..extras.constants import AUDIO_PLACEHOLDER, IMAGE_PLACEHOLDER, VIDEO_PLACEHOLDER, EngineName
+from ..model import load_model, load_tokenizer
+from .base_engine import BaseEngine, Response
+
+
+if TYPE_CHECKING:
+ from transformers import PreTrainedModel, PreTrainedTokenizer, ProcessorMixin
+ from trl import PreTrainedModelWrapper
+
+ from ..data import Template
+ from ..data.mm_plugin import AudioInput, ImageInput, VideoInput
+ from ..hparams import DataArguments, FinetuningArguments, GeneratingArguments, ModelArguments
+
+
+logger = logging.get_logger(__name__)
+
+
+class HuggingfaceEngine(BaseEngine):
+ def __init__(
+ self,
+ model_args: "ModelArguments",
+ data_args: "DataArguments",
+ finetuning_args: "FinetuningArguments",
+ generating_args: "GeneratingArguments",
+ ) -> None:
+ self.name = EngineName.HF
+ self.can_generate = finetuning_args.stage == "sft"
+ tokenizer_module = load_tokenizer(model_args)
+ self.tokenizer = tokenizer_module["tokenizer"]
+ self.processor = tokenizer_module["processor"]
+ self.tokenizer.padding_side = "left" if self.can_generate else "right"
+ self.template = get_template_and_fix_tokenizer(self.tokenizer, data_args)
+ self.model = load_model(
+ self.tokenizer, model_args, finetuning_args, is_trainable=False, add_valuehead=(not self.can_generate)
+ ) # must be loaded after fixing the tokenizer, so the vocab can be resized
+ self.generating_args = generating_args.to_dict()
+ try:
+ asyncio.get_event_loop()
+ except RuntimeError:
+ logger.warning_rank0_once("There is no current event loop, creating a new one.")
+ loop = asyncio.new_event_loop()
+ asyncio.set_event_loop(loop)
+
+ self.semaphore = asyncio.Semaphore(int(os.getenv("MAX_CONCURRENT", "1")))
+
+ @staticmethod
+ def _process_args(
+ model: "PreTrainedModel",
+ tokenizer: "PreTrainedTokenizer",
+ processor: Optional["ProcessorMixin"],
+ template: "Template",
+ generating_args: dict[str, Any],
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ input_kwargs: Optional[dict[str, Any]] = {},
+ ) -> tuple[dict[str, Any], int]:
+ mm_input_dict = {"images": [], "videos": [], "audios": [], "imglens": [0], "vidlens": [0], "audlens": [0]}
+ if images is not None:
+ mm_input_dict.update({"images": images, "imglens": [len(images)]})
+ if not any(IMAGE_PLACEHOLDER in message["content"] for message in messages):
+ messages[0]["content"] = IMAGE_PLACEHOLDER * len(images) + messages[0]["content"]
+
+ if videos is not None:
+ mm_input_dict.update({"videos": videos, "vidlens": [len(videos)]})
+ if not any(VIDEO_PLACEHOLDER in message["content"] for message in messages):
+ messages[0]["content"] = VIDEO_PLACEHOLDER * len(videos) + messages[0]["content"]
+
+ if audios is not None:
+ mm_input_dict.update({"audios": audios, "audlens": [len(audios)]})
+ if not any(AUDIO_PLACEHOLDER in message["content"] for message in messages):
+ messages[0]["content"] = AUDIO_PLACEHOLDER * len(audios) + messages[0]["content"]
+
+ messages = template.mm_plugin.process_messages(
+ messages, mm_input_dict["images"], mm_input_dict["videos"], mm_input_dict["audios"], processor
+ )
+ paired_messages = messages + [{"role": "assistant", "content": ""}]
+ prompt_ids, _ = template.encode_oneturn(tokenizer, paired_messages, system, tools)
+ prompt_ids, _ = template.mm_plugin.process_token_ids(
+ prompt_ids,
+ None,
+ mm_input_dict["images"],
+ mm_input_dict["videos"],
+ mm_input_dict["audios"],
+ tokenizer,
+ processor,
+ )
+ prompt_length = len(prompt_ids)
+ inputs = torch.tensor([prompt_ids], device=model.device)
+ attention_mask = torch.ones_like(inputs, dtype=torch.long)
+
+ do_sample: Optional[bool] = input_kwargs.pop("do_sample", None)
+ temperature: Optional[float] = input_kwargs.pop("temperature", None)
+ top_p: Optional[float] = input_kwargs.pop("top_p", None)
+ top_k: Optional[float] = input_kwargs.pop("top_k", None)
+ num_return_sequences: int = input_kwargs.pop("num_return_sequences", 1)
+ repetition_penalty: Optional[float] = input_kwargs.pop("repetition_penalty", None)
+ length_penalty: Optional[float] = input_kwargs.pop("length_penalty", None)
+ skip_special_tokens: Optional[bool] = input_kwargs.pop("skip_special_tokens", None)
+ max_length: Optional[int] = input_kwargs.pop("max_length", None)
+ max_new_tokens: Optional[int] = input_kwargs.pop("max_new_tokens", None)
+ stop: Optional[Union[str, list[str]]] = input_kwargs.pop("stop", None)
+
+ if stop is not None:
+ logger.warning_rank0("The `stop` parameter is not supported by the HuggingFace engine yet.")
+
+ generating_args = generating_args.copy()
+ generating_args.update(
+ dict(
+ do_sample=do_sample if do_sample is not None else generating_args["do_sample"],
+ temperature=temperature if temperature is not None else generating_args["temperature"],
+ top_p=top_p if top_p is not None else generating_args["top_p"],
+ top_k=top_k if top_k is not None else generating_args["top_k"],
+ num_return_sequences=num_return_sequences,
+ repetition_penalty=repetition_penalty
+ if repetition_penalty is not None
+ else generating_args["repetition_penalty"],
+ length_penalty=length_penalty if length_penalty is not None else generating_args["length_penalty"],
+ skip_special_tokens=skip_special_tokens
+ if skip_special_tokens is not None
+ else generating_args["skip_special_tokens"],
+ eos_token_id=template.get_stop_token_ids(tokenizer),
+ pad_token_id=tokenizer.pad_token_id,
+ )
+ )
+
+ if isinstance(num_return_sequences, int) and num_return_sequences > 1: # do_sample needs temperature > 0
+ generating_args["do_sample"] = True
+ generating_args["temperature"] = generating_args["temperature"] or 1.0
+
+ if not generating_args["temperature"]:
+ generating_args["do_sample"] = False
+
+ if not generating_args["do_sample"]:
+ generating_args.pop("temperature", None)
+ generating_args.pop("top_p", None)
+
+ if max_length:
+ generating_args.pop("max_new_tokens", None)
+ generating_args["max_length"] = max_length
+
+ if max_new_tokens:
+ generating_args.pop("max_length", None)
+ generating_args["max_new_tokens"] = max_new_tokens
+
+ gen_kwargs = dict(
+ inputs=inputs,
+ attention_mask=attention_mask,
+ generation_config=GenerationConfig(**generating_args),
+ )
+
+ mm_inputs = template.mm_plugin.get_mm_inputs(**mm_input_dict, batch_ids=[prompt_ids], processor=processor)
+ for key, value in mm_inputs.items():
+ if isinstance(value, list) and isinstance(value[0], torch.Tensor): # for pixtral inputs
+ value = torch.stack(value) # assume all tensors share the same shape
+ elif (
+ isinstance(value, list) and isinstance(value[0], list) and isinstance(value[0][0], torch.Tensor)
+ ): # for minicpmv inputs
+ value = torch.stack([torch.stack(v) for v in value])
+ elif not isinstance(value, torch.Tensor):
+ value = torch.tensor(value)
+
+ if torch.is_floating_point(value): # cast data dtype for paligemma
+ value = value.to(model.dtype)
+
+ if key == "second_per_grid_ts": # qwen2.5vl special case
+ gen_kwargs[key] = value.tolist()
+ else:
+ gen_kwargs[key] = value.to(model.device)
+
+ if getattr(model.config, "model_type", None) in ["minicpmv", "minicpmo"]:
+ gen_kwargs["input_ids"] = inputs
+ gen_kwargs["tokenizer"] = tokenizer
+ if "audio_feature_lens" in mm_inputs:
+ gen_kwargs["audio_feature_lens"] = mm_inputs["audio_feature_lens"]
+
+ gen_kwargs.pop("image_sizes", None)
+
+ return gen_kwargs, prompt_length
+
+ @staticmethod
+ @torch.inference_mode()
+ def _chat(
+ model: "PreTrainedModel",
+ tokenizer: "PreTrainedTokenizer",
+ processor: Optional["ProcessorMixin"],
+ template: "Template",
+ generating_args: dict[str, Any],
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ input_kwargs: Optional[dict[str, Any]] = {},
+ ) -> list["Response"]:
+ gen_kwargs, prompt_length = HuggingfaceEngine._process_args(
+ model,
+ tokenizer,
+ processor,
+ template,
+ generating_args,
+ messages,
+ system,
+ tools,
+ images,
+ videos,
+ audios,
+ input_kwargs,
+ )
+ generate_output = model.generate(**gen_kwargs)
+ if isinstance(generate_output, tuple):
+ generate_output = generate_output[1][0] # post-process the minicpm_o output
+
+ response_ids = generate_output[:, prompt_length:]
+ response = tokenizer.batch_decode(
+ response_ids,
+ skip_special_tokens=getattr(gen_kwargs["generation_config"], "skip_special_tokens", True),
+ clean_up_tokenization_spaces=True,
+ )
+ results = []
+ for i in range(len(response)):
+ eos_index = (response_ids[i] == tokenizer.eos_token_id).nonzero()
+ response_length = (eos_index[0].item() + 1) if len(eos_index) else len(response_ids[i])
+ results.append(
+ Response(
+ response_text=response[i],
+ response_length=response_length,
+ prompt_length=prompt_length,
+ finish_reason="stop" if len(eos_index) else "length",
+ )
+ )
+
+ return results
+
+ @staticmethod
+ @torch.inference_mode()
+ def _stream_chat(
+ model: "PreTrainedModel",
+ tokenizer: "PreTrainedTokenizer",
+ processor: Optional["ProcessorMixin"],
+ template: "Template",
+ generating_args: dict[str, Any],
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ input_kwargs: Optional[dict[str, Any]] = {},
+ ) -> Callable[[], str]:
+ gen_kwargs, _ = HuggingfaceEngine._process_args(
+ model,
+ tokenizer,
+ processor,
+ template,
+ generating_args,
+ messages,
+ system,
+ tools,
+ images,
+ videos,
+ audios,
+ input_kwargs,
+ )
+ streamer = TextIteratorStreamer(
+ tokenizer,
+ skip_prompt=True,
+ skip_special_tokens=getattr(gen_kwargs["generation_config"], "skip_special_tokens", True),
+ )
+ gen_kwargs["streamer"] = streamer
+ thread = Thread(target=model.generate, kwargs=gen_kwargs, daemon=True)
+ thread.start()
+
+ def stream():
+ try:
+ return next(streamer)
+ except StopIteration:
+ raise StopAsyncIteration()
+
+ return stream
+
+ @staticmethod
+ @torch.inference_mode()
+ def _get_scores(
+ model: "PreTrainedModelWrapper",
+ tokenizer: "PreTrainedTokenizer",
+ batch_input: list[str],
+ input_kwargs: Optional[dict[str, Any]] = {},
+ ) -> list[float]:
+ max_length: Optional[int] = input_kwargs.pop("max_length", None)
+ device = getattr(model.pretrained_model, "device", "cuda")
+ inputs: dict[str, torch.Tensor] = tokenizer(
+ batch_input,
+ padding=True,
+ truncation=True,
+ max_length=max_length or getattr(model.config, "max_position_embeddings", 1024),
+ return_tensors="pt",
+ add_special_tokens=False,
+ ).to(device)
+ values: torch.Tensor = model(**inputs, return_dict=True, use_cache=False)[-1]
+ scores = values.gather(dim=-1, index=(inputs["attention_mask"].sum(dim=-1, keepdim=True) - 1))
+ return scores
+
+ @override
+ async def chat(
+ self,
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ **input_kwargs,
+ ) -> list["Response"]:
+ if not self.can_generate:
+ raise ValueError("The current model does not support `chat`.")
+
+ input_args = (
+ self.model,
+ self.tokenizer,
+ self.processor,
+ self.template,
+ self.generating_args,
+ messages,
+ system,
+ tools,
+ images,
+ videos,
+ audios,
+ input_kwargs,
+ )
+ async with self.semaphore:
+ return await asyncio.to_thread(self._chat, *input_args)
+
+ @override
+ async def stream_chat(
+ self,
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ **input_kwargs,
+ ) -> AsyncGenerator[str, None]:
+ if not self.can_generate:
+ raise ValueError("The current model does not support `stream_chat`.")
+
+ input_args = (
+ self.model,
+ self.tokenizer,
+ self.processor,
+ self.template,
+ self.generating_args,
+ messages,
+ system,
+ tools,
+ images,
+ videos,
+ audios,
+ input_kwargs,
+ )
+ async with self.semaphore:
+ stream = self._stream_chat(*input_args)
+ while True:
+ try:
+ yield await asyncio.to_thread(stream)
+ except StopAsyncIteration:
+ break
+
+ @override
+ async def get_scores(
+ self,
+ batch_input: list[str],
+ **input_kwargs,
+ ) -> list[float]:
+ if self.can_generate:
+ raise ValueError("Cannot get scores using an auto-regressive model.")
+
+ input_args = (self.model, self.tokenizer, batch_input, input_kwargs)
+ async with self.semaphore:
+ return await asyncio.to_thread(self._get_scores, *input_args)
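`HuggingfaceEngine.chat` above offloads the blocking `model.generate` call to a worker thread via `asyncio.to_thread` and throttles concurrency with a semaphore sized from `MAX_CONCURRENT`. The pattern can be sketched in isolation (a minimal example; `blocking_generate` is a stand-in for the real generation call):

```python
import asyncio
import os


def blocking_generate(prompt: str) -> str:
    # Stand-in for the blocking model.generate call in HuggingfaceEngine._chat.
    return prompt.upper()


async def chat(sem: asyncio.Semaphore, prompt: str) -> str:
    # Same shape as HuggingfaceEngine.chat: hold the semaphore while the
    # blocking call runs in a worker thread, keeping the event loop responsive.
    async with sem:
        return await asyncio.to_thread(blocking_generate, prompt)


async def main() -> list[str]:
    # MAX_CONCURRENT caps how many generations may run at once (default 1).
    sem = asyncio.Semaphore(int(os.getenv("MAX_CONCURRENT", "1")))
    return await asyncio.gather(*(chat(sem, p) for p in ["a", "b", "c"]))


results = asyncio.run(main())
```

With the default `MAX_CONCURRENT=1` the semaphore serializes generations, which is why raising the env var is the knob for concurrent requests.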
diff --git a/src/train_sft/src/llamafactory/chat/sglang_engine.py b/src/train_sft/src/llamafactory/chat/sglang_engine.py
new file mode 100644
index 0000000000000000000000000000000000000000..b1d2ead33823bc70d51cda59750d25580f972083
--- /dev/null
+++ b/src/train_sft/src/llamafactory/chat/sglang_engine.py
@@ -0,0 +1,289 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import asyncio
+import atexit
+import json
+from collections.abc import AsyncGenerator, AsyncIterator, Sequence
+from typing import TYPE_CHECKING, Any, Optional, Union
+
+import requests
+from typing_extensions import override
+
+from ..data import get_template_and_fix_tokenizer
+from ..extras import logging
+from ..extras.constants import AUDIO_PLACEHOLDER, IMAGE_PLACEHOLDER, VIDEO_PLACEHOLDER, EngineName
+from ..extras.misc import get_device_count, torch_gc
+from ..extras.packages import is_sglang_available
+from ..hparams import DataArguments, FinetuningArguments, GeneratingArguments, ModelArguments
+from ..model import load_config, load_tokenizer
+from ..model.model_utils.quantization import QuantizationMethod
+from .base_engine import BaseEngine, Response
+
+
+if is_sglang_available():
+ from sglang.utils import launch_server_cmd, terminate_process, wait_for_server # type: ignore
+
+
+if TYPE_CHECKING:
+ from ..data.mm_plugin import AudioInput, ImageInput, VideoInput
+
+
+logger = logging.get_logger(__name__)
+
+
+class SGLangEngine(BaseEngine):
+ """Inference engine backed by an SGLang server.
+
+ This class wraps SGLang behind the text-generation interface that LLaMA Factory
+ expects. Instead of embedding the engine in-process, it launches an SGLang HTTP
+ server as a subprocess and communicates with it over HTTP, which offers better
+ isolation and performance.
+
+ For more details on the SGLang HTTP server approach, see:
+ https://docs.sglang.ai/backend/send_request.html
+ """
+
+ def __init__(
+ self,
+ model_args: "ModelArguments",
+ data_args: "DataArguments",
+ finetuning_args: "FinetuningArguments",
+ generating_args: "GeneratingArguments",
+ ) -> None:
+ self.name = EngineName.SGLANG
+ self.model_args = model_args
+ config = load_config(model_args) # may download model from ms hub
+ if getattr(config, "quantization_config", None): # gptq models should use float16
+ quantization_config: dict[str, Any] = getattr(config, "quantization_config", None)
+ quant_method = quantization_config.get("quant_method", "")
+ if quant_method == QuantizationMethod.GPTQ and model_args.infer_dtype == "auto":
+ model_args.infer_dtype = "float16"
+
+ self.can_generate = finetuning_args.stage == "sft"
+ tokenizer_module = load_tokenizer(model_args)
+ self.tokenizer = tokenizer_module["tokenizer"]
+ self.processor = tokenizer_module["processor"]
+ self.tokenizer.padding_side = "left"
+ self.template = get_template_and_fix_tokenizer(self.tokenizer, data_args)
+ self.template.mm_plugin.expand_mm_tokens = False # for sglang generate
+ self.generating_args = generating_args.to_dict()
+ self.lora_request = model_args.adapter_name_or_path is not None
+
+ launch_cmd = [
+ "python3 -m sglang.launch_server",
+ f"--model-path {model_args.model_name_or_path}",
+ f"--dtype {model_args.infer_dtype}",
+ f"--context-length {model_args.sglang_maxlen}",
+ f"--mem-fraction-static {model_args.sglang_mem_fraction}",
+ f"--tp-size {model_args.sglang_tp_size if model_args.sglang_tp_size != -1 else get_device_count() or 1}",
+ f"--download-dir {model_args.cache_dir}",
+ "--log-level error",
+ ]
+ if self.lora_request:
+ launch_cmd.extend(
+ [
+ "--max-loras-per-batch 1",
+ f"--lora-backend {model_args.sglang_lora_backend}",
+ f"--lora-paths lora0={model_args.adapter_name_or_path[0]}",
+ "--disable-radix-cache",
+ ]
+ )
+ launch_cmd = " ".join(launch_cmd)
+ logger.info_rank0(f"Starting SGLang server with command: {launch_cmd}")
+ try:
+ torch_gc()
+ self.server_process, port = launch_server_cmd(launch_cmd)
+ self.base_url = f"http://localhost:{port}"
+ atexit.register(self._cleanup_server)
+
+ logger.info_rank0(f"Waiting for SGLang server to be ready at {self.base_url}")
+ wait_for_server(self.base_url, timeout=300)
+ logger.info_rank0(f"SGLang server initialized successfully at {self.base_url}")
+ try:
+ response = requests.get(f"{self.base_url}/get_model_info", timeout=5)
+ if response.status_code == 200:
+ model_info = response.json()
+ logger.info(f"SGLang server model info: {model_info}")
+ except Exception as e:
+ logger.debug(f"Note: could not get model info: {str(e)}")
+
+ except Exception as e:
+ logger.error(f"Failed to start SGLang server: {str(e)}")
+ self._cleanup_server() # make sure to clean up any started process
+ raise RuntimeError(f"SGLang server initialization failed: {str(e)}.")
+
+ def _cleanup_server(self):
+ r"""Clean up the server process when the engine is destroyed."""
+ if hasattr(self, "server_process") and self.server_process:
+ try:
+ logger.info("Terminating SGLang server process")
+ terminate_process(self.server_process)
+ logger.info("SGLang server process terminated")
+ except Exception as e:
+ logger.warning(f"Error terminating SGLang server: {str(e)}")
+
+ async def _generate(
+ self,
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ **input_kwargs,
+ ) -> AsyncIterator[dict[str, Any]]:
+ if images is not None and not any(IMAGE_PLACEHOLDER in message["content"] for message in messages):
+ messages[0]["content"] = IMAGE_PLACEHOLDER * len(images) + messages[0]["content"]
+
+ if videos is not None and not any(VIDEO_PLACEHOLDER in message["content"] for message in messages):
+ messages[0]["content"] = VIDEO_PLACEHOLDER * len(videos) + messages[0]["content"]
+
+ if audios is not None and not any(AUDIO_PLACEHOLDER in message["content"] for message in messages):
+ messages[0]["content"] = AUDIO_PLACEHOLDER * len(audios) + messages[0]["content"]
+
+ messages = self.template.mm_plugin.process_messages(
+ messages, images or [], videos or [], audios or [], self.processor
+ )
+ paired_messages = messages + [{"role": "assistant", "content": ""}]
+ prompt_ids, _ = self.template.encode_oneturn(self.tokenizer, paired_messages, system, tools)
+ prompt_length = len(prompt_ids)
+
+ temperature: Optional[float] = input_kwargs.pop("temperature", None)
+ top_p: Optional[float] = input_kwargs.pop("top_p", None)
+ top_k: Optional[float] = input_kwargs.pop("top_k", None)
+ num_return_sequences: int = input_kwargs.pop("num_return_sequences", 1)
+ repetition_penalty: Optional[float] = input_kwargs.pop("repetition_penalty", None)
+ skip_special_tokens: Optional[bool] = input_kwargs.pop("skip_special_tokens", None)
+ max_length: Optional[int] = input_kwargs.pop("max_length", None)
+ max_new_tokens: Optional[int] = input_kwargs.pop("max_new_tokens", None)
+ stop: Optional[Union[str, list[str]]] = input_kwargs.pop("stop", None)
+
+ if num_return_sequences != 1:
+ raise NotImplementedError("SGLang only supports n=1.")
+
+ if "max_new_tokens" in self.generating_args:
+ max_tokens = self.generating_args["max_new_tokens"]
+ elif "max_length" in self.generating_args:
+ if self.generating_args["max_length"] > prompt_length:
+ max_tokens = self.generating_args["max_length"] - prompt_length
+ else:
+ max_tokens = 1
+
+ if max_length:
+ max_tokens = max_length - prompt_length if max_length > prompt_length else 1
+
+ if max_new_tokens:
+ max_tokens = max_new_tokens
+
+ sampling_params = {
+ "temperature": temperature if temperature is not None else self.generating_args["temperature"],
+ "top_p": (top_p if top_p is not None else self.generating_args["top_p"]) or 1.0, # top_p must be > 0
+ "top_k": (top_k if top_k is not None else self.generating_args["top_k"]) or -1, # -1 disables top_k
+ "stop": stop,
+ "stop_token_ids": self.template.get_stop_token_ids(self.tokenizer),
+ "max_new_tokens": max_tokens,
+ "repetition_penalty": (
+ repetition_penalty if repetition_penalty is not None else self.generating_args["repetition_penalty"]
+ )
+ or 1.0, # repetition_penalty must be > 0
+ "skip_special_tokens": skip_special_tokens
+ if skip_special_tokens is not None
+ else self.generating_args["skip_special_tokens"],
+ }
+
+ def stream_request():
+ json_data = {
+ "input_ids": prompt_ids,
+ "sampling_params": sampling_params,
+ "stream": True,
+ }
+ if self.lora_request:
+ json_data["lora_request"] = ["lora0"]
+ response = requests.post(f"{self.base_url}/generate", json=json_data, stream=True)
+ if response.status_code != 200:
+ raise RuntimeError(f"SGLang server error: {response.status_code}, {response.text}")
+
+ for chunk in response.iter_lines(decode_unicode=False):
+ chunk = chunk.decode("utf-8")
+ if chunk == "data: [DONE]":
+ break
+
+ if chunk and chunk.startswith("data:"):
+ yield json.loads(chunk[5:].strip("\n"))
+
+ return await asyncio.to_thread(stream_request)
+
+ @override
+ async def chat(
+ self,
+ messages: Sequence[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[Sequence["ImageInput"]] = None,
+ videos: Optional[Sequence["VideoInput"]] = None,
+ audios: Optional[Sequence["AudioInput"]] = None,
+ **input_kwargs,
+ ) -> list["Response"]:
+ final_output = None
+ generator = await self._generate(messages, system, tools, images, videos, audios, **input_kwargs)
+ for request_output in generator:
+ final_output = request_output
+
+ results = [
+ Response(
+ response_text=final_output["text"],
+ response_length=final_output["meta_info"]["completion_tokens"],
+ prompt_length=final_output["meta_info"]["prompt_tokens"],
+ finish_reason="stop" if final_output["meta_info"]["finish_reason"] == "stop" else "length",
+ )
+ ]
+ return results
+
+ @override
+ async def stream_chat(
+ self,
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ **input_kwargs,
+ ) -> AsyncGenerator[str, None]:
+ generated_text = ""
+ generator = await self._generate(messages, system, tools, images, videos, audios, **input_kwargs)
+ for result in generator:
+ delta_text = result["text"][len(generated_text) :]
+ generated_text = result["text"]
+ yield delta_text
+
+ @override
+ async def get_scores(
+ self,
+ batch_input: list[str],
+ **input_kwargs,
+ ) -> list[float]:
+ raise NotImplementedError("SGLang engine does not support `get_scores`.")
+
+ def __del__(self):
+ r"""Ensure the server process is cleaned up when the object is garbage-collected."""
+ self._cleanup_server()
+ try:
+ atexit.unregister(self._cleanup_server)
+ except Exception:
+ pass
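The `stream_request` closure above parses SGLang's server-sent-events stream by hand: blank keep-alive lines are skipped, `data: [DONE]` terminates the stream, and every other `data:` line carries a JSON payload. That parsing step can be isolated as a small helper (a sketch; the sample payloads are made up, and real chunks also include a `meta_info` field):

```python
import json
from collections.abc import Iterable, Iterator
from typing import Any


def iter_sse_json(lines: Iterable[bytes]) -> Iterator[dict[str, Any]]:
    # Mirrors stream_request: skip keep-alive blanks, stop at the
    # [DONE] sentinel, and decode each "data:" payload as JSON.
    for raw in lines:
        chunk = raw.decode("utf-8")
        if chunk == "data: [DONE]":
            break
        if chunk and chunk.startswith("data:"):
            yield json.loads(chunk[5:].strip())


sample = [b'data: {"text": "Hel"}', b"", b'data: {"text": "Hello"}', b"data: [DONE]"]
parsed = list(iter_sse_json(sample))
```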
diff --git a/src/train_sft/src/llamafactory/chat/vllm_engine.py b/src/train_sft/src/llamafactory/chat/vllm_engine.py
new file mode 100644
index 0000000000000000000000000000000000000000..b33705279da13d398f8089b94c72b22d742f2c6f
--- /dev/null
+++ b/src/train_sft/src/llamafactory/chat/vllm_engine.py
@@ -0,0 +1,263 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import uuid
+from collections.abc import AsyncGenerator, AsyncIterator
+from typing import TYPE_CHECKING, Any, Optional, Union
+
+from typing_extensions import override
+
+from ..data import get_template_and_fix_tokenizer
+from ..extras import logging
+from ..extras.constants import AUDIO_PLACEHOLDER, IMAGE_PLACEHOLDER, VIDEO_PLACEHOLDER, EngineName
+from ..extras.misc import get_device_count
+from ..extras.packages import is_vllm_available
+from ..model import load_config, load_tokenizer
+from ..model.model_utils.quantization import QuantizationMethod
+from ..model.model_utils.visual import LlavaMultiModalProjectorForYiVLForVLLM
+from .base_engine import BaseEngine, Response
+
+
+if is_vllm_available():
+ from vllm import AsyncEngineArgs, AsyncLLMEngine, RequestOutput, SamplingParams
+ from vllm.lora.request import LoRARequest
+
+
+if TYPE_CHECKING:
+ from ..data.mm_plugin import AudioInput, ImageInput, VideoInput
+ from ..hparams import DataArguments, FinetuningArguments, GeneratingArguments, ModelArguments
+
+
+logger = logging.get_logger(__name__)
+
+
+class VllmEngine(BaseEngine):
+ def __init__(
+ self,
+ model_args: "ModelArguments",
+ data_args: "DataArguments",
+ finetuning_args: "FinetuningArguments",
+ generating_args: "GeneratingArguments",
+ ) -> None:
+ self.name = EngineName.VLLM
+ self.model_args = model_args
+ config = load_config(model_args) # may download model from ms hub
+ if getattr(config, "quantization_config", None): # gptq models should use float16
+ quantization_config: dict[str, Any] = getattr(config, "quantization_config", None)
+ quant_method = quantization_config.get("quant_method", "")
+ if quant_method == QuantizationMethod.GPTQ and model_args.infer_dtype == "auto":
+ model_args.infer_dtype = "float16"
+
+ self.can_generate = finetuning_args.stage == "sft"
+ tokenizer_module = load_tokenizer(model_args)
+ self.tokenizer = tokenizer_module["tokenizer"]
+ self.processor = tokenizer_module["processor"]
+ self.tokenizer.padding_side = "left"
+ self.template = get_template_and_fix_tokenizer(self.tokenizer, data_args)
+ self.template.mm_plugin.expand_mm_tokens = False # for vllm generate
+ self.generating_args = generating_args.to_dict()
+
+ engine_args = {
+ "model": model_args.model_name_or_path,
+ "trust_remote_code": model_args.trust_remote_code,
+ "download_dir": model_args.cache_dir,
+ "dtype": model_args.infer_dtype,
+ "max_model_len": model_args.vllm_maxlen,
+ "tensor_parallel_size": get_device_count() or 1,
+ "gpu_memory_utilization": model_args.vllm_gpu_util,
+ "disable_log_stats": True,
+ "disable_log_requests": True,
+ "enforce_eager": model_args.vllm_enforce_eager,
+ "enable_lora": model_args.adapter_name_or_path is not None,
+ "max_lora_rank": model_args.vllm_max_lora_rank,
+ }
+ if self.template.mm_plugin.__class__.__name__ != "BasePlugin":
+ engine_args["limit_mm_per_prompt"] = {"image": 4, "video": 2, "audio": 2}
+
+ if isinstance(model_args.vllm_config, dict):
+ engine_args.update(model_args.vllm_config)
+
+ if getattr(config, "is_yi_vl_derived_model", None):
+ import vllm.model_executor.models.llava
+
+ logger.info_rank0("Detected Yi-VL model, applying projector patch.")
+ vllm.model_executor.models.llava.LlavaMultiModalProjector = LlavaMultiModalProjectorForYiVLForVLLM
+
+ self.model = AsyncLLMEngine.from_engine_args(AsyncEngineArgs(**engine_args))
+ if model_args.adapter_name_or_path is not None:
+ self.lora_request = LoRARequest("default", 1, model_args.adapter_name_or_path[0])
+ else:
+ self.lora_request = None
+
+ async def _generate(
+ self,
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ **input_kwargs,
+ ) -> AsyncIterator["RequestOutput"]:
+ request_id = f"chatcmpl-{uuid.uuid4().hex}"
+ if images is not None and not any(IMAGE_PLACEHOLDER in message["content"] for message in messages):
+ messages[0]["content"] = IMAGE_PLACEHOLDER * len(images) + messages[0]["content"]
+
+ if videos is not None and not any(VIDEO_PLACEHOLDER in message["content"] for message in messages):
+ messages[0]["content"] = VIDEO_PLACEHOLDER * len(videos) + messages[0]["content"]
+
+ if audios is not None and not any(AUDIO_PLACEHOLDER in message["content"] for message in messages):
+ messages[0]["content"] = AUDIO_PLACEHOLDER * len(audios) + messages[0]["content"]
+
+ messages = self.template.mm_plugin.process_messages(
+ messages, images or [], videos or [], audios or [], self.processor
+ )
+ paired_messages = messages + [{"role": "assistant", "content": ""}]
+ prompt_ids, _ = self.template.encode_oneturn(self.tokenizer, paired_messages, system, tools)
+ prompt_length = len(prompt_ids)
+
+ temperature: Optional[float] = input_kwargs.pop("temperature", None)
+ top_p: Optional[float] = input_kwargs.pop("top_p", None)
+ top_k: Optional[float] = input_kwargs.pop("top_k", None)
+ num_return_sequences: int = input_kwargs.pop("num_return_sequences", 1)
+ repetition_penalty: Optional[float] = input_kwargs.pop("repetition_penalty", None)
+ length_penalty: Optional[float] = input_kwargs.pop("length_penalty", None)
+ skip_special_tokens: Optional[bool] = input_kwargs.pop("skip_special_tokens", None)
+ max_length: Optional[int] = input_kwargs.pop("max_length", None)
+ max_new_tokens: Optional[int] = input_kwargs.pop("max_new_tokens", None)
+ stop: Optional[Union[str, list[str]]] = input_kwargs.pop("stop", None)
+
+ if length_penalty is not None:
+ logger.warning_rank0("Length penalty is not supported by the vllm engine yet.")
+
+ if "max_new_tokens" in self.generating_args:
+ max_tokens = self.generating_args["max_new_tokens"]
+ elif "max_length" in self.generating_args:
+ if self.generating_args["max_length"] > prompt_length:
+ max_tokens = self.generating_args["max_length"] - prompt_length
+ else:
+ max_tokens = 1
+
+ if max_length:
+ max_tokens = max_length - prompt_length if max_length > prompt_length else 1
+
+ if max_new_tokens:
+ max_tokens = max_new_tokens
+
+ sampling_params = SamplingParams(
+ n=num_return_sequences,
+ repetition_penalty=(
+ repetition_penalty if repetition_penalty is not None else self.generating_args["repetition_penalty"]
+ )
+ or 1.0, # repetition_penalty must be > 0
+ temperature=temperature if temperature is not None else self.generating_args["temperature"],
+ top_p=(top_p if top_p is not None else self.generating_args["top_p"]) or 1.0, # top_p must be > 0
+ top_k=(top_k if top_k is not None else self.generating_args["top_k"]) or -1, # -1 disables top_k
+ stop=stop,
+ stop_token_ids=self.template.get_stop_token_ids(self.tokenizer),
+ max_tokens=max_tokens,
+ skip_special_tokens=skip_special_tokens
+ if skip_special_tokens is not None
+ else self.generating_args["skip_special_tokens"],
+ )
+
+ if images is not None: # add image features
+ multi_modal_data = {
+ "image": self.template.mm_plugin._regularize_images(
+ images,
+ image_max_pixels=self.model_args.image_max_pixels,
+ image_min_pixels=self.model_args.image_min_pixels,
+ )["images"]
+ }
+ elif videos is not None:
+ multi_modal_data = {
+ "video": self.template.mm_plugin._regularize_videos(
+ videos,
+ image_max_pixels=self.model_args.video_max_pixels,
+ image_min_pixels=self.model_args.video_min_pixels,
+ video_fps=self.model_args.video_fps,
+ video_maxlen=self.model_args.video_maxlen,
+ )["videos"]
+ }
+ elif audios is not None:
+ audio_data = self.template.mm_plugin._regularize_audios(
+ audios,
+ sampling_rate=self.model_args.audio_sampling_rate,
+ )
+ multi_modal_data = {"audio": zip(audio_data["audios"], audio_data["sampling_rates"])}
+ else:
+ multi_modal_data = None
+
+ result_generator = self.model.generate(
+ {"prompt_token_ids": prompt_ids, "multi_modal_data": multi_modal_data},
+ sampling_params=sampling_params,
+ request_id=request_id,
+ lora_request=self.lora_request,
+ )
+ return result_generator
+
+ @override
+ async def chat(
+ self,
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ **input_kwargs,
+ ) -> list["Response"]:
+ final_output = None
+ generator = await self._generate(messages, system, tools, images, videos, audios, **input_kwargs)
+ async for request_output in generator:
+ final_output = request_output
+
+ results = []
+ for output in final_output.outputs:
+ results.append(
+ Response(
+ response_text=output.text,
+ response_length=len(output.token_ids),
+ prompt_length=len(final_output.prompt_token_ids),
+ finish_reason=output.finish_reason,
+ )
+ )
+
+ return results
+
+ @override
+ async def stream_chat(
+ self,
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ images: Optional[list["ImageInput"]] = None,
+ videos: Optional[list["VideoInput"]] = None,
+ audios: Optional[list["AudioInput"]] = None,
+ **input_kwargs,
+ ) -> AsyncGenerator[str, None]:
+ generated_text = ""
+ generator = await self._generate(messages, system, tools, images, videos, audios, **input_kwargs)
+ async for result in generator:
+ delta_text = result.outputs[0].text[len(generated_text) :]
+ generated_text = result.outputs[0].text
+ yield delta_text
+
+ @override
+ async def get_scores(
+ self,
+ batch_input: list[str],
+ **input_kwargs,
+ ) -> list[float]:
+ raise NotImplementedError("vLLM engine does not support `get_scores`.")
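`stream_chat` above relies on each vLLM `RequestOutput` carrying the full text generated so far, so the streamed chunk is just the suffix past what was already emitted. The same delta logic on hypothetical cumulative snapshots:

```python
from collections.abc import Iterable, Iterator


def to_deltas(snapshots: Iterable[str]) -> Iterator[str]:
    # Each snapshot is the complete text generated so far (as vLLM emits);
    # yield only the portion not yet sent to the client.
    generated = ""
    for text in snapshots:
        yield text[len(generated):]
        generated = text


chunks = list(to_deltas(["He", "Hell", "Hello!"]))
```

Concatenating the deltas reconstructs the final text, which is the invariant the streaming client depends on.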
diff --git a/src/train_sft/src/llamafactory/cli.py b/src/train_sft/src/llamafactory/cli.py
new file mode 100644
index 0000000000000000000000000000000000000000..fb94b35c2d44ea094befc15b9bf0411c42f91f3b
--- /dev/null
+++ b/src/train_sft/src/llamafactory/cli.py
@@ -0,0 +1,160 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import subprocess
+import sys
+from copy import deepcopy
+from functools import partial
+
+
+USAGE = (
+ "-" * 70
+ + "\n"
+ + "| Usage: |\n"
+ + "| llamafactory-cli api -h: launch an OpenAI-style API server |\n"
+ + "| llamafactory-cli chat -h: launch a chat interface in CLI |\n"
+ + "| llamafactory-cli eval -h: evaluate models |\n"
+ + "| llamafactory-cli export -h: merge LoRA adapters and export model |\n"
+ + "| llamafactory-cli train -h: train models |\n"
+ + "| llamafactory-cli webchat -h: launch a chat interface in Web UI |\n"
+ + "| llamafactory-cli webui: launch LlamaBoard |\n"
+ + "| llamafactory-cli version: show version info |\n"
+ + "-" * 70
+)
+
+
+def main():
+ from . import launcher
+ from .api.app import run_api
+ from .chat.chat_model import run_chat
+ from .eval.evaluator import run_eval
+ from .extras import logging
+ from .extras.env import VERSION, print_env
+ from .extras.misc import find_available_port, get_device_count, is_env_enabled, use_ray
+ from .train.tuner import export_model, run_exp
+ from .webui.interface import run_web_demo, run_web_ui
+
+ logger = logging.get_logger(__name__)
+
+ WELCOME = (
+ "-" * 58
+ + "\n"
+ + f"| Welcome to LLaMA Factory, version {VERSION}"
+ + " " * (21 - len(VERSION))
+ + "|\n|"
+ + " " * 56
+ + "|\n"
+ + "| Project page: https://github.com/hiyouga/LLaMA-Factory |\n"
+ + "-" * 58
+ )
+
+ COMMAND_MAP = {
+ "api": run_api,
+ "chat": run_chat,
+ "env": print_env,
+ "eval": run_eval,
+ "export": export_model,
+ "train": run_exp,
+ "webchat": run_web_demo,
+ "webui": run_web_ui,
+ "version": partial(print, WELCOME),
+ "help": partial(print, USAGE),
+ }
+
+ command = sys.argv.pop(1) if len(sys.argv) > 1 else "help"
+ if command == "train" and (is_env_enabled("FORCE_TORCHRUN") or (get_device_count() > 1 and not use_ray())):
+ # launch distributed training
+ nnodes = os.getenv("NNODES", "1")
+ node_rank = os.getenv("NODE_RANK", "0")
+ nproc_per_node = os.getenv("NPROC_PER_NODE", str(get_device_count()))
+ master_addr = os.getenv("MASTER_ADDR", "127.0.0.1")
+ master_port = os.getenv("MASTER_PORT", str(find_available_port()))
+ logger.info_rank0(f"Initializing {nproc_per_node} distributed tasks at: {master_addr}:{master_port}")
+ if int(nnodes) > 1:
+ logger.info_rank0(f"Multi-node training enabled: num nodes: {nnodes}, node rank: {node_rank}")
+
+ # elastic launch support
+ max_restarts = os.getenv("MAX_RESTARTS", "0")
+ rdzv_id = os.getenv("RDZV_ID")
+ min_nnodes = os.getenv("MIN_NNODES")
+ max_nnodes = os.getenv("MAX_NNODES")
+
+ env = deepcopy(os.environ)
+ if is_env_enabled("OPTIM_TORCH", "1"):
+ # optimize DDP, see https://zhuanlan.zhihu.com/p/671834539
+ env["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"
+ env["TORCH_NCCL_AVOID_RECORD_STREAMS"] = "1"
+
+ if rdzv_id is not None:
+ # launch elastic job with fault tolerant support when possible
+ # see also https://docs.pytorch.org/docs/stable/elastic/train_script.html
+ rdzv_nnodes = nnodes
+ # elastic number of nodes if MIN_NNODES and MAX_NNODES are set
+ if min_nnodes is not None and max_nnodes is not None:
+ rdzv_nnodes = f"{min_nnodes}:{max_nnodes}"
+
+ process = subprocess.run(
+ (
+ "torchrun --nnodes {rdzv_nnodes} --nproc-per-node {nproc_per_node} "
+ "--rdzv-id {rdzv_id} --rdzv-backend c10d --rdzv-endpoint {master_addr}:{master_port} "
+ "--max-restarts {max_restarts} {file_name} {args}"
+ )
+ .format(
+ rdzv_nnodes=rdzv_nnodes,
+ nproc_per_node=nproc_per_node,
+ rdzv_id=rdzv_id,
+ master_addr=master_addr,
+ master_port=master_port,
+ max_restarts=max_restarts,
+ file_name=launcher.__file__,
+ args=" ".join(sys.argv[1:]),
+ )
+ .split(),
+ env=env,
+ check=True,
+ )
+ else:
+ # NOTE: DO NOT USE shell=True to avoid security risk
+ process = subprocess.run(
+ (
+ "torchrun --nnodes {nnodes} --node_rank {node_rank} --nproc_per_node {nproc_per_node} "
+ "--master_addr {master_addr} --master_port {master_port} {file_name} {args}"
+ )
+ .format(
+ nnodes=nnodes,
+ node_rank=node_rank,
+ nproc_per_node=nproc_per_node,
+ master_addr=master_addr,
+ master_port=master_port,
+ file_name=launcher.__file__,
+ args=" ".join(sys.argv[1:]),
+ )
+ .split(),
+ env=env,
+ check=True,
+ )
+
+ sys.exit(process.returncode)
+ elif command in COMMAND_MAP:
+ COMMAND_MAP[command]()
+ else:
+ print(f"Unknown command: {command}.\n{USAGE}")
+
+
+if __name__ == "__main__":
+ from multiprocessing import freeze_support
+
+ freeze_support()
+ main()
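The `train` branch of `cli.py` above assembles its `torchrun` invocation purely from environment variables. A minimal sketch of the non-elastic assembly, using the same env-var names and defaults as the code above (`build_torchrun_cmd` is a hypothetical helper for illustration, not part of the CLI):

```python
import os

def build_torchrun_cmd(file_name, args, env=None):
    """Assemble the non-elastic torchrun argv the CLI above constructs,
    reading the same env vars (NNODES, NODE_RANK, NPROC_PER_NODE, ...)."""
    env = env if env is not None else os.environ
    nnodes = env.get("NNODES", "1")
    node_rank = env.get("NODE_RANK", "0")
    nproc_per_node = env.get("NPROC_PER_NODE", "1")
    master_addr = env.get("MASTER_ADDR", "127.0.0.1")
    master_port = env.get("MASTER_PORT", "29500")
    return (
        f"torchrun --nnodes {nnodes} --node_rank {node_rank} "
        f"--nproc_per_node {nproc_per_node} --master_addr {master_addr} "
        f"--master_port {master_port} {file_name} {args}"
    ).split()

cmd = build_torchrun_cmd("launcher.py", "train.yaml", env={"NPROC_PER_NODE": "8"})
```

As in the CLI, the command is split into an argv list and would be passed to `subprocess.run` without `shell=True`.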
diff --git a/src/train_sft/src/llamafactory/data/__init__.py b/src/train_sft/src/llamafactory/data/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..11c8c9fcecd10e736e240196fde98f833c9df3dc
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/__init__.py
@@ -0,0 +1,37 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from .collator import (
+ KTODataCollatorWithPadding,
+ MultiModalDataCollatorForSeq2Seq,
+ PairwiseDataCollatorWithPadding,
+ SFTDataCollatorWith4DAttentionMask,
+)
+from .data_utils import Role, split_dataset
+from .loader import get_dataset
+from .template import TEMPLATES, Template, get_template_and_fix_tokenizer
+
+
+__all__ = [
+ "TEMPLATES",
+ "KTODataCollatorWithPadding",
+ "MultiModalDataCollatorForSeq2Seq",
+ "PairwiseDataCollatorWithPadding",
+ "Role",
+ "SFTDataCollatorWith4DAttentionMask",
+ "Template",
+ "get_dataset",
+ "get_template_and_fix_tokenizer",
+ "split_dataset",
+]
diff --git a/src/train_sft/src/llamafactory/data/augment_example.py b/src/train_sft/src/llamafactory/data/augment_example.py
new file mode 100644
index 0000000000000000000000000000000000000000..c932329a56f9c13b893d9ae3fd19c62aa2ed4bc7
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/augment_example.py
@@ -0,0 +1,604 @@
+import numpy as np
+import pickle
+import pandas as pd
+import glob
+from trl import DataCollatorForCompletionOnlyLM
+from datasets import Dataset
+import copy
+import os
+from scipy.spatial.transform import Rotation as R
+import json
+from tqdm import tqdm
+import pdb
+from pathlib import Path
+from shapely.geometry import Polygon, box
+import random
+import torch
+from copy import deepcopy
+import matplotlib.pyplot as plt
+
+from src.utils import inherit_props_by_id, get_pths_dataset_split, remove_and_recreate_folder, get_system_prompt_sgllm
+
+def create_floor_plan_polygon(bounds):
+ return Polygon(np.array(bounds)[:, [0, 2]].tolist())
+
+def rotate_around_y(point, angle_radians):
+ rotation_matrix = np.array([
+ [np.cos(angle_radians), 0, np.sin(angle_radians)],
+ [0, 1, 0],
+ [-np.sin(angle_radians), 0, np.cos(angle_radians)]
+ ])
+ rot_point = np.dot(rotation_matrix, point).tolist()
+ rot_point = [ round(elem, 2) for elem in rot_point ]
+
+ return rot_point
+
+def combine_quaternion_with_y_rot_for_global_rot(original_quat, angle_radians):
+ y_axis_rotation = R.from_euler('y', angle_radians).as_quat()
+ original_rotation = R.from_quat(original_quat)
+ combined_rotation = R.from_quat(y_axis_rotation) * original_rotation
+ return [round(elem, 5) for elem in combined_rotation.as_quat().tolist()]
+
+def rotate_obj(obj, angle_radians):
+ obj["pos"] = rotate_around_y(obj["pos"], angle_radians)
+ obj["rot"] = combine_quaternion_with_y_rot_for_global_rot(obj["rot"], angle_radians)
+
+def rotate_scenegraph(sample_scene, angle_radians):
+ if sample_scene.get("bounds_top"):
+ for key in ["bounds_top", "bounds_bottom"]:
+ sample_scene[key] = [rotate_around_y(point, angle_radians) for point in sample_scene[key]]
+ if sample_scene.get("objects"):
+ for obj in sample_scene.get("objects"):
+ rotate_obj(obj, angle_radians)
+ if sample_scene.get("pos"):
+ rotate_obj(sample_scene, angle_radians)
+
+def offset_bounds(sample_scene, shift_amount):
+ if sample_scene.get("bounds_top") and shift_amount != 0:
+ sample_scene["bounds_top"] = sample_scene["bounds_top"][shift_amount:] + sample_scene["bounds_top"][:shift_amount]
+ sample_scene["bounds_bottom"] = sample_scene["bounds_bottom"][shift_amount:] + sample_scene["bounds_bottom"][:shift_amount]
+
+def get_2d_bbox(pos, size):
+ x, _, z = pos
+ width, _, depth = size
+ half_width, half_depth = width/2, depth/2
+ return box(x - half_width, z - half_depth, x + half_width, z + half_depth)
+
+def perturb_value_with_bounds(value, bounds, min_delta=-0.02, max_delta=0.02):
+ min_val, max_val = bounds
+ delta = np.random.uniform(min_delta, max_delta)
+ new_val = round(value + delta, 2)
+ # return np.clip(new_val, min_val, max_val)
+ return new_val
+
+def get_safe_perturbation(pos, size, floor_polygon, desc, max_attempts=10):
+ # Get bounds from the floor polygon
+ minx, minz, maxx, maxz = floor_polygon.bounds
+
+ # if not floor_polygon.contains(get_2d_bbox(pos, size)):
+ # print("object not contained in floor polygon already !")
+ # print(f"\tdesc: {desc}")
+ # print(f"\tpos: {pos}")
+ # print(f"\tsize: {size}")
+ # print(f"\tbounds: ", minx, minz, maxx, maxz)
+
+ for i in range(max_attempts):
+ # print(f"trying to find perturbation... ({i}/{max_attempts})")
+ pos_perturbed = pos.copy()
+ pos_perturbed[0] = perturb_value_with_bounds(pos[0], bounds=(minx, maxx))
+ pos_perturbed[2] = perturb_value_with_bounds(pos[2], bounds=(minz, maxz))
+
+ size_perturbed = size.copy()
+ size_perturbed[0] = perturb_value_with_bounds(size[0], bounds=(minx, maxx))
+        size_perturbed[2] = perturb_value_with_bounds(size[2], bounds=(minz, maxz))
+
+ new_bbox = get_2d_bbox(pos_perturbed, size_perturbed)
+ if floor_polygon.contains(new_bbox):
+ # print("valid perturbation found at iter ", i)
+ # print(f"\tdesc: {desc}")
+ # print(f"\tbounds: ", minx, minz, maxx, maxz)
+ # print(f"\tpos: {pos} -> {pos_perturbed}")
+ # print(f"\tsize: {size} -> {size_perturbed}")
+ return pos_perturbed, size_perturbed
+
+ # if no valid perturbation found, return original values
+ # print("no valid perturbation found, returning original values")
+ # print(f"\tdesc: {desc}")
+ return pos, size
+
+def perturb_scene(sample_scene, floor_polygon):
+ if sample_scene.get("objects"):
+ for obj in sample_scene["objects"]:
+ new_pos, new_size = get_safe_perturbation(obj["pos"], obj["size"], floor_polygon, obj["desc"])
+ obj["pos"] = new_pos
+ obj["size"] = new_size
+ if sample_scene.get("pos"):
+ new_pos, new_size = get_safe_perturbation(sample_scene["pos"], sample_scene["size"], floor_polygon, sample_scene["desc"])
+ sample_scene["pos"] = new_pos
+ sample_scene["size"] = new_size
+
+def do_random_augm_on_sgs(sample, augm_prob=0.85):
+
+ sg_input = sample.get("sg_input")
+ sg_output_add = sample.get("sg_output_add")
+
+    # (1) with prob (1 - augm_prob), skip augmentation and return the original data
+ if np.random.rand() > augm_prob:
+ return sg_input, sg_output_add
+
+ sg_input_augm = copy.deepcopy(json.loads(sg_input))
+ sg_output_add_augm = copy.deepcopy(json.loads(sg_output_add))
+
+ # (2) do random rotation
+ angle_radians = np.radians(np.random.choice([0, 90, 180, 270]))
+ rotate_scenegraph(sg_input_augm, angle_radians)
+ rotate_scenegraph(sg_output_add_augm, angle_radians)
+
+ # (3) circular shift of room boundaries
+ shift_amount = np.random.randint(0, len(sg_input_augm.get("bounds_bottom")))
+ offset_bounds(sg_input_augm, shift_amount)
+ offset_bounds(sg_output_add_augm, shift_amount)
+
+ # (4) scale entire "polygon" of bounds equally by +[0, 5] cm
+ # TODO
+ # scale_factor = np.random.uniform(0, 0.05)
+ # ...
+
+ # (5) Perturb object sizes and positions
+ floor_polygon = create_floor_plan_polygon(sg_input_augm.get("bounds_bottom"))
+ perturb_scene(sg_input_augm, floor_polygon)
+ perturb_scene(sg_output_add_augm, floor_polygon)
+
+ return json.dumps(sg_input_augm), json.dumps(sg_output_add_augm)
+
+def create_dataset_from_files(pth_output, room_type, dataset_split):
+ data = {
+ "room_type": [],
+ "n_objects": [],
+ "pth_orig_file": [],
+ "split": [],
+ "scene": [],
+ }
+
+ pth_root = os.getenv("PTH_STAGE_2_DEDUP")
+
+ all_pths = get_pths_dataset_split(room_type, dataset_split)
+
+ for pth_scene in tqdm(all_pths, desc=f"Loading {room_type or 'all'} ({dataset_split} split)"):
+ with open(os.path.join(pth_root, pth_scene), 'r') as f:
+ scene = json.load(f)
+
+ data["room_type"].append(scene.get("room_type"))
+ data["n_objects"].append(len(scene.get("objects")))
+ data["split"].append(dataset_split)
+ data["scene"].append(scene)
+ data["pth_orig_file"].append(pth_scene)
+
+ # data["sg_input"].append(sample.get("sg_input"))
+ # data["sg_output_add"].append(sample.get("sg_output_add"))
+ #data["prompt_var"].append(sample.get("prompt_var"))
+ #data["n_objects_query"].append(sample.get("n_objects_query"))
+ #data["n_objects_full"].append(sample.get("n_objects_full"))
+ #data["is_complete"].append(sample.get("is_complete"))
+
+ dataset = Dataset.from_dict(data)
+
+ with open(pth_output, 'wb') as fp:
+ pickle.dump(dataset, fp)
+
+ return dataset
+
+def simplify_descs_for_ablation(sg_raw, all_assets_metadata_simple_descs):
+ sg_simplified = copy.deepcopy(json.loads(sg_raw))
+ if sg_simplified.get("objects"):
+ for obj in sg_simplified["objects"]:
+ obj["desc"] = all_assets_metadata_simple_descs.get(obj["desc"])
+ if sg_simplified.get("desc"):
+ sg_simplified["desc"] = all_assets_metadata_simple_descs.get(sg_simplified["desc"])
+ return json.dumps(sg_simplified)
+
+def simplify_sample(sample, all_assets_metadata_simple_descs):
+ sample["sg_input"] = simplify_descs_for_ablation(sample["sg_input"], all_assets_metadata_simple_descs)
+ sample["sg_output_add"] = simplify_descs_for_ablation(sample["sg_output_add"], all_assets_metadata_simple_descs)
+ return sample
+
+def create_full_scene_from_before_and_added(scene_before, obj_add):
+ scene_after = copy.deepcopy(scene_before)
+ scene_after["objects"].append(obj_add)
+ return scene_after
+
+def ensure_order_of_keys_for_sg_input_dict(sg_input, do_keep_jids=False):
+ sg_input_ordered = {}
+
+ sg_input_ordered["room_type"] = sg_input.get("room_type")
+ sg_input_ordered["bounds_top"] = sg_input.get("bounds_top")
+ sg_input_ordered["bounds_bottom"] = sg_input.get("bounds_bottom")
+
+ # for each object in the scene, ensure fixed order such that we always have "desc", "size", "pos", "rot":
+ objects_ordered = []
+ for obj in sg_input.get("objects"):
+ obj_ordered = {}
+ obj_ordered["desc"] = obj.get("desc")
+ obj_ordered["size"] = obj.get("size")
+ obj_ordered["pos"] = obj.get("pos")
+ obj_ordered["rot"] = obj.get("rot")
+ if do_keep_jids:
+ obj_ordered["jid"] = obj.get("jid")
+ objects_ordered.append(obj_ordered)
+ sg_input_ordered["objects"] = objects_ordered
+
+ return sg_input_ordered
+
+def create_instruction_dict(prompt, scene_query, obj_add, room_type, n_objects_full, is_complete=False, do_keep_jids=False):
+ # remove id from sg_input
+ scene_query.pop("room_id")
+
+ # ensure fixed order of keys for sg_input
+ scene_query = ensure_order_of_keys_for_sg_input_dict(scene_query, do_keep_jids=do_keep_jids)
+
+ return {
+ "instr_type": "add",
+ "prompt": prompt,
+ "room_type": room_type,
+ "sg_input": json.dumps(scene_query),
+ "sg_output_add": json.dumps(obj_add),
+ "n_objects_query": len(scene_query["objects"]),
+ "n_objects_full": n_objects_full,
+ "is_complete": is_complete
+ }
+
+def clean_copy_of_objects(objects, do_keep_jids=False):
+ cleaned = deepcopy(objects)
+
+ if do_keep_jids:
+ return cleaned
+
+ if isinstance(cleaned, list):
+ for obj in cleaned:
+ obj.pop("jid", None)
+ else:
+ cleaned.pop("jid", None)
+ return cleaned
+
+def get_exposure_factor(n_objects, lambda_instr_exp=None):
+ if lambda_instr_exp is None:
+ return 1
+
+    # lambda_instr_exp should be in the range 0.0 - 0.9 for reasonable scaling
+
+ # base = 4 * np.log(n_objects + 1) # +1 to handle n_objects=1 case
+ # scaled = base * (lambda_instr_exp ** 2)
+ # exposure = max(1, int(scaled + 1))
+ # print(f"n_objects: {n_objects}, n_instructions: {n_instructions}")
+ exposure = np.exp(lambda_instr_exp * n_objects)
+
+ return exposure
+
+def plot_scaling_curves():
+ n_objects = range(2, 50)
+ for param in [0.1, 0.5, 1.0, 1.5, 2.0]:
+ instructions = [get_exposure_factor(n, param) for n in n_objects]
+ plt.plot(n_objects, instructions, label=f'param={param}')
+
+ plt.xlabel('Number of Objects')
+ plt.ylabel('Number of Instructions')
+ plt.title('Instruction Exposure Scaling')
+ plt.legend()
+ plt.show()
+
+def get_sampling_weights(dataset, lambda_instr_exp):
+ weights = [get_exposure_factor(sample["n_objects"], lambda_instr_exp) for sample in dataset]
+ return np.array(weights) / sum(weights)
+
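The exposure factor above grows exponentially with the object count, so after normalization, scenes with more objects receive proportionally higher sampling probability. A small self-contained illustration of the same formula with toy object counts:

```python
import numpy as np

def exposure_weights(n_objects_list, lambda_instr_exp):
    # exposure = exp(lambda * n_objects), normalized to a probability distribution
    weights = np.exp(lambda_instr_exp * np.array(n_objects_list, dtype=float))
    return weights / weights.sum()

probs = exposure_weights([2, 5, 10], lambda_instr_exp=0.5)
```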
+class WeightedRandomSampler(torch.utils.data.Sampler):
+    def __init__(self, weights, num_samples, replacement=True):
+        # accept lists / numpy arrays as well as tensors (torch.multinomial needs a tensor)
+        self.weights = torch.as_tensor(weights, dtype=torch.double)
+        self.num_samples = num_samples
+        self.replacement = replacement
+
+ def __iter__(self):
+ return iter(torch.multinomial(self.weights, self.num_samples, self.replacement).tolist())
+
+ def __len__(self):
+ return self.num_samples
+
+def load_train_val_test_datasets(lambda_instr_exp=None, use_cached_dataset=True, room_type="all", do_sanity_check=False, seed=1234, accelerator=None):
+ pth_train = f"{os.getenv('PTH_DATASET_CACHE')}/dataset_{room_type}_train.pkl"
+ pth_val = f"{os.getenv('PTH_DATASET_CACHE')}/dataset_{room_type}_val.pkl"
+ pth_test = f"{os.getenv('PTH_DATASET_CACHE')}/dataset_{room_type}_test.pkl"
+
+ if not use_cached_dataset:
+ dataset_train = create_dataset_from_files(pth_train, room_type, "train")
+ dataset_val = create_dataset_from_files(pth_val, room_type, "val")
+ dataset_test = create_dataset_from_files(pth_test, room_type, "test")
+ else:
+ print(f"loading cached dataset from {os.getenv('PTH_DATASET_CACHE')}")
+ dataset_train = pd.read_pickle(pth_train)
+ dataset_val = pd.read_pickle(pth_val)
+ dataset_test = pd.read_pickle(pth_test)
+
+ # ONLY FOR TRAINING SPLIT: get normalized sampling weights
+ if lambda_instr_exp is not None:
+ train_weights = get_sampling_weights(dataset_train, lambda_instr_exp)
+ dataset_train = dataset_train.add_column("sampling_weight", train_weights)
+
+ # plots weights vs number of objects and save as image
+ # x axis: number of objects, y axis: sampling weight
+ # plt.scatter([sample["n_objects"] for sample in dataset_train], train_weights)
+ # plt.xlabel('Number of Objects')
+ # plt.ylabel('Sampling Weight')
+ # plt.title('Sampling Weights vs Number of Objects')
+ # plt.savefig(f"sampling_weights_{room_type}.png")
+
+ gen = np.random.default_rng(seed)
+ dataset_train = dataset_train.shuffle(generator=gen)
+ dataset_val = dataset_val.shuffle(generator=gen)
+ dataset_test = dataset_test.shuffle(generator=gen)
+
+ if do_sanity_check:
+ n_max = 32
+ dataset_train = dataset_train.select(range(n_max))
+ dataset_val = dataset_val.select(range(n_max))
+ dataset_test = dataset_test.select(range(n_max))
+
+ if accelerator:
+ dataset_train = accelerator.prepare(dataset_train)
+ dataset_val = accelerator.prepare(dataset_val)
+ dataset_test = accelerator.prepare(dataset_test)
+
+ print(f"len of train dataset: {len(dataset_train)}")
+ print(f"len of val dataset: {len(dataset_val)}")
+ print(f"len of test dataset: {len(dataset_test)}")
+
+ return dataset_train, dataset_val, dataset_test
+
+def build_full_instruction_from_prompt(prompt, sg_input):
+ sg_input_str = json.dumps(ensure_order_of_keys_for_sg_input_dict(json.loads(sg_input)))
+ return f"\n\t{prompt} \n \n\n\t{sg_input_str}\n "
+
+def sample_prompt(all_prompts, jid):
+ if "-(" in jid:
+ jid_clean = jid.split("-(")[0]
+ else:
+ jid_clean = jid
+ return random.choice(all_prompts[jid_clean])
+
+def create_instruction_from_scene(sample, all_prompts, all_assets_metadata_simple_descs=None, do_simple_descs=False, do_keep_jids=False):
+ scene = sample["scene"]
+ n_objects = sample["n_objects"]
+ room_type = sample["room_type"]
+
+ # get weight for instruction generation
+ # n_zero_start = min(1 // n_objects, 0.1)
+ # n_full_scene = min(1 // n_objects, 0.1)
+
+ n_zero_start = 0.1
+ n_full_scene = 0.1
+
+ n_random = 1.0 - n_zero_start - n_full_scene
+
+ complete_scene = deepcopy(scene)
+ n_objects_full = len(complete_scene["objects"])
+
+ # sample instruction style from probs
+ instr_style = np.random.choice(["zero_start", "full_scene", "random"], p=[n_zero_start, n_full_scene, n_random])
+
+ # print(instr_style)
+
+ if instr_style == "zero_start":
+ # Start with empty scene but keep everything else
+ scene_query = deepcopy(scene)
+ scene_query["objects"] = []
+
+ # select random object to add
+ obj_add = random.choice(scene["objects"])
+ prompt = sample_prompt(all_prompts, obj_add.get("jid"))
+ obj_add = clean_copy_of_objects(obj_add, do_keep_jids)
+
+ instr = create_instruction_dict(prompt, scene_query, obj_add, room_type, n_objects_full, do_keep_jids=do_keep_jids)
+
+ elif instr_style == "full_scene":
+ # shuffle scene then pick first one and remove from list
+ scene_query = deepcopy(scene)
+
+ obj_add = random.choice(scene["objects"])
+ prompt = sample_prompt(all_prompts, obj_add.get("jid"))
+ obj_add = clean_copy_of_objects(obj_add, do_keep_jids)
+
+ remaining_objects = clean_copy_of_objects(scene["objects"], do_keep_jids)
+ random.shuffle(remaining_objects)
+ scene_query["objects"] = [obj for obj in remaining_objects if obj != obj_add]
+
+ instr = create_instruction_dict(prompt, scene_query, obj_add, room_type, n_objects_full, is_complete=True, do_keep_jids=do_keep_jids)
+
+ else:
+ scene_query = deepcopy(scene)
+ random.shuffle(scene_query["objects"])
+
+        # drop between 0 and N-2 objects so that at least two remain (one to add, one for context)
+ n_total = len(scene_query["objects"])
+ m_drop = np.random.choice(np.arange(0, n_total-1))
+ n_total_new = n_total - m_drop
+ scene_query["objects"] = scene_query["objects"][:n_total_new]
+
+ obj_add = scene_query["objects"][-1]
+ prompt = sample_prompt(all_prompts, obj_add.get("jid"))
+ obj_add = clean_copy_of_objects(obj_add, do_keep_jids)
+
+ scene_query["objects"].pop()
+ scene_query["objects"] = clean_copy_of_objects(scene_query["objects"], do_keep_jids)
+
+        instr = create_instruction_dict(prompt, scene_query, obj_add, room_type, n_objects_full, is_complete=(n_total_new == n_objects_full), do_keep_jids=do_keep_jids)
+
+ # simplify descriptions if needed
+ if do_simple_descs:
+ instr = simplify_sample(instr, all_assets_metadata_simple_descs)
+
+ return instr
+
+def format_and_tokenize(tokenizer, full_sample_instr, sample_sg_output_full, max_seq_length, padding_free, truncate=True):
+
+ formatted_text = format_with_chat_template(tokenizer, full_sample_instr, sample_sg_output_full)
+
+ if truncate:
+ tokenized_inputs = tokenizer(
+ formatted_text,
+ truncation=True,
+ max_length=max_seq_length,
+ padding="max_length" if not padding_free else False,
+ return_tensors="pt",
+ return_length=True
+ )
+ else:
+ tokenized_inputs = tokenizer(
+ formatted_text,
+ truncation=False,
+ return_tensors="pt",
+ return_length=True
+ )
+
+ length = tokenized_inputs.get("length", None)
+
+ return tokenized_inputs, length
+
+def process_scene_sample(orig_sample, tokenizer, max_seq_length, all_prompts, all_assets_metadata_simple_descs, do_simple_descs, do_augm=False, do_full_sg_outputs=False, do_keep_jids=False):
+ while True:
+ # Create instruction from scene
+ sample = create_instruction_from_scene(orig_sample, all_prompts, all_assets_metadata_simple_descs, do_simple_descs, do_keep_jids=do_keep_jids)
+
+ # Apply data augmentation if enabled
+ if sample.get("split") == "train" and do_augm:
+ sample_sg_input, sample_sg_output_add = do_random_augm_on_sgs(sample)
+ else:
+ sample_sg_input, sample_sg_output_add = sample["sg_input"], sample["sg_output_add"]
+
+ # Prepare the scene output/completion
+ if do_full_sg_outputs:
+ scene = json.loads(sample_sg_input)
+ scene["objects"].append(json.loads(sample_sg_output_add))
+ completion = json.dumps(scene)
+ else:
+ completion = sample_sg_output_add
+
+ # Build the full instruction
+ full_sample_instr = build_full_instruction_from_prompt(sample["prompt"], sample_sg_input)
+
+ # check tok length and if it exceeds max length, retry
+ _, tok_length = format_and_tokenize(tokenizer, full_sample_instr, completion, max_seq_length, padding_free=True, truncate=False)
+        # reserve 150 tokens so that the next object to be added still fits into the context
+ if tok_length <= (max_seq_length - 150):
+ break
+ else:
+ print(f"sample exceeded max length ({tok_length} > {max_seq_length}-150), # of objects: {len(json.loads(sample_sg_input).get('objects'))}, retrying...")
+
+ return full_sample_instr, completion, sample["prompt"], sample
+
+def format_with_chat_template(tokenizer, prompt, completion=None):
+ messages = [
+ {"role": "system", "content": get_system_prompt_sgllm()},
+ {"role": "user", "content": prompt}
+ ]
+ if completion is not None:
+ messages.append({"role": "assistant", "content": completion})
+
+    return tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=(completion is None))
+
+class SFTSceneDataCollator(DataCollatorForCompletionOnlyLM):
+ def __init__(self, do_augm, response_template, tokenizer, padding_free, max_seq_length, do_simple_descs, do_full_sg_outputs, **kwargs):
+ super().__init__(response_template=response_template, tokenizer=tokenizer, padding_free=padding_free, mlm=False, **kwargs)
+
+ self.tokenizer = tokenizer
+ self.max_seq_length = max_seq_length
+ self.padding_free = padding_free
+ self.do_augm = do_augm
+ self.do_simple_descs = do_simple_descs
+ self.do_full_sg_outputs = do_full_sg_outputs
+ self.all_prompts = json.load(open(os.getenv("PTH_ASSETS_METADATA_PROMPTS")))
+ self.all_assets_metadata_simple_descs = json.load(open(os.getenv("PTH_ASSETS_METADATA_SIMPLE_DESCS")))
+
+ def __call__(self, samples):
+ batch_input_ids = []
+ batch_attention_masks = []
+
+ # print room type and number of object for each sample
+ # for sample in samples:
+ # print(f"room_type: {sample['room_type']}, n_objects: {sample['n_objects']}")
+
+ # make histogram with n_objects and count for each bin
+ # plt.clf()
+ # n_objects = [sample["n_objects"] for sample in samples]
+ # plt.hist(n_objects, bins=range(0, max(n_objects), 1))
+ # plt.xlabel('Number of Objects')
+ # plt.ylabel('Count')
+ # plt.title('Distribution of Number of Objects in Scenes')
+ # plt.savefig("batch_n_objects_histogram_0.1.png")
+ # exit()
+
+ for idx, sample in enumerate(samples):
+
+ full_sample_instr, sample_sg_output_full, _, _ = process_scene_sample(sample, self.tokenizer, self.max_seq_length, self.all_prompts, self.all_assets_metadata_simple_descs, self.do_simple_descs, self.do_augm, self.do_full_sg_outputs)
+
+ tok_inputs, tok_length = format_and_tokenize(self.tokenizer, full_sample_instr, sample_sg_output_full, self.max_seq_length, self.padding_free)
+
+ if tok_length is not None and tok_length > self.max_seq_length:
+ print(f"Input was truncated. Original length: {tok_length}, Max length: {self.max_seq_length}")
+
+ batch_input_ids.append(tok_inputs["input_ids"].squeeze(0))
+ batch_attention_masks.append(tok_inputs["attention_mask"].squeeze(0))
+
+ batch = [{"input_ids": input_ids, "attention_mask": attention_mask} for input_ids, attention_mask in zip(batch_input_ids, batch_attention_masks)]
+
+ return super().__call__(batch)
+
+def count_samples_exceeding_max_length(dataset, tokenizer, max_seq_length, all_prompts, all_assets_metadata_simple_descs, do_simple_descs=False, do_full_sg_outputs=False):
+    # NOTE: process_scene_sample already resamples until the instruction fits,
+    # so this is a sanity check and should normally report zero.
+    count_exceeding = 0
+
+    for sample in tqdm(dataset, desc="Counting samples exceeding max length"):
+
+        full_sample_instr, sample_sg_output_full, _, _ = process_scene_sample(sample, tokenizer, max_seq_length, all_prompts, all_assets_metadata_simple_descs, do_simple_descs, False, do_full_sg_outputs)
+
+        # tokenize the complete text (instruction + completion) without truncation
+        _, tok_length = format_and_tokenize(tokenizer, full_sample_instr, sample_sg_output_full, max_seq_length, False, truncate=False)
+
+        if tok_length > max_seq_length:
+            print("exceeding", tok_length)
+            print("number of corners: ", len(sample["scene"].get("bounds_top")))
+            print("n_objects: ", sample["n_objects"])
+            count_exceeding += 1
+
+    print("total exceeding samples: ", count_exceeding)
+    return count_exceeding
+
+def count_samples_testset_seeds_exceeding_max_length(dataset, tokenizer, max_seq_length, all_test_instrs, all_prompts, all_assets_metadata_simple_descs, do_simple_descs=False, do_full_sg_outputs=False):
+ for sample in tqdm(dataset, desc="Counting samples exceeding max length"):
+
+ # full_sample_instr, sample_sg_output_full, _, _ = process_scene_sample(sample, tokenizer, max_seq_length, all_prompts, all_assets_metadata_simple_descs, do_simple_descs, False, do_full_sg_outputs)
+
+ for seed in [1234, 3456, 5678]:
+ instr_sample = all_test_instrs.get(sample.get("pth_orig_file"))[seed]
+
+ full_instruction = build_full_instruction_from_prompt(instr_sample.get("prompt"), instr_sample.get("sg_input"))
+
+ tok_inputs, tok_length = format_and_tokenize(tokenizer, full_instruction, instr_sample.get("sg_output_add"), max_seq_length, False, truncate=False)
+
+ if tok_length > max_seq_length:
+ print("exceeding", tok_length)
+ print("number of corners: ", len(sample["scene"].get("bounds_top")))
+ print("n_objects: ", sample["n_objects"])
+ print("")
+
+def get_random_sample(dataset, idx=None):
+ if idx is None:
+ idx = np.random.choice(len(dataset))
+ print("choosing random sample with idx:", idx)
+ return dataset.select([idx])[0]
\ No newline at end of file
diff --git a/src/train_sft/src/llamafactory/data/augmentor.py b/src/train_sft/src/llamafactory/data/augmentor.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/src/train_sft/src/llamafactory/data/collator.py b/src/train_sft/src/llamafactory/data/collator.py
new file mode 100644
index 0000000000000000000000000000000000000000..cd7802db515b91b1f2c1d3c298891a61f144d502
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/collator.py
@@ -0,0 +1,483 @@
+# Copyright 2025 OpenAccess AI Collective and the LlamaFactory team.
+#
+# This code is inspired by the OpenAccess AI Collective's axolotl library.
+# https://github.com/OpenAccess-AI-Collective/axolotl/blob/main/src/axolotl/monkeypatch/utils.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from dataclasses import dataclass
+from typing import TYPE_CHECKING, Any, Literal, Optional
+
+import numpy as np
+import torch
+import torch.nn.functional as F
+from peft import PeftModel
+from transformers import DataCollatorForSeq2Seq
+
+from ..extras.constants import AUDIO_PLACEHOLDER, IGNORE_INDEX, IMAGE_PLACEHOLDER
+from ..extras.packages import is_pillow_available
+
+
+if is_pillow_available():
+ from PIL import Image
+
+
+if TYPE_CHECKING:
+ from transformers import ProcessorMixin
+
+ from .template import Template
+
+
+def prepare_4d_attention_mask(attention_mask_with_indices: "torch.Tensor", dtype: "torch.dtype") -> "torch.Tensor":
+    r"""Expand a 2D attention mask to a 4D attention mask.
+
+    Expands the attention mask with indices from (batch_size, seq_len) to (batch_size, 1, seq_len, seq_len),
+    handles packed sequences, and transforms the mask to lower triangular form to prevent future peeking.
+
+ e.g.
+ ```python
+ # input
+ [[1, 1, 2, 2, 2, 0]]
+ # output
+ [
+ [
+ [
+ [o, x, x, x, x, x],
+ [o, o, x, x, x, x],
+ [x, x, o, x, x, x],
+ [x, x, o, o, x, x],
+ [x, x, o, o, o, x],
+ [x, x, x, x, x, x],
+ ]
+ ]
+ ]
+ ```
+    where `o` equals `0.0` and `x` equals `min_dtype`.
+ """
+ _, seq_len = attention_mask_with_indices.size()
+ min_dtype = torch.finfo(dtype).min
+ zero_tensor = torch.tensor(0, dtype=dtype)
+
+ # Create a non-padding mask.
+ non_padding_mask = (attention_mask_with_indices != 0).unsqueeze(1).unsqueeze(2)
+ # Create indices for comparison.
+ indices = attention_mask_with_indices.unsqueeze(1).unsqueeze(2) # [bsz, 1, 1, seq_len]
+ indices_t = attention_mask_with_indices.unsqueeze(1).unsqueeze(3) # [bsz, 1, seq_len, 1]
+ # Create a lower triangular mask.
+ tril_mask = torch.tril(torch.ones((seq_len, seq_len), dtype=torch.bool))
+ attention_mask_4d = (indices == indices_t) & non_padding_mask & tril_mask
+ # Invert the attention mask.
+ attention_mask_4d = torch.where(attention_mask_4d, zero_tensor, min_dtype)
+ return attention_mask_4d
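
As a quick sanity check of the packing rule implemented above, the standalone sketch below mirrors the same index comparison and reproduces the docstring example (one batch packing sequence 1, sequence 2, and a padding token); the names here are illustrative, not part of this module:

```python
import torch

def packed_causal_mask(mask_with_indices: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    # Tokens attend only to earlier tokens carrying the same packing index;
    # padding positions (index 0) attend to nothing.
    _, seq_len = mask_with_indices.size()
    min_dtype = torch.finfo(dtype).min
    non_padding = (mask_with_indices != 0).unsqueeze(1).unsqueeze(2)
    idx = mask_with_indices.unsqueeze(1).unsqueeze(2)    # [bsz, 1, 1, seq_len]
    idx_t = mask_with_indices.unsqueeze(1).unsqueeze(3)  # [bsz, 1, seq_len, 1]
    tril = torch.tril(torch.ones((seq_len, seq_len), dtype=torch.bool))
    allowed = (idx == idx_t) & non_padding & tril
    return torch.where(allowed, torch.tensor(0.0, dtype=dtype), min_dtype)

mask = packed_causal_mask(torch.tensor([[1, 1, 2, 2, 2, 0]]), torch.float32)
# Token 3 (second token of packed sequence 2) may attend to tokens 2 and 3 only.
print((mask[0, 0, 3] == 0.0).tolist())
```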
+
+
+@dataclass
+class MultiModalDataCollatorForSeq2Seq(DataCollatorForSeq2Seq):
+ r"""Data collator that supports VLMs.
+
+ Features should contain input_ids, attention_mask, labels, and optionally contain images, videos and audios.
+ """
+
+ template: Optional["Template"] = None
+ processor: Optional["ProcessorMixin"] = None
+
+ def __post_init__(self):
+ if self.template is None:
+ raise ValueError("Template is required for MultiModalDataCollator.")
+
+ if isinstance(self.model, PeftModel):
+ self.model = self.model.base_model.model
+
+ if self.model is not None and hasattr(self.model, "get_rope_index"): # for qwen2vl mrope
+ self.get_rope_func = self.model.get_rope_index # transformers < 4.52.0 or qwen2.5 omni
+ elif self.model is not None and hasattr(self.model, "model") and hasattr(self.model.model, "get_rope_index"):
+ self.get_rope_func = self.model.model.get_rope_index # transformers >= 4.52.0
+ else:
+ self.get_rope_func = None
+
+ def __call__(self, features: list[dict[str, Any]]) -> dict[str, "torch.Tensor"]:
+ batch_images, batch_videos, batch_audios = [], [], []
+ batch_imglens, batch_vidlens, batch_audlens, batch_input_ids = [], [], [], []
+ for feature in features:
+ images = feature.pop("images", None) or []
+ videos = feature.pop("videos", None) or []
+ audios = feature.pop("audios", None) or []
+ batch_images.extend(images)
+ batch_videos.extend(videos)
+ batch_audios.extend(audios)
+ batch_imglens.append(len(images))
+ batch_vidlens.append(len(videos))
+ batch_audlens.append(len(audios))
+ batch_input_ids.append(feature["input_ids"])
+
+ fake_input_ids = []
+ if (
+ self.template.mm_plugin.image_token is not None and sum(batch_imglens) == 0 and sum(batch_vidlens) == 0
+ ): # avoid process hanging in zero3/fsdp case
+ fake_messages = [{"role": "user", "content": IMAGE_PLACEHOLDER}]
+ fake_images = [Image.new("RGB", (64, 64), (255, 255, 255))]
+ fake_messages = self.template.mm_plugin.process_messages(
+ fake_messages, fake_images, [], [], self.processor
+ )
+ _fake_input_ids = self.tokenizer.encode(fake_messages[0]["content"], add_special_tokens=False)
+ _fake_input_ids, _ = self.template.mm_plugin.process_token_ids(
+ _fake_input_ids, None, fake_images, [], [], self.tokenizer, self.processor
+ )
+ fake_input_ids.extend(_fake_input_ids)
+ batch_images = fake_images
+ batch_imglens[0] = 1
+
+ if (
+ self.template.mm_plugin.audio_token is not None and sum(batch_audlens) == 0
+ ): # avoid process hanging in zero3/fsdp case
+ fake_messages = [{"role": "user", "content": AUDIO_PLACEHOLDER}]
+ fake_audios = [np.zeros(1600)]
+ fake_messages = self.template.mm_plugin.process_messages(
+ fake_messages, [], [], fake_audios, self.processor
+ )
+ _fake_input_ids = self.tokenizer.encode(fake_messages[0]["content"], add_special_tokens=False)
+ _fake_input_ids, _ = self.template.mm_plugin.process_token_ids(
+ _fake_input_ids, None, [], [], fake_audios, self.tokenizer, self.processor
+ )
+ fake_input_ids.extend(_fake_input_ids)
+ batch_audios = fake_audios
+ batch_audlens[0] = 1
+
+ if len(fake_input_ids) != 0:
+ if self.tokenizer.padding_side == "right":
+ features[0]["input_ids"] = features[0]["input_ids"] + fake_input_ids
+ features[0]["attention_mask"] = features[0]["attention_mask"] + [0] * len(fake_input_ids)
+ features[0]["labels"] = features[0]["labels"] + [IGNORE_INDEX] * len(fake_input_ids)
+ else:
+ features[0]["input_ids"] = fake_input_ids + features[0]["input_ids"]
+ features[0]["attention_mask"] = [0] * len(fake_input_ids) + features[0]["attention_mask"]
+ features[0]["labels"] = [IGNORE_INDEX] * len(fake_input_ids) + features[0]["labels"]
+
+ batch_input_ids[0] = features[0]["input_ids"]
+
+ mm_inputs = self.template.mm_plugin.get_mm_inputs(
+ batch_images,
+ batch_videos,
+ batch_audios,
+ batch_imglens,
+ batch_vidlens,
+ batch_audlens,
+ batch_input_ids,
+ self.processor,
+ )
+ if "token_type_ids" in mm_inputs:
+ token_type_ids = mm_inputs.pop("token_type_ids")
+ for i, feature in enumerate(features):
+ feature["token_type_ids"] = token_type_ids[i]
+
+ features: dict[str, torch.Tensor] = super().__call__(features)
+
+ if self.get_rope_func is not None:
+ rope_index_kwargs = {
+ "input_ids": features["input_ids"],
+ "image_grid_thw": mm_inputs.get("image_grid_thw"),
+ "video_grid_thw": mm_inputs.get("video_grid_thw"),
+ "attention_mask": (features["attention_mask"] >= 1).float(),
+ }
+ if "second_per_grid_ts" in mm_inputs: # for qwen2vl
+ rope_index_kwargs["second_per_grid_ts"] = mm_inputs.get("second_per_grid_ts")
+ elif "video_second_per_grid" in mm_inputs: # for qwen2.5 omni
+ rope_index_kwargs["second_per_grids"] = mm_inputs.get("video_second_per_grid")
+
+ if getattr(self.model.config, "model_type", None) == "qwen2_5_omni_thinker": # for qwen2.5 omni
+ rope_index_kwargs["use_audio_in_video"] = getattr(self.processor, "use_audio_in_video", False)
+ feature_attention_mask = mm_inputs.get("feature_attention_mask", None)
+ if feature_attention_mask is not None: # FIXME: need to get video image lengths
+ audio_feature_lengths = torch.sum(feature_attention_mask, dim=1)
+ rope_index_kwargs["audio_seqlens"] = audio_feature_lengths # prepare for input
+
+ features["position_ids"], rope_deltas = self.get_rope_func(**rope_index_kwargs)
+ features["rope_deltas"] = rope_deltas - (1 - rope_index_kwargs["attention_mask"]).sum(
+ dim=-1
+ ).unsqueeze(-1)
+ else: # for qwen2vl
+ features["position_ids"], features["rope_deltas"] = self.get_rope_func(**rope_index_kwargs)
+
+ if (
+ self.model is not None
+ and getattr(self.model.config, "model_type", None)
+ in ["glm4v", "Keye", "qwen2_vl", "qwen2_5_vl", "qwen2_5_omni_thinker"]
+ and ("position_ids" not in features or features["position_ids"].dim() != 3)
+ ):
+ raise ValueError(f"{self.model.config.model_type} requires 3D position ids for mrope.")
+
+ if "cross_attention_mask" in mm_inputs: # for mllama inputs when pad_to_multiple_of is enabled
+ cross_attention_mask = mm_inputs.pop("cross_attention_mask")
+ seq_len = features["input_ids"].size(1)
+ orig_len = cross_attention_mask.size(1)
+ mm_inputs["cross_attention_mask"] = F.pad(cross_attention_mask, (0, 0, 0, 0, 0, seq_len - orig_len))
+
+ features.update(mm_inputs)
+
+ if "image_bound" in features: # for minicpmv inputs
+ bsz, seq_length = features["input_ids"].shape
+ features["position_ids"] = torch.arange(seq_length).long().repeat(bsz, 1)
+ return {"data": features, "input_ids": features["input_ids"], "labels": features["labels"]}
+
+ return features
+
+
+@dataclass
+class SFTDataCollatorWith4DAttentionMask(MultiModalDataCollatorForSeq2Seq):
+ r"""Data collator for 4d attention mask with data augmentation support."""
+
+ block_diag_attn: bool = False
+ attn_implementation: Literal["eager", "sdpa", "flash_attention_2"] = "eager"
+ compute_dtype: "torch.dtype" = torch.float32
+    data_args: Optional[Any] = None  # used to access the data-augmentation config
+
+ def __post_init__(self):
+ super().__post_init__()
+
+        # Initialize the data augmentor.
+        self.augmentor = None
+        if self.data_args and getattr(self.data_args, 'enable_data_augmentation', False):
+            try:
+                from .group_layout_augmentor_new import GroupLayoutAugmentor
+
+                # augmentation_types may arrive as a comma-separated string or a list.
+                augmentation_types_str = getattr(self.data_args, 'augmentation_types', 'random_group_shuffle,random_object_shuffle')
+                if isinstance(augmentation_types_str, str):
+                    augmentation_types = [t.strip() for t in augmentation_types_str.split(',') if t.strip()]
+                else:
+                    augmentation_types = augmentation_types_str or ['random_group_shuffle', 'random_object_shuffle']
+
+                augmentation_ratio = getattr(self.data_args, 'augmentation_ratio', 0.85)
+
+                self.augmentor = GroupLayoutAugmentor(
+                    augmentation_types=augmentation_types,
+                    augmentation_ratio=augmentation_ratio
+                )
+                print(f"Data augmentor initialized: types={augmentation_types}, ratio={augmentation_ratio}")
+            except Exception as e:
+                print(f"Warning: failed to initialize data augmentor: {e}")
+                self.augmentor = None
+
+    def __call__(self, features: list[dict[str, Any]]) -> dict[str, "torch.Tensor"]:
+        # If augmentation is enabled and the features are still in raw
+        # conversation format, augment and tokenize them here.
+        if self.augmentor is not None and features and "conversations" in features[0]:
+            features = self._apply_augmentation_and_tokenization(features)
+
+        # Ensure every feature carries the required fields.
+        for feature in features:
+            if "input_ids" not in feature:
+                raise ValueError(f"Feature missing 'input_ids' field. Available keys: {list(feature.keys())}")
+
+        features = super().__call__(features)
+ if self.block_diag_attn and self.attn_implementation != "flash_attention_2":
+ features["attention_mask"] = prepare_4d_attention_mask(features["attention_mask"], self.compute_dtype)
+
+ for key, value in features.items(): # cast data dtype for paligemma
+ if torch.is_tensor(value) and torch.is_floating_point(value):
+ features[key] = value.to(self.compute_dtype)
+
+ return features
+
+    def _apply_augmentation_and_tokenization(self, features: list[dict[str, Any]]) -> list[dict[str, Any]]:
+        """Apply augmentation to raw conversation data and tokenize it."""
+        processed_features = []
+
+        for feature in features:
+            try:
+                conversation = feature["conversations"]
+
+                # Build the user message and the assistant reply.
+                prompt_parts = conversation["prompt"]
+                response_parts = conversation["response"]
+
+                # Extract the last user message (usually the actual user input).
+                user_content = prompt_parts[-1]["content"] if prompt_parts and isinstance(prompt_parts[-1], dict) else (prompt_parts[-1] if prompt_parts else "")
+
+                # Extract the assistant reply.
+                assistant_content = response_parts[0]["content"] if response_parts and isinstance(response_parts[0], dict) else (response_parts[0] if response_parts else "")
+
+                # Apply data augmentation.
+                if self.augmentor and assistant_content:
+                    try:
+                        user_content, assistant_content = self.augmentor.augment_sample(user_content, assistant_content)
+                    except Exception as e:
+                        print(f"Data augmentation failed: {e}")
+                        # Fall back to the original data if augmentation fails.
+
+                # Build the full message list.
+                messages = []
+
+                # Add the system message, if any.
+                if conversation.get("system"):
+                    messages.append({"role": "system", "content": conversation["system"]})
+
+                # Add the conversation history (everything except the last user message).
+                for msg in prompt_parts[:-1]:
+                    if isinstance(msg, dict):
+                        messages.append(msg)
+                    else:
+                        # Convert legacy-format entries.
+                        messages.append({"role": "user", "content": str(msg)})
+
+                # Add the current user message and assistant reply.
+                messages.append({"role": "user", "content": user_content})
+                messages.append({"role": "assistant", "content": assistant_content})
+
+                # Apply the chat template.
+                formatted_text = self.tokenizer.apply_chat_template(
+                    messages,
+                    tokenize=False,
+                    add_generation_prompt=False
+                )
+
+                # Tokenize.
+                tokenized = self.tokenizer(
+                    formatted_text,
+                    padding=False,
+                    truncation=True,
+                    max_length=getattr(self.data_args, 'cutoff_len', 2048),
+                    return_tensors=None
+                )
+
+                # Build labels: compute the loss only on the assistant reply.
+                input_ids = tokenized["input_ids"]
+                labels = input_ids.copy()
+
+                # Locate the start of the assistant reply.
+                assistant_start = self._find_assistant_start(formatted_text, assistant_content)
+                if assistant_start is not None and assistant_start < len(labels):
+                    # Mask everything before the assistant reply with IGNORE_INDEX.
+                    labels[:assistant_start] = [IGNORE_INDEX] * assistant_start
+                else:
+                    # If the assistant reply cannot be located, mask the entire
+                    # sequence so that no loss is computed on it.
+                    labels = [IGNORE_INDEX] * len(labels)
+
+                processed_feature = {
+                    "input_ids": input_ids,
+                    "attention_mask": tokenized["attention_mask"],
+                    "labels": labels,
+                    "images": conversation.get("images", []),
+                    "videos": conversation.get("videos", []),
+                    "audios": conversation.get("audios", [])
+                }
+
+                processed_features.append(processed_feature)
+
+            except Exception as e:
+                print(f"Failed to process feature: {e}")
+                print(f"Feature keys: {list(feature.keys()) if isinstance(feature, dict) else 'Not a dict'}")
+                if isinstance(feature, dict) and "conversations" in feature:
+                    conv = feature["conversations"]
+                    print(f"Conversations type: {type(conv)}")
+                    if isinstance(conv, dict):
+                        print(f"Conversations keys: {list(conv.keys())}")
+                # Return an empty list; the parent collator will then fail fast
+                # on the missing fields instead of training on corrupted data.
+                return []
+
+        return processed_features
+
+    def _find_assistant_start(self, formatted_text: str, assistant_content: str) -> Optional[int]:
+        """Find the token index at which the assistant reply starts."""
+        try:
+            # Locate the assistant content within the formatted text.
+            assistant_pos = formatted_text.find(assistant_content)
+            if assistant_pos >= 0:
+                # Tokenize the text preceding the assistant reply; its token
+                # count is the start index of the reply.
+                prefix_text = formatted_text[:assistant_pos]
+                prefix_tokens = self.tokenizer(prefix_text, add_special_tokens=False)["input_ids"]
+                return len(prefix_tokens)
+
+            return None
+        except Exception:
+            return None
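
The prefix-tokenization trick used by `_find_assistant_start` (tokenize everything before the reply and use that token count as the label-mask boundary) can be illustrated with a toy whitespace tokenizer; `whitespace_tokenize` and `mask_prompt_labels` below are hypothetical stand-ins, and `IGNORE_INDEX = -100` follows the Hugging Face convention used in this module:

```python
IGNORE_INDEX = -100  # positions set to -100 are skipped by the cross-entropy loss

def whitespace_tokenize(text: str) -> list:
    # Stand-in for a real tokenizer: one token per whitespace-separated word.
    return text.split()

def mask_prompt_labels(formatted_text: str, assistant_content: str) -> list:
    tokens = whitespace_tokenize(formatted_text)
    labels = list(range(len(tokens)))  # pretend token ids
    pos = formatted_text.find(assistant_content)
    if pos < 0:
        return [IGNORE_INDEX] * len(labels)  # reply not found: no loss at all
    # Token count of the prefix gives the start index of the reply.
    prefix_len = len(whitespace_tokenize(formatted_text[:pos]))
    labels[:prefix_len] = [IGNORE_INDEX] * prefix_len
    return labels

labels = mask_prompt_labels("user: place a chair assistant: chair at (1, 2)", "chair at (1, 2)")
print(labels)  # loss only on the five reply tokens
```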
+
+
+@dataclass
+class PairwiseDataCollatorWithPadding(MultiModalDataCollatorForSeq2Seq):
+ r"""Data collator for pairwise data."""
+
+ def __call__(self, features: list[dict[str, Any]]) -> dict[str, "torch.Tensor"]:
+ r"""Pad batched data to the longest sequence in the batch.
+
+ We generate 2 * n examples where the first n examples represent chosen examples and
+ the last n examples represent rejected examples.
+ """
+ concatenated_features = []
+ for key in ("chosen", "rejected"):
+ for feature in features:
+ target_feature = {
+ "input_ids": feature[f"{key}_input_ids"],
+ "attention_mask": feature[f"{key}_attention_mask"],
+ "labels": feature[f"{key}_labels"],
+ "images": feature["images"],
+ "videos": feature["videos"],
+ "audios": feature["audios"],
+ }
+ concatenated_features.append(target_feature)
+
+ return super().__call__(concatenated_features)
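
The chosen-first / rejected-last ordering produced above matters downstream, since pairwise trainers split the padded batch in half at n. A minimal sketch of the flattening with the same `{key}_input_ids` naming; `flatten_pairwise` is an illustrative helper, not part of the collator:

```python
def flatten_pairwise(features: list) -> list:
    # 2 * n rows: all chosen examples first, then all rejected ones,
    # so a downstream trainer can split the batch at index n.
    out = []
    for key in ("chosen", "rejected"):
        for feature in features:
            out.append({"input_ids": feature[f"{key}_input_ids"]})
    return out

rows = flatten_pairwise([
    {"chosen_input_ids": [1, 2], "rejected_input_ids": [3]},
    {"chosen_input_ids": [4], "rejected_input_ids": [5, 6]},
])
print([r["input_ids"] for r in rows])
```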
+
+
+@dataclass
+class KTODataCollatorWithPadding(MultiModalDataCollatorForSeq2Seq):
+ r"""Data collator for KTO data."""
+
+ def __call__(self, features: list[dict[str, Any]]) -> dict[str, "torch.Tensor"]:
+ target_features = []
+ kl_features = []
+ kto_tags = []
+ for feature in features:
+ target_feature = {
+ "input_ids": feature["input_ids"],
+ "attention_mask": feature["attention_mask"],
+ "labels": feature["labels"],
+ "images": feature["images"],
+ "videos": feature["videos"],
+ "audios": feature["audios"],
+ }
+ kl_feature = {
+ "input_ids": feature["kl_input_ids"],
+ "attention_mask": feature["kl_attention_mask"],
+ "labels": feature["kl_labels"],
+ "images": feature["images"],
+ "videos": feature["videos"],
+ "audios": feature["audios"],
+ }
+ target_features.append(target_feature)
+ kl_features.append(kl_feature)
+ kto_tags.append(feature["kto_tags"])
+
+ batch = super().__call__(target_features)
+ kl_batch = super().__call__(kl_features)
+ batch["kl_input_ids"] = kl_batch["input_ids"]
+ batch["kl_attention_mask"] = kl_batch["attention_mask"]
+ batch["kl_labels"] = kl_batch["labels"]
+ if "cross_attention_mask" in kl_batch: # for mllama inputs
+ batch["kl_cross_attention_mask"] = kl_batch["cross_attention_mask"]
+
+ if "token_type_ids" in kl_batch:
+ batch["kl_token_type_ids"] = kl_batch["token_type_ids"]
+
+ batch["kto_tags"] = torch.tensor(kto_tags)
+ return batch
\ No newline at end of file
diff --git a/src/train_sft/src/llamafactory/data/collator_backup.py b/src/train_sft/src/llamafactory/data/collator_backup.py
new file mode 100644
index 0000000000000000000000000000000000000000..8c0d44f210415c5ba060c747e21debf8c9cf292f
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/collator_backup.py
@@ -0,0 +1,322 @@
+# Copyright 2025 OpenAccess AI Collective and the LlamaFactory team.
+#
+# This code is inspired by the OpenAccess AI Collective's axolotl library.
+# https://github.com/OpenAccess-AI-Collective/axolotl/blob/main/src/axolotl/monkeypatch/utils.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from dataclasses import dataclass
+from typing import TYPE_CHECKING, Any, Literal, Optional
+
+import numpy as np
+import torch
+import torch.nn.functional as F
+from peft import PeftModel
+from transformers import DataCollatorForSeq2Seq
+
+from ..extras.constants import AUDIO_PLACEHOLDER, IGNORE_INDEX, IMAGE_PLACEHOLDER
+from ..extras.packages import is_pillow_available
+
+
+if is_pillow_available():
+ from PIL import Image
+
+
+if TYPE_CHECKING:
+ from transformers import ProcessorMixin
+
+ from .template import Template
+
+
+def prepare_4d_attention_mask(attention_mask_with_indices: "torch.Tensor", dtype: "torch.dtype") -> "torch.Tensor":
+    r"""Expand a 2D attention mask to a 4D attention mask.
+
+    Expands the attention mask with indices from (batch_size, seq_len) to (batch_size, 1, seq_len, seq_len),
+    handles packed sequences, and transforms the mask to lower triangular form to prevent future peeking.
+
+ e.g.
+ ```python
+ # input
+ [[1, 1, 2, 2, 2, 0]]
+ # output
+ [
+ [
+ [
+ [o, x, x, x, x, x],
+ [o, o, x, x, x, x],
+ [x, x, o, x, x, x],
+ [x, x, o, o, x, x],
+ [x, x, o, o, o, x],
+ [x, x, x, x, x, x],
+ ]
+ ]
+ ]
+ ```
+    where `o` equals `0.0` and `x` equals `min_dtype`.
+ """
+ _, seq_len = attention_mask_with_indices.size()
+ min_dtype = torch.finfo(dtype).min
+ zero_tensor = torch.tensor(0, dtype=dtype)
+
+ # Create a non-padding mask.
+ non_padding_mask = (attention_mask_with_indices != 0).unsqueeze(1).unsqueeze(2)
+ # Create indices for comparison.
+ indices = attention_mask_with_indices.unsqueeze(1).unsqueeze(2) # [bsz, 1, 1, seq_len]
+ indices_t = attention_mask_with_indices.unsqueeze(1).unsqueeze(3) # [bsz, 1, seq_len, 1]
+ # Create a lower triangular mask.
+ tril_mask = torch.tril(torch.ones((seq_len, seq_len), dtype=torch.bool))
+ attention_mask_4d = (indices == indices_t) & non_padding_mask & tril_mask
+ # Invert the attention mask.
+ attention_mask_4d = torch.where(attention_mask_4d, zero_tensor, min_dtype)
+ return attention_mask_4d
+
+
+@dataclass
+class MultiModalDataCollatorForSeq2Seq(DataCollatorForSeq2Seq):
+ r"""Data collator that supports VLMs.
+
+ Features should contain input_ids, attention_mask, labels, and optionally contain images, videos and audios.
+ """
+
+ template: Optional["Template"] = None
+ processor: Optional["ProcessorMixin"] = None
+
+ def __post_init__(self):
+ if self.template is None:
+ raise ValueError("Template is required for MultiModalDataCollator.")
+
+ if isinstance(self.model, PeftModel):
+ self.model = self.model.base_model.model
+
+ if self.model is not None and hasattr(self.model, "get_rope_index"): # for qwen2vl mrope
+ self.get_rope_func = self.model.get_rope_index # transformers < 4.52.0 or qwen2.5 omni
+ elif self.model is not None and hasattr(self.model, "model") and hasattr(self.model.model, "get_rope_index"):
+ self.get_rope_func = self.model.model.get_rope_index # transformers >= 4.52.0
+ else:
+ self.get_rope_func = None
+
+ def __call__(self, features: list[dict[str, Any]]) -> dict[str, "torch.Tensor"]:
+ batch_images, batch_videos, batch_audios = [], [], []
+ batch_imglens, batch_vidlens, batch_audlens, batch_input_ids = [], [], [], []
+ for feature in features:
+ images = feature.pop("images", None) or []
+ videos = feature.pop("videos", None) or []
+ audios = feature.pop("audios", None) or []
+ batch_images.extend(images)
+ batch_videos.extend(videos)
+ batch_audios.extend(audios)
+ batch_imglens.append(len(images))
+ batch_vidlens.append(len(videos))
+ batch_audlens.append(len(audios))
+ batch_input_ids.append(feature["input_ids"])
+
+ fake_input_ids = []
+ if (
+ self.template.mm_plugin.image_token is not None and sum(batch_imglens) == 0 and sum(batch_vidlens) == 0
+ ): # avoid process hanging in zero3/fsdp case
+ fake_messages = [{"role": "user", "content": IMAGE_PLACEHOLDER}]
+ fake_images = [Image.new("RGB", (64, 64), (255, 255, 255))]
+ fake_messages = self.template.mm_plugin.process_messages(
+ fake_messages, fake_images, [], [], self.processor
+ )
+ _fake_input_ids = self.tokenizer.encode(fake_messages[0]["content"], add_special_tokens=False)
+ _fake_input_ids, _ = self.template.mm_plugin.process_token_ids(
+ _fake_input_ids, None, fake_images, [], [], self.tokenizer, self.processor
+ )
+ fake_input_ids.extend(_fake_input_ids)
+ batch_images = fake_images
+ batch_imglens[0] = 1
+
+ if (
+ self.template.mm_plugin.audio_token is not None and sum(batch_audlens) == 0
+ ): # avoid process hanging in zero3/fsdp case
+ fake_messages = [{"role": "user", "content": AUDIO_PLACEHOLDER}]
+ fake_audios = [np.zeros(1600)]
+ fake_messages = self.template.mm_plugin.process_messages(
+ fake_messages, [], [], fake_audios, self.processor
+ )
+ _fake_input_ids = self.tokenizer.encode(fake_messages[0]["content"], add_special_tokens=False)
+ _fake_input_ids, _ = self.template.mm_plugin.process_token_ids(
+ _fake_input_ids, None, [], [], fake_audios, self.tokenizer, self.processor
+ )
+ fake_input_ids.extend(_fake_input_ids)
+ batch_audios = fake_audios
+ batch_audlens[0] = 1
+
+ if len(fake_input_ids) != 0:
+ if self.tokenizer.padding_side == "right":
+ features[0]["input_ids"] = features[0]["input_ids"] + fake_input_ids
+ features[0]["attention_mask"] = features[0]["attention_mask"] + [0] * len(fake_input_ids)
+ features[0]["labels"] = features[0]["labels"] + [IGNORE_INDEX] * len(fake_input_ids)
+ else:
+ features[0]["input_ids"] = fake_input_ids + features[0]["input_ids"]
+ features[0]["attention_mask"] = [0] * len(fake_input_ids) + features[0]["attention_mask"]
+ features[0]["labels"] = [IGNORE_INDEX] * len(fake_input_ids) + features[0]["labels"]
+
+ batch_input_ids[0] = features[0]["input_ids"]
+
+ mm_inputs = self.template.mm_plugin.get_mm_inputs(
+ batch_images,
+ batch_videos,
+ batch_audios,
+ batch_imglens,
+ batch_vidlens,
+ batch_audlens,
+ batch_input_ids,
+ self.processor,
+ )
+ if "token_type_ids" in mm_inputs:
+ token_type_ids = mm_inputs.pop("token_type_ids")
+ for i, feature in enumerate(features):
+ feature["token_type_ids"] = token_type_ids[i]
+
+ features: dict[str, torch.Tensor] = super().__call__(features)
+
+ if self.get_rope_func is not None:
+ rope_index_kwargs = {
+ "input_ids": features["input_ids"],
+ "image_grid_thw": mm_inputs.get("image_grid_thw"),
+ "video_grid_thw": mm_inputs.get("video_grid_thw"),
+ "attention_mask": (features["attention_mask"] >= 1).float(),
+ }
+ if "second_per_grid_ts" in mm_inputs: # for qwen2vl
+ rope_index_kwargs["second_per_grid_ts"] = mm_inputs.get("second_per_grid_ts")
+ elif "video_second_per_grid" in mm_inputs: # for qwen2.5 omni
+ rope_index_kwargs["second_per_grids"] = mm_inputs.get("video_second_per_grid")
+
+ if getattr(self.model.config, "model_type", None) == "qwen2_5_omni_thinker": # for qwen2.5 omni
+ rope_index_kwargs["use_audio_in_video"] = getattr(self.processor, "use_audio_in_video", False)
+ feature_attention_mask = mm_inputs.get("feature_attention_mask", None)
+ if feature_attention_mask is not None: # FIXME: need to get video image lengths
+ audio_feature_lengths = torch.sum(feature_attention_mask, dim=1)
+ rope_index_kwargs["audio_seqlens"] = audio_feature_lengths # prepare for input
+
+ features["position_ids"], rope_deltas = self.get_rope_func(**rope_index_kwargs)
+ features["rope_deltas"] = rope_deltas - (1 - rope_index_kwargs["attention_mask"]).sum(
+ dim=-1
+ ).unsqueeze(-1)
+ else: # for qwen2vl
+ features["position_ids"], features["rope_deltas"] = self.get_rope_func(**rope_index_kwargs)
+
+ if (
+ self.model is not None
+ and getattr(self.model.config, "model_type", None)
+ in ["glm4v", "Keye", "qwen2_vl", "qwen2_5_vl", "qwen2_5_omni_thinker"]
+ and ("position_ids" not in features or features["position_ids"].dim() != 3)
+ ):
+ raise ValueError(f"{self.model.config.model_type} requires 3D position ids for mrope.")
+
+ if "cross_attention_mask" in mm_inputs: # for mllama inputs when pad_to_multiple_of is enabled
+ cross_attention_mask = mm_inputs.pop("cross_attention_mask")
+ seq_len = features["input_ids"].size(1)
+ orig_len = cross_attention_mask.size(1)
+ mm_inputs["cross_attention_mask"] = F.pad(cross_attention_mask, (0, 0, 0, 0, 0, seq_len - orig_len))
+
+ features.update(mm_inputs)
+
+ if "image_bound" in features: # for minicpmv inputs
+ bsz, seq_length = features["input_ids"].shape
+ features["position_ids"] = torch.arange(seq_length).long().repeat(bsz, 1)
+ return {"data": features, "input_ids": features["input_ids"], "labels": features["labels"]}
+
+ return features
+
+
+@dataclass
+class SFTDataCollatorWith4DAttentionMask(MultiModalDataCollatorForSeq2Seq):
+ r"""Data collator for 4d attention mask."""
+
+ block_diag_attn: bool = False
+ attn_implementation: Literal["eager", "sdpa", "flash_attention_2"] = "eager"
+ compute_dtype: "torch.dtype" = torch.float32
+
+ def __call__(self, features: list[dict[str, Any]]) -> dict[str, "torch.Tensor"]:
+ features = super().__call__(features)
+ if self.block_diag_attn and self.attn_implementation != "flash_attention_2":
+ features["attention_mask"] = prepare_4d_attention_mask(features["attention_mask"], self.compute_dtype)
+
+ for key, value in features.items(): # cast data dtype for paligemma
+ if torch.is_tensor(value) and torch.is_floating_point(value):
+ features[key] = value.to(self.compute_dtype)
+
+ return features
+
+
+@dataclass
+class PairwiseDataCollatorWithPadding(MultiModalDataCollatorForSeq2Seq):
+ r"""Data collator for pairwise data."""
+
+ def __call__(self, features: list[dict[str, Any]]) -> dict[str, "torch.Tensor"]:
+ r"""Pad batched data to the longest sequence in the batch.
+
+ We generate 2 * n examples where the first n examples represent chosen examples and
+ the last n examples represent rejected examples.
+ """
+ concatenated_features = []
+ for key in ("chosen", "rejected"):
+ for feature in features:
+ target_feature = {
+ "input_ids": feature[f"{key}_input_ids"],
+ "attention_mask": feature[f"{key}_attention_mask"],
+ "labels": feature[f"{key}_labels"],
+ "images": feature["images"],
+ "videos": feature["videos"],
+ "audios": feature["audios"],
+ }
+ concatenated_features.append(target_feature)
+
+ return super().__call__(concatenated_features)
+
+
+@dataclass
+class KTODataCollatorWithPadding(MultiModalDataCollatorForSeq2Seq):
+ r"""Data collator for KTO data."""
+
+ def __call__(self, features: list[dict[str, Any]]) -> dict[str, "torch.Tensor"]:
+ target_features = []
+ kl_features = []
+ kto_tags = []
+ for feature in features:
+ target_feature = {
+ "input_ids": feature["input_ids"],
+ "attention_mask": feature["attention_mask"],
+ "labels": feature["labels"],
+ "images": feature["images"],
+ "videos": feature["videos"],
+ "audios": feature["audios"],
+ }
+ kl_feature = {
+ "input_ids": feature["kl_input_ids"],
+ "attention_mask": feature["kl_attention_mask"],
+ "labels": feature["kl_labels"],
+ "images": feature["images"],
+ "videos": feature["videos"],
+ "audios": feature["audios"],
+ }
+ target_features.append(target_feature)
+ kl_features.append(kl_feature)
+ kto_tags.append(feature["kto_tags"])
+
+ batch = super().__call__(target_features)
+ kl_batch = super().__call__(kl_features)
+ batch["kl_input_ids"] = kl_batch["input_ids"]
+ batch["kl_attention_mask"] = kl_batch["attention_mask"]
+ batch["kl_labels"] = kl_batch["labels"]
+ if "cross_attention_mask" in kl_batch: # for mllama inputs
+ batch["kl_cross_attention_mask"] = kl_batch["cross_attention_mask"]
+
+ if "token_type_ids" in kl_batch:
+ batch["kl_token_type_ids"] = kl_batch["token_type_ids"]
+
+ batch["kto_tags"] = torch.tensor(kto_tags)
+ return batch
\ No newline at end of file
diff --git a/src/train_sft/src/llamafactory/data/converter.py b/src/train_sft/src/llamafactory/data/converter.py
new file mode 100644
index 0000000000000000000000000000000000000000..1523c6f3733d8ff9805d1ce2c04ce2280aaf067a
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/converter.py
@@ -0,0 +1,368 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import os
+from abc import abstractmethod
+from dataclasses import dataclass
+from typing import TYPE_CHECKING, Any, Optional, Union
+
+from ..extras import logging
+from .data_utils import Role
+
+if TYPE_CHECKING:
+ from datasets import Dataset, IterableDataset
+ from transformers import Seq2SeqTrainingArguments
+
+ from ..hparams import DataArguments
+ from .mm_plugin import AudioInput, ImageInput, VideoInput
+ from .parser import DatasetAttr
+
+ MediaType = Union[ImageInput, VideoInput, AudioInput]
+
+
+logger = logging.get_logger(__name__)
+
+
+def _get_by_path(obj: dict[str, Any], path: Optional[str]) -> Any:
+ r"""Safely get nested value by dot-separated path, return None if not found or path is falsy."""
+ if not path:
+ return None
+ cur: Any = obj
+ for part in path.split("."):
+ if isinstance(cur, dict) and part in cur:
+ cur = cur[part]
+ else:
+ return None
+ return cur
+
+
+@dataclass
+class DatasetConverter:
+ dataset_attr: "DatasetAttr"
+ data_args: "DataArguments"
+
+ def _find_medias(self, medias: Union["MediaType", list["MediaType"], None]) -> Optional[list["MediaType"]]:
+ r"""Optionally concatenate media path to media dir when loading from local disk."""
+ if medias is None:
+ return None
+ elif not isinstance(medias, list):
+ medias = [medias]
+ elif len(medias) == 0:
+ return None
+ else:
+ medias = medias[:]
+
+ if self.dataset_attr.load_from in ["script", "file"]:
+ if isinstance(medias[0], str):
+ for i in range(len(medias)):
+ media_path = os.path.join(self.data_args.media_dir, medias[i])
+ if os.path.isfile(media_path):
+ medias[i] = media_path
+ else:
+ logger.warning_rank0_once(
+ f"Media {medias[i]} does not exist in `media_dir`. Use original path."
+ )
+ elif isinstance(medias[0], list): # for processed video frames
+ # medias is a list of lists, e.g., [[frame1.jpg, frame2.jpg], [frame3.jpg, frame4.jpg]]
+ for i in range(len(medias)):
+ for j in range(len(medias[i])):
+ media_path = os.path.join(self.data_args.media_dir, medias[i][j])
+ if os.path.isfile(media_path):
+ medias[i][j] = media_path
+ else:
+ logger.warning_rank0_once(
+ f"Media {medias[i][j]} does not exist in `media_dir`. Use original path."
+ )
+
+ return medias
+
+ @abstractmethod
+ def __call__(self, example: dict[str, Any]) -> dict[str, Any]:
+ r"""Convert a single example in the dataset to the standard format."""
+ ...
+
+
+@dataclass
+class AlpacaDatasetConverter(DatasetConverter):
+ def __call__(self, example: dict[str, Any]) -> dict[str, Any]:
+ prompt = []
+ history_value = _get_by_path(example, self.dataset_attr.history) if self.dataset_attr.history else None
+ if isinstance(history_value, list):
+ for old_prompt, old_response in history_value:
+ prompt.append({"role": Role.USER.value, "content": old_prompt})
+ prompt.append({"role": Role.ASSISTANT.value, "content": old_response})
+
+ query = []
+ prompt_value = _get_by_path(example, self.dataset_attr.prompt) if self.dataset_attr.prompt else None
+ if prompt_value:
+ query.append(prompt_value)
+
+ query_value = _get_by_path(example, self.dataset_attr.query) if self.dataset_attr.query else None
+ if query_value:
+ query.append(query_value)
+
+ prompt.append({"role": Role.USER.value, "content": "\n".join(query)}) # "prompt\nquery"
+
+ response = []
+ kto_value = _get_by_path(example, self.dataset_attr.kto_tag) if self.dataset_attr.kto_tag else None
+ if isinstance(kto_value, bool): # kto example
+ resp_text = _get_by_path(example, self.dataset_attr.response) if self.dataset_attr.response else None
+ response = [{"role": Role.ASSISTANT.value, "content": resp_text or ""}]
+ if kto_value:
+ response = response + [{"role": Role.ASSISTANT.value, "content": ""}]
+ else:
+ response = [{"role": Role.ASSISTANT.value, "content": ""}] + response
+ elif self.dataset_attr.ranking:
+ chosen = _get_by_path(example, self.dataset_attr.chosen) if self.dataset_attr.chosen else None
+ rejected = _get_by_path(example, self.dataset_attr.rejected) if self.dataset_attr.rejected else None
+ if isinstance(chosen, str) and isinstance(rejected, str):
+ response = [
+ {"role": Role.ASSISTANT.value, "content": chosen},
+ {"role": Role.ASSISTANT.value, "content": rejected},
+ ]
+ else:
+ resp_text = _get_by_path(example, self.dataset_attr.response) if self.dataset_attr.response else None
+ if isinstance(resp_text, str):
+ response = [{"role": Role.ASSISTANT.value, "content": resp_text}]
+
+ output = {
+ "_prompt": prompt,
+ "_response": response,
+ "_system": _get_by_path(example, self.dataset_attr.system) if self.dataset_attr.system else (self.data_args.default_system or ""),
+ "_tools": _get_by_path(example, self.dataset_attr.tools) if self.dataset_attr.tools else "",
+ "_images": self._find_medias(_get_by_path(example, self.dataset_attr.images)) if self.dataset_attr.images else None,
+ "_videos": self._find_medias(_get_by_path(example, self.dataset_attr.videos)) if self.dataset_attr.videos else None,
+ "_audios": self._find_medias(_get_by_path(example, self.dataset_attr.audios)) if self.dataset_attr.audios else None,
+ }
+ return output
+
+
+@dataclass
+class SharegptDatasetConverter(DatasetConverter):
+ def __call__(self, example: dict[str, Any]) -> dict[str, Any]:
+ tag_mapping = {
+ self.dataset_attr.user_tag: Role.USER.value,
+ self.dataset_attr.assistant_tag: Role.ASSISTANT.value,
+ self.dataset_attr.observation_tag: Role.OBSERVATION.value,
+ self.dataset_attr.function_tag: Role.FUNCTION.value,
+ self.dataset_attr.system_tag: Role.SYSTEM.value,
+ }
+ odd_tags = (self.dataset_attr.user_tag, self.dataset_attr.observation_tag)
+ even_tags = (self.dataset_attr.assistant_tag, self.dataset_attr.function_tag)
+ accept_tags = (odd_tags, even_tags)
+ messages = _get_by_path(example, self.dataset_attr.messages)
+ if (
+ self.dataset_attr.system_tag
+ and len(messages) != 0
+ and messages[0][self.dataset_attr.role_tag] == self.dataset_attr.system_tag
+ ):
+ system = messages[0][self.dataset_attr.content_tag]
+ messages = messages[1:]
+ else:
+ system = _get_by_path(example, self.dataset_attr.system) if self.dataset_attr.system else (self.data_args.default_system or "")
+
+ aligned_messages = []
+ broken_data = False
+ for turn_idx, message in enumerate(messages):
+ if message[self.dataset_attr.role_tag] not in accept_tags[turn_idx % 2]:
+ logger.warning_rank0(f"Invalid role tag in {messages}.")
+ broken_data = True
+ break
+
+ aligned_messages.append(
+ {
+ "role": tag_mapping[message[self.dataset_attr.role_tag]],
+ "content": message[self.dataset_attr.content_tag],
+ }
+ )
+
+ if (not self.dataset_attr.ranking and len(aligned_messages) % 2 != 0) or (
+ self.dataset_attr.ranking and len(aligned_messages) % 2 == 0
+ ):
+ logger.warning_rank0(f"Invalid message count in {messages}.")
+ broken_data = True
+
+ if broken_data:
+ logger.warning_rank0("Skipping this abnormal example.")
+ prompt, response = [], []
+ elif self.dataset_attr.kto_tag and isinstance(_get_by_path(example, self.dataset_attr.kto_tag), bool): # kto example
+ prompt = aligned_messages[:-1]
+ response = aligned_messages[-1:]
+ if _get_by_path(example, self.dataset_attr.kto_tag):
+ response = response + [{"role": Role.ASSISTANT.value, "content": ""}]
+ else:
+ response = [{"role": Role.ASSISTANT.value, "content": ""}] + response
+ elif self.dataset_attr.ranking:
+ chosen = _get_by_path(example, self.dataset_attr.chosen)
+ rejected = _get_by_path(example, self.dataset_attr.rejected)
+ if (
+ isinstance(chosen, dict)
+ and isinstance(rejected, dict)
+ and chosen[self.dataset_attr.role_tag] in accept_tags[-1]
+ and rejected[self.dataset_attr.role_tag] in accept_tags[-1]
+ ):
+ prompt = aligned_messages
+ response = [
+ {
+ "role": tag_mapping[chosen[self.dataset_attr.role_tag]],
+ "content": chosen[self.dataset_attr.content_tag],
+ },
+ {
+ "role": tag_mapping[rejected[self.dataset_attr.role_tag]],
+ "content": rejected[self.dataset_attr.content_tag],
+ },
+ ]
+ else:
+ logger.warning_rank0(f"Invalid role tag in {[chosen, rejected]}.")
+ prompt, response = [], []
+ else: # normal example
+ prompt = aligned_messages[:-1]
+ response = aligned_messages[-1:]
+
+ output = {
+ "_prompt": prompt,
+ "_response": response,
+ "_system": system,
+ "_tools": _get_by_path(example, self.dataset_attr.tools) if self.dataset_attr.tools else "",
+ "_images": self._find_medias(_get_by_path(example, self.dataset_attr.images)) if self.dataset_attr.images else None,
+ "_videos": self._find_medias(_get_by_path(example, self.dataset_attr.videos)) if self.dataset_attr.videos else None,
+ "_audios": self._find_medias(_get_by_path(example, self.dataset_attr.audios)) if self.dataset_attr.audios else None,
+ }
+ return output
+
+
+@dataclass
+class GroupLayoutDatasetConverter(DatasetConverter):
+ r"""Converter for the custom group-layout dataset with fields under 'content'."""
+
+ def __call__(self, example: dict[str, Any]) -> dict[str, Any]:
+        user_input = example["content"]["user_input"]
+        cot = example["content"].get("cot", "")
+        answer_raw = example["content"].get("answer")
+        answer_obj = json.loads(answer_raw) if isinstance(answer_raw, str) else answer_raw
+
+        # Remove fields that should not be learned from the train split, e.g. 'jid'
+ def _strip_keys(obj: Any, keys: set[str]) -> Any:
+ if isinstance(obj, dict):
+ return {k: _strip_keys(v, keys) for k, v in obj.items() if k not in keys}
+ if isinstance(obj, list):
+ return [_strip_keys(v, keys) for v in obj]
+ return obj
+
+ def _delete_ids(obj: Any) -> Any:
+ metadata = obj.get("meta", {})
+ metadata.pop("scene_id", None)
+
+ if "functional_zones" in obj:
+ for fz in obj["functional_zones"]:
+ for asset in fz.get("assets", []):
+ asset.pop("id", None)
+ asset.pop("model_id", None)
+ asset.pop("model_uid", None)
+
+ return obj
+
+ if answer_obj is None:
+ ans_text = ""
+ else:
+ try:
+                # Strip jid only for training and validation (CE computation); keep the full answer for the test set
+ split_lower = str(getattr(self.dataset_attr, "split", "")).lower()
+ if split_lower in {"train", "val"}:
+ answer_obj = _strip_keys(answer_obj, {"jid", "room_id", "model_id", "scene_id"})
+ answer_obj = _delete_ids(answer_obj)
+
+ ans_text = json.dumps(answer_obj, ensure_ascii=False)
+ except Exception:
+ ans_text = str(answer_obj)
+
+        # Uniformly wrap the answer in <answer>...</answer> tags
+        ans_wrapped = f"<answer>{ans_text}</answer>"
+        if cot and (self.data_args.enable_thinking is None or self.data_args.enable_thinking):
+            assistant_text = f"<think>{cot}</think>\n" + ans_wrapped
+        else:
+            assistant_text = ans_wrapped
+
+ prompt = [{"role": Role.USER.value, "content": user_input}]
+ response = [{"role": Role.ASSISTANT.value, "content": assistant_text}]
+
+ output = {
+ "_prompt": prompt,
+ "_response": response,
+ "_system": self.data_args.default_system or "",
+ "_tools": "",
+ "_images": None,
+ "_videos": None,
+ "_audios": None,
+ }
+ return output
+
+
+DATASET_CONVERTERS = {
+ "alpaca": AlpacaDatasetConverter,
+ "sharegpt": SharegptDatasetConverter,
+ "group_layout": GroupLayoutDatasetConverter,
+}
+
+
+def register_dataset_converter(name: str, dataset_converter: type["DatasetConverter"]) -> None:
+ r"""Register a new dataset converter."""
+ if name in DATASET_CONVERTERS:
+ raise ValueError(f"Dataset converter {name} already exists.")
+
+ DATASET_CONVERTERS[name] = dataset_converter
+
+
+def get_dataset_converter(name: str, dataset_attr: "DatasetAttr", data_args: "DataArguments") -> "DatasetConverter":
+ r"""Get a dataset converter."""
+ if name not in DATASET_CONVERTERS:
+ raise ValueError(f"Dataset converter {name} not found.")
+
+ return DATASET_CONVERTERS[name](dataset_attr, data_args)
+
+
+def align_dataset(
+ dataset: Union["Dataset", "IterableDataset"],
+ dataset_attr: "DatasetAttr",
+ data_args: "DataArguments",
+ training_args: "Seq2SeqTrainingArguments",
+) -> Union["Dataset", "IterableDataset"]:
+ r"""Align the dataset to a specific format.
+
+ Aligned dataset:
+ _prompt: [{"role": "user", "content": "..."}] * (2T - 1)
+ _response: [{"role": "assistant", "content": "..."}] * N (N > 1 for ranking dataset)
+ _system: "..."
+ _tools: "..."
+ _images: []
+ _videos: []
+ _audios: []
+ """
+ column_names = list(next(iter(dataset)).keys())
+ kwargs = {}
+ if not data_args.streaming:
+ kwargs = dict(
+ num_proc=data_args.preprocessing_num_workers,
+ load_from_cache_file=(not data_args.overwrite_cache) or (training_args.local_process_index != 0),
+ desc="Converting format of dataset",
+ )
+
+ dataset_converter = get_dataset_converter(dataset_attr.formatting, dataset_attr, data_args)
+ return dataset.map(
+ dataset_converter,
+ batched=False,
+ remove_columns=column_names,
+ **kwargs,
+ )
diff --git a/src/train_sft/src/llamafactory/data/data_utils.py b/src/train_sft/src/llamafactory/data/data_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..14e261290a032466c87772915eb9f472e08d14fa
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/data_utils.py
@@ -0,0 +1,190 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+from enum import Enum, unique
+from typing import TYPE_CHECKING, Any, Optional, TypedDict, Union
+
+import fsspec
+from datasets import DatasetDict, concatenate_datasets, interleave_datasets
+
+from ..extras import logging
+
+
+if TYPE_CHECKING:
+ from datasets import Dataset, IterableDataset
+
+ from ..hparams import DataArguments
+
+
+logger = logging.get_logger(__name__)
+
+
+SLOTS = list[Union[str, set[str], dict[str, str]]]
+
+
+@unique
+class Role(str, Enum):
+ USER = "user"
+ ASSISTANT = "assistant"
+ SYSTEM = "system"
+ FUNCTION = "function"
+ OBSERVATION = "observation"
+
+
+class DatasetModule(TypedDict):
+ train_dataset: Optional[Union["Dataset", "IterableDataset"]]
+ eval_dataset: Optional[Union["Dataset", "IterableDataset", dict[str, "Dataset"]]]
+
+
+def merge_dataset(
+ all_datasets: list[Union["Dataset", "IterableDataset"]], data_args: "DataArguments", seed: int
+) -> Union["Dataset", "IterableDataset"]:
+ r"""Merge multiple datasets to a unified dataset."""
+ if len(all_datasets) == 1:
+ return all_datasets[0]
+
+ elif data_args.mix_strategy == "concat":
+ if data_args.streaming:
+ logger.warning_rank0_once("The samples between different datasets will not be mixed in streaming mode.")
+
+ return concatenate_datasets(all_datasets)
+
+ elif data_args.mix_strategy.startswith("interleave"):
+ if not data_args.streaming:
+ logger.warning_rank0_once("We recommend using `mix_strategy=concat` in non-streaming mode.")
+
+ return interleave_datasets(
+ datasets=all_datasets,
+ probabilities=data_args.interleave_probs,
+ seed=seed,
+ stopping_strategy="first_exhausted" if data_args.mix_strategy.endswith("under") else "all_exhausted",
+ )
+
+ else:
+ raise ValueError(f"Unknown mixing strategy: {data_args.mix_strategy}.")
+
+
+def split_dataset(
+ dataset: Optional[Union["Dataset", "IterableDataset"]],
+ eval_dataset: Optional[Union["Dataset", "IterableDataset", dict[str, "Dataset"]]],
+ data_args: "DataArguments",
+ seed: int,
+) -> "DatasetDict":
+ r"""Split the dataset and returns a dataset dict containing train set and validation set.
+
+ Support both map dataset and iterable dataset.
+ """
+ if eval_dataset is not None and data_args.val_size > 1e-6:
+ raise ValueError("Cannot specify `val_size` if `eval_dataset` is not None.")
+
+ dataset_dict = {}
+ if dataset is not None:
+ if data_args.streaming:
+ dataset = dataset.shuffle(buffer_size=data_args.buffer_size, seed=seed)
+
+ if data_args.val_size > 1e-6:
+ if data_args.streaming:
+ dataset_dict["validation"] = dataset.take(int(data_args.val_size))
+ dataset_dict["train"] = dataset.skip(int(data_args.val_size))
+ else:
+ val_size = int(data_args.val_size) if data_args.val_size > 1 else data_args.val_size
+                dataset = dataset.train_test_split(test_size=val_size, seed=seed)
+                dataset_dict = {"train": dataset["train"], "validation": dataset["test"]}
+ else:
+ dataset_dict["train"] = dataset
+
+ if eval_dataset is not None:
+ if isinstance(eval_dataset, dict):
+ dataset_dict.update({f"validation_{name}": data for name, data in eval_dataset.items()})
+ else:
+ if data_args.streaming:
+ eval_dataset = eval_dataset.shuffle(buffer_size=data_args.buffer_size, seed=seed)
+
+ dataset_dict["validation"] = eval_dataset
+
+ return DatasetDict(dataset_dict)
+
+
+def get_dataset_module(dataset: Union["Dataset", "DatasetDict"]) -> "DatasetModule":
+ r"""Convert dataset or dataset dict to dataset module."""
+ dataset_module: DatasetModule = {}
+ if isinstance(dataset, DatasetDict): # dataset dict
+ if "train" in dataset:
+ dataset_module["train_dataset"] = dataset["train"]
+
+ if "validation" in dataset:
+ dataset_module["eval_dataset"] = dataset["validation"]
+ else:
+ eval_dataset = {}
+ for key in dataset.keys():
+ if key.startswith("validation_"):
+ eval_dataset[key[len("validation_") :]] = dataset[key]
+
+ if len(eval_dataset):
+ dataset_module["eval_dataset"] = eval_dataset
+
+ else: # single dataset
+ dataset_module["train_dataset"] = dataset
+
+ return dataset_module
+
+
+def setup_fs(path: str, anon: bool = False) -> "fsspec.AbstractFileSystem":
+ r"""Set up a filesystem object based on the path protocol."""
+ storage_options = {"anon": anon} if anon else {}
+ if path.startswith("s3://"):
+ fs = fsspec.filesystem("s3", **storage_options)
+ elif path.startswith(("gs://", "gcs://")):
+ fs = fsspec.filesystem("gcs", **storage_options)
+ else:
+ raise ValueError(f"Unsupported protocol in path: {path}. Use 's3://' or 'gs://'.")
+
+ if not fs.exists(path):
+ raise ValueError(f"Path does not exist: {path}.")
+
+ return fs
+
+
+def _read_json_with_fs(fs: "fsspec.AbstractFileSystem", path: str) -> list[Any]:
+ r"""Helper function to read JSON/JSONL files using fsspec."""
+ with fs.open(path, "r") as f:
+ if path.endswith(".jsonl"):
+ return [json.loads(line) for line in f if line.strip()]
+ else:
+ return json.load(f)
+
+
+def read_cloud_json(cloud_path: str) -> list[Any]:
+ r"""Read a JSON/JSONL file from cloud storage (S3 or GCS).
+
+ Args:
+ cloud_path: str
+ Cloud path in the format:
+ - 's3://bucket-name/file.json' for AWS S3
+ - 'gs://bucket-name/file.jsonl' or 'gcs://bucket-name/file.jsonl' for Google Cloud Storage
+ """
+ try:
+ fs = setup_fs(cloud_path, anon=True) # try with anonymous access first
+ except Exception:
+ fs = setup_fs(cloud_path) # try again with credentials
+
+ # filter out non-JSON files
+ files = [x["Key"] for x in fs.listdir(cloud_path)] if fs.isdir(cloud_path) else [cloud_path]
+    files = [file for file in files if file.endswith((".json", ".jsonl"))]
+ if not files:
+ raise ValueError(f"No JSON/JSONL files found in the specified path: {cloud_path}.")
+
+ return sum([_read_json_with_fs(fs, file) for file in files], [])
diff --git a/src/train_sft/src/llamafactory/data/formatter.py b/src/train_sft/src/llamafactory/data/formatter.py
new file mode 100644
index 0000000000000000000000000000000000000000..0a101ac6787b99431807714c88da1a75d7a8b3f2
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/formatter.py
@@ -0,0 +1,142 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import re
+from abc import ABC, abstractmethod
+from dataclasses import dataclass, field
+from typing import Optional, Union
+
+from typing_extensions import override
+
+from .data_utils import SLOTS
+from .tool_utils import FunctionCall, get_tool_utils
+
+
+@dataclass
+class Formatter(ABC):
+ slots: SLOTS = field(default_factory=list)
+ tool_format: Optional[str] = None
+
+ @abstractmethod
+ def apply(self, **kwargs) -> SLOTS:
+ r"""Forms a list of slots according to the inputs to encode."""
+ ...
+
+ def extract(self, content: str) -> Union[str, list["FunctionCall"]]:
+ r"""Extract a list of tuples from the response message if using tools.
+
+ Each tuple consists of function name and function arguments.
+ """
+ raise NotImplementedError
+
+
+@dataclass
+class EmptyFormatter(Formatter):
+ def __post_init__(self):
+ has_placeholder = False
+ for slot in filter(lambda s: isinstance(s, str), self.slots):
+ if re.search(r"\{\{[a-zA-Z_][a-zA-Z0-9_]*\}\}", slot):
+ has_placeholder = True
+
+ if has_placeholder:
+ raise ValueError("Empty formatter should not contain any placeholder.")
+
+ @override
+ def apply(self, **kwargs) -> SLOTS:
+ return self.slots
+
+
+@dataclass
+class StringFormatter(Formatter):
+ def __post_init__(self):
+ has_placeholder = False
+ for slot in filter(lambda s: isinstance(s, str), self.slots):
+ if re.search(r"\{\{[a-zA-Z_][a-zA-Z0-9_]*\}\}", slot):
+ has_placeholder = True
+
+ if not has_placeholder:
+ raise ValueError("A placeholder is required in the string formatter.")
+
+ @override
+ def apply(self, **kwargs) -> SLOTS:
+ elements = []
+ for slot in self.slots:
+ if isinstance(slot, str):
+ for name, value in kwargs.items():
+ if not isinstance(value, str):
+ raise RuntimeError(f"Expected a string, got {value}")
+
+ slot = slot.replace("{{" + name + "}}", value, 1)
+ elements.append(slot)
+ elif isinstance(slot, (dict, set)):
+ elements.append(slot)
+ else:
+ raise RuntimeError(f"Input must be string, set[str] or dict[str, str], got {type(slot)}.")
+
+ return elements
+
+
+@dataclass
+class FunctionFormatter(StringFormatter):
+ def __post_init__(self):
+ super().__post_init__()
+ self.tool_utils = get_tool_utils(self.tool_format)
+
+ @override
+ def apply(self, **kwargs) -> SLOTS:
+ content: str = kwargs.pop("content")
+        regex = re.compile(r"<think>(.*)</think>", re.DOTALL)
+ thought = re.search(regex, content)
+ if thought:
+ content = content.replace(thought.group(0), "")
+
+ functions: list[FunctionCall] = []
+ try:
+ tool_calls = json.loads(content)
+ if not isinstance(tool_calls, list): # parallel function call
+ tool_calls = [tool_calls]
+
+ for tool_call in tool_calls:
+ functions.append(
+ FunctionCall(tool_call["name"], json.dumps(tool_call["arguments"], ensure_ascii=False))
+ )
+
+ except json.JSONDecodeError:
+ raise RuntimeError(f"Invalid JSON format in function message: {str([content])}.") # flat string
+
+ function_str = self.tool_utils.function_formatter(functions)
+ if thought:
+ function_str = thought.group(0) + function_str
+
+ return super().apply(content=function_str)
+
+
+@dataclass
+class ToolFormatter(Formatter):
+ def __post_init__(self):
+ self.tool_utils = get_tool_utils(self.tool_format)
+
+ @override
+ def apply(self, **kwargs) -> SLOTS:
+ content = kwargs.pop("content")
+ try:
+ tools = json.loads(content)
+ return [self.tool_utils.tool_formatter(tools) if len(tools) != 0 else ""]
+ except json.JSONDecodeError:
+ raise RuntimeError(f"Invalid JSON format in tool description: {str([content])}.") # flat string
+
+ @override
+ def extract(self, content: str) -> Union[str, list["FunctionCall"]]:
+ return self.tool_utils.tool_extractor(content)
diff --git a/src/train_sft/src/llamafactory/data/group_layout_augmentor.py b/src/train_sft/src/llamafactory/data/group_layout_augmentor.py
new file mode 100644
index 0000000000000000000000000000000000000000..f083a5fdaa3726eb98a31d0340ea3303bca4dcec
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/group_layout_augmentor.py
@@ -0,0 +1,371 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import random
+import numpy as np
+import re
+from typing import Dict, List, Any, Optional, Tuple
+from copy import deepcopy
+import logging
+
+logger = logging.getLogger(__name__)
+
+
+class GroupLayoutAugmentor:
+    """Indoor-layout data augmentor that augments samples on the fly at the data collator stage."""
+
+ def __init__(self,
+ augmentation_types: List[str],
+ augmentation_ratio: float = 0.3,
+ seed: int = 42):
+        """
+        Initialize the augmentor.
+
+        Args:
+            augmentation_types: list of augmentation types to sample from
+            augmentation_ratio: augmentation probability (0.0-1.0)
+            seed: random seed
+        """
+ random.seed(seed)
+ np.random.seed(seed)
+
+ self.augmentation_types = augmentation_types
+ self.augmentation_ratio = augmentation_ratio
+
+        # Augmentation configuration
+ self.config = {
+ 'rotation_angles': [90, 180, 270],
+ 'scale_factors': [0.8, 0.9, 1.1, 1.2],
+ 'translation_range': 0.5,
+ 'object_variations': {
+ 'sofa': ['sectional sofa', 'loveseat', 'modern sofa', 'fabric sofa', 'leather sofa'],
+ 'table': ['dining table', 'coffee table', 'side table', 'round table', 'rectangular table'],
+ 'chair': ['armchair', 'office chair', 'dining chair', 'accent chair', 'lounge chair'],
+ 'bed': ['queen bed', 'king bed', 'twin bed', 'platform bed', 'sleigh bed'],
+ 'desk': ['office desk', 'writing desk', 'computer desk', 'standing desk'],
+ 'shelf': ['bookshelf', 'wall shelf', 'floating shelf', 'storage shelf'],
+ 'lamp': ['table lamp', 'floor lamp', 'desk lamp', 'pendant lamp'],
+ 'cabinet': ['storage cabinet', 'display cabinet', 'kitchen cabinet', 'wardrobe'],
+ },
+ 'style_variations': [
+ 'modern', 'contemporary', 'traditional', 'minimalist',
+ 'industrial', 'scandinavian', 'mid-century', 'rustic'
+ ],
+ 'color_variations': [
+ 'white', 'black', 'gray', 'brown', 'beige', 'navy',
+ 'dark', 'light', 'warm', 'cool', 'neutral'
+ ]
+ }
+
+ def should_augment(self) -> bool:
+        """Decide whether the current sample should be augmented."""
+ return random.random() < self.augmentation_ratio
+
+ def augment_sample(self, prompt_text: str, response_text: str) -> Tuple[str, str]:
+        """
+        Augment a single sample.
+
+        Args:
+            prompt_text: user input text
+            response_text: assistant response text
+
+        Returns:
+            The augmented (prompt_text, response_text) pair.
+        """
+ if not self.should_augment() or not self.augmentation_types:
+ return prompt_text, response_text
+
+ try:
+            # Randomly pick an augmentation type
+ aug_type = random.choice(self.augmentation_types)
+
+ if aug_type == 'geometric_transform':
+ return self._apply_geometric_transform(prompt_text, response_text)
+ elif aug_type == 'object_variation':
+ return self._apply_object_variation(prompt_text, response_text)
+ elif aug_type == 'style_variation':
+ return self._apply_style_variation(prompt_text, response_text)
+ elif aug_type == 'prompt_paraphrase':
+ return self._apply_prompt_paraphrase(prompt_text, response_text)
+ elif aug_type == 'layout_modification':
+ return self._apply_layout_modification(prompt_text, response_text)
+ else:
+ return prompt_text, response_text
+
+ except Exception as e:
+ logger.warning(f"Data augmentation failed with {aug_type}: {e}")
+ return prompt_text, response_text
+
+ def _apply_geometric_transform(self, prompt: str, response: str) -> Tuple[str, str]:
+        """Apply a geometric transform to the layout JSON."""
+        try:
+            # Extract the JSON between the <answer> tags in the response
+            if '<answer>' in response and '</answer>' in response:
+                answer_start = response.find('<answer>') + len('<answer>')
+                answer_end = response.find('</answer>')
+                answer_json_str = response[answer_start:answer_end].strip()
+
+                # Parse the JSON
+                answer_obj = json.loads(answer_json_str)
+
+                # Apply the geometric transform
+                transform_type = random.choice(['rotation', 'scaling', 'translation'])
+
+ if transform_type == 'rotation':
+ angle = random.choice(self.config['rotation_angles'])
+ answer_obj = self._rotate_layout(answer_obj, angle)
+ elif transform_type == 'scaling':
+ scale = random.choice(self.config['scale_factors'])
+ answer_obj = self._scale_layout(answer_obj, scale)
+ elif transform_type == 'translation':
+ offset = random.uniform(-self.config['translation_range'],
+ self.config['translation_range'])
+ answer_obj = self._translate_layout(answer_obj, offset)
+
+                # Rebuild the response
+ new_answer_json = json.dumps(answer_obj, ensure_ascii=False, separators=(',', ':'))
+ new_response = response[:answer_start] + new_answer_json + response[answer_end:]
+
+ return prompt, new_response
+
+ except Exception as e:
+ logger.debug(f"Geometric transform failed: {e}")
+
+ return prompt, response
+
+ def _apply_object_variation(self, prompt: str, response: str) -> Tuple[str, str]:
+        """Apply object-name variation."""
+ try:
+ new_prompt = prompt
+ new_response = response
+
+            # Replace object descriptions in the prompt
+ for base_obj, variations in self.config['object_variations'].items():
+ if base_obj.lower() in new_prompt.lower():
+ replacement = random.choice(variations)
+                    # Use word boundaries to avoid partial matches
+ pattern = r'\b' + re.escape(base_obj) + r'\b'
+ new_prompt = re.sub(pattern, replacement, new_prompt, count=1, flags=re.IGNORECASE)
+
+            # Replace object descriptions in the response
+ for base_obj, variations in self.config['object_variations'].items():
+ if base_obj.lower() in new_response.lower():
+ replacement = random.choice(variations)
+ pattern = r'\b' + re.escape(base_obj) + r'\b'
+ new_response = re.sub(pattern, replacement, new_response, count=1, flags=re.IGNORECASE)
+
+ return new_prompt, new_response
+
+ except Exception as e:
+ logger.debug(f"Object variation failed: {e}")
+
+ return prompt, response
+
+ def _apply_style_variation(self, prompt: str, response: str) -> Tuple[str, str]:
+        """Apply a style variation."""
+ try:
+ style = random.choice(self.config['style_variations'])
+ color = random.choice(self.config['color_variations'])
+
+            # Append a style requirement to the prompt
+ style_requirement = f" Please use a {style} style with {color} tones."
+ new_prompt = prompt + style_requirement
+
+            # Reflect the style choice in the thinking section of the response
+            new_response = response
+            if '<think>' in response and '</think>' in response:
+                think_start = response.find('<think>') + len('<think>')
+                think_end = response.find('</think>')
+                think_content = response[think_start:think_end]
+
+ style_thinking = f" I will incorporate {style} design principles with {color} color scheme throughout the layout."
+ new_think_content = think_content + style_thinking
+
+ new_response = (response[:think_start] +
+ new_think_content +
+ response[think_end:])
+
+ return new_prompt, new_response
+
+ except Exception as e:
+ logger.debug(f"Style variation failed: {e}")
+
+ return prompt, response
+
+ def _apply_prompt_paraphrase(self, prompt: str, response: str) -> Tuple[str, str]:
+        """Paraphrase wording in the prompt."""
+ try:
+ paraphrases = {
+ "design": ["create", "plan", "arrange", "layout", "organize"],
+ "room": ["space", "area", "interior", "environment"],
+ "furniture": ["furnishings", "pieces", "items", "objects"],
+ "layout": ["arrangement", "configuration", "setup", "design"],
+ "place": ["position", "locate", "situate", "arrange"],
+ }
+
+ new_prompt = prompt
+ for original, alternatives in paraphrases.items():
+ if original in new_prompt.lower():
+ replacement = random.choice(alternatives)
+ pattern = r'\b' + re.escape(original) + r'\b'
+ new_prompt = re.sub(pattern, replacement, new_prompt, count=1, flags=re.IGNORECASE)
+
+ return new_prompt, response
+
+ except Exception as e:
+ logger.debug(f"Prompt paraphrase failed: {e}")
+
+ return prompt, response
+
+ def _apply_layout_modification(self, prompt: str, response: str) -> Tuple[str, str]:
+ """Apply a layout modification."""
+ try:
+ # Extract the JSON from the response's <answer> section
+ if '<answer>' in response and '</answer>' in response:
+ answer_start = response.find('<answer>') + len('<answer>')
+ answer_end = response.find('</answer>')
+ answer_json_str = response[answer_start:answer_end].strip()
+
+ # Parse the JSON
+ answer_obj = json.loads(answer_json_str)
+
+ # Apply the layout modification
+ modification_type = random.choice(['add_object', 'adjust_position'])
+
+ if modification_type == 'add_object' and answer_obj.get('groups'):
+ self._add_decorative_object(answer_obj)
+ elif modification_type == 'adjust_position':
+ self._adjust_positions(answer_obj)
+
+ # Rebuild the response
+ new_answer_json = json.dumps(answer_obj, ensure_ascii=False, separators=(',', ':'))
+ new_response = response[:answer_start] + new_answer_json + response[answer_end:]
+
+ return prompt, new_response
+
+ except Exception as e:
+ logger.debug(f"Layout modification failed: {e}")
+
+ return prompt, response
+
+ def _rotate_layout(self, layout_data: Dict, angle_degrees: int) -> Dict:
+ """Rotate the layout."""
+ angle_rad = np.radians(angle_degrees)
+ cos_a, sin_a = np.cos(angle_rad), np.sin(angle_rad)
+
+ layout_copy = deepcopy(layout_data)
+
+ # Rotate the room envelope
+ if 'room_envelope' in layout_copy:
+ for bounds_key in ['bounds_top', 'bounds_bottom']:
+ if bounds_key in layout_copy['room_envelope']:
+ for point in layout_copy['room_envelope'][bounds_key]:
+ x, z = point[0], point[2]
+ point[0] = x * cos_a - z * sin_a
+ point[2] = x * sin_a + z * cos_a
+
+ # Rotate object positions
+ for group in layout_copy.get('groups', []):
+ for obj in group.get('objects', []):
+ if 'pos' in obj and len(obj['pos']) >= 3:
+ x, z = obj['pos'][0], obj['pos'][2]
+ obj['pos'][0] = x * cos_a - z * sin_a
+ obj['pos'][2] = x * sin_a + z * cos_a
+
+ # Crude quaternion handling for the rotation
+ if 'rot' in obj and len(obj['rot']) >= 4:
+ # Approximate the Y-axis rotation (not a true quaternion product)
+ qy_add = np.sin(angle_rad / 2)
+ qw_mult = np.cos(angle_rad / 2)
+ obj['rot'][1] += qy_add
+ obj['rot'][3] *= qw_mult
+
+ return layout_copy
+
+ def _scale_layout(self, layout_data: Dict, scale_factor: float) -> Dict:
+ """Scale the layout."""
+ layout_copy = deepcopy(layout_data)
+
+ # Scale the room envelope
+ if 'room_envelope' in layout_copy:
+ for bounds_key in ['bounds_top', 'bounds_bottom']:
+ if bounds_key in layout_copy['room_envelope']:
+ for point in layout_copy['room_envelope'][bounds_key]:
+ point[0] *= scale_factor
+ point[2] *= scale_factor
+
+ # Scale object positions and sizes
+ for group in layout_copy.get('groups', []):
+ for obj in group.get('objects', []):
+ if 'pos' in obj and len(obj['pos']) >= 3:
+ obj['pos'][0] *= scale_factor
+ obj['pos'][2] *= scale_factor
+
+ if 'size' in obj and isinstance(obj['size'], list):
+ obj['size'] = [s * scale_factor for s in obj['size']]
+
+ return layout_copy
+
+ def _translate_layout(self, layout_data: Dict, offset: float) -> Dict:
+ """Translate the layout."""
+ layout_copy = deepcopy(layout_data)
+
+ # Translate object positions
+ for group in layout_copy.get('groups', []):
+ for obj in group.get('objects', []):
+ if 'pos' in obj and len(obj['pos']) >= 3:
+ obj['pos'][0] += offset
+ obj['pos'][2] += offset
+
+ return layout_copy
+
+ def _add_decorative_object(self, layout_data: Dict) -> None:
+ """Add a decorative object."""
+ if not layout_data.get('groups'):
+ return
+
+ decorative_objects = [
+ {"desc": "Table Lamp", "size": [0.3, 0.6, 0.3]},
+ {"desc": "Floor Lamp", "size": [0.4, 1.8, 0.4]},
+ {"desc": "Plant Pot", "size": [0.4, 0.8, 0.4]},
+ {"desc": "Decorative Vase", "size": [0.25, 0.4, 0.25]},
+ {"desc": "Picture Frame", "size": [0.3, 0.4, 0.05]},
+ ]
+
+ # Pick a random group to receive the new object
+ group_idx = random.randint(0, len(layout_data['groups']) - 1)
+ new_object = random.choice(decorative_objects).copy()
+
+ # Assign a random position
+ if layout_data['groups'][group_idx]['objects']:
+ existing_obj = layout_data['groups'][group_idx]['objects'][0]
+ new_object['pos'] = [
+ existing_obj['pos'][0] + random.uniform(-1, 1),
+ 0.0,
+ existing_obj['pos'][2] + random.uniform(-1, 1)
+ ]
+ else:
+ new_object['pos'] = [random.uniform(-2, 2), 0.0, random.uniform(-2, 2)]
+
+ new_object['rot'] = [0, 0, 0, 1]
+ layout_data['groups'][group_idx]['objects'].append(new_object)
+
+ def _adjust_positions(self, layout_data: Dict) -> None:
+ """Slightly adjust object positions."""
+ for group in layout_data.get('groups', []):
+ for obj in group.get('objects', []):
+ if 'pos' in obj and len(obj['pos']) >= 3:
+ obj['pos'][0] += random.uniform(-0.3, 0.3)
+ obj['pos'][2] += random.uniform(-0.3, 0.3)
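The modification methods above all share one round trip: pull the JSON out of the response's answer section, mutate it, and splice it back. A hedged, self-contained sketch of that pattern (the `<answer>` tag names and the `groups`/`objects` schema are assumptions inferred from identifiers such as `answer_json_str` in the surrounding code):

```python
import json

# Hedged sketch of the extract-mutate-splice round trip used by the augmentor.
# The <answer> tag names and layout schema are assumptions inferred from
# variable names in the surrounding code (answer_start, answer_json_str, ...).
def splice_answer(response, transform):
    start_tag, end_tag = '<answer>', '</answer>'
    start = response.find(start_tag)
    end = response.find(end_tag)
    if start < 0 or end < 0:
        return response  # tags missing: leave the response untouched
    start += len(start_tag)
    layout = json.loads(response[start:end].strip())
    new_json = json.dumps(transform(layout), ensure_ascii=False, separators=(',', ':'))
    return response[:start] + new_json + response[end:]

resp = 'plan<answer>{"groups": []}</answer>'
out = splice_answer(resp, lambda d: {**d, "groups": [{"objects": []}]})
# out == 'plan<answer>{"groups":[{"objects":[]}]}</answer>'
```

Keeping the splice byte-exact outside the JSON span is what lets the surrounding `<think>` text survive augmentation unchanged.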
diff --git a/src/train_sft/src/llamafactory/data/group_layout_augmentor_backup.py b/src/train_sft/src/llamafactory/data/group_layout_augmentor_backup.py
new file mode 100644
index 0000000000000000000000000000000000000000..29b634bb0c9456a9b3a9c984e745972e1d51a7f5
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/group_layout_augmentor_backup.py
@@ -0,0 +1,708 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import random
+import numpy as np
+import re
+from typing import Dict, List, Any, Optional, Tuple
+from copy import deepcopy
+import logging
+
+logger = logging.getLogger(__name__)
+
+ # Optional dependencies
+try:
+ from shapely.geometry import Polygon, box
+ HAS_SHAPELY = True
+except ImportError:
+ HAS_SHAPELY = False
+ logger.warning("Shapely not available, some advanced augmentations will be disabled")
+
+try:
+ from scipy.spatial.transform import Rotation as R
+ HAS_SCIPY = True
+except ImportError:
+ HAS_SCIPY = False
+ logger.warning("SciPy not available, quaternion rotation may be less accurate")
+
+
+def rotate_around_y(point, angle_radians):
+ """Rotate a point around the Y axis."""
+ rotation_matrix = np.array([
+ [np.cos(angle_radians), 0, np.sin(angle_radians)],
+ [0, 1, 0],
+ [-np.sin(angle_radians), 0, np.cos(angle_radians)]
+ ])
+ rot_point = np.dot(rotation_matrix, point).tolist()
+ return [round(elem, 2) for elem in rot_point]
+
+
+def combine_quaternion_with_y_rot_for_global_rot(original_quat, angle_radians):
+ """Combine a Y-axis rotation with the original quaternion."""
+ if HAS_SCIPY:
+ y_axis_rotation = R.from_euler('y', angle_radians).as_quat()
+ original_rotation = R.from_quat(original_quat)
+ combined_rotation = R.from_quat(y_axis_rotation) * original_rotation
+ return [round(elem, 5) for elem in combined_rotation.as_quat().tolist()]
+ else:
+ # Simplified quaternion adjustment (a rough approximation without SciPy)
+ qy_add = np.sin(angle_radians / 2)
+ qw_mult = np.cos(angle_radians / 2)
+ new_quat = original_quat.copy()
+ new_quat[1] += qy_add
+ new_quat[3] *= qw_mult
+ return [round(elem, 5) for elem in new_quat]
+
+
+def rotate_obj(obj, angle_radians):
+ """Rotate a single object."""
+ obj["pos"] = rotate_around_y(obj["pos"], angle_radians)
+ obj["rot"] = combine_quaternion_with_y_rot_for_global_rot(obj["rot"], angle_radians)
+
+
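As a sanity check on `combine_quaternion_with_y_rot_for_global_rot`, the SciPy branch is just a Hamilton product of the Y-axis quaternion with the original one. A dependency-free sketch, assuming SciPy's `(x, y, z, w)` component order:

```python
import math

# Pure-Python Hamilton product q_y * q_orig, mirroring what the SciPy branch
# computes; component order is (x, y, z, w) as in Rotation.as_quat().
def combine_with_y_rotation(quat, angle_radians):
    x2, y2, z2, w2 = quat  # original rotation
    # quaternion for a rotation of angle_radians about the Y axis
    x1, y1, z1, w1 = 0.0, math.sin(angle_radians / 2), 0.0, math.cos(angle_radians / 2)
    return [round(v, 5) for v in (
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    )]

# A 180-degree turn about Y applied to the identity orientation.
q = combine_with_y_rotation([0.0, 0.0, 0.0, 1.0], math.pi)
# q == [0.0, 1.0, 0.0, 0.0]
```

Note that the non-SciPy fallback (adding to `quat[1]` and scaling `quat[3]`) is only a rough heuristic, not this product.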
+def rotate_scenegraph(scene_data, angle_radians):
+ """Rotate the entire scene graph."""
+ # Rotate the room envelope
+ for bounds_key in ["bounds_top", "bounds_bottom"]:
+ if bounds_key in scene_data.get("room_envelop", {}):
+ scene_data["room_envelop"][bounds_key] = [
+ rotate_around_y(point, angle_radians)
+ for point in scene_data["room_envelop"][bounds_key]
+ ]
+
+ # Rotate all objects
+ for group in scene_data.get("groups", []):
+ for obj in group.get("objects", []):
+ rotate_obj(obj, angle_radians)
+
+
+def offset_bounds(scene_data, shift_amount):
+ """Cyclically shift the room boundary points."""
+ room_envelop = scene_data.get("room_envelop", {})
+ for bounds_key in ["bounds_top", "bounds_bottom"]:
+ if bounds_key in room_envelop and shift_amount != 0:
+ bounds = room_envelop[bounds_key]
+ room_envelop[bounds_key] = bounds[shift_amount:] + bounds[:shift_amount]
+
+
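`offset_bounds` above only rotates the vertex list cyclically, so the polygon it describes is geometrically unchanged; a minimal sketch:

```python
# Cyclic shift of a boundary-point list, as performed by offset_bounds:
# same polygon, different starting vertex.
def cyclic_shift(bounds, shift_amount):
    return bounds[shift_amount:] + bounds[:shift_amount]

corners = [[0, 0, 0], [4, 0, 0], [4, 0, 3], [0, 0, 3]]
shifted = cyclic_shift(corners, 1)
# shifted == [[4, 0, 0], [4, 0, 3], [0, 0, 3], [0, 0, 0]]
```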
+def get_2d_bbox(pos, size):
+ """Get the 2D bounding box of an object."""
+ if not HAS_SHAPELY:
+ return None
+
+ x, _, z = pos
+ width, _, depth = size
+ half_width, half_depth = width/2, depth/2
+ return box(x - half_width, z - half_depth, x + half_width, z + half_depth)
+
+
+def create_floor_plan_polygon(bounds_bottom):
+ """Create a floor polygon from the bottom bounds."""
+ if not HAS_SHAPELY or not bounds_bottom:
+ return None
+
+ try:
+ # Extract the X and Z coordinates
+ points_2d = [(point[0], point[2]) for point in bounds_bottom]
+ return Polygon(points_2d)
+ except Exception:
+ return None
+
+
+def perturb_value_with_bounds(value, bounds=None, min_delta=-0.02, max_delta=0.02):
+ """Perturb a value within the given bounds."""
+ delta = np.random.uniform(min_delta, max_delta)
+ new_val = round(value + delta, 2)
+
+ if bounds:
+ min_val, max_val = bounds
+ return np.clip(new_val, min_val, max_val)
+
+ return new_val
+
+
+def get_safe_perturbation(pos, size, floor_polygon, max_attempts=10):
+ """Get a safe position and size perturbation."""
+ if not floor_polygon:
+ # Without a floor polygon, apply a small unconstrained perturbation
+ pos_perturbed = pos.copy()
+ pos_perturbed[0] = perturb_value_with_bounds(pos[0])
+ pos_perturbed[2] = perturb_value_with_bounds(pos[2])
+
+ size_perturbed = size.copy()
+ size_perturbed[0] = perturb_value_with_bounds(size[0])
+ size_perturbed[2] = perturb_value_with_bounds(size[2])
+
+ return pos_perturbed, size_perturbed
+
+ # Get the floor bounds
+ minx, minz, maxx, maxz = floor_polygon.bounds
+
+ for i in range(max_attempts):
+ pos_perturbed = pos.copy()
+ pos_perturbed[0] = perturb_value_with_bounds(pos[0], bounds=(minx, maxx))
+ pos_perturbed[2] = perturb_value_with_bounds(pos[2], bounds=(minz, maxz))
+
+ size_perturbed = size.copy()
+ size_perturbed[0] = perturb_value_with_bounds(size[0])
+ size_perturbed[2] = perturb_value_with_bounds(size[2])
+
+ new_bbox = get_2d_bbox(pos_perturbed, size_perturbed)
+ if new_bbox and floor_polygon.contains(new_bbox):
+ return pos_perturbed, size_perturbed
+
+ # Fall back to the original values if no valid perturbation is found
+ return pos, size
+
+
+def perturb_scene(scene_data, floor_polygon):
+ """Perturb the position and size of every object in the scene."""
+ for group in scene_data.get("groups", []):
+ for obj in group.get("objects", []):
+ new_pos, new_size = get_safe_perturbation(
+ obj["pos"], obj["size"], floor_polygon
+ )
+ obj["pos"] = new_pos
+ obj["size"] = new_size
+
+
+class GroupLayoutAugmentor:
+ """Indoor-layout data augmentor for the group_layout format."""
+
+ def __init__(self,
+ augmentation_types: List[str],
+ augmentation_ratio: float = 0.15, # 15% chance of augmenting (85% of samples pass through unchanged)
+ seed: int = 42):
+ """
+ Initialize the augmentor.
+
+ Args:
+ augmentation_types: List of augmentation types.
+ augmentation_ratio: Augmentation probability (0.0-1.0).
+ seed: Random seed.
+ """
+ random.seed(seed)
+ np.random.seed(seed)
+
+ self.augmentation_types = augmentation_types
+ self.augmentation_ratio = augmentation_ratio
+
+ def should_augment(self) -> bool:
+ """Decide whether the current sample should be augmented."""
+ return random.random() < self.augmentation_ratio
+
+ def augment_sample(self, prompt_text: str, response_text: str) -> Tuple[str, str]:
+ """
+ Augment a single sample.
+
+ Args:
+ prompt_text: User input text.
+ response_text: Assistant response text.
+
+ Returns:
+ The augmented (prompt_text, response_text).
+ """
+ # 85% of samples are left unchanged (consistent with the original system)
+ if not self.should_augment():
+ return prompt_text, response_text
+
+ try:
+ # Extract the JSON data from the response's <answer> section
+ if '<answer>' not in response_text or '</answer>' not in response_text:
+ return prompt_text, response_text
+
+ answer_start = response_text.find('<answer>') + len('<answer>')
+ answer_end = response_text.find('</answer>')
+ answer_json_str = response_text[answer_start:answer_end].strip()
+
+ # Parse the JSON
+ layout_data = json.loads(answer_json_str)
+ layout_data_augmented = deepcopy(layout_data)
+
+ # Apply the augmentation sequence (in the original system's order)
+ self._apply_augmentation_sequence(layout_data_augmented)
+
+ # Rebuild the response
+ new_answer_json = json.dumps(layout_data_augmented, ensure_ascii=False, separators=(',', ':'))
+ new_response = response_text[:answer_start] + new_answer_json + response_text[answer_end:]
+
+ return prompt_text, new_response
+
+ except Exception as e:
+ logger.warning(f"Data augmentation failed: {e}")
+ return prompt_text, response_text
+
+ def _apply_augmentation_sequence(self, layout_data: Dict) -> None:
+ """
+ Apply the full augmentation sequence, following the original system's pipeline.
+ """
+ # (1) Random rotation (0, 90, 180, or 270 degrees)
+ angle_radians = np.radians(np.random.choice([0, 90, 180, 270]))
+ rotate_scenegraph(layout_data, angle_radians)
+
+ # (2) Cyclically shift the room bounds
+ if layout_data.get("room_envelop", {}).get("bounds_bottom"):
+ shift_amount = np.random.randint(0, len(layout_data["room_envelop"]["bounds_bottom"]))
+ offset_bounds(layout_data, shift_amount)
+
+ # (3) Perturb object positions and sizes
+ floor_polygon = None
+ if layout_data.get("room_envelop", {}).get("bounds_bottom"):
+ floor_polygon = create_floor_plan_polygon(layout_data["room_envelop"]["bounds_bottom"])
+ perturb_scene(layout_data, floor_polygon)
+
+ # (4) Randomly reorder the groups
+ if layout_data.get("groups") and len(layout_data["groups"]) > 1:
+ random.shuffle(layout_data["groups"])
+
+ # (5) Randomly reorder the objects within each group
+ for group in layout_data.get("groups", []):
+ if group.get("objects") and len(group["objects"]) > 1:
+ random.shuffle(group["objects"])
+
+
+def do_random_augm_on_sgs(prompt_text: str, response_text: str, augm_prob: float = 0.15) -> Tuple[str, str]:
+ """
+ Augmentation entry point compatible with the original system's interface.
+
+ Args:
+ prompt_text: Input text.
+ response_text: Output text.
+ augm_prob: Augmentation probability.
+
+ Returns:
+ The augmented (prompt_text, response_text).
+ """
+ # Use all available augmentation types
+ augmentation_types = ['full_sequence']
+ augmentor = GroupLayoutAugmentor(
+ augmentation_types=augmentation_types,
+ augmentation_ratio=augm_prob,
+ seed=random.randint(0, 10000) # use a random seed
+ )
+
+ return augmentor.augment_sample(prompt_text, response_text)
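The `augm_prob` gate behind `should_augment` is a plain Bernoulli draw; a quick sketch of its long-run behaviour (the seed is arbitrary):

```python
import random

# Each sample is augmented independently with probability `ratio`,
# exactly as should_augment does with augmentation_ratio.
def should_augment(ratio):
    return random.random() < ratio

random.seed(42)
rate = sum(should_augment(0.15) for _ in range(10_000)) / 10_000
# rate lands close to 0.15
```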
+
+import json
+import random
+import numpy as np
+import re
+from typing import Dict, List, Any, Optional, Tuple
+from copy import deepcopy
+import logging
+
+ # Optional dependencies
+try:
+ from scipy.spatial.transform import Rotation as R
+ HAS_SCIPY = True
+except ImportError:
+ HAS_SCIPY = False
+
+try:
+ from shapely.geometry import Polygon, box
+ HAS_SHAPELY = True
+except ImportError:
+ HAS_SHAPELY = False
+
+logger = logging.getLogger(__name__)
+
+
+class GroupLayoutAugmentor:
+ """Indoor-layout data augmentor, an advanced implementation based on the original augmentation system."""
+
+ def __init__(self,
+ augmentation_types: List[str],
+ augmentation_ratio: float = 0.85, # consistent with the original system
+ seed: int = 42):
+ """
+ Initialize the augmentor.
+
+ Args:
+ augmentation_types: List of augmentation types.
+ augmentation_ratio: Augmentation probability (0.0-1.0).
+ seed: Random seed.
+ """
+ random.seed(seed)
+ np.random.seed(seed)
+
+ self.augmentation_types = augmentation_types
+ self.augmentation_ratio = augmentation_ratio
+
+ # Augmentation configuration
+ self.config = {
+ 'rotation_angles': [90, 180, 270],
+ 'scale_factors': [0.8, 0.9, 1.1, 1.2],
+ 'translation_range': 0.5,
+ 'perturbation_range': 0.02, # small perturbation range
+ }
+
+ def should_augment(self) -> bool:
+ """Decide whether the current sample should be augmented."""
+ return random.random() < self.augmentation_ratio
+
+ def augment_sample(self, prompt_text: str, response_text: str) -> Tuple[str, str]:
+ """
+ Apply advanced augmentation to a single sample, based on the original
+ system's do_random_augm_on_sgs method.
+
+ Args:
+ prompt_text: User input text.
+ response_text: Assistant response text.
+
+ Returns:
+ The augmented (prompt_text, response_text).
+ """
+ # 15% of samples are left unchanged (consistent with the original system)
+ if not self.should_augment():
+ return prompt_text, response_text
+
+ try:
+ # Extract the JSON data from the response's <answer> section
+ if '<answer>' not in response_text or '</answer>' not in response_text:
+ return prompt_text, response_text
+
+ answer_start = response_text.find('<answer>') + len('<answer>')
+ answer_end = response_text.find('</answer>')
+ answer_json_str = response_text[answer_start:answer_end].strip()
+
+ # Parse the JSON
+ layout_data = json.loads(answer_json_str)
+ layout_data_augmented = deepcopy(layout_data)
+
+ # Apply the augmentation sequence (in the original system's order)
+ self._apply_augmentation_sequence(layout_data_augmented)
+
+ # Rebuild the response
+ new_answer_json = json.dumps(layout_data_augmented, ensure_ascii=False, separators=(',', ':'))
+ new_response = response_text[:answer_start] + new_answer_json + response_text[answer_end:]
+
+ return prompt_text, new_response # the prompt is left unchanged
+
+ except Exception as e:
+ logger.warning(f"Data augmentation failed: {e}")
+ return prompt_text, response_text
+
+ def _apply_augmentation_sequence(self, layout_data: Dict) -> None:
+ """
+ Apply the full augmentation sequence based on the configured augmentation types.
+ """
+ # Apply the transform corresponding to each configured augmentation type
+ for aug_type in self.augmentation_types:
+ if random.random() < 0.8: # apply each augmentation with 80% probability
+ if aug_type == 'rotation':
+ angle_radians = np.radians(np.random.choice(self.config['rotation_angles']))
+ self._rotate_scenegraph(layout_data, angle_radians)
+ elif aug_type == 'perturbation':
+ self._perturb_scene_objects(layout_data)
+ elif aug_type == 'group_reorder':
+ self._reorder_groups(layout_data)
+ elif aug_type == 'object_reorder':
+ self._reorder_objects_in_groups(layout_data)
+ elif aug_type == 'bounds_shift':
+ self._offset_room_bounds(layout_data)
+
+ def _rotate_scenegraph(self, layout_data: Dict, angle_radians: float) -> None:
+ """
+ Rotate the entire scene graph, based on the original system's rotate_scenegraph function.
+ """
+ # Rotate the room envelope
+ if 'room_envelope' in layout_data:
+ for bounds_key in ['bounds_top', 'bounds_bottom']:
+ if bounds_key in layout_data['room_envelope']:
+ for point in layout_data['room_envelope'][bounds_key]:
+ rotated_point = self._rotate_around_y(point, angle_radians)
+ point[0], point[1], point[2] = rotated_point
+
+ # Rotate the objects in each group
+ for group in layout_data.get('groups', []):
+ for obj in group.get('objects', []):
+ self._rotate_object(obj, angle_radians)
+
+ def _rotate_around_y(self, point: List[float], angle_radians: float) -> List[float]:
+ """
+ Rotate a point around the Y axis, based on the original system's rotate_around_y function.
+ """
+ rotation_matrix = np.array([
+ [np.cos(angle_radians), 0, np.sin(angle_radians)],
+ [0, 1, 0],
+ [-np.sin(angle_radians), 0, np.cos(angle_radians)]
+ ])
+ rot_point = np.dot(rotation_matrix, point).tolist()
+ return [round(elem, 2) for elem in rot_point]
+
+ def _rotate_object(self, obj: Dict, angle_radians: float) -> None:
+ """
+ Rotate a single object, based on the original system's rotate_obj function.
+ """
+ if 'pos' in obj and len(obj['pos']) >= 3:
+ obj['pos'] = self._rotate_around_y(obj['pos'], angle_radians)
+
+ if 'rot' in obj and len(obj['rot']) >= 4:
+ obj['rot'] = self._combine_quaternion_with_y_rotation(obj['rot'], angle_radians)
+
+ def _combine_quaternion_with_y_rotation(self, original_quat: List[float], angle_radians: float) -> List[float]:
+ """
+ Combine a quaternion with a Y-axis rotation, based on the original system's
+ combine_quaternion_with_y_rot_for_global_rot function.
+ """
+ if not HAS_SCIPY:
+ # Without SciPy, fall back to a rough quaternion adjustment (not a true product)
+ qy_add = np.sin(angle_radians / 2)
+ qw_mult = np.cos(angle_radians / 2)
+ new_quat = original_quat.copy()
+ new_quat[1] += qy_add
+ new_quat[3] *= qw_mult
+ return [round(elem, 5) for elem in new_quat]
+
+ try:
+ y_axis_rotation = R.from_euler('y', angle_radians).as_quat()
+ original_rotation = R.from_quat(original_quat)
+ combined_rotation = R.from_quat(y_axis_rotation) * original_rotation
+ return [round(elem, 5) for elem in combined_rotation.as_quat().tolist()]
+ except Exception:
+ # Return the original value if the quaternion operation fails
+ return original_quat
+
+ def _offset_room_bounds(self, layout_data: Dict) -> None:
+ """
+ Cyclically shift the room bounds, based on the original system's offset_bounds function.
+ """
+ if 'room_envelope' not in layout_data:
+ return
+
+ bounds_bottom = layout_data['room_envelope'].get('bounds_bottom', [])
+ if not bounds_bottom:
+ return
+
+ shift_amount = np.random.randint(0, len(bounds_bottom))
+
+ if shift_amount != 0:
+ for bounds_key in ['bounds_top', 'bounds_bottom']:
+ if bounds_key in layout_data['room_envelope']:
+ bounds = layout_data['room_envelope'][bounds_key]
+ layout_data['room_envelope'][bounds_key] = bounds[shift_amount:] + bounds[:shift_amount]
+
+ def _perturb_scene_objects(self, layout_data: Dict) -> None:
+ """
+ Perturb object positions and sizes in the scene, based on the original system's perturb_scene function.
+ """
+ # Create the room polygon for constraint checks
+ floor_polygon = self._create_floor_polygon(layout_data)
+ if not floor_polygon:
+ return
+
+ # Perturb the objects in every group
+ for group in layout_data.get('groups', []):
+ for obj in group.get('objects', []):
+ self._perturb_object(obj, floor_polygon)
+
+ def _create_floor_polygon(self, layout_data: Dict) -> Optional[Any]:
+ """
+ Create a floor polygon from the room bounds.
+ """
+ if not HAS_SHAPELY:
+ # Without Shapely, return a simplified bounding box
+ try:
+ if 'room_envelope' not in layout_data:
+ return None
+
+ bounds_bottom = layout_data['room_envelope'].get('bounds_bottom', [])
+ if len(bounds_bottom) < 3:
+ return None
+
+ # Return a simplified bounding box (minx, minz, maxx, maxz)
+ x_coords = [point[0] for point in bounds_bottom]
+ z_coords = [point[2] for point in bounds_bottom]
+ return {
+ 'bounds': (min(x_coords), min(z_coords), max(x_coords), max(z_coords)),
+ 'type': 'simple_bbox'
+ }
+ except Exception:
+ return None
+
+ try:
+ if 'room_envelope' not in layout_data:
+ return None
+
+ bounds_bottom = layout_data['room_envelope'].get('bounds_bottom', [])
+ if len(bounds_bottom) < 3:
+ return None
+
+ # Extract the x and z coordinates (ignoring y)
+ coords = [(point[0], point[2]) for point in bounds_bottom]
+ return Polygon(coords)
+ except Exception:
+ return None
+
+ def _perturb_object(self, obj: Dict, floor_polygon: Any) -> None:
+ """
+ Perturb the position and size of a single object, based on the original
+ system's get_safe_perturbation function.
+ """
+ if 'pos' not in obj or 'size' not in obj:
+ return
+
+ original_pos = obj['pos'].copy()
+ original_size = obj['size'].copy()
+
+ # Try to find a safe perturbation
+ for attempt in range(10): # at most 10 attempts
+ new_pos, new_size = self._get_safe_perturbation(
+ original_pos, original_size, floor_polygon, obj.get('desc', '')
+ )
+
+ # Check whether the new position lies inside the room
+ if self._is_object_within_bounds(new_pos, new_size, floor_polygon):
+ obj['pos'] = new_pos
+ obj['size'] = new_size
+ break
+
+ def _get_safe_perturbation(self, pos: List[float], size: List[float],
+ floor_polygon: Any, desc: str) -> Tuple[List[float], List[float]]:
+ """
+ Get a safe position and size perturbation.
+ """
+ try:
+ if floor_polygon is None:
+ return pos, size
+
+ if not HAS_SHAPELY and floor_polygon.get('type') == 'simple_bbox':
+ # Use the simplified bounding box
+ minx, minz, maxx, maxz = floor_polygon['bounds']
+ elif HAS_SHAPELY:
+ minx, minz, maxx, maxz = floor_polygon.bounds
+ else:
+ return pos, size
+
+ pos_perturbed = pos.copy()
+ pos_perturbed[0] = self._perturb_value_with_bounds(
+ pos[0], (minx, maxx), -self.config['perturbation_range'], self.config['perturbation_range']
+ )
+ pos_perturbed[2] = self._perturb_value_with_bounds(
+ pos[2], (minz, maxz), -self.config['perturbation_range'], self.config['perturbation_range']
+ )
+
+ size_perturbed = size.copy()
+ if len(size_perturbed) >= 1:
+ size_perturbed[0] = self._perturb_value_with_bounds(
+ size[0], (0.1, maxx-minx), -self.config['perturbation_range'], self.config['perturbation_range']
+ )
+ if len(size_perturbed) >= 3:
+ size_perturbed[2] = self._perturb_value_with_bounds(
+ size[2], (0.1, maxz-minz), -self.config['perturbation_range'], self.config['perturbation_range']
+ )
+
+ return pos_perturbed, size_perturbed
+ except Exception:
+ return pos, size
+
+ def _perturb_value_with_bounds(self, value: float, bounds: Tuple[float, float],
+ min_delta: float = -0.02, max_delta: float = 0.02) -> float:
+ """
+ Perturb a value within the given bounds, based on the original system's
+ perturb_value_with_bounds function.
+ """
+ min_val, max_val = bounds
+ delta = np.random.uniform(min_delta, max_delta)
+ new_val = round(value + delta, 2)
+ return np.clip(new_val, min_val, max_val)
+
+ def _is_object_within_bounds(self, pos: List[float], size: List[float],
+ floor_polygon: Any) -> bool:
+ """
+ Check whether the object lies within the room bounds.
+ """
+ try:
+ if len(pos) < 3 or len(size) < 3:
+ return True # assume valid if the data is incomplete
+
+ if floor_polygon is None:
+ return True
+
+ if not HAS_SHAPELY and floor_polygon.get('type') == 'simple_bbox':
+ # Use the simplified bounding-box check
+ minx, minz, maxx, maxz = floor_polygon['bounds']
+ x, z = pos[0], pos[2]
+ width, depth = size[0], size[2]
+
+ return (minx <= x - width/2 and x + width/2 <= maxx and
+ minz <= z - depth/2 and z + depth/2 <= maxz)
+ elif HAS_SHAPELY:
+ bbox = self._get_2d_bbox(pos, size)
+ return floor_polygon.contains(bbox)
+ else:
+ return True
+ except Exception:
+ return True
+
+ def _get_2d_bbox(self, pos: List[float], size: List[float]):
+ """
+ Get the 2D bounding box of an object, based on the original system's get_2d_bbox function.
+ """
+ if not HAS_SHAPELY:
+ # Return simplified bounding-box information
+ x, z = pos[0], pos[2]
+ width, depth = size[0], size[2]
+ return {
+ 'type': 'simple_bbox',
+ 'bounds': [x - width/2, z - depth/2, x + width/2, z + depth/2]
+ }
+ else:
+ try:
+ from shapely.geometry import box
+ x, z = pos[0], pos[2]
+ width, depth = size[0], size[2]
+ half_width, half_depth = width/2, depth/2
+ return box(x - half_width, z - half_depth, x + half_width, z + half_depth)
+ except ImportError:
+ # Fallback to simple bbox
+ x, z = pos[0], pos[2]
+ width, depth = size[0], size[2]
+ return {
+ 'type': 'simple_bbox',
+ 'bounds': [x - width/2, z - depth/2, x + width/2, z + depth/2]
+ }
+
+ def _reorder_groups(self, layout_data: Dict) -> None:
+ """
+ Randomly reorder the groups.
+ """
+ if 'groups' in layout_data and len(layout_data['groups']) > 1:
+ random.shuffle(layout_data['groups'])
+
+ def _reorder_objects_in_groups(self, layout_data: Dict) -> None:
+ """
+ Randomly reorder the objects within each group.
+ """
+ for group in layout_data.get('groups', []):
+ if 'objects' in group and len(group['objects']) > 1:
+ random.shuffle(group['objects'])
+
+ def _offset_room_bounds(self, layout_data: Dict) -> None:
+ """
+ Apply a small offset to the room bounds.
+ """
+ if 'room_envelop' in layout_data:
+ room_envelop = layout_data['room_envelop']
+
+ # Generate small offsets
+ offset_range = 0.1
+ x_offset = random.uniform(-offset_range, offset_range)
+ z_offset = random.uniform(-offset_range, offset_range)
+
+ # Apply the offset to every boundary point
+ if 'bounds_top' in room_envelop:
+ for point in room_envelop['bounds_top']:
+ point[0] += x_offset
+ point[2] += z_offset
+
+ if 'bounds_bottom' in room_envelop:
+ for point in room_envelop['bounds_bottom']:
+ point[0] += x_offset
+ point[2] += z_offset
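All three files implement the same Y-axis rotation; it reduces to a 2D rotation of the `(x, z)` coordinates with `y` fixed. A numpy-free sketch of the same matrix used by `rotate_around_y` and `_rotate_point_around_y`:

```python
import math

# Same rotation matrix as rotate_around_y, without the numpy dependency:
# x' = x*cos + z*sin, z' = -x*sin + z*cos, y unchanged.
def rotate_around_y(point, angle_radians):
    c, s = math.cos(angle_radians), math.sin(angle_radians)
    x, y, z = point
    return [round(x * c + z * s, 2), round(y, 2), round(-x * s + z * c, 2)]

# With this convention a 90-degree turn maps +X onto -Z.
p = rotate_around_y([1.0, 0.0, 0.0], math.radians(90))
# p == [0.0, 0.0, -1.0]
```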
diff --git a/src/train_sft/src/llamafactory/data/group_layout_augmentor_new.py b/src/train_sft/src/llamafactory/data/group_layout_augmentor_new.py
new file mode 100644
index 0000000000000000000000000000000000000000..7e74e2508c7e5de43fbb92f41b047333f3c0fb7c
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/group_layout_augmentor_new.py
@@ -0,0 +1,348 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import random
+import logging
+import numpy as np
+from typing import Dict, List, Any, Optional, Tuple
+from copy import deepcopy
+from scipy.spatial.transform import Rotation as R
+from shapely.geometry import Polygon, box
+
+logger = logging.getLogger(__name__)
+
+
+class GroupLayoutAugmentor:
+ """Indoor-layout data augmentor that augments samples on the fly in the data collator."""
+
+ def __init__(self,
+ augmentation_types: List[str],
+ augmentation_ratio: float = 0.85,
+ seed: int = 42):
+ """
+ Initialize the augmentor.
+
+ Args:
+ augmentation_types: List of augmentation types.
+ augmentation_ratio: Augmentation probability (0.0-1.0).
+ seed: Random seed.
+ """
+ random.seed(seed)
+ np.random.seed(seed)
+
+ self.augmentation_types = augmentation_types
+ self.augmentation_ratio = augmentation_ratio
+
+ # Supported augmentation types
+ self.supported_augmentations = {
+ 'random_group_shuffle', # randomly swap the order of groups
+ 'random_object_shuffle', # randomly swap the order of objects within a group
+ 'geometric_rotation', # geometric rotation
+ 'bounds_offset', # boundary offset
+ 'position_perturbation' # position perturbation
+ }
+
+ def should_augment(self) -> bool:
+ """Decide whether the current sample should be augmented."""
+ return random.random() < self.augmentation_ratio
+
+ def augment_sample(self, prompt_text: str, response_text: str) -> Tuple[str, str]:
+ """
+ Augment a single sample.
+
+ Args:
+ prompt_text: User input text.
+ response_text: Assistant response text (containing an <answer> section).
+
+ Returns:
+ The augmented (prompt_text, response_text).
+ """
+ if not self.should_augment() or not self.augmentation_types:
+ return prompt_text, response_text
+
+ try:
+ # Extract the JSON inside the <answer> tags from the response
+ answer_json = self._extract_answer_json(response_text)
+ if answer_json is None:
+ return prompt_text, response_text
+
+ # Apply the randomly selected augmentation methods
+ augmented_json = self._apply_augmentations(answer_json)
+
+ # Rebuild the response
+ new_response = self._replace_answer_json(response_text, augmented_json)
+
+ return prompt_text, new_response
+
+ except Exception as e:
+ logger.warning(f"Data augmentation failed: {e}")
+ return prompt_text, response_text
+
+ def _extract_answer_json(self, response_text: str) -> Optional[Dict]:
+ """Extract the JSON wrapped in <answer> tags from the response."""
+ try:
+ start_tag = '<answer>'
+ end_tag = '</answer>'
+
+ if start_tag not in response_text or end_tag not in response_text:
+ return None
+
+ start_idx = response_text.find(start_tag) + len(start_tag)
+ end_idx = response_text.find(end_tag)
+
+ json_str = response_text[start_idx:end_idx].strip()
+ return json.loads(json_str)
+
+ except Exception as e:
+ logger.debug(f"Failed to extract the answer JSON: {e}")
+ return None
+
+ def _replace_answer_json(self, response_text: str, new_json: Dict) -> str:
+ """Replace the JSON wrapped in <answer> tags in the response."""
+ try:
+ start_tag = '<answer>'
+ end_tag = '</answer>'
+
+ start_idx = response_text.find(start_tag) + len(start_tag)
+ end_idx = response_text.find(end_tag)
+
+ new_json_str = json.dumps(new_json, ensure_ascii=False, separators=(',', ':'))
+
+ return (response_text[:start_idx] +
+ new_json_str +
+ response_text[end_idx:])
+
+ except Exception as e:
+ logger.debug(f"Failed to replace the answer JSON: {e}")
+ return response_text
+
+ def _apply_augmentations(self, layout_data: Dict) -> Dict:
+ """Apply the data augmentation methods."""
+ augmented_data = deepcopy(layout_data)
+
+ # Collect the available augmentation methods
+ available_methods = [aug for aug in self.augmentation_types
+ if aug in self.supported_augmentations]
+
+ if not available_methods:
+ return augmented_data
+
+ # Randomly select 1-4 augmentation methods
+ num_methods = random.randint(1, min(4, len(available_methods)))
+ selected_methods = random.sample(available_methods, num_methods)
+
+ # Group shuffling and object shuffling must not be applied together
+ if 'random_group_shuffle' in selected_methods and 'random_object_shuffle' in selected_methods:
+ # Randomly keep one of the two
+ keep_method = random.choice(['random_group_shuffle', 'random_object_shuffle'])
+ selected_methods = [m for m in selected_methods if m not in ['random_group_shuffle', 'random_object_shuffle']]
+ selected_methods.append(keep_method)
+
+ # Apply the selected augmentation methods
+ for method in selected_methods:
+ if method == 'random_group_shuffle':
+ self._shuffle_groups(augmented_data)
+ elif method == 'random_object_shuffle':
+ self._shuffle_objects_in_groups(augmented_data)
+ elif method == 'geometric_rotation':
+ self._apply_geometric_rotation(augmented_data)
+ elif method == 'bounds_offset':
+ self._apply_bounds_offset(augmented_data)
+ elif method == 'position_perturbation':
+ self._apply_position_perturbation(augmented_data)
+
+ return augmented_data
+
+ def _shuffle_groups(self, layout_data: Dict) -> None:
+ """Randomly swap the order of two groups."""
+ if 'groups' in layout_data and isinstance(layout_data['groups'], list) and len(layout_data['groups']) >= 2:
+ groups = layout_data['groups']
+ # Pick two distinct groups and swap them
+ idx1, idx2 = random.sample(range(len(groups)), 2)
+ groups[idx1], groups[idx2] = groups[idx2], groups[idx1]
+
+ def _shuffle_objects_in_groups(self, layout_data: Dict) -> None:
+ """Randomly pick 1-2 groups and shuffle the objects inside them."""
+ if 'groups' not in layout_data or not isinstance(layout_data['groups'], list):
+ return
+
+ # Find the groups that contain more than one object
+ valid_groups = []
+ for i, group in enumerate(layout_data['groups']):
+ if (isinstance(group, dict) and 'objects' in group and
+ isinstance(group['objects'], list) and len(group['objects']) > 1):
+ valid_groups.append(i)
+
+ if not valid_groups:
+ return
+
+ # Randomly pick 1-2 groups to shuffle
+ num_groups_to_shuffle = random.randint(1, min(2, len(valid_groups)))
+ groups_to_shuffle = random.sample(valid_groups, num_groups_to_shuffle)
+
+ for group_idx in groups_to_shuffle:
+ random.shuffle(layout_data['groups'][group_idx]['objects'])
+
+ def _apply_geometric_rotation(self, layout_data: Dict) -> None:
+ """Apply a geometric rotation."""
+ # Randomly choose a rotation angle (0, 90, 180, or 270 degrees)
+ angle_degrees = random.choice([0, 90, 180, 270])
+ if angle_degrees == 0:
+ return
+
+ angle_radians = np.radians(angle_degrees)
+
+ # Rotate the room bounds
+ if 'room_envelope' in layout_data and isinstance(layout_data['room_envelope'], dict):
+ self._rotate_bounds(layout_data['room_envelope'], angle_radians)
+
+ # Rotate all objects
+ if 'groups' in layout_data and isinstance(layout_data['groups'], list):
+ for group in layout_data['groups']:
+ if isinstance(group, dict) and 'objects' in group and isinstance(group['objects'], list):
+ for obj in group['objects']:
+ if isinstance(obj, dict):
+ self._rotate_object(obj, angle_radians)
+
+ def _rotate_bounds(self, room_envelope: Dict, angle_radians: float) -> None:
+ """Rotate the room boundary points."""
+ for bounds_key in ['bounds_top', 'bounds_bottom']:
+ if bounds_key in room_envelope and isinstance(room_envelope[bounds_key], list):
+ rotated_bounds = []
+ for point in room_envelope[bounds_key]:
+ if isinstance(point, list) and len(point) >= 3:
+ rotated_point = self._rotate_point_around_y(point, angle_radians)
+ rotated_bounds.append(rotated_point)
+ room_envelope[bounds_key] = rotated_bounds
+
+ def _rotate_object(self, obj: Dict, angle_radians: float) -> None:
+ """Rotate a single object's position and orientation."""
+ if 'pos' in obj and isinstance(obj['pos'], list) and len(obj['pos']) >= 3:
+ obj['pos'] = self._rotate_point_around_y(obj['pos'], angle_radians)
+
+ if 'rot' in obj and isinstance(obj['rot'], list) and len(obj['rot']) >= 4:
+ obj['rot'] = self._combine_quaternion_with_y_rotation(obj['rot'], angle_radians)
+
+ def _rotate_point_around_y(self, point: List[float], angle_radians: float) -> List[float]:
+ """Rotate a point about the Y axis."""
+ rotation_matrix = np.array([
+ [np.cos(angle_radians), 0, np.sin(angle_radians)],
+ [0, 1, 0],
+ [-np.sin(angle_radians), 0, np.cos(angle_radians)]
+ ])
+ rotated_point = np.dot(rotation_matrix, point).tolist()
+ return [round(elem, 2) for elem in rotated_point]
+
+ def _combine_quaternion_with_y_rotation(self, original_quat: List[float], angle_radians: float) -> List[float]:
+ """Compose the original quaternion with a rotation about the Y axis."""
+ y_axis_rotation = R.from_euler('y', angle_radians).as_quat()
+ original_rotation = R.from_quat(original_quat)
+ combined_rotation = R.from_quat(y_axis_rotation) * original_rotation
+ return [round(elem, 5) for elem in combined_rotation.as_quat().tolist()]
+
+ def _apply_bounds_offset(self, layout_data: Dict) -> None:
+ """Apply a boundary offset (cyclic shift of the boundary points)."""
+ if 'room_envelope' not in layout_data or not isinstance(layout_data['room_envelope'], dict):
+ return
+
+ room_envelope = layout_data['room_envelope']
+
+ for bounds_key in ['bounds_top', 'bounds_bottom']:
+ if bounds_key in room_envelope and isinstance(room_envelope[bounds_key], list) and len(room_envelope[bounds_key]) > 1:
+ bounds = room_envelope[bounds_key]
+ shift_amount = random.randint(0, len(bounds) - 1)
+ if shift_amount > 0:
+ room_envelope[bounds_key] = bounds[shift_amount:] + bounds[:shift_amount]
+
+ def _apply_position_perturbation(self, layout_data: Dict) -> None:
+ """Apply position perturbation to all objects."""
+ # First build the floor polygon used as a containment constraint
+ floor_polygon = self._create_floor_polygon(layout_data)
+ if floor_polygon is None:
+ return
+
+ if 'groups' in layout_data and isinstance(layout_data['groups'], list):
+ for group in layout_data['groups']:
+ if isinstance(group, dict) and 'objects' in group and isinstance(group['objects'], list):
+ for obj in group['objects']:
+ if isinstance(obj, dict):
+ self._perturb_object_position(obj, floor_polygon)
+
+ def _create_floor_polygon(self, layout_data: Dict) -> Optional[Polygon]:
+ """Build the floor polygon from the room boundary."""
+ try:
+ if ('room_envelope' in layout_data and
+ isinstance(layout_data['room_envelope'], dict) and
+ 'bounds_bottom' in layout_data['room_envelope'] and
+ isinstance(layout_data['room_envelope']['bounds_bottom'], list)):
+
+ bounds = layout_data['room_envelope']['bounds_bottom']
+ # Extract the (x, z) coordinates, ignoring y
+ points_2d = []
+ for point in bounds:
+ if isinstance(point, list) and len(point) >= 3:
+ points_2d.append((point[0], point[2]))
+
+ if len(points_2d) >= 3:  # at least 3 points are needed to form a polygon
+ return Polygon(points_2d)
+ except Exception as e:
+ logger.debug(f"Failed to build floor polygon: {e}")
+ return None
+
+ def _perturb_object_position(self, obj: Dict, floor_polygon: Polygon, max_attempts: int = 10) -> None:
+ """Perturb an object's position while keeping it inside the floor polygon."""
+ if ('pos' not in obj or not isinstance(obj['pos'], list) or len(obj['pos']) < 3 or
+ 'size' not in obj or not isinstance(obj['size'], list) or len(obj['size']) < 3):
+ return
+
+ original_pos = obj['pos'].copy()
+ original_size = obj['size'].copy()
+
+ # Get the floor bounding box
+ minx, minz, maxx, maxz = floor_polygon.bounds
+
+ for _ in range(max_attempts):
+ # Perturb the position and size
+ new_pos = original_pos.copy()
+ new_pos[0] = self._perturb_value_with_bounds(original_pos[0], (minx, maxx))
+ new_pos[2] = self._perturb_value_with_bounds(original_pos[2], (minz, maxz))
+
+ new_size = original_size.copy()
+ new_size[0] = self._perturb_value_with_bounds(original_size[0], (0.1, 5.0))  # a reasonable size range
+ new_size[2] = self._perturb_value_with_bounds(original_size[2], (0.1, 5.0))
+
+ # Check that the new bounding box is still inside the floor polygon
+ new_bbox = self._get_2d_bbox(new_pos, new_size)
+ if floor_polygon.contains(new_bbox):
+ obj['pos'] = new_pos
+ obj['size'] = new_size
+ return
+
+ # Keep the original values if no valid perturbation is found
+
+ def _perturb_value_with_bounds(self, value: float, bounds: Tuple[float, float],
+ min_delta: float = -0.02, max_delta: float = 0.02) -> float:
+ """Perturb a value by a small random delta, clipped to the given bounds."""
+ min_val, max_val = bounds
+ delta = np.random.uniform(min_delta, max_delta)
+ new_val = round(value + delta, 2)
+ return np.clip(new_val, min_val, max_val)
+
+ def _get_2d_bbox(self, pos: List[float], size: List[float]) -> box:
+ """Get the axis-aligned 2D bounding box in the x-z plane."""
+ x, _, z = pos
+ width, _, depth = size
+ half_width, half_depth = width/2, depth/2
+ return box(x - half_width, z - half_depth, x + half_width, z + half_depth)
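The Y-axis rotation math above is compact enough to sanity-check in isolation. The sketch below mirrors the logic of `_rotate_point_around_y` and `_combine_quaternion_with_y_rotation` in pure Python (a minimal sketch: the standalone function names are illustrative, and the quaternion order follows scipy's `(x, y, z, w)` convention used by the augmentor):

```python
import math

def rotate_point_around_y(point, angle_radians):
    # Right-handed rotation about Y: x' = x*cos + z*sin, z' = -x*sin + z*cos.
    x, y, z = point
    c, s = math.cos(angle_radians), math.sin(angle_radians)
    return [round(x * c + z * s, 2), round(y, 2), round(-x * s + z * c, 2)]

def quat_multiply(q1, q2):
    # Hamilton product in (x, y, z, w) order: applies q2 first, then q1.
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return [
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ]

def compose_with_y_rotation(quat_xyzw, angle_radians):
    # A rotation of `angle` about Y is (0, sin(angle/2), 0, cos(angle/2)).
    half = angle_radians / 2.0
    y_quat = [0.0, math.sin(half), 0.0, math.cos(half)]
    return [round(v, 5) for v in quat_multiply(y_quat, quat_xyzw)]

# A 90-degree turn maps (1, 0, 0) onto (0, 0, -1).
print(rotate_point_around_y([1.0, 0.0, 0.0], math.pi / 2))  # [0.0, 0.0, -1.0]
```

Rounding to two and five decimals matches the augmentor's output format for positions and quaternions respectively.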
diff --git a/src/train_sft/src/llamafactory/data/group_layout_collator.py b/src/train_sft/src/llamafactory/data/group_layout_collator.py
new file mode 100644
index 0000000000000000000000000000000000000000..5c3a30a68953166c9fa606adf16fce2506c6dc27
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/group_layout_collator.py
@@ -0,0 +1,248 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import torch
+from dataclasses import dataclass
+from typing import TYPE_CHECKING, Any, Literal, Optional, List
+from collections import defaultdict
+
+from transformers import DataCollatorForSeq2Seq
+from ..extras.constants import IGNORE_INDEX
+from .group_layout_augmentor_new import GroupLayoutAugmentor
+from .processor.processor_utils import infer_seqlen
+
+if TYPE_CHECKING:
+ from transformers import PreTrainedTokenizer, ProcessorMixin
+ from .template import Template
+ from ..hparams import DataArguments
+
+
+@dataclass
+class GroupLayoutDataCollator(DataCollatorForSeq2Seq):
+ """Data collator dedicated to group layout data.
+
+ Supports dynamic data augmentation and tokenization at collation time.
+ """
+
+ template: "Template"
+ processor: Optional["ProcessorMixin"] = None
+ data_args: Optional["DataArguments"] = None
+ block_diag_attn: bool = False
+ attn_implementation: Literal["eager", "sdpa", "flash_attention_2"] = "eager"
+ compute_dtype: torch.dtype = torch.float32
+
+ def __post_init__(self):
+ super().__post_init__()
+
+ # Initialize the data augmentor
+ self.augmentor = None
+ if (self.data_args and
+ getattr(self.data_args, 'enable_data_augmentation', False)):
+
+ augmentation_types = getattr(self.data_args, 'augmentation_types', '').split(',')
+ augmentation_types = [t.strip() for t in augmentation_types if t.strip()]
+
+ if augmentation_types:
+ self.augmentor = GroupLayoutAugmentor(
+ augmentation_types=augmentation_types,
+ augmentation_ratio=getattr(self.data_args, 'augmentation_ratio', 0.85),
+ seed=getattr(self.data_args, 'augmentation_seed', 42)
+ )
+
+ def __call__(self, features: List[dict[str, Any]]) -> dict[str, torch.Tensor]:
+ """Process a batch: apply data augmentation -> tokenize -> build the batch."""
+ batch_input_ids = []
+ batch_attention_masks = []
+ batch_labels = []
+
+ for feature in features:
+ # Get the raw conversation data
+ conversation = feature.get("conversations")
+ if not conversation:
+ continue
+
+ # Apply data augmentation (if enabled and in training mode)
+ augmented_conversation = self._apply_augmentation_if_needed(conversation)
+
+ # Encode the conversation into input_ids and labels
+ input_ids, labels = self._encode_conversation(augmented_conversation)
+
+ if input_ids and labels:
+ batch_input_ids.append(torch.tensor(input_ids, dtype=torch.long))
+ batch_labels.append(torch.tensor(labels, dtype=torch.long))
+ # the attention_mask is generated automatically by DataCollatorForSeq2Seq
+
+ if not batch_input_ids:
+ return {}
+
+ # Build features in the format expected by the parent class
+ formatted_features = []
+ for input_ids, labels in zip(batch_input_ids, batch_labels):
+ formatted_features.append({
+ "input_ids": input_ids,
+ "labels": labels,
+ })
+
+ # Let the parent class handle padding and related processing
+ batch = super().__call__(formatted_features)
+
+ # Handle the 4D attention mask
+ if self.block_diag_attn and self.attn_implementation != "flash_attention_2":
+ from .collator import prepare_4d_attention_mask
+ batch["attention_mask"] = prepare_4d_attention_mask(
+ batch["attention_mask"], self.compute_dtype
+ )
+
+ # Cast floating-point tensors to the compute dtype
+ for key, value in batch.items():
+ if torch.is_tensor(value) and torch.is_floating_point(value):
+ batch[key] = value.to(self.compute_dtype)
+
+ return batch
+
+ def _apply_augmentation_if_needed(self, conversation: dict[str, Any]) -> dict[str, Any]:
+ """Apply data augmentation when enabled and in training mode."""
+ if not self.augmentor or not self._is_training_mode():
+ return conversation
+
+ try:
+ # Extract the prompt and response
+ prompt_messages = conversation.get("prompt", [])
+ response_messages = conversation.get("response", [])
+
+ if not prompt_messages or not response_messages:
+ return conversation
+
+ # Build the full prompt text
+ prompt_text = self._messages_to_text(prompt_messages)
+ response_text = response_messages[0].get("content", "") if response_messages else ""
+
+ # Apply data augmentation
+ augmented_prompt, augmented_response = self.augmentor.augment_sample(
+ prompt_text, response_text
+ )
+
+ # Update the conversation if anything changed
+ if augmented_prompt != prompt_text or augmented_response != response_text:
+ new_conversation = conversation.copy()
+
+ # Update the prompt (usually only the user message changes)
+ if augmented_prompt != prompt_text:
+ new_prompt = []
+ for msg in prompt_messages:
+ if msg.get("role") == "user":
+ new_msg = msg.copy()
+ new_msg["content"] = augmented_prompt
+ new_prompt.append(new_msg)
+ else:
+ new_prompt.append(msg)
+ new_conversation["prompt"] = new_prompt
+
+ # Update the response
+ if augmented_response != response_text:
+ new_response = []
+ for msg in response_messages:
+ new_msg = msg.copy()
+ new_msg["content"] = augmented_response
+ new_response.append(new_msg)
+ new_conversation["response"] = new_response
+
+ return new_conversation
+
+ except Exception as e:
+ print(f"Data augmentation failed: {e}")
+
+ return conversation
+
+ def _messages_to_text(self, messages: List[dict[str, str]]) -> str:
+ """Join the message contents into a single text string."""
+ texts = []
+ for msg in messages:
+ content = msg.get("content", "")
+ if content:
+ texts.append(content)
+ return " ".join(texts)
+
+ def _is_training_mode(self) -> bool:
+ """Check whether we are in training mode."""
+ # Training mode can be detected in several ways; using the gradient
+ # state is a simple heuristic that may need adjustment in practice.
+ return torch.is_grad_enabled()
+
+ def _encode_conversation(self, conversation: dict[str, Any]) -> tuple[List[int], List[int]]:
+ """Encode a conversation into input_ids and labels."""
+ try:
+ prompt_messages = conversation.get("prompt", [])
+ response_messages = conversation.get("response", [])
+ system_message = conversation.get("system")
+ tools = conversation.get("tools")
+
+ # Handle multimodal inputs
+ images = conversation.get("images", [])
+ videos = conversation.get("videos", [])
+ audios = conversation.get("audios", [])
+
+ # Encode with the template
+ messages = self.template.mm_plugin.process_messages(
+ prompt_messages + response_messages, images, videos, audios, self.processor
+ )
+
+ input_ids, labels = self.template.mm_plugin.process_token_ids(
+ [], [], images, videos, audios, self.tokenizer, self.processor
+ )
+
+ encoded_pairs = self.template.encode_multiturn(
+ self.tokenizer, messages, system_message, tools
+ )
+
+ total_length = len(input_ids) + (1 if self.template.efficient_eos else 0)
+
+ for turn_idx, (source_ids, target_ids) in enumerate(encoded_pairs):
+ if total_length >= self.data_args.cutoff_len:
+ break
+
+ source_len, target_len = infer_seqlen(
+ len(source_ids), len(target_ids),
+ self.data_args.cutoff_len - total_length
+ )
+
+ source_ids = source_ids[:source_len]
+ target_ids = target_ids[:target_len]
+ total_length += source_len + target_len
+
+ # Build the labels (mask prompt tokens unless training on the prompt)
+ if self.data_args.train_on_prompt:
+ source_label = source_ids
+ elif self.template.efficient_eos and turn_idx != 0:
+ source_label = [self.tokenizer.eos_token_id] + [IGNORE_INDEX] * (source_len - 1)
+ else:
+ source_label = [IGNORE_INDEX] * source_len
+
+ target_label = target_ids
+
+ input_ids += source_ids + target_ids
+ labels += source_label + target_label
+
+ if self.template.efficient_eos:
+ input_ids += [self.tokenizer.eos_token_id]
+ labels += [self.tokenizer.eos_token_id]
+
+ return input_ids, labels
+
+ except Exception as e:
+ print(f"Failed to encode conversation: {e}")
+ return [], []
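The label construction in `_encode_conversation` follows the standard supervised fine-tuning recipe: prompt tokens are masked with `IGNORE_INDEX` so only response tokens contribute to the loss. A minimal standalone sketch of that masking (the `build_labels` helper name is illustrative; `IGNORE_INDEX = -100` matches the value ignored by transformers' loss functions):

```python
IGNORE_INDEX = -100  # positions with this label are skipped by the loss

def build_labels(source_ids, target_ids, train_on_prompt=False):
    # Mask the prompt (source) tokens unless we explicitly train on them.
    source_label = list(source_ids) if train_on_prompt else [IGNORE_INDEX] * len(source_ids)
    return list(source_ids) + list(target_ids), source_label + list(target_ids)

input_ids, labels = build_labels([1, 2, 3], [4, 5])
print(labels)  # [-100, -100, -100, 4, 5]
```

With `train_on_prompt=True` the labels simply mirror the input ids, so the loss covers every position.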
diff --git a/src/train_sft/src/llamafactory/data/loader.py b/src/train_sft/src/llamafactory/data/loader.py
new file mode 100644
index 0000000000000000000000000000000000000000..accb5ad55e9d8b24ff26dc7534f2a4ae936400fb
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/loader.py
@@ -0,0 +1,379 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+from typing import TYPE_CHECKING, Literal, Optional, Union
+
+import numpy as np
+from datasets import Dataset, load_dataset, load_from_disk
+import csv
+
+from ..extras import logging
+from ..extras.constants import FILEEXT2TYPE
+from ..extras.misc import check_version, has_tokenized_data
+from .converter import align_dataset
+from .data_utils import get_dataset_module, merge_dataset, read_cloud_json, split_dataset
+from .parser import get_dataset_list
+from .processor import (
+ FeedbackDatasetProcessor,
+ PackedSupervisedDatasetProcessor,
+ PairwiseDatasetProcessor,
+ PretrainDatasetProcessor,
+ SupervisedDatasetProcessor,
+ UnsupervisedDatasetProcessor,
+)
+
+
+if TYPE_CHECKING:
+ from datasets import Dataset, IterableDataset
+ from transformers import PreTrainedTokenizer, ProcessorMixin, Seq2SeqTrainingArguments
+
+ from ..hparams import DataArguments, ModelArguments
+ from .data_utils import DatasetModule
+ from .parser import DatasetAttr
+ from .processor import DatasetProcessor
+ from .template import Template
+
+
+logger = logging.get_logger(__name__)
+
+
+def _load_single_dataset(
+ dataset_attr: "DatasetAttr",
+ model_args: "ModelArguments",
+ data_args: "DataArguments",
+ training_args: "Seq2SeqTrainingArguments",
+) -> Union["Dataset", "IterableDataset"]:
+ r"""Load a single dataset and align it to the standard format."""
+ logger.info_rank0(f"Loading dataset {dataset_attr}...")
+ data_path, data_name, data_dir, data_files = None, None, None, None
+ if dataset_attr.load_from in ["hf_hub", "ms_hub", "om_hub"]:
+ data_path = dataset_attr.dataset_name
+ data_name = dataset_attr.subset
+ data_dir = dataset_attr.folder
+
+ elif dataset_attr.load_from == "script":
+ data_path = os.path.join(data_args.dataset_dir, dataset_attr.dataset_name)
+ data_name = dataset_attr.subset
+ data_dir = dataset_attr.folder
+
+ elif dataset_attr.load_from == "cloud_file":
+ data_path = dataset_attr.dataset_name
+
+ elif dataset_attr.load_from == "file":
+ data_files = []
+ local_path = os.path.join(data_args.dataset_dir, dataset_attr.dataset_name)
+ if os.path.isdir(local_path): # is directory
+ for file_name in os.listdir(local_path):
+ data_files.append(os.path.join(local_path, file_name))
+ elif os.path.isfile(local_path): # is file
+ data_files.append(local_path)
+ else:
+ raise ValueError(f"File {local_path} not found.")
+
+ data_path = FILEEXT2TYPE.get(os.path.splitext(data_files[0])[-1][1:], None)
+ if data_path is None:
+ raise ValueError("Allowed file types: {}.".format(",".join(FILEEXT2TYPE.keys())))
+
+ if any(data_path != FILEEXT2TYPE.get(os.path.splitext(data_file)[-1][1:], None) for data_file in data_files):
+ raise ValueError("File types should be identical.")
+ else:
+ raise NotImplementedError(f"Unknown load type: {dataset_attr.load_from}.")
+
+ if dataset_attr.load_from == "ms_hub":
+ check_version("modelscope>=1.14.0", mandatory=True)
+ from modelscope import MsDataset # type: ignore
+ from modelscope.utils.config_ds import MS_DATASETS_CACHE # type: ignore
+
+ cache_dir = model_args.cache_dir or MS_DATASETS_CACHE
+ dataset = MsDataset.load(
+ dataset_name=data_path,
+ subset_name=data_name,
+ data_dir=data_dir,
+ data_files=data_files,
+ split=dataset_attr.split,
+ cache_dir=cache_dir,
+ token=model_args.ms_hub_token,
+ use_streaming=data_args.streaming,
+ )
+ if isinstance(dataset, MsDataset):
+ dataset = dataset.to_hf_dataset()
+
+ elif dataset_attr.load_from == "om_hub":
+ check_version("openmind>=0.8.0", mandatory=True)
+ from openmind import OmDataset # type: ignore
+ from openmind.utils.hub import OM_DATASETS_CACHE # type: ignore
+
+ cache_dir = model_args.cache_dir or OM_DATASETS_CACHE
+ dataset = OmDataset.load_dataset(
+ path=data_path,
+ name=data_name,
+ data_dir=data_dir,
+ data_files=data_files,
+ split=dataset_attr.split,
+ cache_dir=cache_dir,
+ token=model_args.om_hub_token,
+ streaming=data_args.streaming,
+ )
+ elif dataset_attr.load_from == "cloud_file":
+ dataset = Dataset.from_list(read_cloud_json(data_path), split=dataset_attr.split)
+ else:
+ if data_path == "json" and dataset_attr.load_from == "file" and len(data_files) == 1:
+ # custom: load a big json dict with a specific top-level field (e.g., "data")
+ import json as _json
+
+ with open(data_files[0], "r", encoding="utf-8") as f:
+ obj = _json.load(f)
+ data_list = obj.get(dataset_attr.field, None) if isinstance(obj, dict) else None
+ if isinstance(data_list, list):
+ dataset = Dataset.from_list(data_list, split=dataset_attr.split)
+ else:
+ dataset = load_dataset(
+ path=data_path,
+ name=data_name,
+ data_dir=data_dir,
+ data_files=data_files,
+ split=dataset_attr.split,
+ cache_dir=model_args.cache_dir,
+ token=model_args.hf_hub_token,
+ num_proc=data_args.preprocessing_num_workers,
+ trust_remote_code=model_args.trust_remote_code,
+ streaming=data_args.streaming and dataset_attr.load_from != "file",
+ )
+ else:
+ dataset = load_dataset(
+ path=data_path,
+ name=data_name,
+ data_dir=data_dir,
+ data_files=data_files,
+ split=dataset_attr.split,
+ cache_dir=model_args.cache_dir,
+ token=model_args.hf_hub_token,
+ num_proc=data_args.preprocessing_num_workers,
+ trust_remote_code=model_args.trust_remote_code,
+ streaming=data_args.streaming and dataset_attr.load_from != "file",
+ )
+ if data_args.streaming and dataset_attr.load_from == "file":
+ dataset = dataset.to_iterable_dataset(num_shards=training_args.dataloader_num_workers)
+
+ # Apply split filtering - either by split_map CSV or by split field in data
+ target_split = getattr(dataset_attr, "split", "train").lower()
+
+ if getattr(dataset_attr, "split_map", None):
+ # Use an external CSV file as the id -> split mapping
+ split_csv = dataset_attr.split_map
+ if not os.path.isabs(split_csv):
+ split_csv = os.path.join(data_args.dataset_dir, split_csv)
+ id2split: dict[str, str] = {}
+ with open(split_csv, "r", encoding="utf-8") as f:
+ reader = csv.reader(f)
+ for row in reader:
+ if len(row) < 2:  # skip empty or malformed rows
+ continue
+ # Expected format: id,split
+ key, sp = row[0].strip(), row[1].strip().lower()
+ id2split[key] = sp
+
+ def _matches_split_csv(ex):
+ # Look up the example's split in the CSV mapping; the "id" field
+ # name is an assumption that may need adjusting for the dataset schema.
+ return id2split.get(str(ex.get("id", "")), "") == target_split
+
+ dataset = dataset.filter(_matches_split_csv)
+ print(f"[split_map] target split: {target_split}, {len(dataset)} examples after filtering")
+
+ elif target_split:
+ # Filter directly on the split field in the data
+ def _matches_split_field(ex):
+ return ex.get("split", "") == target_split
+
+ original_len = len(dataset)
+ dataset = dataset.filter(_matches_split_field)
+ print(f"[split field] target split: '{target_split}', filtered {original_len} -> {len(dataset)} examples")
+
+ return align_dataset(dataset, dataset_attr, data_args, training_args)
+
+
+def _get_merged_dataset(
+ dataset_names: Optional[list[str]],
+ model_args: "ModelArguments",
+ data_args: "DataArguments",
+ training_args: "Seq2SeqTrainingArguments",
+ stage: Literal["pt", "sft", "rm", "ppo", "kto"],
+ return_dict: bool = False,
+) -> Optional[Union["Dataset", "IterableDataset", dict[str, "Dataset"]]]:
+ r"""Return the merged datasets in the standard format."""
+ if dataset_names is None:
+ return None
+
+ datasets = {}
+ for dataset_name, dataset_attr in zip(dataset_names, get_dataset_list(dataset_names, data_args.dataset_dir)):
+ if (stage == "rm" and dataset_attr.ranking is False) or (stage != "rm" and dataset_attr.ranking is True):
+ raise ValueError("The dataset is not applicable in the current training stage.")
+
+ datasets[dataset_name] = _load_single_dataset(dataset_attr, model_args, data_args, training_args)
+
+ if return_dict:
+ return datasets
+ else:
+ return merge_dataset(list(datasets.values()), data_args, seed=training_args.seed)
+
+
+def _get_dataset_processor(
+ data_args: "DataArguments",
+ stage: Literal["pt", "sft", "rm", "ppo", "kto"],
+ template: "Template",
+ tokenizer: "PreTrainedTokenizer",
+ processor: Optional["ProcessorMixin"],
+ do_generate: bool = False,
+) -> "DatasetProcessor":
+ r"""Return the corresponding dataset processor."""
+ if stage == "pt":
+ dataset_processor_class = PretrainDatasetProcessor
+ elif stage == "sft" and not do_generate:
+ # Use RawDatasetProcessor when data augmentation is enabled
+ if getattr(data_args, 'enable_data_augmentation', False):
+ from .processor import RawDatasetProcessor
+ dataset_processor_class = RawDatasetProcessor
+ elif data_args.packing:
+ if data_args.neat_packing: # hack datasets to have int32 attention mask
+ from datasets.arrow_writer import OptimizedTypedSequence, TypedSequence
+
+ def __init__(self, data, **kwargs):
+ return TypedSequence.__init__(
+ self,
+ data,
+ type=kwargs.pop("type", None),
+ try_type=kwargs.pop("try_type", None),
+ optimized_int_type=kwargs.pop("optimized_int_type", None),
+ )
+
+ OptimizedTypedSequence.__init__ = __init__
+ dataset_processor_class = PackedSupervisedDatasetProcessor
+ else:
+ dataset_processor_class = SupervisedDatasetProcessor
+
+ elif stage == "rm":
+ dataset_processor_class = PairwiseDatasetProcessor
+ elif stage == "kto":
+ dataset_processor_class = FeedbackDatasetProcessor
+ else:
+ dataset_processor_class = UnsupervisedDatasetProcessor
+
+ return dataset_processor_class(template=template, tokenizer=tokenizer, processor=processor, data_args=data_args)
+
+
+def _get_preprocessed_dataset(
+ dataset: Optional[Union["Dataset", "IterableDataset"]],
+ data_args: "DataArguments",
+ training_args: "Seq2SeqTrainingArguments",
+ stage: Literal["pt", "sft", "rm", "ppo", "kto"],
+ template: "Template",
+ tokenizer: "PreTrainedTokenizer",
+ processor: Optional["ProcessorMixin"] = None,
+ is_eval: bool = False,
+) -> Optional[Union["Dataset", "IterableDataset"]]:
+ r"""Preprocess the dataset, including format checking and tokenization."""
+ if dataset is None:
+ return None
+
+ dataset_processor = _get_dataset_processor(
+ data_args, stage, template, tokenizer, processor, do_generate=(training_args.predict_with_generate and is_eval)
+ )
+ column_names = list(next(iter(dataset)).keys())
+ kwargs = {}
+ if not data_args.streaming:
+ kwargs = dict(
+ num_proc=data_args.preprocessing_num_workers,
+ load_from_cache_file=(not data_args.overwrite_cache) or (training_args.local_process_index != 0),
+ desc="Running tokenizer on dataset",
+ )
+
+ dataset = dataset.map(
+ dataset_processor.preprocess_dataset,
+ batched=True,
+ batch_size=data_args.preprocessing_batch_size,
+ remove_columns=column_names,
+ **kwargs,
+ )
+
+ if training_args.should_log:
+ try:
+ print("eval example:" if is_eval else "training example:")
+ dataset_processor.print_data_example(next(iter(dataset)))
+ except StopIteration:
+ if stage == "pt":
+ raise RuntimeError("Cannot find sufficient samples, consider increasing dataset size.")
+ else:
+ raise RuntimeError("Cannot find valid samples, check `data/README.md` for the data format.")
+
+ return dataset
+
+
+def get_dataset(
+ template: "Template",
+ model_args: "ModelArguments",
+ data_args: "DataArguments",
+ training_args: "Seq2SeqTrainingArguments",
+ stage: Literal["pt", "sft", "rm", "ppo", "kto"],
+ tokenizer: "PreTrainedTokenizer",
+ processor: Optional["ProcessorMixin"] = None,
+) -> "DatasetModule":
+ r"""Get the train dataset and optionally gets the evaluation dataset."""
+ # Load tokenized dataset if path exists
+ if data_args.tokenized_path is not None:
+ if has_tokenized_data(data_args.tokenized_path):
+ logger.warning_rank0("Loading dataset from disk will ignore other data arguments.")
+ tokenized_data = load_from_disk(data_args.tokenized_path)
+ dataset_module = get_dataset_module(tokenized_data)
+ if data_args.streaming:
+ dataset_module["train_dataset"] = dataset_module["train_dataset"].to_iterable_dataset()
+
+ logger.info_rank0(f"Loaded tokenized dataset from {data_args.tokenized_path}.")
+ return dataset_module
+
+ if data_args.streaming:
+ raise ValueError("Turn off `streaming` when saving dataset to disk.")
+
+ # Load and preprocess dataset
+ with training_args.main_process_first(desc="load dataset", local=(not data_args.data_shared_file_system)):
+ dataset = _get_merged_dataset(data_args.dataset, model_args, data_args, training_args, stage)
+ eval_dataset = _get_merged_dataset(
+ data_args.eval_dataset,
+ model_args,
+ data_args,
+ training_args,
+ stage,
+ return_dict=data_args.eval_on_each_dataset,
+ )
+
+ with training_args.main_process_first(desc="pre-process dataset", local=(not data_args.data_shared_file_system)):
+ dataset = _get_preprocessed_dataset(
+ dataset, data_args, training_args, stage, template, tokenizer, processor, is_eval=False
+ )
+ if isinstance(eval_dataset, dict):
+ for eval_name, eval_data in eval_dataset.items():
+ eval_dataset[eval_name] = _get_preprocessed_dataset(
+ eval_data, data_args, training_args, stage, template, tokenizer, processor, is_eval=True
+ )
+ else:
+ eval_dataset = _get_preprocessed_dataset(
+ eval_dataset, data_args, training_args, stage, template, tokenizer, processor, is_eval=True
+ )
+
+ dataset_dict = split_dataset(dataset, eval_dataset, data_args, seed=training_args.seed)
+ if data_args.tokenized_path is not None: # save tokenized dataset to disk
+ if training_args.should_save:
+ dataset_dict.save_to_disk(data_args.tokenized_path)
+ logger.info_rank0(f"Tokenized dataset is saved at {data_args.tokenized_path}.")
+ logger.info_rank0(f"Please launch the training with `tokenized_path: {data_args.tokenized_path}`.")
+
+ return get_dataset_module(dataset_dict)
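The CSV-backed split filtering in `_load_single_dataset` can be exercised in isolation. A minimal sketch of the id -> split mapping it intends (standalone; the `id` column name and the `load_split_map` helper are illustrative assumptions):

```python
import csv
import io

def load_split_map(csv_text):
    # Parse "id,split" rows into a mapping; malformed rows are skipped.
    id2split = {}
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) < 2:
            continue
        id2split[row[0].strip()] = row[1].strip().lower()
    return id2split

id2split = load_split_map("a,train\nb,val\n")
examples = [{"id": "a"}, {"id": "b"}]
kept = [ex for ex in examples if id2split.get(ex["id"]) == "train"]
print([ex["id"] for ex in kept])  # ['a']
```

In the loader itself the equivalent predicate is passed to `dataset.filter(...)`, so only examples mapped to the requested split survive.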
diff --git a/src/train_sft/src/llamafactory/data/mm_plugin.py b/src/train_sft/src/llamafactory/data/mm_plugin.py
new file mode 100644
index 0000000000000000000000000000000000000000..be8cea9d6ca8480de2e7172a5f4023e583f54b23
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/mm_plugin.py
@@ -0,0 +1,1912 @@
+# Copyright 2025 HuggingFace Inc. and the LlamaFactory team.
+#
+# This code is inspired by the HuggingFace's Transformers library.
+# https://github.com/huggingface/transformers/blob/v4.40.0/src/transformers/models/llava/processing_llava.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import inspect
+import math
+import os
+import re
+from copy import deepcopy
+from dataclasses import dataclass
+from io import BytesIO
+from typing import TYPE_CHECKING, BinaryIO, Literal, Optional, TypedDict, Union
+
+import numpy as np
+import torch
+from transformers.image_utils import get_image_size, is_valid_image, to_numpy_array
+from transformers.models.mllama.processing_mllama import (
+ convert_sparse_cross_attention_mask_to_dense,
+ get_cross_attention_token_mask,
+)
+from typing_extensions import override
+
+from ..extras.constants import AUDIO_PLACEHOLDER, IGNORE_INDEX, IMAGE_PLACEHOLDER, VIDEO_PLACEHOLDER
+from ..extras.packages import (
+ is_librosa_available,
+ is_pillow_available,
+ is_pyav_available,
+ is_transformers_version_greater_than,
+)
+
+
+if is_librosa_available():
+ import librosa
+
+
+if is_pillow_available():
+ from PIL import Image
+ from PIL.Image import Image as ImageObject
+
+
+if is_pyav_available():
+ import av
+
+
+if is_transformers_version_greater_than("4.52.0"):
+ from transformers.image_utils import make_flat_list_of_images
+ from transformers.video_utils import make_batched_videos
+else:
+ from transformers.image_utils import make_batched_videos, make_flat_list_of_images
+
+
+if TYPE_CHECKING:
+ from av.stream import Stream
+ from numpy.typing import NDArray
+ from transformers import PreTrainedTokenizer, ProcessorMixin
+ from transformers.feature_extraction_sequence_utils import SequenceFeatureExtractor
+ from transformers.image_processing_utils import BaseImageProcessor
+
+ class EncodedImage(TypedDict):
+ path: Optional[str]
+ bytes: Optional[bytes]
+
+ ImageInput = Union[str, bytes, EncodedImage, BinaryIO, ImageObject]
+ VideoInput = Union[str, BinaryIO, list[list[ImageInput]]]
+ AudioInput = Union[str, BinaryIO, NDArray]
+
+ class MMProcessor(ProcessorMixin):
+ patch_size: int
+ image_seq_length: int
+ num_additional_image_tokens: int
+ vision_feature_select_strategy: Literal["default", "full"]
+
+ def _get_number_of_features(self, orig_height: int, orig_width: int, height: int, width: int) -> int:
+ pass
+
+
+def _get_paligemma_token_type_ids(imglens: list[int], seqlens: list[int], processor: "MMProcessor") -> list[list[int]]:
+ r"""Get paligemma token type ids for computing loss.
+
+ It is slightly different with the original token type ids where the prompt part is 0.
+
+ Returns:
+ batch_token_type_ids: shape (batch_size, seq_length)
+
+ """
+ batch_token_type_ids = []
+ for imglen, seqlen in zip(imglens, seqlens):
+ image_seqlen = imglen * processor.image_seq_length
+ batch_token_type_ids.append([0] * image_seqlen + [1] * (seqlen - image_seqlen))
+
+ return batch_token_type_ids
+
+
+def _get_gemma3_token_type_ids(batch_ids: list[list[int]], processor: "MMProcessor"):
+ r"""Get gemma3 token type ids for computing loss.
+
+ Returns:
+ batch_token_type_ids: shape (batch_size, seq_length)
+
+ """
+ image_token_id: int = getattr(processor, "image_token_id")
+ batch_token_type_ids = []
+ for token_ids in batch_ids:
+ token_ids = np.array(token_ids)
+ token_type_ids = np.zeros_like(token_ids)
+ token_type_ids[token_ids == image_token_id] = 1
+ batch_token_type_ids.append(token_type_ids.tolist())
+
+ return batch_token_type_ids
+
+
+def _make_batched_images(images: list["ImageObject"], imglens: list[int]) -> list[list["ImageObject"]]:
+ r"""Make nested list of images."""
+ batch_images = []
+ for imglen in imglens:
+ batch_images.append(images[:imglen])
+ images = images[imglen:]
+
+ return batch_images
+
+
+def _check_video_is_nested_images(video: "VideoInput") -> bool:
+ r"""Check if the video is nested images."""
+ return isinstance(video, list) and all(isinstance(frame, (str, BinaryIO, dict)) for frame in video)
+
+
+@dataclass
+class MMPluginMixin:
+ image_token: Optional[str]
+ video_token: Optional[str]
+ audio_token: Optional[str]
+ expand_mm_tokens: bool = True
+
+ def _validate_input(
+ self,
+ processor: Optional["MMProcessor"],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ ) -> None:
+ r"""Validate if this model accepts the input modalities."""
+ image_processor: BaseImageProcessor = getattr(processor, "image_processor", None)
+ video_processor: BaseImageProcessor = getattr(
+ processor, "video_processor", getattr(processor, "image_processor", None)
+ )
+ feature_extractor: SequenceFeatureExtractor = getattr(processor, "feature_extractor", None)
+ if len(images) != 0 and self.image_token is None:
+ raise ValueError(
+ "This model does not support image input. Please check whether the correct `template` is used."
+ )
+
+ if len(videos) != 0 and self.video_token is None:
+ raise ValueError(
+ "This model does not support video input. Please check whether the correct `template` is used."
+ )
+
+ if len(audios) != 0 and self.audio_token is None:
+ raise ValueError(
+ "This model does not support audio input. Please check whether the correct `template` is used."
+ )
+
+ if self.image_token is not None and processor is None:
+ raise ValueError("Processor was not found, please check and update your model file.")
+
+ if self.image_token is not None and image_processor is None:
+ raise ValueError("Image processor was not found, please check and update your model file.")
+
+ if self.video_token is not None and video_processor is None:
+ raise ValueError("Video processor was not found, please check and update your model file.")
+
+ if self.audio_token is not None and feature_extractor is None:
+ raise ValueError("Audio feature extractor was not found, please check and update your model file.")
+
+ def _validate_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ ):
+ r"""Validate that the numbers of images, videos and audios match the numbers of placeholders in messages."""
+ num_image_tokens, num_video_tokens, num_audio_tokens = 0, 0, 0
+ for message in messages:
+ num_image_tokens += message["content"].count(IMAGE_PLACEHOLDER)
+ num_video_tokens += message["content"].count(VIDEO_PLACEHOLDER)
+ num_audio_tokens += message["content"].count(AUDIO_PLACEHOLDER)
+
+ if len(images) != num_image_tokens:
+ raise ValueError(
+ f"The number of images does not match the number of {IMAGE_PLACEHOLDER} tokens in {messages}."
+ )
+
+ if len(videos) != num_video_tokens:
+ raise ValueError(
+ f"The number of videos does not match the number of {VIDEO_PLACEHOLDER} tokens in {messages}."
+ )
+
+ if len(audios) != num_audio_tokens:
+ raise ValueError(
+ f"The number of audios does not match the number of {AUDIO_PLACEHOLDER} tokens in {messages}."
+ )
+
+ def _preprocess_image(
+ self, image: "ImageObject", image_max_pixels: int, image_min_pixels: int, **kwargs
+ ) -> "ImageObject":
+ r"""Pre-process a single image."""
+ if (image.width * image.height) > image_max_pixels:
+ resize_factor = math.sqrt(image_max_pixels / (image.width * image.height))
+ width, height = int(image.width * resize_factor), int(image.height * resize_factor)
+ image = image.resize((width, height))
+
+ if (image.width * image.height) < image_min_pixels:
+ resize_factor = math.sqrt(image_min_pixels / (image.width * image.height))
+ width, height = int(image.width * resize_factor), int(image.height * resize_factor)
+ image = image.resize((width, height))
+
+ if image.mode != "RGB":
+ image = image.convert("RGB")
+
+ return image
+
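The two resize branches above implement a pixel-budget clamp that preserves aspect ratio; the arithmetic can be sketched without PIL (the 768*768 and 32*32 budgets mirror the defaults used later in `_get_mm_inputs`):

```python
import math

def clamp_pixels(width, height, image_max_pixels, image_min_pixels):
    # downscale when over budget, then upscale when under budget,
    # preserving the aspect ratio in both cases
    if width * height > image_max_pixels:
        factor = math.sqrt(image_max_pixels / (width * height))
        width, height = int(width * factor), int(height * factor)
    if width * height < image_min_pixels:
        factor = math.sqrt(image_min_pixels / (width * height))
        width, height = int(width * factor), int(height * factor)
    return width, height

w, h = clamp_pixels(4000, 3000, image_max_pixels=768 * 768, image_min_pixels=32 * 32)
```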
+ def _get_video_sample_indices(
+ self, video_stream: "Stream", video_fps: float, video_maxlen: int, **kwargs
+ ) -> list[int]:
+ r"""Compute video sample indices according to fps."""
+ total_frames = video_stream.frames
+ if total_frames == 0: # infinite video
+ return np.linspace(0, video_maxlen - 1, video_maxlen).astype(np.int32)
+
+ sample_frames = max(1, math.floor(float(video_stream.duration * video_stream.time_base) * video_fps))
+ sample_frames = min(total_frames, video_maxlen, sample_frames)
+ return np.linspace(0, total_frames - 1, sample_frames).astype(np.int32)
+
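The sampling logic above can be exercised with a stub stream object (the frame count, duration and time base are hypothetical values, not tied to any real container):

```python
import math
from types import SimpleNamespace

import numpy as np

def get_video_sample_indices(video_stream, video_fps, video_maxlen):
    total_frames = video_stream.frames
    if total_frames == 0:  # stream without a known frame count
        return np.linspace(0, video_maxlen - 1, video_maxlen).astype(np.int32)

    # duration * time_base gives seconds; multiply by fps for the sample count
    sample_frames = max(1, math.floor(float(video_stream.duration * video_stream.time_base) * video_fps))
    sample_frames = min(total_frames, video_maxlen, sample_frames)
    return np.linspace(0, total_frames - 1, sample_frames).astype(np.int32)

# a hypothetical 10-second clip with 300 frames, sampled at 2 fps, capped at 128 frames
stream = SimpleNamespace(frames=300, duration=10, time_base=1)
indices = get_video_sample_indices(stream, video_fps=2.0, video_maxlen=128)
```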
+ def _regularize_images(self, images: list["ImageInput"], **kwargs) -> dict[str, list["ImageObject"]]:
+ r"""Regularize images to avoid errors, including reading and pre-processing."""
+ results = []
+ for image in images:
+ if isinstance(image, (str, BinaryIO)):
+ image = Image.open(image)
+ elif isinstance(image, bytes):
+ image = Image.open(BytesIO(image))
+ elif isinstance(image, dict):
+ if image["bytes"] is not None:
+ image = Image.open(BytesIO(image["bytes"]))
+ else:
+ image = Image.open(image["path"])
+
+ if not isinstance(image, ImageObject):
+ raise ValueError(f"Expect input is a list of images, but got {type(image)}.")
+
+ results.append(self._preprocess_image(image, **kwargs))
+
+ return {"images": results}
+
+ def _regularize_videos(self, videos: list["VideoInput"], **kwargs) -> dict[str, list[list["ImageObject"]]]:
+ r"""Regularize videos to avoid errors, including reading, resizing and converting."""
+ results = []
+ for video in videos:
+ frames: list[ImageObject] = []
+ if _check_video_is_nested_images(video):
+ for frame in video:
+ if not is_valid_image(frame) and not isinstance(frame, dict) and not os.path.exists(frame):
+ raise ValueError("Invalid image found in video frames.")
+ frames = video
+ else:
+ container = av.open(video, "r")
+ video_stream = next(stream for stream in container.streams if stream.type == "video")
+ sample_indices = self._get_video_sample_indices(video_stream, **kwargs)
+ container.seek(0)
+ for frame_idx, frame in enumerate(container.decode(video_stream)):
+ if frame_idx in sample_indices:
+ frames.append(frame.to_image())
+
+ frames = self._regularize_images(frames, **kwargs)["images"]
+ results.append(frames)
+
+ return {"videos": results}
+
+ def _regularize_audios(
+ self, audios: list["AudioInput"], sampling_rate: float, **kwargs
+ ) -> dict[str, Union[list["NDArray"], list[float]]]:
+ r"""Regularize audios to avoid errors, including reading and resampling."""
+ results, sampling_rates = [], []
+ for audio in audios:
+ if not isinstance(audio, np.ndarray):
+ audio, sampling_rate = librosa.load(audio, sr=sampling_rate)
+
+ results.append(audio)
+ sampling_rates.append(sampling_rate)
+
+ return {"audios": results, "sampling_rates": sampling_rates}
+
+ def _get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: "MMProcessor",
+ imglens: Optional[list[int]] = None,
+ ) -> dict[str, "torch.Tensor"]:
+ r"""Process multimodal inputs.
+
+ Returns: (llava and paligemma)
+ pixel_values: tensor with shape (B, C, H, W)
+
+ Returns: (qwen2-vl)
+ pixel_values: tensor with shape (num_patches, patch_dim)
+ image_grid_thw: tensor with shape (num_images, 3), where the three numbers are time, height, width
+ where num_patches == torch.prod(image_grid_thw)
+
+ Returns: (mllama)
+ pixel_values: tensor with shape
+ (batch_size, max_num_images, max_image_tiles, channels, tile_height, tile_width)
+ For example, (2, 1, 4, 3, 560, 560).
+ aspect_ratio_ids: tensor with shape (batch_size, max_num_images). For example, (2, 1).
+ aspect_ratio_mask: tensor with shape (batch_size, max_num_images, max_image_tiles). For example, (2, 1, 4).
+ num_tiles: list[list[int]] with shape (batch_size, num_images_in_batch). For example, (2, 1).
+
+ """
+ mm_inputs = {}
+ if len(images) != 0:
+ image_processor: BaseImageProcessor = getattr(processor, "image_processor", None)
+ images = self._regularize_images(
+ images,
+ image_max_pixels=getattr(processor, "image_max_pixels", 768 * 768),
+ image_min_pixels=getattr(processor, "image_min_pixels", 32 * 32),
+ )["images"]
+ if imglens is not None: # if imglens are provided, make batched images
+ images = _make_batched_images(images, imglens)
+
+ image_processor_kwargs = {}
+ if getattr(processor, "image_do_pan_and_scan", False): # gemma3 image processor
+ image_processor_kwargs.update(
+ {
+ "do_pan_and_scan": True,
+ "pan_and_scan_min_crop_size": 256,
+ "pan_and_scan_max_num_crops": 4,
+ "pan_and_scan_min_ratio_to_activate": 1.2,
+ }
+ )
+
+ mm_inputs.update(image_processor(images, return_tensors="pt", **image_processor_kwargs))
+
+ if len(videos) != 0:
+ video_processor: BaseImageProcessor = getattr(
+ processor, "video_processor", getattr(processor, "image_processor", None)
+ )
+ videos = self._regularize_videos(
+ videos,
+ image_max_pixels=getattr(processor, "video_max_pixels", 256 * 256),
+ image_min_pixels=getattr(processor, "video_min_pixels", 16 * 16),
+ video_fps=getattr(processor, "video_fps", 2.0),
+ video_maxlen=getattr(processor, "video_maxlen", 128),
+ )["videos"]
+ if "videos" in inspect.signature(video_processor.preprocess).parameters: # for qwen2_vl and video_llava
+ mm_inputs.update(video_processor(images=None, videos=videos, return_tensors="pt"))
+ else: # for llava_next_video
+ mm_inputs.update(video_processor(videos, return_tensors="pt"))
+
+ if len(audios) != 0:
+ feature_extractor: SequenceFeatureExtractor = getattr(processor, "feature_extractor", None)
+ audios = self._regularize_audios(
+ audios,
+ sampling_rate=getattr(processor, "audio_sampling_rate", 16000),
+ )["audios"]
+ mm_inputs.update(
+ feature_extractor(
+ audios,
+ sampling_rate=getattr(processor, "audio_sampling_rate", 16000),
+ return_attention_mask=True,
+ padding="max_length",
+ return_tensors="pt",
+ )
+ )
+ mm_inputs["feature_attention_mask"] = mm_inputs.pop("attention_mask", None) # prevent conflicts
+
+ return mm_inputs
+
+
+@dataclass
+class BasePlugin(MMPluginMixin):
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ r"""Pre-process input messages before tokenization for VLMs."""
+ self._validate_input(processor, images, videos, audios)
+ return messages
+
+ def process_token_ids(
+ self,
+ input_ids: list[int],
+ labels: Optional[list[int]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ tokenizer: "PreTrainedTokenizer",
+ processor: Optional["MMProcessor"],
+ ) -> tuple[list[int], Optional[list[int]]]:
+ r"""Pre-process token ids after tokenization for VLMs."""
+ self._validate_input(processor, images, videos, audios)
+ return input_ids, labels
+
+ def get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ imglens: list[int],
+ vidlens: list[int],
+ audlens: list[int],
+ batch_ids: list[list[int]],
+ processor: Optional["MMProcessor"],
+ ) -> dict[str, Union[list[int], "torch.Tensor"]]:
+ r"""Build batched multimodal inputs for VLMs.
+
+ Arguments:
+ images: a list of image inputs, shape (num_images,)
+ videos: a list of video inputs, shape (num_videos,)
+ audios: a list of audio inputs, shape (num_audios,)
+ imglens: number of images in each sample, shape (batch_size,)
+ vidlens: number of videos in each sample, shape (batch_size,)
+ audlens: number of audios in each sample, shape (batch_size,)
+ batch_ids: token ids of input samples, shape (batch_size, seq_len)
+ processor: a processor for pre-processing images and videos
+
+ """
+ self._validate_input(processor, images, videos, audios)
+ return self._get_mm_inputs(images, videos, audios, processor)
+
+
+@dataclass
+class Gemma3Plugin(BasePlugin):
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ num_image_tokens = 0
+ messages = deepcopy(messages)
+ boi_token: str = getattr(processor, "boi_token")
+ full_image_sequence: str = getattr(processor, "full_image_sequence")
+ image_str = full_image_sequence if self.expand_mm_tokens else boi_token
+
+ do_pan_and_scan: bool = getattr(processor, "image_do_pan_and_scan", False)
+ if do_pan_and_scan:
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+
+ for message in messages:
+ content = message["content"]
+ while IMAGE_PLACEHOLDER in content:
+ if do_pan_and_scan:
+ image_placeholder_str = (
+ "Here is the original image {{image}} and here are some crops to help you see better "
+ + " ".join(["{{image}}"] * mm_inputs["num_crops"][0][num_image_tokens])
+ )
+ else:
+ image_placeholder_str = "{{image}}"
+
+ content = content.replace(IMAGE_PLACEHOLDER, image_placeholder_str, 1)
+ num_image_tokens += 1
+
+ message["content"] = content.replace("{{image}}", image_str)
+
+ return messages
+
+ @override
+ def get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ imglens: list[int],
+ vidlens: list[int],
+ audlens: list[int],
+ batch_ids: list[list[int]],
+ processor: Optional["MMProcessor"],
+ ) -> dict[str, Union[list[int], "torch.Tensor"]]:
+ self._validate_input(processor, images, videos, audios)
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+ mm_inputs.pop("num_crops", None)
+ mm_inputs["token_type_ids"] = _get_gemma3_token_type_ids(batch_ids, processor)
+ return mm_inputs
+
+
+class Gemma3nPlugin(Gemma3Plugin):
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ messages = deepcopy(messages)
+ boi_token: str = getattr(processor, "boi_token")
+ boa_token: str = getattr(processor, "boa_token")
+ full_image_sequence: str = getattr(processor, "full_image_sequence")
+ full_audio_sequence: str = getattr(processor, "full_audio_sequence")
+ image_str = full_image_sequence if self.expand_mm_tokens else boi_token
+ audio_str = full_audio_sequence if self.expand_mm_tokens else boa_token
+
+ for message in messages:
+ content = message["content"]
+ while IMAGE_PLACEHOLDER in content:
+ content = content.replace(IMAGE_PLACEHOLDER, image_str, 1)
+
+ while AUDIO_PLACEHOLDER in content:
+ content = content.replace(AUDIO_PLACEHOLDER, audio_str, 1)
+
+ message["content"] = content
+
+ return messages
+
+
+@dataclass
+class InternVLPlugin(BasePlugin):
+ @override
+ def _get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: "ProcessorMixin",
+ **kwargs,
+ ) -> dict[str, "torch.Tensor"]:
+ image_processor: BaseImageProcessor = getattr(processor, "image_processor")
+ image_processor_kwargs = {}
+ if getattr(processor, "crop_to_patches", False):
+ image_processor_kwargs.update(
+ {
+ "crop_to_patches": True,
+ "max_patches": 12,
+ "min_patches": 1,
+ }
+ )
+
+ mm_inputs = {}
+ image_video_patches = []
+
+ if len(images) != 0:
+ images = self._regularize_images(
+ images,
+ image_max_pixels=getattr(processor, "image_max_pixels", 1024 * 1024),
+ image_min_pixels=getattr(processor, "image_min_pixels", 32 * 32),
+ )["images"]
+
+ if len(videos) != 0:
+ videos = self._regularize_videos(
+ videos,
+ image_max_pixels=getattr(processor, "video_max_pixels", 256 * 256),
+ image_min_pixels=getattr(processor, "video_min_pixels", 16 * 16),
+ video_fps=getattr(processor, "video_fps", 2.0),
+ video_maxlen=getattr(processor, "video_maxlen", 128),
+ )["videos"]
+
+ if len(images) != 0:
+ images = make_flat_list_of_images(images)
+ image_inputs = image_processor(images=images, return_tensors="pt", **image_processor_kwargs)
+ image_num_patches = image_inputs.pop("num_patches")
+ image_pixel_values = image_inputs.pop("pixel_values")
+ image_num_patches_indices = np.cumsum(image_num_patches)
+
+ if len(videos) != 0:
+ videos = make_batched_videos(videos)
+ num_frames_per_video = [len(video) for video in videos]
+ patch_indices = np.cumsum(num_frames_per_video)
+ image_processor_kwargs["crop_to_patches"] = False
+ video_inputs = image_processor(images=videos, return_tensors="pt", **image_processor_kwargs)
+ video_num_patches = video_inputs.pop("num_patches")
+ video_pixel_values = video_inputs.pop("pixel_values")
+ video_num_patches_indices = np.cumsum(video_num_patches)
+
+ # NOTE: interleaved image and video inputs are not supported
+ if len(images) != 0 and image_pixel_values is not None:
+ for i in range(len(images)):
+ start_index = image_num_patches_indices[i - 1] if i > 0 else 0
+ end_index = image_num_patches_indices[i]
+ image_video_patches.append(image_pixel_values[start_index:end_index])
+
+ if len(videos) != 0 and video_pixel_values is not None:
+ patch_indices_with_prefix = [0] + list(patch_indices)
+ for i in range(len(videos)):
+ current_patch_index = patch_indices_with_prefix[i]
+ end_patch_index = patch_indices_with_prefix[i + 1]
+ start_index = video_num_patches_indices[current_patch_index - 1] if i > 0 else 0
+ end_index = video_num_patches_indices[end_patch_index - 1]
+ image_video_patches.append(video_pixel_values[start_index:end_index])
+
+ if len(images) != 0 or len(videos) != 0:
+ mm_inputs["pixel_values"] = torch.cat(image_video_patches, dim=0)
+
+ if len(images) != 0:
+ mm_inputs.update({"image_num_patches": image_num_patches})
+
+ if len(videos) != 0:
+ mm_inputs.update({"video_patch_indices": patch_indices})
+ mm_inputs.update({"video_num_patches": video_num_patches})
+
+ return mm_inputs
+
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["ProcessorMixin"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ num_image_tokens, num_video_tokens = 0, 0
+ image_seqlen = getattr(processor, "image_seq_length") if self.expand_mm_tokens else 1
+ messages = deepcopy(messages)
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+
+ image_pixel_patch_list = mm_inputs.get("image_num_patches") # number of patches for each image
+ video_num_patches = mm_inputs.get("video_num_patches") # number of patches for all video frames
+ video_patch_indices = mm_inputs.get("video_patch_indices") # cumulative frame counts per video
+
+ for message in messages:
+ content = message["content"]
+ while IMAGE_PLACEHOLDER in content:
+ content = content.replace(
+ IMAGE_PLACEHOLDER,
+ f"<img>{'<IMG_CONTEXT>' * image_seqlen * image_pixel_patch_list[num_image_tokens]}</img>",
+ 1,
+ )
+ num_image_tokens += 1
+
+ while VIDEO_PLACEHOLDER in content:
+ current_patch_index = video_patch_indices[num_video_tokens - 1] if num_video_tokens > 0 else 0
+ end_patch_index = video_patch_indices[num_video_tokens]
+ num_patches = list(video_num_patches[current_patch_index:end_patch_index])
+ video_replaced_prompt = "\n".join(
+ f"Frame{i + 1}: <img>{'<IMG_CONTEXT>' * image_seqlen * num_patches[i]}</img>"
+ for i in range(len(num_patches))
+ )
+ content = content.replace(VIDEO_PLACEHOLDER, video_replaced_prompt, 1)
+ num_video_tokens += 1
+
+ message["content"] = content
+
+ return messages
+
+ @override
+ def get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ imglens: list[int],
+ vidlens: list[int],
+ audlens: list[int],
+ batch_ids: list[list[int]],
+ processor: Optional["ProcessorMixin"],
+ ) -> dict[str, Union[list[int], "torch.Tensor"]]:
+ self._validate_input(processor, images, videos, audios)
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+ mm_inputs.pop("image_num_patches", None)
+ mm_inputs.pop("video_patch_indices", None)
+ mm_inputs.pop("video_num_patches", None)
+ return mm_inputs
+
+
+class KimiVLPlugin(BasePlugin):
+ @override
+ def process_messages(self, messages, images, videos, audios, processor):
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ if self.expand_mm_tokens:
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+ image_grid_hws = mm_inputs.get("image_grid_hws", [])
+ else:
+ image_grid_hws = [None] * len(images)
+
+ num_image_tokens = 0
+ image_processor: BaseImageProcessor = getattr(processor, "image_processor")
+ merge_length = math.prod(image_processor.merge_kernel_size)
+ messages = deepcopy(messages)
+ for message in messages:
+ content = message["content"]
+ while IMAGE_PLACEHOLDER in content:
+ image_seqlen = image_grid_hws[num_image_tokens].prod() // merge_length if self.expand_mm_tokens else 1
+ content = content.replace(
+ IMAGE_PLACEHOLDER,
+ f"<|media_start|>image<|media_content|>{self.image_token * image_seqlen}<|media_end|>",
+ 1,
+ )
+ num_image_tokens += 1
+
+ message["content"] = content
+
+ return messages
+
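For KimiVL, each image expands to `grid.prod() // merge_length` placeholder tokens; a sketch with an assumed 2x2 merge kernel and a hypothetical 16x24 patch grid:

```python
import math

import numpy as np

# hypothetical values: a 16x24 patch grid and a 2x2 merge kernel
merge_length = math.prod((2, 2))
grid_hw = np.array([16, 24])
image_seqlen = int(grid_hw.prod()) // merge_length
```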
+
+@dataclass
+class Llama4Plugin(BasePlugin):
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ if self.expand_mm_tokens:
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+ if "pixel_values" in mm_inputs:
+ image_height, image_width = mm_inputs["pixel_values"][0].shape[-2:]
+ num_patches_per_chunk = int(
+ (image_height // processor.patch_size)
+ * (image_width // processor.patch_size)
+ // processor.downsample_ratio
+ )
+ aspect_ratios = mm_inputs.pop("aspect_ratios")
+
+ num_image_tokens = 0
+ messages = deepcopy(messages)
+ for message in messages:
+ content = message["content"]
+ if self.expand_mm_tokens:
+ placeholder_count = content.count(IMAGE_PLACEHOLDER)
+ prompt_splits = content.split(IMAGE_PLACEHOLDER)
+ new_content = []
+ for local_image_index, split_part in enumerate(prompt_splits):
+ new_content.append(split_part)
+ if local_image_index < placeholder_count:
+ tokens_for_this_image = processor._prompt_split_image(
+ aspect_ratios[num_image_tokens], num_patches_per_chunk
+ )
+ num_image_tokens += 1
+ new_content.append(tokens_for_this_image)
+
+ content = "".join(new_content)
+ else:
+ content = content.replace(IMAGE_PLACEHOLDER, self.image_token)
+
+ message["content"] = content
+
+ return messages
+
+ @override
+ def get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ imglens: list[int],
+ vidlens: list[int],
+ audlens: list[int],
+ batch_ids: list[list[int]],
+ processor: Optional["MMProcessor"],
+ ) -> dict[str, Union[list[int], "torch.Tensor"]]:
+ self._validate_input(processor, images, videos, audios)
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+ mm_inputs.pop("aspect_ratios", None)
+ return mm_inputs
+
+
+@dataclass
+class LlavaPlugin(BasePlugin):
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ messages = deepcopy(messages)
+ if self.expand_mm_tokens:
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+ if "pixel_values" in mm_inputs:
+ height, width = get_image_size(to_numpy_array(mm_inputs["pixel_values"][0]))
+ image_seqlen = (height // processor.patch_size) * (
+ width // processor.patch_size
+ ) + processor.num_additional_image_tokens
+ if processor.vision_feature_select_strategy == "default":
+ image_seqlen -= 1
+ else:
+ image_seqlen = 1
+
+ for message in messages:
+ content = message["content"]
+ while IMAGE_PLACEHOLDER in content:
+ content = content.replace(IMAGE_PLACEHOLDER, "{{image}}" * image_seqlen, 1)
+
+ message["content"] = content.replace("{{image}}", self.image_token)
+
+ return messages
+
+
+@dataclass
+class LlavaNextPlugin(BasePlugin):
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ num_image_tokens = 0
+ messages = deepcopy(messages)
+ if self.expand_mm_tokens:
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+ if "pixel_values" in mm_inputs:
+ image_sizes = iter(mm_inputs["image_sizes"].tolist())
+ height, width = get_image_size(to_numpy_array(mm_inputs["pixel_values"][0][0]))
+
+ for message in messages:
+ content = message["content"]
+ while IMAGE_PLACEHOLDER in content:
+ if self.expand_mm_tokens:
+ orig_height, orig_width = next(image_sizes)
+ image_seqlen = processor._get_number_of_features(orig_height, orig_width, height, width)
+ if processor.vision_feature_select_strategy == "default":
+ image_seqlen -= 1
+ else:
+ image_seqlen = 1
+
+ content = content.replace(IMAGE_PLACEHOLDER, "{{image}}" * image_seqlen, 1)
+ num_image_tokens += 1
+
+ message["content"] = content.replace("{{image}}", self.image_token)
+
+ return messages
+
+
+@dataclass
+class LlavaNextVideoPlugin(BasePlugin):
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ messages = deepcopy(messages)
+ if self.expand_mm_tokens:
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+ if "pixel_values" in mm_inputs:
+ image_sizes = iter(mm_inputs["image_sizes"].tolist())
+ height, width = get_image_size(to_numpy_array(mm_inputs["pixel_values"][0][0]))
+
+ for message in messages:
+ content = message["content"]
+ while IMAGE_PLACEHOLDER in content:
+ if self.expand_mm_tokens:
+ orig_height, orig_width = next(image_sizes)
+ image_seqlen = processor._get_number_of_features(orig_height, orig_width, height, width)
+ if processor.vision_feature_select_strategy == "default":
+ image_seqlen -= 1
+ else:
+ image_seqlen = 1
+
+ content = content.replace(IMAGE_PLACEHOLDER, "{{image}}" * image_seqlen, 1)
+
+ message["content"] = content.replace("{{image}}", self.image_token)
+
+ if self.expand_mm_tokens:
+ if "pixel_values_videos" in mm_inputs:
+ one_video = to_numpy_array(mm_inputs.get("pixel_values_videos")[0])
+ height, width = get_image_size(one_video[0])
+ num_frames = one_video.shape[0] # frame dim is always after batch dim
+ image_seqlen = (height // processor.patch_size) * (width // processor.patch_size)
+ video_seqlen = image_seqlen // 4 * num_frames # divide by 4 needed for avg pooling layer
+ else:
+ video_seqlen = 1
+
+ for message in messages:
+ content = message["content"]
+ while VIDEO_PLACEHOLDER in content:
+ content = content.replace(VIDEO_PLACEHOLDER, "{{video}}" * video_seqlen, 1)
+
+ message["content"] = content.replace("{{video}}", self.video_token)
+
+ return messages
+
+
+@dataclass
+class MiniCPMVPlugin(BasePlugin):
+ @override
+ def _get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: "MMProcessor",
+ **kwargs,
+ ) -> dict[str, "torch.Tensor"]:
+ image_processor: BaseImageProcessor = getattr(processor, "image_processor")
+ mm_inputs = {}
+ if len(images) != 0:
+ images = self._regularize_images(
+ images,
+ image_max_pixels=getattr(processor, "image_max_pixels", 768 * 768),
+ image_min_pixels=getattr(processor, "image_min_pixels", 32 * 32),
+ )["images"]
+ if "valid_image_nums_ls" in kwargs:
+ valid_image_nums_ls = kwargs["valid_image_nums_ls"]
+ new_images = []
+ idx = 0
+ for valid_image_nums in valid_image_nums_ls:
+ new_images.append(images[idx : idx + valid_image_nums])
+ idx += valid_image_nums
+
+ images = new_images
+
+ image_inputs = image_processor(
+ images, do_pad=True, max_slice_nums=image_processor.max_slice_nums, return_tensors="pt"
+ )
+ mm_inputs.update(image_inputs)
+
+ if len(videos) != 0:
+ videos = self._regularize_videos(
+ videos,
+ image_max_pixels=getattr(processor, "video_max_pixels", 256 * 256),
+ image_min_pixels=getattr(processor, "video_min_pixels", 16 * 16),
+ video_fps=getattr(processor, "video_fps", 2.0),
+ video_maxlen=getattr(processor, "video_maxlen", 128),
+ )["videos"]
+ video_inputs = image_processor(videos, do_pad=True, max_slice_nums=2, return_tensors="pt")
+ mm_inputs.update(video_inputs)
+
+ if len(audios) != 0:
+ audios = self._regularize_audios(
+ audios,
+ sampling_rate=getattr(processor, "audio_sampling_rate", 16000),
+ )["audios"]
+ if "valid_audio_nums_ls" in kwargs:
+ valid_audio_nums_ls = kwargs["valid_audio_nums_ls"]
+ audios_ls = []
+ idx = 0
+ for valid_audio_nums in valid_audio_nums_ls:
+ audios_ls.append(audios[idx : idx + valid_audio_nums])
+ idx += valid_audio_nums
+ else:
+ audios_ls = [audios]
+
+ audio_features, audio_feature_lens, audio_phs = processor.audio_feature_extract(
+ audios_ls,
+ chunk_input=True,
+ sampling_rate=getattr(processor, "audio_sampling_rate", 16000),
+ )
+ audio_feature_lens = [torch.tensor(audio_feature_len) for audio_feature_len in audio_feature_lens]
+ mm_inputs.update({"audio_features": audio_features, "audio_feature_lens": audio_feature_lens})
+ if kwargs.get("ret_phs", False):
+ mm_inputs.update({"audio_phs": audio_phs})
+
+ return mm_inputs
+
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ num_image_tokens, num_video_tokens, num_audio_tokens = 0, 0, 0
+ messages = deepcopy(messages)
+ image_processor: BaseImageProcessor = getattr(processor, "image_processor")
+ mm_inputs, audio_inputs = {}, {}
+ if len(images) != 0 and len(videos) != 0:
+ raise ValueError("MiniCPM-V model does not support input images and videos at the same time.")
+
+ if len(videos) != 0:
+ max_slice_nums = 2
+ use_image_id = False
+ mm_inputs = self._get_mm_inputs([], videos, [], processor)
+ else:
+ max_slice_nums = image_processor.max_slice_nums
+ use_image_id = image_processor.use_image_id
+
+ for i, message in enumerate(messages):
+ content = message["content"]
+ while IMAGE_PLACEHOLDER in content:
+ content = content.replace(IMAGE_PLACEHOLDER, "{{image}}", 1)
+ num_image_tokens += 1
+
+ while VIDEO_PLACEHOLDER in content:
+ video_seqlen = len(mm_inputs["pixel_values"][num_video_tokens]) if self.expand_mm_tokens else 1
+ content = content.replace(VIDEO_PLACEHOLDER, "{{image}}" * video_seqlen, 1)
+ num_video_tokens += 1
+
+ while AUDIO_PLACEHOLDER in content:
+ content = content.replace(AUDIO_PLACEHOLDER, "{{audio}}", 1)
+ num_audio_tokens += 1
+
+ message["content"] = content.replace("{{image}}", "(<image>./</image>)").replace(
+ "{{audio}}", "(<audio>./</audio>)"
+ )
+
+ if len(images):
+ mm_inputs = self._get_mm_inputs(images, [], [], processor)
+
+ if len(audios):
+ audio_inputs = self._get_mm_inputs([], [], audios, processor, ret_phs=True)
+
+ if self.expand_mm_tokens and mm_inputs:
+ pattern = "(<image>./</image>)"
+ image_sizes = mm_inputs["image_sizes"]
+ idx = 0
+ for index, message in enumerate(messages):
+ text = message["content"]
+ image_tags = re.findall(pattern, text)
+ text_chunks = text.split(pattern)
+ final_text = ""
+ for i in range(len(image_tags)):
+ final_text = (
+ final_text
+ + text_chunks[i]
+ + image_processor.get_slice_image_placeholder(
+ image_sizes[0][idx], idx, max_slice_nums, use_image_id
+ )
+ )
+ idx += 1
+
+ final_text += text_chunks[-1]
+ messages[index]["content"] = final_text
+
+ if self.expand_mm_tokens and audio_inputs:
+            pattern = "(<audio>./</audio>)"
+ idx = 0
+ for index, message in enumerate(messages):
+ text = message["content"]
+ audio_tags = re.findall(pattern, text)
+ text_chunks = text.split(pattern)
+ final_text = ""
+ for i in range(len(audio_tags)):
+ audio_placeholder = audio_inputs["audio_phs"][0][idx]
+ final_text = final_text + text_chunks[i] + audio_placeholder
+ idx += 1
+
+ final_text += text_chunks[-1]
+ messages[index]["content"] = final_text
+
+ return messages
+
+ @override
+ def get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ imglens: list[int],
+ vidlens: list[int],
+ audlens: list[int],
+ batch_ids: list[list[int]],
+ processor: Optional["MMProcessor"],
+ ) -> dict[str, Union[list[int], "torch.Tensor"]]:
+ self._validate_input(processor, images, videos, audios)
+ # image bound
+ image_bounds_list = []
+ valid_image_nums_ls = []
+ for i, input_ids in enumerate(batch_ids):
+ input_ids_ = torch.tensor(input_ids)
+ start_cond = (input_ids_ == processor.tokenizer.im_start_id) | (
+ input_ids_ == processor.tokenizer.slice_start_id
+ )
+ end_cond = (input_ids_ == processor.tokenizer.im_end_id) | (input_ids_ == processor.tokenizer.slice_end_id)
+ image_start_tokens = torch.where(start_cond)[0]
+ image_start_tokens += 1
+ image_end_tokens = torch.where(end_cond)[0]
+ valid_image_nums_ls.append(imglens[i])
+ image_bounds = torch.hstack(
+ [
+ image_start_tokens.unsqueeze(-1),
+ image_end_tokens.unsqueeze(-1),
+ ]
+ )
+ image_bounds_list.append(image_bounds)
+
+ mm_inputs = self._get_mm_inputs(images, videos, [], processor, valid_image_nums_ls=valid_image_nums_ls)
+ if "tgt_sizes" not in mm_inputs:
+ dummy_data = [torch.empty(0) for _ in range(len(batch_ids))]
+ mm_inputs.update({"tgt_sizes": dummy_data, "pixel_values": dummy_data, "image_sizes": dummy_data})
+
+ mm_inputs.update({"image_bound": image_bounds_list})
+
+ if len(audios) > 0:
+ # audio bound
+ audio_bounds_ls = []
+ spk_bounds_ls = []
+ valid_audio_nums_ls = []
+
+ for input_ids, audiolen in zip(batch_ids, audlens):
+ input_ids_ = torch.tensor(input_ids)
+ audio_start_idx = torch.where(input_ids_ == processor.tokenizer.audio_start_id)[0]
+ audio_end_idx = torch.where(input_ids_ == processor.tokenizer.audio_end_id)[0]
+ assert len(audio_start_idx) == len(audio_end_idx)
+ audio_bounds = torch.hstack([(audio_start_idx + 1).unsqueeze(-1), audio_end_idx.unsqueeze(-1)])
+ audio_bounds_ls.append(audio_bounds)
+ valid_audio_nums_ls.append(audiolen)
+
+ spk_start_idx = torch.where(input_ids_ == processor.tokenizer.spk_start_id)[0]
+ spk_end_idx = torch.where(input_ids_ == processor.tokenizer.spk_end_id)[0]
+ assert len(spk_start_idx) == len(spk_end_idx)
+ spk_bounds = torch.hstack([(spk_start_idx + 1).unsqueeze(-1), spk_end_idx.unsqueeze(-1)])
+ spk_bounds_ls.append(spk_bounds)
+
+ audio_inputs = self._get_mm_inputs([], [], audios, processor, valid_audio_nums_ls=valid_audio_nums_ls)
+ mm_inputs.update(audio_inputs)
+ mm_inputs.update({"audio_bounds": audio_bounds_ls, "spk_bounds": spk_bounds_ls})
+
+ return mm_inputs
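The `image_bound` computation above pairs each start marker (`im_start_id` / `slice_start_id`) with its matching end marker, recording the span of image tokens as `(index after start, index of end)`. A minimal pure-Python sketch of that pairing logic — the marker ids and token list below are made up for illustration, not the real tokenizer vocabulary:

```python
IM_START, IM_END = 101, 102  # assumed marker ids for illustration only

def image_bounds(input_ids):
    """Pair each start marker with its end marker: (index after start, index of end)."""
    starts = [i + 1 for i, tok in enumerate(input_ids) if tok == IM_START]
    ends = [i for i, tok in enumerate(input_ids) if tok == IM_END]
    return list(zip(starts, ends))

# tokens: text, <im_start>, img, img, <im_end>, text
print(image_bounds([7, 101, 5, 5, 102, 9]))  # -> [(2, 4)]
```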
+
+
+@dataclass
+class MllamaPlugin(BasePlugin):
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ num_image_tokens = 0
+ messages = deepcopy(messages)
+ for message in messages:
+ content = message["content"]
+ num_image_tokens += content.count(IMAGE_PLACEHOLDER)
+ message["content"] = content.replace(IMAGE_PLACEHOLDER, self.image_token)
+
+ return messages
+
+ @override
+ def get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ imglens: list[int],
+ vidlens: list[int],
+ audlens: list[int],
+ batch_ids: list[list[int]],
+ processor: Optional["MMProcessor"],
+ ) -> dict[str, Union[list[int], "torch.Tensor"]]:
+ self._validate_input(processor, images, videos, audios)
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor, imglens)
+ if mm_inputs:
+ num_tiles = mm_inputs.pop("num_tiles")
+ image_token_id: int = getattr(processor, "image_token_id")
+ max_image_tiles: int = getattr(processor.image_processor, "max_image_tiles")
+ cross_attention_token_mask = [
+ get_cross_attention_token_mask(input_ids, image_token_id) for input_ids in batch_ids
+ ]
+ mm_inputs["cross_attention_mask"] = torch.from_numpy(
+ convert_sparse_cross_attention_mask_to_dense(
+ cross_attention_token_mask,
+ num_tiles=num_tiles,
+ max_num_tiles=max_image_tiles,
+ length=max(len(input_ids) for input_ids in batch_ids),
+ )
+ ) # shape: (batch_size, length, max_num_images, max_num_tiles)
+
+ return mm_inputs
+
+
+@dataclass
+class PaliGemmaPlugin(BasePlugin):
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ num_image_tokens = 0
+ messages = deepcopy(messages)
+ for message in messages:
+ content = message["content"]
+ while IMAGE_PLACEHOLDER in content:
+ content = content.replace(IMAGE_PLACEHOLDER, "", 1)
+ num_image_tokens += 1
+
+ message["content"] = content
+
+ return messages
+
+ @override
+ def process_token_ids(
+ self,
+ input_ids: list[int],
+ labels: Optional[list[int]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ tokenizer: "PreTrainedTokenizer",
+ processor: Optional["MMProcessor"],
+ ) -> tuple[list[int], Optional[list[int]]]:
+ self._validate_input(processor, images, videos, audios)
+ num_images = len(images)
+ image_seqlen = processor.image_seq_length if self.expand_mm_tokens else 0 # skip mm token
+ image_token_id = tokenizer.convert_tokens_to_ids(self.image_token)
+ input_ids = [image_token_id] * num_images * image_seqlen + input_ids
+ if labels is not None:
+ labels = [IGNORE_INDEX] * num_images * image_seqlen + labels
+
+ return input_ids, labels
+
+ @override
+ def get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ imglens: list[int],
+ vidlens: list[int],
+ audlens: list[int],
+ batch_ids: list[list[int]],
+ processor: Optional["MMProcessor"],
+ ) -> dict[str, Union[list[int], "torch.Tensor"]]:
+ self._validate_input(processor, images, videos, audios)
+ seqlens = [len(input_ids) for input_ids in batch_ids]
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+ mm_inputs["token_type_ids"] = _get_paligemma_token_type_ids(imglens, seqlens, processor)
+ return mm_inputs
+
+
+@dataclass
+class PixtralPlugin(BasePlugin):
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ messages = deepcopy(messages)
+ if self.expand_mm_tokens:
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+ if "pixel_values" in mm_inputs:
+                # backward compatibility for transformers < 4.49.0
+ if isinstance(mm_inputs["image_sizes"], list):
+ image_sizes = iter(mm_inputs["image_sizes"][0])
+ else:
+ image_sizes = iter(mm_inputs["image_sizes"].tolist())
+
+ image_break_token: str = getattr(processor, "image_break_token")
+ image_end_token: str = getattr(processor, "image_end_token")
+
+ for message in messages:
+ content = message["content"]
+ while IMAGE_PLACEHOLDER in content:
+ if self.expand_mm_tokens:
+ patch_size = processor.patch_size * getattr(processor, "spatial_merge_size", 1)
+ height, width = next(image_sizes)
+ num_height_tokens = height // patch_size
+ num_width_tokens = width // patch_size
+ replace_tokens = [[self.image_token] * num_width_tokens + [image_break_token]] * num_height_tokens
+ replace_tokens = [item for sublist in replace_tokens for item in sublist] # flatten list
+ replace_tokens[-1] = image_end_token
+ replace_str = "".join(replace_tokens)
+ else:
+ replace_str = self.image_token
+
+ content = content.replace(IMAGE_PLACEHOLDER, replace_str, 1)
+
+ message["content"] = content
+
+ return messages
+
+ @override
+ def get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ imglens: list[int],
+ vidlens: list[int],
+ audlens: list[int],
+ batch_ids: list[list[int]],
+ processor: Optional["MMProcessor"],
+ ) -> dict[str, Union[list[int], "torch.Tensor"]]:
+ self._validate_input(processor, images, videos, audios)
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+        # See https://github.com/huggingface/transformers/pull/35122:
+        # since transformers 4.49.0, `image_sizes` is a mandatory input for the Pixtral vision encoder
+        # and can be passed to `LlavaForConditionalGeneration` as a parameter.
+ if not is_transformers_version_greater_than("4.49.0"):
+ mm_inputs.pop("image_sizes", None)
+ return mm_inputs
+
+
+@dataclass
+class Qwen2AudioPlugin(BasePlugin):
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ bos_token: str = getattr(processor, "audio_bos_token")
+ eos_token: str = getattr(processor, "audio_eos_token")
+ messages = deepcopy(messages)
+ if self.expand_mm_tokens:
+ mm_inputs = self._get_mm_inputs([], [], audios, processor)
+ if "feature_attention_mask" in mm_inputs:
+ audio_lengths = mm_inputs["feature_attention_mask"].sum(-1).tolist()
+
+ for message in messages:
+ content = message["content"]
+ while AUDIO_PLACEHOLDER in content:
+ if self.expand_mm_tokens:
+ audio_length = audio_lengths.pop(0)
+ input_length = (audio_length - 1) // 2 + 1
+ audio_seqlen = (input_length - 2) // 2 + 1
+ else:
+ audio_seqlen = 1
+
+ content = content.replace(
+ AUDIO_PLACEHOLDER, f"{bos_token}{self.audio_token * audio_seqlen}{eos_token}", 1
+ )
+
+ message["content"] = content
+
+ return messages
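The two integer divisions above map a raw audio feature length (mel frames from the feature extractor's attention mask) to the number of `audio_token` placeholders: each step is a ceil-style stride-2 reduction. A small sketch of that arithmetic, with an arbitrary input length:

```python
def qwen2_audio_seqlen(audio_length: int) -> int:
    # Mirrors the arithmetic in process_messages above: two stride-2
    # reductions, each written as a ceil-style integer division.
    input_length = (audio_length - 1) // 2 + 1
    return (input_length - 2) // 2 + 1

print(qwen2_audio_seqlen(100))  # -> 25
```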
+
+ @override
+ def get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ imglens: list[int],
+ vidlens: list[int],
+ audlens: list[int],
+ batch_ids: list[list[int]],
+ processor: Optional["MMProcessor"],
+ ) -> dict[str, Union[list[int], "torch.Tensor"]]:
+ self._validate_input(processor, images, videos, audios)
+ return self._get_mm_inputs(images, videos, audios, processor)
+
+
+@dataclass
+class Qwen2VLPlugin(BasePlugin):
+ @override
+ def _preprocess_image(self, image: "ImageObject", **kwargs) -> "ImageObject":
+ image = super()._preprocess_image(image, **kwargs)
+ if min(image.width, image.height) < 28:
+ width, height = max(image.width, 28), max(image.height, 28)
+ image = image.resize((width, height))
+
+ if image.width / image.height > 200:
+ width, height = image.height * 180, image.height
+ image = image.resize((width, height))
+
+ if image.height / image.width > 200:
+ width, height = image.width, image.width * 180
+ image = image.resize((width, height))
+
+ return image
+
+ @override
+ def _regularize_videos(
+ self, videos: list["VideoInput"], **kwargs
+ ) -> dict[str, Union[list[list["ImageObject"]], list[float]]]:
+ results, fps_per_video = [], []
+ for video in videos:
+ frames: list[ImageObject] = []
+ if _check_video_is_nested_images(video):
+ for frame in video:
+ if not is_valid_image(frame) and not isinstance(frame, dict) and not os.path.exists(frame):
+ raise ValueError("Invalid image found in video frames.")
+
+ frames = video
+ fps_per_video.append(kwargs.get("video_fps", 2.0))
+ else:
+ container = av.open(video, "r")
+ video_stream = next(stream for stream in container.streams if stream.type == "video")
+ sample_indices = self._get_video_sample_indices(video_stream, **kwargs)
+ container.seek(0)
+ for frame_idx, frame in enumerate(container.decode(video_stream)):
+ if frame_idx in sample_indices:
+ frames.append(frame.to_image())
+
+ if video_stream.duration is None:
+ fps_per_video.append(kwargs.get("video_fps", 2.0))
+ else:
+ fps_per_video.append(len(sample_indices) / float(video_stream.duration * video_stream.time_base))
+
+            if len(frames) % 2 != 0:  # pad to an even frame count for the temporal patch size of 2
+                frames.append(frames[-1])
+
+ frames = self._regularize_images(frames, **kwargs)["images"]
+ results.append(frames)
+
+ return {"videos": results, "fps_per_video": fps_per_video}
+
+ @override
+ def _get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: "MMProcessor",
+ ) -> dict[str, "torch.Tensor"]:
+ image_processor: BaseImageProcessor = getattr(processor, "image_processor", None)
+ mm_inputs = {}
+ if len(images) != 0:
+ images = self._regularize_images(
+ images,
+ image_max_pixels=getattr(processor, "image_max_pixels", 768 * 768),
+ image_min_pixels=getattr(processor, "image_min_pixels", 32 * 32),
+ )["images"]
+ mm_inputs.update(image_processor(images, return_tensors="pt"))
+
+ if len(videos) != 0:
+ video_data = self._regularize_videos(
+ videos,
+ image_max_pixels=getattr(processor, "video_max_pixels", 256 * 256),
+ image_min_pixels=getattr(processor, "video_min_pixels", 16 * 16),
+ video_fps=getattr(processor, "video_fps", 2.0),
+ video_maxlen=getattr(processor, "video_maxlen", 128),
+ )
+ mm_inputs.update(image_processor(images=None, videos=video_data["videos"], return_tensors="pt"))
+ temporal_patch_size: int = getattr(image_processor, "temporal_patch_size", 2)
+ if "second_per_grid_ts" in processor.model_input_names:
+ mm_inputs["second_per_grid_ts"] = [temporal_patch_size / fps for fps in video_data["fps_per_video"]]
+
+ return mm_inputs
+
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ num_image_tokens, num_video_tokens = 0, 0
+ messages = deepcopy(messages)
+ image_processor: BaseImageProcessor = getattr(processor, "image_processor")
+
+ merge_length: int = getattr(image_processor, "merge_size") ** 2
+ if self.expand_mm_tokens:
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+ image_grid_thw = mm_inputs.get("image_grid_thw", [])
+ video_grid_thw = mm_inputs.get("video_grid_thw", [])
+ else:
+ image_grid_thw = [None] * len(images)
+ video_grid_thw = [None] * len(videos)
+
+ for message in messages:
+ content = message["content"]
+ while IMAGE_PLACEHOLDER in content:
+ image_seqlen = image_grid_thw[num_image_tokens].prod() // merge_length if self.expand_mm_tokens else 1
+ content = content.replace(
+ IMAGE_PLACEHOLDER, f"<|vision_start|>{self.image_token * image_seqlen}<|vision_end|>", 1
+ )
+ num_image_tokens += 1
+
+ while VIDEO_PLACEHOLDER in content:
+ video_seqlen = video_grid_thw[num_video_tokens].prod() // merge_length if self.expand_mm_tokens else 1
+ content = content.replace(
+ VIDEO_PLACEHOLDER, f"<|vision_start|>{self.video_token * video_seqlen}<|vision_end|>", 1
+ )
+ num_video_tokens += 1
+
+ message["content"] = content
+
+ return messages
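The placeholder expansion above derives each image's token count from its `(t, h, w)` patch grid divided by `merge_size ** 2`, since the vision encoder merges each `merge_size x merge_size` block of patches into one token. A sketch of that arithmetic, with illustrative grid values:

```python
import math

def image_seqlen(grid_thw, merge_size=2):
    # Placeholder token count: all patches in the (t, h, w) grid, divided by
    # the merge_size x merge_size spatial merge applied by the vision encoder.
    return math.prod(grid_thw) // merge_size**2

print(image_seqlen((1, 32, 32)))  # -> 256
```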
+
+
+@dataclass
+class GLM4VPlugin(Qwen2VLPlugin):
+ @override
+ def _get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: "MMProcessor",
+ ) -> dict[str, "torch.Tensor"]:
+ image_processor: BaseImageProcessor = getattr(processor, "image_processor", None)
+ video_processor: BaseImageProcessor = getattr(processor, "video_processor", None)
+ mm_inputs = {}
+ if len(images) != 0:
+ images = self._regularize_images(
+ images,
+ image_max_pixels=getattr(processor, "image_max_pixels", 768 * 768),
+ image_min_pixels=getattr(processor, "image_min_pixels", 32 * 32),
+ )["images"]
+ mm_inputs.update(image_processor(images, return_tensors="pt"))
+
+ if len(videos) != 0:
+ video_data = self._regularize_videos(
+ videos,
+ image_max_pixels=getattr(processor, "video_max_pixels", 256 * 256),
+ image_min_pixels=getattr(processor, "video_min_pixels", 16 * 16),
+ video_fps=getattr(processor, "video_fps", 2.0),
+ video_maxlen=getattr(processor, "video_maxlen", 128),
+ )
+ # prepare video metadata
+ video_metadata = [
+ {"fps": 2, "duration": len(video), "total_frames": len(video)} for video in video_data["videos"]
+ ]
+ mm_inputs.update(video_processor(images=None, videos=video_data["videos"], video_metadata=video_metadata))
+
+ return mm_inputs
+
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ num_image_tokens, num_video_tokens = 0, 0
+ messages = deepcopy(messages)
+ image_processor: BaseImageProcessor = getattr(processor, "image_processor")
+
+ merge_length: int = getattr(image_processor, "merge_size") ** 2
+ if self.expand_mm_tokens:
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+ image_grid_thw = mm_inputs.get("image_grid_thw", [])
+ video_grid_thw = mm_inputs.get("video_grid_thw", [])
+            num_frames = video_grid_thw[0][0] if len(video_grid_thw) > 0 else 0  # hardcoded to the first video for now
+ timestamps = mm_inputs.get("timestamps", [])
+
+ if hasattr(timestamps, "tolist"):
+ timestamps = timestamps.tolist()
+
+ if not timestamps:
+ timestamps_list = []
+ elif isinstance(timestamps[0], list):
+ timestamps_list = timestamps[0]
+ else:
+ timestamps_list = timestamps
+
+ unique_timestamps = timestamps_list.copy()
+ selected_timestamps = unique_timestamps[:num_frames]
+ while len(selected_timestamps) < num_frames:
+ selected_timestamps.append(selected_timestamps[-1] if selected_timestamps else 0)
+
+ else:
+ image_grid_thw = [None] * len(images)
+ video_grid_thw = [None] * len(videos)
+ num_frames = 0
+ selected_timestamps = [0]
+
+ for message in messages:
+ content = message["content"]
+ while IMAGE_PLACEHOLDER in content:
+ image_seqlen = image_grid_thw[num_image_tokens].prod() // merge_length if self.expand_mm_tokens else 1
+ content = content.replace(
+ IMAGE_PLACEHOLDER, f"<|begin_of_image|>{self.image_token * image_seqlen}<|end_of_image|>", 1
+ )
+ num_image_tokens += 1
+
+ while VIDEO_PLACEHOLDER in content:
+ video_structure = ""
+ for frame_index in range(num_frames):
+ video_seqlen = (
+ video_grid_thw[num_video_tokens][1:].prod() // merge_length if self.expand_mm_tokens else 1
+ )
+ timestamp_sec = selected_timestamps[frame_index]
+ frame_structure = (
+ f"<|begin_of_image|>{self.image_token * video_seqlen}<|end_of_image|>{timestamp_sec}"
+ )
+ video_structure += frame_structure
+
+ content = content.replace(VIDEO_PLACEHOLDER, f"<|begin_of_video|>{video_structure}<|end_of_video|>", 1)
+ num_video_tokens += 1
+
+ message["content"] = content
+
+ return messages
+
+ @override
+ def get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ imglens: list[int],
+ vidlens: list[int],
+ audlens: list[int],
+ batch_ids: list[list[int]],
+ processor: Optional["ProcessorMixin"],
+ ) -> dict[str, Union[list[int], "torch.Tensor"]]:
+ self._validate_input(processor, images, videos, audios)
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+ mm_inputs.pop("timestamps", None)
+ return mm_inputs
+
+
+class Qwen2OmniPlugin(Qwen2VLPlugin):
+ @override
+ def _get_mm_inputs(
+ self,
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: "MMProcessor",
+ ) -> dict[str, "torch.Tensor"]:
+ image_processor: BaseImageProcessor = getattr(processor, "image_processor", None)
+ feature_extractor: SequenceFeatureExtractor = getattr(processor, "feature_extractor", None)
+ mm_inputs = {}
+ if len(images) != 0:
+ images = self._regularize_images(
+ images,
+ image_max_pixels=getattr(processor, "image_max_pixels", 768 * 768),
+ image_min_pixels=getattr(processor, "image_min_pixels", 32 * 32),
+ )["images"]
+ mm_inputs.update(image_processor(images, return_tensors="pt"))
+
+ if len(videos) != 0:
+ video_dict = self._regularize_videos(
+ videos,
+ image_max_pixels=getattr(processor, "video_max_pixels", 256 * 256),
+ image_min_pixels=getattr(processor, "video_min_pixels", 16 * 16),
+ video_fps=getattr(processor, "video_fps", 2.0),
+ video_maxlen=getattr(processor, "video_maxlen", 128),
+ )
+ mm_inputs.update(image_processor(images=None, videos=video_dict["videos"], return_tensors="pt"))
+ temporal_patch_size: int = getattr(image_processor, "temporal_patch_size", 2)
+ mm_inputs["video_second_per_grid"] = torch.tensor(
+ [temporal_patch_size / fps for fps in video_dict["fps_per_video"]]
+ )
+
+ if len(audios) != 0:
+ audios = self._regularize_audios(
+ audios,
+ sampling_rate=getattr(processor, "audio_sampling_rate", 16000),
+ )["audios"]
+ mm_inputs.update(
+ feature_extractor(
+ audios,
+ sampling_rate=getattr(processor, "audio_sampling_rate", 16000),
+ return_attention_mask=True,
+ padding="max_length",
+ return_tensors="pt",
+ )
+ )
+ mm_inputs["feature_attention_mask"] = mm_inputs.pop("attention_mask") # prevent conflicts
+
+ return mm_inputs
+
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ num_image_tokens, num_video_tokens, num_audio_tokens = 0, 0, 0
+ messages = deepcopy(messages)
+ image_processor: BaseImageProcessor = getattr(processor, "image_processor", None)
+
+ merge_length = processor.image_processor.merge_size**2
+ use_audio_in_video = getattr(processor, "use_audio_in_video", False)
+ if self.expand_mm_tokens:
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+ image_grid_thw = mm_inputs.get("image_grid_thw", [])
+ video_grid_thw = mm_inputs.get("video_grid_thw", [])
+ if "feature_attention_mask" in mm_inputs:
+ input_lengths = (mm_inputs["feature_attention_mask"].sum(-1).numpy() - 1) // 2 + 1
+ audio_lengths = (input_lengths - 2) // 2 + 1
+ else:
+ mm_inputs = {}
+ image_grid_thw = [None] * len(images)
+ video_grid_thw = [None] * len(videos)
+ audio_lengths = [None] * len(audios)
+
+ for message in messages:
+ content = message["content"]
+ while IMAGE_PLACEHOLDER in content:
+ image_seqlen = image_grid_thw[num_image_tokens].prod() // merge_length if self.expand_mm_tokens else 1
+ content = content.replace(
+ IMAGE_PLACEHOLDER, f"<|vision_bos|>{self.image_token * image_seqlen}<|vision_eos|>", 1
+ )
+ num_image_tokens += 1
+
+            if (
+                use_audio_in_video and len(audios) and len(videos)
+            ):  # use the audio track of the video: handle video and audio tokens together
+ if len(videos) != len(audios):
+ raise ValueError(
+ f"Number of videos ({len(videos)}) must match number of audios ({len(audios)}) when using audio in video."
+ )
+
+ while VIDEO_PLACEHOLDER in content:
+ video_pos = content.find(VIDEO_PLACEHOLDER)
+ audio_pos = content.find(AUDIO_PLACEHOLDER, video_pos)
+ if audio_pos == -1 or audio_pos < video_pos:
+ raise ValueError(
+ f"Each {VIDEO_PLACEHOLDER} must be followed by an {AUDIO_PLACEHOLDER} when using audio in video."
+ )
+
+ audio_t_index = torch.arange(audio_lengths[num_audio_tokens])
+ video_t_index = (
+ torch.arange(video_grid_thw[num_video_tokens][0])
+ .view(-1, 1, 1)
+ .expand(
+ -1,
+ video_grid_thw[num_video_tokens][1] // image_processor.merge_size,
+ video_grid_thw[num_video_tokens][2] // image_processor.merge_size,
+ )
+ .flatten()
+ * mm_inputs["video_second_per_grid"][num_video_tokens]
+                        * 25  # FIXME: hardcoded position_id_per_seconds=25
+                    ).long()
+                    t_ntoken_per_chunk = 50  # FIXME: hardcoded as position_id_per_seconds * temporal_patch_size (25 * 2)
+ video_chunk_indices = processor.get_chunked_index(video_t_index, t_ntoken_per_chunk)
+ audio_chunk_indices = processor.get_chunked_index(audio_t_index, t_ntoken_per_chunk)
+ placeholder_string = ""
+ placeholder_string += "<|vision_bos|>" + "<|audio_bos|>"
+ for j in range(max(len(video_chunk_indices), len(audio_chunk_indices))):
+ video_chunk_index = video_chunk_indices[j] if j < len(video_chunk_indices) else None
+ audio_chunk_index = audio_chunk_indices[j] if j < len(audio_chunk_indices) else None
+ if video_chunk_index is not None:
+ placeholder_string += self.video_token * (video_chunk_index[1] - video_chunk_index[0])
+
+ if audio_chunk_index is not None:
+ placeholder_string += self.audio_token * (audio_chunk_index[1] - audio_chunk_index[0])
+
+ placeholder_string += "<|audio_eos|>" + "<|vision_eos|>"
+ content = content.replace(VIDEO_PLACEHOLDER, placeholder_string, 1)
+ content = content.replace(AUDIO_PLACEHOLDER, "", 1)
+ num_audio_tokens += 1
+ num_video_tokens += 1
+ else:
+ while AUDIO_PLACEHOLDER in content:
+ audio_seqlen = audio_lengths[num_audio_tokens] if self.expand_mm_tokens else 1
+ content = content.replace(
+ AUDIO_PLACEHOLDER, f"<|audio_bos|>{self.audio_token * audio_seqlen}<|audio_eos|>", 1
+ )
+ num_audio_tokens += 1
+
+ while VIDEO_PLACEHOLDER in content:
+ video_seqlen = (
+ video_grid_thw[num_video_tokens].prod() // merge_length if self.expand_mm_tokens else 1
+ )
+ content = content.replace(
+ VIDEO_PLACEHOLDER, f"<|vision_bos|>{self.video_token * video_seqlen}<|vision_eos|>", 1
+ )
+ num_video_tokens += 1
+
+ message["content"] = content
+
+ return messages
+
+
+@dataclass
+class VideoLlavaPlugin(BasePlugin):
+ @override
+ def process_messages(
+ self,
+ messages: list[dict[str, str]],
+ images: list["ImageInput"],
+ videos: list["VideoInput"],
+ audios: list["AudioInput"],
+ processor: Optional["MMProcessor"],
+ ) -> list[dict[str, str]]:
+ self._validate_input(processor, images, videos, audios)
+ self._validate_messages(messages, images, videos, audios)
+ num_image_tokens, num_video_tokens = 0, 0
+ messages = deepcopy(messages)
+ num_frames = 0
+ if self.expand_mm_tokens:
+ mm_inputs = self._get_mm_inputs(images, videos, audios, processor)
+ if "pixel_values_images" in mm_inputs:
+ height, width = get_image_size(to_numpy_array(mm_inputs["pixel_values_images"][0]))
+ num_frames = 1
+
+ if "pixel_values_videos" in mm_inputs:
+ one_video = to_numpy_array(mm_inputs["pixel_values_videos"][0])
+ height, width = get_image_size(one_video[0])
+ num_frames = one_video.shape[0] # frame dim is always after batch dim
+
+ if "pixel_values_images" in mm_inputs or "pixel_values_videos" in mm_inputs:
+ image_seqlen = (height // processor.patch_size) * (
+ width // processor.patch_size
+ ) + processor.num_additional_image_tokens
+ video_seqlen = image_seqlen * num_frames
+ if processor.vision_feature_select_strategy == "default":
+ image_seqlen -= 1
+ else:
+ image_seqlen, video_seqlen = 1, 1
+
+ for message in messages:
+ content = message["content"]
+ while IMAGE_PLACEHOLDER in content:
+ content = content.replace(IMAGE_PLACEHOLDER, "{{image}}" * image_seqlen, 1)
+ num_image_tokens += 1
+
+ while VIDEO_PLACEHOLDER in content:
+ content = content.replace(VIDEO_PLACEHOLDER, "{{video}}" * video_seqlen, 1)
+ num_video_tokens += 1
+
+ content = content.replace("{{image}}", self.image_token)
+ message["content"] = content.replace("{{video}}", self.video_token)
+
+ return messages
+
+
+PLUGINS = {
+ "base": BasePlugin,
+ "gemma3": Gemma3Plugin,
+ "glm4v": GLM4VPlugin,
+ "gemma3n": Gemma3nPlugin,
+ "intern_vl": InternVLPlugin,
+ "kimi_vl": KimiVLPlugin,
+ "llama4": Llama4Plugin,
+ "llava": LlavaPlugin,
+ "llava_next": LlavaNextPlugin,
+ "llava_next_video": LlavaNextVideoPlugin,
+ "minicpm_v": MiniCPMVPlugin,
+ "mllama": MllamaPlugin,
+ "paligemma": PaliGemmaPlugin,
+ "pixtral": PixtralPlugin,
+ "qwen2_audio": Qwen2AudioPlugin,
+ "qwen2_omni": Qwen2OmniPlugin,
+ "qwen2_vl": Qwen2VLPlugin,
+ "video_llava": VideoLlavaPlugin,
+}
+
+
+def register_mm_plugin(name: str, plugin_class: type["BasePlugin"]) -> None:
+ r"""Register a multimodal plugin."""
+ if name in PLUGINS:
+ raise ValueError(f"Multimodal plugin {name} already exists.")
+
+ PLUGINS[name] = plugin_class
+
+
+def get_mm_plugin(
+ name: str,
+ image_token: Optional[str] = None,
+ video_token: Optional[str] = None,
+ audio_token: Optional[str] = None,
+) -> "BasePlugin":
+ r"""Get plugin for multimodal inputs."""
+ if name not in PLUGINS:
+ raise ValueError(f"Multimodal plugin `{name}` not found.")
+
+ return PLUGINS[name](image_token, video_token, audio_token)
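`register_mm_plugin` and `get_mm_plugin` above implement a plain name-to-class registry. A self-contained sketch of the same pattern — the class and plugin name below are illustrative; the real functions live in `llamafactory.data.mm_plugin` and construct plugins with image/video/audio tokens:

```python
PLUGINS: dict = {}  # stand-in for the module-level registry above

def register_mm_plugin(name, plugin_class):
    if name in PLUGINS:
        raise ValueError(f"Multimodal plugin {name} already exists.")
    PLUGINS[name] = plugin_class

def get_mm_plugin(name):
    if name not in PLUGINS:
        raise ValueError(f"Multimodal plugin `{name}` not found.")
    return PLUGINS[name]()

class MyPlugin:  # stand-in for a BasePlugin subclass
    pass

register_mm_plugin("my_plugin", MyPlugin)
assert isinstance(get_mm_plugin("my_plugin"), MyPlugin)
```

Registering the same name twice raises `ValueError`, matching the guard in the real `register_mm_plugin`.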
diff --git a/src/train_sft/src/llamafactory/data/parser.py b/src/train_sft/src/llamafactory/data/parser.py
new file mode 100644
index 0000000000000000000000000000000000000000..f95462e305fb2989973b4d8961477e889f23c5ff
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/parser.py
@@ -0,0 +1,153 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import os
+from dataclasses import dataclass
+from typing import Any, Literal, Optional, Union
+
+from huggingface_hub import hf_hub_download
+
+from ..extras.constants import DATA_CONFIG
+from ..extras.misc import use_modelscope, use_openmind
+
+
+@dataclass
+class DatasetAttr:
+ r"""Dataset attributes."""
+
+ # basic configs
+ load_from: Literal["hf_hub", "ms_hub", "om_hub", "script", "file"]
+ dataset_name: str
+ formatting: Literal["alpaca", "sharegpt"] = "alpaca"
+ ranking: bool = False
+ # extra configs
+ subset: Optional[str] = None
+ split: str = "train"
+ folder: Optional[str] = None
+ num_samples: Optional[int] = None
+ field: Optional[str] = None # for json datasets with a top-level list field
+ split_map: Optional[str] = None # path to csv with id,split mapping
+ # common columns
+ system: Optional[str] = None
+ tools: Optional[str] = None
+ images: Optional[str] = None
+ videos: Optional[str] = None
+ audios: Optional[str] = None
+ # dpo columns
+ chosen: Optional[str] = None
+ rejected: Optional[str] = None
+ kto_tag: Optional[str] = None
+ # alpaca columns
+ prompt: Optional[str] = "instruction"
+ query: Optional[str] = "input"
+ response: Optional[str] = "output"
+ history: Optional[str] = None
+ # sharegpt columns
+ messages: Optional[str] = "conversations"
+ # sharegpt tags
+ role_tag: Optional[str] = "from"
+ content_tag: Optional[str] = "value"
+ user_tag: Optional[str] = "human"
+ assistant_tag: Optional[str] = "gpt"
+ observation_tag: Optional[str] = "observation"
+ function_tag: Optional[str] = "function_call"
+ system_tag: Optional[str] = "system"
+
+ def __repr__(self) -> str:
+ return self.dataset_name
+
+ def set_attr(self, key: str, obj: dict[str, Any], default: Optional[Any] = None) -> None:
+ setattr(self, key, obj.get(key, default))
+
+ def join(self, attr: dict[str, Any]) -> None:
+ self.set_attr("formatting", attr, default="alpaca")
+ self.set_attr("ranking", attr, default=False)
+ self.set_attr("subset", attr)
+ self.set_attr("split", attr, default="train")
+ self.set_attr("folder", attr)
+ self.set_attr("num_samples", attr)
+ self.set_attr("field", attr) # new: read top-level json field if provided
+ self.set_attr("split_map", attr) # new: external split mapping file (csv)
+
+ if "columns" in attr:
+ column_names = ["prompt", "query", "response", "history", "messages", "system", "tools"]
+ column_names += ["images", "videos", "audios", "chosen", "rejected", "kto_tag"]
+ for column_name in column_names:
+ self.set_attr(column_name, attr["columns"])
+
+ if "tags" in attr:
+ tag_names = ["role_tag", "content_tag"]
+ tag_names += ["user_tag", "assistant_tag", "observation_tag", "function_tag", "system_tag"]
+ for tag in tag_names:
+ self.set_attr(tag, attr["tags"])
+
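As a standalone illustration (not part of the patch), the `set_attr`/`join` pattern above boils down to copying dict keys onto dataclass fields with defaults; `MiniAttr` here is a hypothetical stand-in, not the real `DatasetAttr`:

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class MiniAttr:
    # hypothetical stand-in for DatasetAttr with two of its fields
    formatting: str = "alpaca"
    subset: Optional[str] = None

    def set_attr(self, key: str, obj: dict[str, Any], default: Optional[Any] = None) -> None:
        # copy obj[key] onto self, falling back to the default when the key is absent
        setattr(self, key, obj.get(key, default))

    def join(self, attr: dict[str, Any]) -> None:
        self.set_attr("formatting", attr, default="alpaca")
        self.set_attr("subset", attr)


attr = MiniAttr()
attr.join({"formatting": "sharegpt"})  # "subset" is absent, so it resets to None
print(attr.formatting, attr.subset)  # sharegpt None
```

Note that `join` resets fields missing from the config dict back to their defaults, which is why every dataset entry is fully re-resolved each call.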
+
+def get_dataset_list(dataset_names: Optional[list[str]], dataset_dir: Union[str, dict]) -> list["DatasetAttr"]:
+ r"""Get the attributes of the datasets."""
+ if dataset_names is None:
+ dataset_names = []
+
+ if isinstance(dataset_dir, dict):
+ dataset_info = dataset_dir
+ elif dataset_dir == "ONLINE":
+ dataset_info = None
+ else:
+ if dataset_dir.startswith("REMOTE:"):
+ config_path = hf_hub_download(repo_id=dataset_dir[7:], filename=DATA_CONFIG, repo_type="dataset")
+ else:
+ config_path = os.path.join(dataset_dir, DATA_CONFIG)
+
+ try:
+ with open(config_path) as f:
+ dataset_info = json.load(f)
+ except Exception as err:
+ if len(dataset_names) != 0:
+ raise ValueError(f"Cannot open {config_path} due to {str(err)}.")
+
+ dataset_info = None
+
+ dataset_list: list[DatasetAttr] = []
+ for name in dataset_names:
+ if dataset_info is None: # dataset_dir is ONLINE
+ load_from = "ms_hub" if use_modelscope() else "om_hub" if use_openmind() else "hf_hub"
+ dataset_attr = DatasetAttr(load_from, dataset_name=name)
+ dataset_list.append(dataset_attr)
+ continue
+
+ if name not in dataset_info:
+ raise ValueError(f"Undefined dataset {name} in {DATA_CONFIG}.")
+
+ has_hf_url = "hf_hub_url" in dataset_info[name]
+ has_ms_url = "ms_hub_url" in dataset_info[name]
+ has_om_url = "om_hub_url" in dataset_info[name]
+
+ if has_hf_url or has_ms_url or has_om_url:
+ if has_ms_url and (use_modelscope() or not has_hf_url):
+ dataset_attr = DatasetAttr("ms_hub", dataset_name=dataset_info[name]["ms_hub_url"])
+ elif has_om_url and (use_openmind() or not has_hf_url):
+ dataset_attr = DatasetAttr("om_hub", dataset_name=dataset_info[name]["om_hub_url"])
+ else:
+ dataset_attr = DatasetAttr("hf_hub", dataset_name=dataset_info[name]["hf_hub_url"])
+ elif "script_url" in dataset_info[name]:
+ dataset_attr = DatasetAttr("script", dataset_name=dataset_info[name]["script_url"])
+ elif "cloud_file_name" in dataset_info[name]:
+ dataset_attr = DatasetAttr("cloud_file", dataset_name=dataset_info[name]["cloud_file_name"])
+ else:
+ dataset_attr = DatasetAttr("file", dataset_name=dataset_info[name]["file_name"])
+
+ dataset_attr.join(dataset_info[name])
+ dataset_list.append(dataset_attr)
+
+ return dataset_list
diff --git a/src/train_sft/src/llamafactory/data/template.py b/src/train_sft/src/llamafactory/data/template.py
new file mode 100644
index 0000000000000000000000000000000000000000..b888b6b1fc847e75f4c23f26e0b0f80b1e7e277a
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/template.py
@@ -0,0 +1,2040 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import re
+from copy import deepcopy
+from dataclasses import dataclass
+from typing import TYPE_CHECKING, Optional, Union
+
+from typing_extensions import override
+
+from ..extras import logging
+from .data_utils import Role
+from .formatter import EmptyFormatter, FunctionFormatter, StringFormatter, ToolFormatter
+from .mm_plugin import get_mm_plugin
+
+
+if TYPE_CHECKING:
+ from transformers import PreTrainedTokenizer
+
+ from ..hparams import DataArguments
+ from .formatter import SLOTS, Formatter
+ from .mm_plugin import BasePlugin
+ from .tool_utils import FunctionCall
+
+
+logger = logging.get_logger(__name__)
+
+
+@dataclass
+class Template:
+ format_user: "Formatter"
+ format_assistant: "Formatter"
+ format_system: "Formatter"
+ format_function: "Formatter"
+ format_observation: "Formatter"
+ format_tools: "Formatter"
+ format_prefix: "Formatter"
+ default_system: str
+ stop_words: list[str]
+ thought_words: tuple[str, str]
+ efficient_eos: bool
+ replace_eos: bool
+ replace_jinja_template: bool
+ enable_thinking: Optional[bool]
+ mm_plugin: "BasePlugin"
+
+ def encode_oneturn(
+ self,
+ tokenizer: "PreTrainedTokenizer",
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ ) -> tuple[list[int], list[int]]:
+ r"""Return a single pair of token ids representing prompt and response respectively."""
+ encoded_messages = self._encode(tokenizer, messages, system, tools)
+ prompt_ids = []
+ for encoded_ids in encoded_messages[:-1]:
+ prompt_ids += encoded_ids
+
+ response_ids = encoded_messages[-1]
+ return prompt_ids, response_ids
+
+ def encode_multiturn(
+ self,
+ tokenizer: "PreTrainedTokenizer",
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ ) -> list[tuple[list[int], list[int]]]:
+ r"""Return multiple pairs of token ids representing prompts and responses respectively."""
+ encoded_messages = self._encode(tokenizer, messages, system, tools)
+ return [(encoded_messages[i], encoded_messages[i + 1]) for i in range(0, len(encoded_messages), 2)]
+
+ def extract_tool(self, content: str) -> Union[str, list["FunctionCall"]]:
+ r"""Extract tool message."""
+ return self.format_tools.extract(content)
+
+ def get_stop_token_ids(self, tokenizer: "PreTrainedTokenizer") -> list[int]:
+ r"""Return stop token ids."""
+ stop_token_ids = {tokenizer.eos_token_id}
+ for token in self.stop_words:
+ stop_token_ids.add(tokenizer.convert_tokens_to_ids(token))
+
+ return list(stop_token_ids)
+
+ def add_thought(self, content: str = "") -> str:
+ r"""Add empty thought to assistant message."""
+ return f"{self.thought_words[0]}{self.thought_words[1]}" + content
+
+ def remove_thought(self, content: str) -> str:
+ r"""Remove thought from assistant message."""
+ pattern = re.compile(f"{re.escape(self.thought_words[0])}(.*?){re.escape(self.thought_words[1])}", re.DOTALL)
+ return re.sub(pattern, "", content).lstrip("\n")
+
+ def get_thought_word_ids(self, tokenizer: "PreTrainedTokenizer") -> list[int]:
+ r"""Get the token ids of thought words."""
+ return tokenizer.encode(self.add_thought(), add_special_tokens=False)
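For context, the thought-stripping regex above can be exercised in isolation. This standalone sketch assumes `<think>`/`</think>` delimiters, which is only one possible choice of `thought_words`:

```python
import re

# assumption: <think>/</think> as the thought delimiters (one common choice)
THOUGHT_OPEN, THOUGHT_CLOSE = "<think>", "</think>"


def remove_thought(content: str) -> str:
    # non-greedy match across newlines (re.DOTALL), then trim leading
    # newlines left behind once the thought span is removed
    pattern = re.compile(f"{re.escape(THOUGHT_OPEN)}(.*?){re.escape(THOUGHT_CLOSE)}", re.DOTALL)
    return re.sub(pattern, "", content).lstrip("\n")


print(remove_thought("<think>step 1\nstep 2</think>\n\nfinal answer"))  # final answer
```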
+
+ def _convert_elements_to_ids(self, tokenizer: "PreTrainedTokenizer", elements: "SLOTS") -> list[int]:
+ r"""Convert elements to token ids."""
+ token_ids = []
+ for elem in elements:
+ if isinstance(elem, str):
+ if len(elem) != 0:
+ token_ids += tokenizer.encode(elem, add_special_tokens=False)
+ elif isinstance(elem, dict):
+ token_ids += [tokenizer.convert_tokens_to_ids(elem.get("token"))]
+ elif isinstance(elem, set):
+ if "bos_token" in elem and tokenizer.bos_token_id is not None:
+ token_ids += [tokenizer.bos_token_id]
+ elif "eos_token" in elem and tokenizer.eos_token_id is not None:
+ token_ids += [tokenizer.eos_token_id]
+ else:
+ raise ValueError(f"Input must be string, set[str] or dict[str, str], got {type(elem)}")
+
+ return token_ids
+
+ def _encode(
+ self,
+ tokenizer: "PreTrainedTokenizer",
+ messages: list[dict[str, str]],
+ system: Optional[str],
+ tools: Optional[str],
+ ) -> list[list[int]]:
+ r"""Encode formatted inputs to pairs of token ids.
+
+ Turn 0: prefix + system + query resp
+ Turn t: query resp.
+ """
+ system = system or self.default_system
+ encoded_messages = []
+ for i, message in enumerate(messages):
+ elements = []
+
+ if i == 0:
+ elements += self.format_prefix.apply()
+ if system or tools:
+ tool_text = self.format_tools.apply(content=tools)[0] if tools else ""
+ elements += self.format_system.apply(content=(system + tool_text))
+
+ if message["role"] == Role.USER:
+ elements += self.format_user.apply(content=message["content"], idx=str(i // 2))
+ elif message["role"] == Role.ASSISTANT:
+ elements += self.format_assistant.apply(content=message["content"])
+ elif message["role"] == Role.OBSERVATION:
+ elements += self.format_observation.apply(content=message["content"])
+ elif message["role"] == Role.FUNCTION:
+ elements += self.format_function.apply(content=message["content"])
+ else:
+ raise NotImplementedError("Unexpected role: {}".format(message["role"]))
+
+ encoded_messages.append(self._convert_elements_to_ids(tokenizer, elements))
+
+ return encoded_messages
+
+ @staticmethod
+ def _add_or_replace_eos_token(tokenizer: "PreTrainedTokenizer", eos_token: str) -> None:
+ r"""Add or replace the eos token in the tokenizer."""
+ if tokenizer.eos_token == eos_token:
+ return
+
+ is_added = tokenizer.eos_token_id is None
+ num_added_tokens = tokenizer.add_special_tokens({"eos_token": eos_token})
+
+ if is_added:
+ logger.info_rank0(f"Add eos token: {tokenizer.eos_token}.")
+ else:
+ logger.info_rank0(f"Replace eos token: {tokenizer.eos_token}.")
+
+ if num_added_tokens > 0:
+ logger.warning_rank0("New tokens have been added, make sure `resize_vocab` is True.")
+
+ def fix_special_tokens(self, tokenizer: "PreTrainedTokenizer") -> None:
+ r"""Add eos token and pad token to the tokenizer."""
+ stop_words = self.stop_words
+ if self.replace_eos:
+ if not stop_words:
+ raise ValueError("Stop words are required to replace the EOS token.")
+
+ self._add_or_replace_eos_token(tokenizer, eos_token=stop_words[0])
+ stop_words = stop_words[1:]
+
+ if tokenizer.eos_token_id is None:
+ self._add_or_replace_eos_token(tokenizer, eos_token="<|endoftext|>")
+
+ if tokenizer.pad_token_id is None:
+ tokenizer.pad_token = tokenizer.eos_token
+ logger.info_rank0(f"Add pad token: {tokenizer.pad_token}")
+
+ if stop_words:
+ num_added_tokens = tokenizer.add_special_tokens(
+ dict(additional_special_tokens=stop_words), replace_additional_special_tokens=False
+ )
+ logger.info_rank0("Add {} to stop words.".format(",".join(stop_words)))
+ if num_added_tokens > 0:
+ logger.warning_rank0("New tokens have been added, make sure `resize_vocab` is True.")
+
+ @staticmethod
+ def _jinja_escape(content: str) -> str:
+ r"""Escape single quotes in content."""
+ return content.replace("'", r"\'")
+
+ @staticmethod
+ def _convert_slots_to_jinja(slots: "SLOTS", tokenizer: "PreTrainedTokenizer", placeholder: str = "content") -> str:
+ r"""Convert slots to jinja template."""
+ slot_items = []
+ for slot in slots:
+ if isinstance(slot, str):
+ slot_pieces = slot.split("{{content}}")
+ if slot_pieces[0]:
+ slot_items.append("'" + Template._jinja_escape(slot_pieces[0]) + "'")
+ if len(slot_pieces) > 1:
+ slot_items.append(placeholder)
+ if slot_pieces[1]:
+ slot_items.append("'" + Template._jinja_escape(slot_pieces[1]) + "'")
+ elif isinstance(slot, set): # do not use {{ eos_token }} since it may be replaced
+ if "bos_token" in slot and tokenizer.bos_token_id is not None:
+ slot_items.append("'" + tokenizer.bos_token + "'")
+ elif "eos_token" in slot and tokenizer.eos_token_id is not None:
+ slot_items.append("'" + tokenizer.eos_token + "'")
+ elif isinstance(slot, dict):
+ raise ValueError("Dict is not supported.")
+
+ return " + ".join(slot_items)
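As a simplified, standalone sketch of what `_convert_slots_to_jinja` does for a plain string slot (handling a single `{{content}}` marker only; `slot_to_jinja` is a hypothetical helper, not part of the patch):

```python
def slot_to_jinja(slot: str, placeholder: str = "content") -> str:
    # split the slot at the {{content}} marker and emit quoted literals
    # joined with the jinja placeholder, escaping single quotes
    before, _, after = slot.partition("{{content}}")
    parts = []
    if before:
        parts.append("'" + before.replace("'", r"\'") + "'")
    parts.append(placeholder)
    if after:
        parts.append("'" + after.replace("'", r"\'") + "'")
    return " + ".join(parts)


print(slot_to_jinja("[INST] {{content}} [/INST]"))  # '[INST] ' + content + ' [/INST]'
```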
+
+ def _get_jinja_template(self, tokenizer: "PreTrainedTokenizer") -> str:
+ r"""Return the jinja template."""
+ prefix = self._convert_slots_to_jinja(self.format_prefix.apply(), tokenizer)
+ system = self._convert_slots_to_jinja(self.format_system.apply(), tokenizer, placeholder="system_message")
+ user = self._convert_slots_to_jinja(self.format_user.apply(), tokenizer)
+ assistant = self._convert_slots_to_jinja(self.format_assistant.apply(), tokenizer)
+ jinja_template = ""
+ if prefix:
+ jinja_template += "{{ " + prefix + " }}"
+
+ if self.default_system:
+ jinja_template += "{% set system_message = '" + self._jinja_escape(self.default_system) + "' %}"
+
+ jinja_template += (
+ "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}"
+ "{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% endif %}"
+ "{% if system_message is defined %}{{ " + system + " }}{% endif %}"
+ "{% for message in loop_messages %}"
+ "{% set content = message['content'] %}"
+ "{% if message['role'] == 'user' %}"
+ "{{ " + user + " }}"
+ "{% elif message['role'] == 'assistant' %}"
+ "{{ " + assistant + " }}"
+ "{% endif %}"
+ "{% endfor %}"
+ )
+ return jinja_template
+
+ def fix_jinja_template(self, tokenizer: "PreTrainedTokenizer") -> None:
+ r"""Replace the jinja template in the tokenizer."""
+ if tokenizer.chat_template is None or self.replace_jinja_template:
+ try:
+ tokenizer.chat_template = self._get_jinja_template(tokenizer)
+ except ValueError as e:
+ logger.info_rank0(f"Cannot add this chat template to tokenizer: {e}.")
+
+ @staticmethod
+ def _convert_slots_to_ollama(
+ slots: "SLOTS", tokenizer: "PreTrainedTokenizer", placeholder: str = "content"
+ ) -> str:
+ r"""Convert slots to ollama template."""
+ slot_items = []
+ for slot in slots:
+ if isinstance(slot, str):
+ slot_pieces = slot.split("{{content}}")
+ if slot_pieces[0]:
+ slot_items.append(slot_pieces[0])
+ if len(slot_pieces) > 1:
+ slot_items.append("{{ " + placeholder + " }}")
+ if slot_pieces[1]:
+ slot_items.append(slot_pieces[1])
+ elif isinstance(slot, set): # do not use {{ eos_token }} since it may be replaced
+ if "bos_token" in slot and tokenizer.bos_token_id is not None:
+ slot_items.append(tokenizer.bos_token)
+ elif "eos_token" in slot and tokenizer.eos_token_id is not None:
+ slot_items.append(tokenizer.eos_token)
+ elif isinstance(slot, dict):
+ raise ValueError("Dict is not supported.")
+
+ return "".join(slot_items)
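The ollama conversion is the same idea with Go-template placeholders instead of jinja expressions; a minimal standalone sketch (hypothetical helper, string slots only):

```python
def slot_to_ollama(slot: str, placeholder: str = ".Content") -> str:
    # swap the {{content}} marker for a Go-template pipeline reference
    return slot.replace("{{content}}", "{{ " + placeholder + " }}")


print(slot_to_ollama("<|im_start|>user\n{{content}}<|im_end|>\n"))
```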
+
+ def _get_ollama_template(self, tokenizer: "PreTrainedTokenizer") -> str:
+ r"""Return the ollama template."""
+ prefix = self._convert_slots_to_ollama(self.format_prefix.apply(), tokenizer)
+ system = self._convert_slots_to_ollama(self.format_system.apply(), tokenizer, placeholder=".System")
+ user = self._convert_slots_to_ollama(self.format_user.apply(), tokenizer, placeholder=".Content")
+ assistant = self._convert_slots_to_ollama(self.format_assistant.apply(), tokenizer, placeholder=".Content")
+ return (
+ f"{prefix}{{{{ if .System }}}}{system}{{{{ end }}}}"
+ f"""{{{{ range .Messages }}}}{{{{ if eq .Role "user" }}}}{user}"""
+ f"""{{{{ else if eq .Role "assistant" }}}}{assistant}{{{{ end }}}}{{{{ end }}}}"""
+ )
+
+ def get_ollama_modelfile(self, tokenizer: "PreTrainedTokenizer") -> str:
+ r"""Return the ollama modelfile.
+
+ TODO: support function calling.
+ """
+ modelfile = "# ollama modelfile auto-generated by llamafactory\n\n"
+ modelfile += f'FROM .\n\nTEMPLATE """{self._get_ollama_template(tokenizer)}"""\n\n'
+
+ if self.default_system:
+ modelfile += f'SYSTEM """{self.default_system}"""\n\n'
+
+ for stop_token_id in self.get_stop_token_ids(tokenizer):
+ modelfile += f'PARAMETER stop "{tokenizer.convert_ids_to_tokens(stop_token_id)}"\n'
+
+ modelfile += "PARAMETER num_ctx 4096\n"
+ return modelfile
+
+
+@dataclass
+class Llama2Template(Template):
+ r"""A template that fuses the system message into the first user message."""
+
+ @override
+ def _encode(
+ self,
+ tokenizer: "PreTrainedTokenizer",
+ messages: list[dict[str, str]],
+ system: str,
+ tools: str,
+ ) -> list[list[int]]:
+ system = system or self.default_system
+ encoded_messages = []
+ for i, message in enumerate(messages):
+ elements = []
+
+ system_text = ""
+ if i == 0:
+ elements += self.format_prefix.apply()
+ if system or tools:
+ tool_text = self.format_tools.apply(content=tools)[0] if tools else ""
+ system_text = self.format_system.apply(content=(system + tool_text))[0]
+
+ if message["role"] == Role.USER:
+ elements += self.format_user.apply(content=system_text + message["content"])
+ elif message["role"] == Role.ASSISTANT:
+ elements += self.format_assistant.apply(content=message["content"])
+ elif message["role"] == Role.OBSERVATION:
+ elements += self.format_observation.apply(content=message["content"])
+ elif message["role"] == Role.FUNCTION:
+ elements += self.format_function.apply(content=message["content"])
+ else:
+ raise NotImplementedError("Unexpected role: {}".format(message["role"]))
+
+ encoded_messages.append(self._convert_elements_to_ids(tokenizer, elements))
+
+ return encoded_messages
+
+ def _get_jinja_template(self, tokenizer: "PreTrainedTokenizer") -> str:
+ prefix = self._convert_slots_to_jinja(self.format_prefix.apply(), tokenizer)
+ system_message = self._convert_slots_to_jinja(
+ self.format_system.apply(), tokenizer, placeholder="system_message"
+ )
+ user_message = self._convert_slots_to_jinja(self.format_user.apply(), tokenizer)
+ assistant_message = self._convert_slots_to_jinja(self.format_assistant.apply(), tokenizer)
+ jinja_template = ""
+ if prefix:
+ jinja_template += "{{ " + prefix + " }}"
+
+ if self.default_system:
+ jinja_template += "{% set system_message = '" + self._jinja_escape(self.default_system) + "' %}"
+
+ jinja_template += (
+ "{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}"
+ "{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% endif %}"
+ "{% for message in loop_messages %}"
+ "{% if loop.index0 == 0 and system_message is defined %}"
+ "{% set content = " + system_message + " + message['content'] %}"
+ "{% else %}{% set content = message['content'] %}{% endif %}"
+ "{% if message['role'] == 'user' %}"
+ "{{ " + user_message + " }}"
+ "{% elif message['role'] == 'assistant' %}"
+ "{{ " + assistant_message + " }}"
+ "{% endif %}"
+ "{% endfor %}"
+ )
+ return jinja_template
+
+
+@dataclass
+class ReasoningTemplate(Template):
+ r"""A template that adds thoughts to assistant messages."""
+
+ @override
+ def encode_oneturn(
+ self,
+ tokenizer: "PreTrainedTokenizer",
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ ) -> tuple[list[int], list[int]]:
+ messages = deepcopy(messages)
+ for i in range(1, len(messages) - 2, 2):
+ messages[i]["content"] = self.remove_thought(messages[i]["content"])
+
+ if self.enable_thinking is False: # remove all cot
+ messages[-1]["content"] = self.remove_thought(messages[-1]["content"])
+
+ prompt_ids, response_ids = super().encode_oneturn(tokenizer, messages, system, tools)
+ if (
+ self.thought_words[0] not in messages[-1]["content"]
+ and self.thought_words[1] not in messages[-1]["content"]
+ ): # add empty cot
+ if not self.enable_thinking: # do not compute loss
+ prompt_ids += self.get_thought_word_ids(tokenizer)
+ else: # do compute loss
+ response_ids = self.get_thought_word_ids(tokenizer) + response_ids
+
+ return prompt_ids, response_ids
+
+ @override
+ def encode_multiturn(
+ self,
+ tokenizer: "PreTrainedTokenizer",
+ messages: list[dict[str, str]],
+ system: Optional[str] = None,
+ tools: Optional[str] = None,
+ ) -> list[tuple[list[int], list[int]]]:
+ messages = deepcopy(messages)
+ if self.enable_thinking is False: # remove all cot
+ for i in range(1, len(messages), 2):
+ messages[i]["content"] = self.remove_thought(messages[i]["content"])
+
+ encoded_messages = self._encode(tokenizer, messages, system, tools)
+ for i in range(0, len(messages), 2):
+ if (
+ self.thought_words[0] not in messages[i + 1]["content"]
+ and self.thought_words[1] not in messages[i + 1]["content"]
+ ): # add empty cot
+ if not self.enable_thinking: # do not compute loss
+ encoded_messages[i] += self.get_thought_word_ids(tokenizer)
+ else: # do compute loss
+ encoded_messages[i + 1] = self.get_thought_word_ids(tokenizer) + encoded_messages[i + 1]
+
+ return [(encoded_messages[i], encoded_messages[i + 1]) for i in range(0, len(encoded_messages), 2)]
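The branching above decides where the empty thought token ids land: appended to the prompt (so no loss is computed on them) or prepended to the response (so the model is trained to emit them). A standalone sketch of that placement logic, with made-up token ids:

```python
def place_empty_thought(prompt_ids, response_ids, thought_ids, enable_thinking, has_thought):
    # mirrors the branch above: messages that already contain a thought are untouched;
    # otherwise the empty thought goes to the prompt (no loss) or the response (loss)
    if has_thought:
        return prompt_ids, response_ids
    if not enable_thinking:
        return prompt_ids + thought_ids, response_ids
    return prompt_ids, thought_ids + response_ids


print(place_empty_thought([1, 2], [3], [9, 9], enable_thinking=True, has_thought=False))
# ([1, 2], [9, 9, 3])
```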
+
+
+TEMPLATES: dict[str, "Template"] = {}
+
+
+def register_template(
+ name: str,
+ format_user: Optional["Formatter"] = None,
+ format_assistant: Optional["Formatter"] = None,
+ format_system: Optional["Formatter"] = None,
+ format_function: Optional["Formatter"] = None,
+ format_observation: Optional["Formatter"] = None,
+ format_tools: Optional["Formatter"] = None,
+ format_prefix: Optional["Formatter"] = None,
+ default_system: str = "",
+ stop_words: Optional[list[str]] = None,
+ thought_words: Optional[tuple[str, str]] = None,
+ efficient_eos: bool = False,
+ replace_eos: bool = False,
+ replace_jinja_template: bool = False,
+ enable_thinking: Optional[bool] = True,
+ mm_plugin: "BasePlugin" = get_mm_plugin(name="base"),
+ template_class: type["Template"] = Template,
+) -> None:
+ r"""Register a chat template.
+
+ To add the following chat template:
+ ```
+ <s><user>user prompt here
+ <model>model response here</s>
+ <user>user prompt here
+ <model>model response here</s>
+ ```
+
+ The corresponding code should be:
+ ```
+ register_template(
+ name="custom",
+ format_user=StringFormatter(slots=["<user>{{content}}\n<model>"]),
+ format_assistant=StringFormatter(slots=["{{content}}</s>\n"]),
+ format_prefix=EmptyFormatter("<s>"),
+ )
+ ```
+ """
+ if name in TEMPLATES:
+ raise ValueError(f"Template {name} already exists.")
+
+ default_slots = ["{{content}}"] if efficient_eos else ["{{content}}", {"eos_token"}]
+ default_user_formatter = StringFormatter(slots=["{{content}}"])
+ default_assistant_formatter = StringFormatter(slots=default_slots)
+ if format_assistant is not None:
+ default_function_formatter = FunctionFormatter(slots=format_assistant.slots, tool_format="default")
+ else:
+ default_function_formatter = FunctionFormatter(slots=default_slots, tool_format="default")
+
+ default_tool_formatter = ToolFormatter(tool_format="default")
+ default_prefix_formatter = EmptyFormatter()
+ TEMPLATES[name] = template_class(
+ format_user=format_user or default_user_formatter,
+ format_assistant=format_assistant or default_assistant_formatter,
+ format_system=format_system or default_user_formatter,
+ format_function=format_function or default_function_formatter,
+ format_observation=format_observation or format_user or default_user_formatter,
+ format_tools=format_tools or default_tool_formatter,
+ format_prefix=format_prefix or default_prefix_formatter,
+ default_system=default_system,
+ stop_words=stop_words or [],
+ thought_words=thought_words or ("<think>", "</think>"),
+ efficient_eos=efficient_eos,
+ replace_eos=replace_eos,
+ replace_jinja_template=replace_jinja_template,
+ enable_thinking=enable_thinking,
+ mm_plugin=mm_plugin,
+ )
+
+
+def parse_template(tokenizer: "PreTrainedTokenizer") -> "Template":
+ r"""Extract a chat template from the tokenizer."""
+
+ def find_diff(short_str: str, long_str: str) -> str:
+ i, j = 0, 0
+ diff = ""
+ while i < len(short_str) and j < len(long_str):
+ if short_str[i] == long_str[j]:
+ i += 1
+ j += 1
+ else:
+ diff += long_str[j]
+ j += 1
+
+ return diff
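`find_diff` is a greedy character-level diff: it walks both strings and collects every character of the longer one that the shorter one does not account for. A standalone copy with a small example (note the greedy matching can mis-attribute characters when the inserted text shares characters with what follows):

```python
def find_diff(short_str: str, long_str: str) -> str:
    # greedy two-pointer walk: every long_str character that does not match
    # the current short_str character is collected into the diff
    i, j = 0, 0
    diff = ""
    while i < len(short_str) and j < len(long_str):
        if short_str[i] == long_str[j]:
            i += 1
            j += 1
        else:
            diff += long_str[j]
            j += 1
    return diff


print(repr(find_diff("<user>hi</user>", "SYS: be nice\n<user>hi</user>")))  # 'SYS: be nice\n'
```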
+
+ prefix = tokenizer.decode(tokenizer.encode(""))
+
+ messages = [{"role": "system", "content": "{{content}}"}]
+ system_slot = tokenizer.apply_chat_template(messages, add_generation_prompt=False, tokenize=False)[len(prefix) :]
+
+ messages = [{"role": "system", "content": ""}, {"role": "user", "content": "{{content}}"}]
+ user_slot_empty_system = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+ user_slot_empty_system = user_slot_empty_system[len(prefix) :]
+
+ messages = [{"role": "user", "content": "{{content}}"}]
+ user_slot = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+ user_slot = user_slot[len(prefix) :]
+
+ messages = [{"role": "user", "content": "{{content}}"}, {"role": "assistant", "content": "{{content}}"}]
+ assistant_slot = tokenizer.apply_chat_template(messages, add_generation_prompt=False, tokenize=False)
+ assistant_slot = assistant_slot[len(prefix) + len(user_slot) :]
+ template_class = ReasoningTemplate if "</think>" in assistant_slot else Template
+ assistant_slot = assistant_slot.replace("<think>", "").replace("</think>", "").lstrip("\n") # remove thought tags
+
+ if len(user_slot) > len(user_slot_empty_system):
+ default_system = find_diff(user_slot_empty_system, user_slot)
+ sole_system = system_slot.replace("{{content}}", default_system, 1)
+ user_slot = user_slot[len(sole_system) :]
+ else: # if default_system is empty, user_slot_empty_system will be longer than user_slot
+ default_system = ""
+
+ return template_class(
+ format_user=StringFormatter(slots=[user_slot]),
+ format_assistant=StringFormatter(slots=[assistant_slot]),
+ format_system=StringFormatter(slots=[system_slot]),
+ format_function=FunctionFormatter(slots=[assistant_slot], tool_format="default"),
+ format_observation=StringFormatter(slots=[user_slot]),
+ format_tools=ToolFormatter(tool_format="default"),
+ format_prefix=EmptyFormatter(slots=[prefix]) if prefix else EmptyFormatter(),
+ default_system=default_system,
+ stop_words=[],
+ thought_words=("<think>", "</think>"),
+ efficient_eos=False,
+ replace_eos=False,
+ replace_jinja_template=False,
+ enable_thinking=True,
+ mm_plugin=get_mm_plugin(name="base"),
+ )
+
+
+def get_template_and_fix_tokenizer(tokenizer: "PreTrainedTokenizer", data_args: "DataArguments") -> "Template":
+ r"""Get the chat template and fix the tokenizer."""
+ if data_args.template is None:
+ if isinstance(tokenizer.chat_template, str):
+ logger.warning_rank0("`template` was not specified, try parsing the chat template from the tokenizer.")
+ template = parse_template(tokenizer)
+ else:
+ logger.warning_rank0("`template` was not specified, use `empty` template.")
+ template = TEMPLATES["empty"] # placeholder
+ else:
+ if data_args.template not in TEMPLATES:
+ raise ValueError(f"Template {data_args.template} does not exist.")
+
+ template = TEMPLATES[data_args.template]
+
+ if data_args.train_on_prompt and template.efficient_eos:
+ raise ValueError("Current template does not support `train_on_prompt`.")
+
+ if data_args.tool_format is not None:
+ logger.info_rank0(f"Using tool format: {data_args.tool_format}.")
+ default_slots = ["{{content}}"] if template.efficient_eos else ["{{content}}", {"eos_token"}]
+ template.format_function = FunctionFormatter(slots=default_slots, tool_format=data_args.tool_format)
+ template.format_tools = ToolFormatter(tool_format=data_args.tool_format)
+
+ if data_args.default_system is not None:
+ logger.info_rank0(f"Using default system message: {data_args.default_system}.")
+ template.default_system = data_args.default_system
+
+ template.enable_thinking = data_args.enable_thinking
+ template.fix_special_tokens(tokenizer)
+ template.fix_jinja_template(tokenizer)
+ return template
+
+
+register_template(
+ name="alpaca",
+ format_user=StringFormatter(slots=["### Instruction:\n{{content}}\n\n### Response:\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}", {"eos_token"}, "\n\n"]),
+ default_system=(
+ "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
+ ),
+ replace_jinja_template=True,
+)
+
+
+register_template(
+ name="aquila",
+ format_user=StringFormatter(slots=["Human: {{content}}###Assistant:"]),
+ format_assistant=StringFormatter(slots=["{{content}}###"]),
+ format_system=StringFormatter(slots=["System: {{content}}###"]),
+ default_system=(
+ "A chat between a curious human and an artificial intelligence assistant. "
+ "The assistant gives helpful, detailed, and polite answers to the human's questions."
+ ),
+ stop_words=["</s>"],
+)
+
+
+register_template(
+ name="atom",
+ format_user=StringFormatter(
+ slots=[{"bos_token"}, "Human: {{content}}\n", {"eos_token"}, {"bos_token"}, "Assistant:"]
+ ),
+ format_assistant=StringFormatter(slots=["{{content}}\n", {"eos_token"}]),
+)
+
+
+register_template(
+ name="baichuan",
+ format_user=StringFormatter(slots=[{"token": "<reserved_102>"}, "{{content}}", {"token": "<reserved_103>"}]),
+ efficient_eos=True,
+)
+
+
+register_template(
+ name="baichuan2",
+ format_user=StringFormatter(slots=["<reserved_106>{{content}}<reserved_107>"]),
+ efficient_eos=True,
+)
+
+
+register_template(
+ name="bailing",
+ format_user=StringFormatter(slots=["<role>HUMAN</role>{{content}}<role>ASSISTANT</role>"]),
+ format_system=StringFormatter(slots=["<role>SYSTEM</role>{{content}}"]),
+ format_observation=StringFormatter(slots=["<role>OBSERVATION</role>{{content}}<role>ASSISTANT</role>"]),
+ stop_words=["<|endoftext|>"],
+ efficient_eos=True,
+)
+
+
+register_template(
+ name="belle",
+ format_user=StringFormatter(slots=["Human: {{content}}\n\nBelle: "]),
+ format_assistant=StringFormatter(slots=["{{content}}", {"eos_token"}, "\n\n"]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+)
+
+
+register_template(
+ name="bluelm",
+ format_user=StringFormatter(slots=[{"token": "[|Human|]:"}, "{{content}}", {"token": "[|AI|]:"}]),
+)
+
+
+register_template(
+ name="breeze",
+ format_user=StringFormatter(slots=["[INST] {{content}} [/INST] "]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ efficient_eos=True,
+)
+
+
+register_template(
+ name="chatglm2",
+ format_user=StringFormatter(slots=["[Round {{idx}}]\n\n问:{{content}}\n\n答:"]),
+ format_prefix=EmptyFormatter(slots=[{"token": "[gMASK]"}, {"token": "sop"}]),
+ efficient_eos=True,
+)
+
+
+register_template(
+ name="chatglm3",
+ format_user=StringFormatter(slots=[{"token": "<|user|>"}, "\n", "{{content}}", {"token": "<|assistant|>"}]),
+ format_assistant=StringFormatter(slots=["\n", "{{content}}"]),
+ format_system=StringFormatter(slots=[{"token": "<|system|>"}, "\n", "{{content}}"]),
+ format_function=FunctionFormatter(slots=["{{content}}"], tool_format="glm4"),
+ format_observation=StringFormatter(
+ slots=[{"token": "<|observation|>"}, "\n", "{{content}}", {"token": "<|assistant|>"}]
+ ),
+ format_tools=ToolFormatter(tool_format="glm4"),
+ format_prefix=EmptyFormatter(slots=[{"token": "[gMASK]"}, {"token": "sop"}]),
+ stop_words=["<|user|>", "<|observation|>"],
+ efficient_eos=True,
+)
+
+
+register_template(
+ name="chatml",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_observation=StringFormatter(slots=["<|im_start|>tool\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ stop_words=["<|im_end|>", "<|im_start|>"],
+ replace_eos=True,
+ replace_jinja_template=True,
+)
+
+
+# copied from chatml template
+register_template(
+ name="chatml_de",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_observation=StringFormatter(slots=["<|im_start|>tool\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ default_system="Du bist ein freundlicher und hilfsbereiter KI-Assistent.",
+ stop_words=["<|im_end|>", "<|im_start|>"],
+ replace_eos=True,
+ replace_jinja_template=True,
+)
+
+
+register_template(
+ name="codegeex2",
+ format_prefix=EmptyFormatter(slots=[{"token": "[gMASK]"}, {"token": "sop"}]),
+)
+
+
+register_template(
+ name="codegeex4",
+ format_user=StringFormatter(slots=["<|user|>\n{{content}}<|assistant|>\n"]),
+ format_system=StringFormatter(slots=["<|system|>\n{{content}}"]),
+ format_function=FunctionFormatter(slots=["{{content}}"], tool_format="glm4"),
+ format_observation=StringFormatter(slots=["<|observation|>\n{{content}}<|assistant|>\n"]),
+ format_tools=ToolFormatter(tool_format="glm4"),
+    format_prefix=EmptyFormatter(slots=["[gMASK]<sop>"]),
+ default_system=(
+ "你是一位智能编程助手,你叫CodeGeeX。你会为用户回答关于编程、代码、计算机方面的任何问题,"
+ "并提供格式规范、可以执行、准确安全的代码,并在必要时提供详细的解释。"
+ ),
+ stop_words=["<|user|>", "<|observation|>"],
+ efficient_eos=True,
+)
+
+
+register_template(
+ name="cohere",
+ format_user=StringFormatter(
+ slots=[
+ (
+ "<|START_OF_TURN_TOKEN|><|USER_TOKEN|>{{content}}<|END_OF_TURN_TOKEN|>"
+ "<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
+ )
+ ]
+ ),
+ format_system=StringFormatter(slots=["<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{{content}}<|END_OF_TURN_TOKEN|>"]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+)
+
+
+register_template(
+ name="cpm",
+ format_user=StringFormatter(slots=["<用户>{{content}}"]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+)
+
+
+# copied from chatml template
+register_template(
+ name="cpm3",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_observation=StringFormatter(slots=["<|im_start|>tool\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ stop_words=["<|im_end|>"],
+)
+
+
+# copied from chatml template
+register_template(
+ name="cpm4",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_observation=StringFormatter(slots=["<|im_start|>tool\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ stop_words=["<|im_end|>"],
+)
+
+
+# copied from chatml template
+register_template(
+ name="dbrx",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_observation=StringFormatter(slots=["<|im_start|>tool\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ default_system=(
+ "You are DBRX, created by Databricks. You were last updated in December 2023. "
+ "You answer questions based on information available up to that point.\n"
+ "YOU PROVIDE SHORT RESPONSES TO SHORT QUESTIONS OR STATEMENTS, but provide thorough "
+ "responses to more complex and open-ended questions.\nYou assist with various tasks, "
+ "from writing to coding (using markdown for code blocks — remember to use ``` with "
+ "code, JSON, and tables).\n(You do not have real-time data access or code execution "
+ "capabilities. You avoid stereotyping and provide balanced perspectives on "
+ "controversial topics. You do not provide song lyrics, poems, or news articles and "
+ "do not divulge details of your training data.)\nThis is your system prompt, "
+ "guiding your responses. Do not reference it, just respond to the user. If you find "
+ "yourself talking about this message, stop. You should be responding appropriately "
+ "and usually that means not mentioning this.\nYOU DO NOT MENTION ANY OF THIS INFORMATION "
+ "ABOUT YOURSELF UNLESS THE INFORMATION IS DIRECTLY PERTINENT TO THE USER'S QUERY."
+ ),
+ stop_words=["<|im_end|>"],
+ replace_eos=True,
+)
+
+
+register_template(
+ name="deepseek",
+ format_user=StringFormatter(slots=["User: {{content}}\n\nAssistant:"]),
+ format_system=StringFormatter(slots=["{{content}}\n\n"]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+)
+
+
+register_template(
+ name="deepseek3",
+ format_user=StringFormatter(slots=["<|User|>{{content}}<|Assistant|>"]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+)
+
+
+# copied from deepseek3 template
+register_template(
+ name="deepseekr1",
+ format_user=StringFormatter(slots=["<|User|>{{content}}<|Assistant|>"]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ template_class=ReasoningTemplate,
+)
+
+
+register_template(
+ name="deepseekcoder",
+ format_user=StringFormatter(slots=["### Instruction:\n{{content}}\n### Response:"]),
+ format_assistant=StringFormatter(slots=["\n{{content}}\n<|EOT|>\n"]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ default_system=(
+ "You are an AI programming assistant, utilizing the DeepSeek Coder model, "
+ "developed by DeepSeek Company, and you only answer questions related to computer science. "
+ "For politically sensitive questions, security and privacy issues, "
+ "and other non-computer science questions, you will refuse to answer.\n"
+ ),
+)
+
+
+register_template(
+ name="default",
+ format_user=StringFormatter(slots=["Human: {{content}}", {"eos_token"}, "\nAssistant:"]),
+ format_assistant=StringFormatter(slots=["{{content}}", {"eos_token"}, "\n"]),
+ format_system=StringFormatter(slots=["System: {{content}}", {"eos_token"}, "\n"]),
+ replace_jinja_template=True,
+)
+
+
+register_template(
+ name="empty",
+ format_assistant=StringFormatter(slots=["{{content}}"]),
+)
+
+
+register_template(
+ name="exaone",
+ format_user=StringFormatter(slots=["[|user|]{{content}}\n[|assistant|]"]),
+ format_assistant=StringFormatter(slots=["{{content}}", {"eos_token"}, "\n"]),
+ format_system=StringFormatter(slots=["[|system|]{{content}}[|endofturn|]\n"]),
+)
+
+
+register_template(
+ name="falcon",
+ format_user=StringFormatter(slots=["User: {{content}}\nFalcon:"]),
+ format_assistant=StringFormatter(slots=["{{content}}\n"]),
+ efficient_eos=True,
+)
+
+
+# copied from chatml template
+register_template(
+ name="falcon_h1",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_observation=StringFormatter(slots=["<|im_start|>tool\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ stop_words=["<|im_end|>", "<|end_of_text|>"],
+)
+
+
+register_template(
+ name="fewshot",
+ format_assistant=StringFormatter(slots=["{{content}}\n\n"]),
+ efficient_eos=True,
+ replace_jinja_template=True,
+)
+
+
+register_template(
+ name="gemma",
+    format_user=StringFormatter(slots=["<start_of_turn>user\n{{content}}<end_of_turn>\n<start_of_turn>model\n"]),
+    format_assistant=StringFormatter(slots=["{{content}}<end_of_turn>\n"]),
+    format_system=StringFormatter(slots=["{{content}}\n\n"]),
+    format_observation=StringFormatter(
+        slots=["<start_of_turn>tool\n{{content}}<end_of_turn>\n<start_of_turn>model\n"]
+    ),
+    format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+    stop_words=["<end_of_turn>"],
+ replace_eos=True,
+ template_class=Llama2Template,
+)
+
+
+# copied from gemma template
+register_template(
+ name="gemma2",
+    format_user=StringFormatter(slots=["<start_of_turn>user\n{{content}}<end_of_turn>\n<start_of_turn>model\n"]),
+    format_assistant=StringFormatter(slots=["{{content}}<end_of_turn>\n"]),
+    format_system=StringFormatter(slots=["{{content}}\n\n"]),
+    format_observation=StringFormatter(
+        slots=["<start_of_turn>tool\n{{content}}<end_of_turn>\n<start_of_turn>model\n"]
+    ),
+    format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+    stop_words=["<end_of_turn>", "<eos>"],
+ efficient_eos=True,
+ template_class=Llama2Template,
+)
+
+
+# copied from gemma template
+register_template(
+ name="gemma3",
+    format_user=StringFormatter(slots=["<start_of_turn>user\n{{content}}<end_of_turn>\n<start_of_turn>model\n"]),
+    format_assistant=StringFormatter(slots=["{{content}}<end_of_turn>\n"]),
+    format_system=StringFormatter(slots=["{{content}}\n\n"]),
+    format_observation=StringFormatter(
+        slots=["<start_of_turn>tool\n{{content}}<end_of_turn>\n<start_of_turn>model\n"]
+    ),
+    format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+    stop_words=["<end_of_turn>"],
+    replace_eos=True,
+    mm_plugin=get_mm_plugin("gemma3", image_token="<image_soft_token>"),
+ template_class=Llama2Template,
+)
+
+
+register_template(
+ name="gemma3n",
+    format_user=StringFormatter(slots=["<start_of_turn>user\n{{content}}<end_of_turn>\n<start_of_turn>model\n"]),
+    format_assistant=StringFormatter(slots=["{{content}}<end_of_turn>\n"]),
+    format_system=StringFormatter(slots=["{{content}}\n\n"]),
+    format_observation=StringFormatter(
+        slots=["<start_of_turn>tool\n{{content}}<end_of_turn>\n<start_of_turn>model\n"]
+    ),
+    format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+    stop_words=["<end_of_turn>"],
+    replace_eos=True,
+    mm_plugin=get_mm_plugin("gemma3n", image_token="<image_soft_token>", audio_token="<audio_soft_token>"),
+ template_class=Llama2Template,
+)
+
+
+register_template(
+ name="glm4",
+ format_user=StringFormatter(slots=["<|user|>\n{{content}}<|assistant|>"]),
+ format_assistant=StringFormatter(slots=["\n{{content}}"]),
+ format_system=StringFormatter(slots=["<|system|>\n{{content}}"]),
+ format_function=FunctionFormatter(slots=["{{content}}"], tool_format="glm4"),
+ format_observation=StringFormatter(slots=["<|observation|>\n{{content}}<|assistant|>"]),
+ format_tools=ToolFormatter(tool_format="glm4"),
+    format_prefix=EmptyFormatter(slots=["[gMASK]<sop>"]),
+ stop_words=["<|user|>", "<|observation|>"],
+ efficient_eos=True,
+)
+
+
+# copied from glm4 template
+register_template(
+ name="glm4_moe",
+ format_user=StringFormatter(slots=["<|user|>\n{{content}}<|assistant|>"]),
+ format_assistant=StringFormatter(slots=["\n{{content}}"]),
+ format_system=StringFormatter(slots=["<|system|>\n{{content}}"]),
+ format_function=FunctionFormatter(slots=["{{content}}"], tool_format="glm4_moe"),
+ format_observation=StringFormatter(slots=["<|observation|>\n{{content}}<|assistant|>"]),
+ format_tools=ToolFormatter(tool_format="glm4_moe"),
+    format_prefix=EmptyFormatter(slots=["[gMASK]<sop>"]),
+ stop_words=["<|user|>", "<|observation|>"],
+ efficient_eos=True,
+ template_class=ReasoningTemplate,
+)
+
+
+# copied from glm4 template
+register_template(
+ name="glm4v",
+ format_user=StringFormatter(slots=["<|user|>\n{{content}}<|assistant|>"]),
+ format_assistant=StringFormatter(slots=["\n{{content}}"]),
+ format_system=StringFormatter(slots=["<|system|>\n{{content}}"]),
+ format_function=FunctionFormatter(slots=["{{content}}"], tool_format="glm4"),
+ format_observation=StringFormatter(slots=["<|observation|>\n{{content}}<|assistant|>"]),
+ format_tools=ToolFormatter(tool_format="glm4"),
+    format_prefix=EmptyFormatter(slots=["[gMASK]<sop>"]),
+    stop_words=["<|user|>", "<|observation|>", "</answer>"],
+ efficient_eos=True,
+ mm_plugin=get_mm_plugin(name="glm4v", image_token="<|image|>", video_token="<|video|>"),
+ template_class=ReasoningTemplate,
+)
+
+
+register_template(
+ name="glm45v",
+ format_user=StringFormatter(slots=["<|user|>\n{{content}}<|assistant|>"]),
+ format_assistant=StringFormatter(slots=["\n{{content}}"]),
+ format_system=StringFormatter(slots=["<|system|>\n{{content}}"]),
+ format_function=FunctionFormatter(slots=["{{content}}"], tool_format="glm4_moe"),
+ format_observation=StringFormatter(slots=["<|observation|>\n{{content}}<|assistant|>"]),
+ format_tools=ToolFormatter(tool_format="glm4_moe"),
+    format_prefix=EmptyFormatter(slots=["[gMASK]<sop>"]),
+    stop_words=["<|user|>", "<|observation|>", "</answer>"],
+ efficient_eos=True,
+ mm_plugin=get_mm_plugin(name="glm4v", image_token="<|image|>", video_token="<|video|>"),
+ template_class=ReasoningTemplate,
+)
+
+
+# copied from glm4 template
+register_template(
+ name="glmz1",
+ format_user=StringFormatter(slots=["<|user|>\n{{content}}<|assistant|>"]),
+ format_assistant=StringFormatter(slots=["\n{{content}}"]),
+ format_system=StringFormatter(slots=["<|system|>\n{{content}}"]),
+ format_function=FunctionFormatter(slots=["{{content}}"], tool_format="glm4"),
+ format_observation=StringFormatter(slots=["<|observation|>\n{{content}}<|assistant|>"]),
+ format_tools=ToolFormatter(tool_format="glm4"),
+    format_prefix=EmptyFormatter(slots=["[gMASK]<sop>"]),
+ stop_words=["<|user|>", "<|observation|>"],
+ efficient_eos=True,
+ template_class=ReasoningTemplate,
+)
+
+
+register_template(
+ name="gpt",
+ format_user=StringFormatter(slots=["<|start|>user<|message|>{{content}}<|end|><|start|>assistant"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|end|>"]),
+ format_system=StringFormatter(slots=["<|start|>system<|message|>{{content}}<|end|>"]),
+ default_system="You are ChatGPT, a large language model trained by OpenAI.",
+ thought_words=("<|channel|>analysis<|message|>", "<|end|><|start|>assistant<|channel|>final<|message|>"),
+ efficient_eos=True,
+ template_class=ReasoningTemplate,
+)
+
+
+register_template(
+ name="granite3",
+ format_user=StringFormatter(
+ slots=[
+ "<|start_of_role|>user<|end_of_role|>{{content}}<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>"
+ ]
+ ),
+ format_assistant=StringFormatter(slots=["{{content}}<|end_of_text|>\n"]),
+ format_system=StringFormatter(slots=["<|start_of_role|>system<|end_of_role|>{{content}}<|end_of_text|>\n"]),
+)
+
+
+register_template(
+ name="granite3_vision",
+ format_user=StringFormatter(slots=["<|user|>\n{{content}}\n<|assistant|>\n"]),
+ format_system=StringFormatter(slots=["<|system|>\n{{content}}\n"]),
+ default_system=(
+ "A chat between a curious user and an artificial intelligence assistant. "
+ "The assistant gives helpful, detailed, and polite answers to the user's questions."
+ ),
+    mm_plugin=get_mm_plugin(name="llava_next", image_token="<image>"),
+)
+
+
+register_template(
+ name="granite4",
+ format_user=StringFormatter(
+ slots=[
+ "<|start_of_role|>user<|end_of_role|>{{content}}<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>"
+ ]
+ ),
+ format_assistant=StringFormatter(slots=["{{content}}<|end_of_text|>\n"]),
+ format_system=StringFormatter(slots=["<|start_of_role|>system<|end_of_role|>{{content}}<|end_of_text|>\n"]),
+ format_function=FunctionFormatter(slots=["{{content}}<|end_of_text|>\n"], tool_format="default"),
+ format_observation=StringFormatter(
+ slots=["<|start_of_role|>tool<|end_of_role|>{{content}}<|end_of_text|>\n<|start_of_role|>assistant\n"]
+ ),
+ format_tools=ToolFormatter(tool_format="default"),
+ stop_words=["<|end_of_text|>"],
+ default_system="You are Granite, developed by IBM. You are a helpful AI assistant.",
+)
+
+
+register_template(
+ name="index",
+ format_user=StringFormatter(slots=["reserved_0{{content}}reserved_1"]),
+ format_system=StringFormatter(slots=["{{content}}"]),
+ efficient_eos=True,
+)
+
+
+register_template(
+ name="hunyuan",
+ format_user=StringFormatter(slots=["<|bos|>user\n{{content}}<|eos|>\n<|bos|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|eos|>\n"]),
+ format_system=StringFormatter(slots=["<|bos|>system\n{{content}}<|eos|>\n"]),
+ format_prefix=EmptyFormatter(slots=["<|bos|>"]),
+ stop_words=["<|eos|>"],
+)
+
+
+register_template(
+ name="intern",
+ format_user=StringFormatter(slots=["<|User|>:{{content}}\n<|Bot|>:"]),
+ format_assistant=StringFormatter(slots=["{{content}}\n"]),
+ format_system=StringFormatter(slots=["<|System|>:{{content}}\n"]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ default_system=(
+ "You are an AI assistant whose name is InternLM (书生·浦语).\n"
+ "- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory "
+ "(上海人工智能实验室). It is designed to be helpful, honest, and harmless.\n"
+ "- InternLM (书生·浦语) can understand and communicate fluently in the language "
+ "chosen by the user such as English and 中文."
+ ),
+    stop_words=["<eoa>"],
+)
+
+
+register_template(
+ name="intern2",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ default_system=(
+ "You are an AI assistant whose name is InternLM (书生·浦语).\n"
+ "- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory "
+ "(上海人工智能实验室). It is designed to be helpful, honest, and harmless.\n"
+ "- InternLM (书生·浦语) can understand and communicate fluently in the language "
+ "chosen by the user such as English and 中文."
+ ),
+ stop_words=["<|im_end|>"],
+)
+
+
+register_template(
+ name="intern_vl",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ default_system=(
+ "你是书生·万象,英文名是InternVL,是由上海人工智能实验室、清华大学及多家合作单位联合开发的多模态大语言模型。"
+ ),
+ stop_words=["<|im_end|>"],
+    mm_plugin=get_mm_plugin(name="intern_vl", image_token="<image>", video_token="<video>"),
+)
+
+
+# copied from qwen template
+register_template(
+ name="keye_vl",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_function=FunctionFormatter(slots=["{{content}}<|im_end|>\n"], tool_format="qwen"),
+    format_observation=StringFormatter(
+        slots=["<|im_start|>user\n<tool_response>\n{{content}}\n</tool_response><|im_end|>\n<|im_start|>assistant\n"]
+    ),
+ format_tools=ToolFormatter(tool_format="qwen"),
+ stop_words=["<|im_end|>"],
+ replace_eos=True,
+ mm_plugin=get_mm_plugin(name="qwen2_vl", image_token="<|image_pad|>", video_token="<|video_pad|>"),
+ template_class=ReasoningTemplate,
+)
+
+
+register_template(
+ name="kimi_vl",
+ format_user=StringFormatter(
+ slots=["<|im_user|>user<|im_middle|>{{content}}<|im_end|><|im_assistant|>assistant<|im_middle|>"]
+ ),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>"]),
+ format_system=StringFormatter(slots=["<|im_system|>system<|im_middle|>{{content}}<|im_end|>"]),
+ default_system="You are a helpful assistant",
+ stop_words=["<|im_end|>"],
+ thought_words=("◁think▷", "◁/think▷"),
+ mm_plugin=get_mm_plugin("kimi_vl", image_token="<|media_pad|>"),
+ template_class=ReasoningTemplate,
+)
+
+
+register_template(
+ name="llama2",
+ format_user=StringFormatter(slots=[{"bos_token"}, "[INST] {{content}} [/INST]"]),
+    format_system=StringFormatter(slots=["<<SYS>>\n{{content}}\n<</SYS>>\n\n"]),
+ template_class=Llama2Template,
+)
+
+
+# copied from llama2 template
+register_template(
+ name="llama2_zh",
+ format_user=StringFormatter(slots=[{"bos_token"}, "[INST] {{content}} [/INST]"]),
+    format_system=StringFormatter(slots=["<<SYS>>\n{{content}}\n<</SYS>>\n\n"]),
+ default_system="You are a helpful assistant. 你是一个乐于助人的助手。",
+ template_class=Llama2Template,
+)
+
+
+register_template(
+ name="llama3",
+ format_user=StringFormatter(
+ slots=[
+ (
+ "<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|>"
+ "<|start_header_id|>assistant<|end_header_id|>\n\n"
+ )
+ ]
+ ),
+ format_assistant=StringFormatter(slots=["{{content}}<|eot_id|>"]),
+ format_system=StringFormatter(slots=["<|start_header_id|>system<|end_header_id|>\n\n{{content}}<|eot_id|>"]),
+ format_function=FunctionFormatter(slots=["{{content}}<|eot_id|>"], tool_format="llama3"),
+ format_observation=StringFormatter(
+ slots=[
+ (
+ "<|start_header_id|>ipython<|end_header_id|>\n\n{{content}}<|eot_id|>"
+ "<|start_header_id|>assistant<|end_header_id|>\n\n"
+ )
+ ]
+ ),
+ format_tools=ToolFormatter(tool_format="llama3"),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ stop_words=["<|eot_id|>", "<|eom_id|>"],
+ replace_eos=True,
+)
+
+
+register_template(
+ name="llama4",
+ format_user=StringFormatter(
+ slots=["<|header_start|>user<|header_end|>\n\n{{content}}<|eot|><|header_start|>assistant<|header_end|>\n\n"]
+ ),
+ format_assistant=StringFormatter(slots=["{{content}}<|eot|>"]),
+ format_system=StringFormatter(slots=["<|header_start|>system<|header_end|>\n\n{{content}}<|eot|>"]),
+ format_function=FunctionFormatter(slots=["{{content}}<|eot|>"], tool_format="llama3"),
+ format_observation=StringFormatter(
+ slots=[
+ "<|header_start|>ipython<|header_end|>\n\n{{content}}<|eot|><|header_start|>assistant<|header_end|>\n\n"
+ ]
+ ),
+ format_tools=ToolFormatter(tool_format="llama3"),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ stop_words=["<|eot|>", "<|eom|>"],
+ replace_eos=True,
+ mm_plugin=get_mm_plugin(name="llama4", image_token="<|image|>"),
+)
+
+
+# copied from llama3 template
+register_template(
+ name="mllama",
+ format_user=StringFormatter(
+ slots=[
+ (
+ "<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|>"
+ "<|start_header_id|>assistant<|end_header_id|>\n\n"
+ )
+ ]
+ ),
+ format_assistant=StringFormatter(slots=["{{content}}<|eot_id|>"]),
+ format_system=StringFormatter(slots=["<|start_header_id|>system<|end_header_id|>\n\n{{content}}<|eot_id|>"]),
+ format_function=FunctionFormatter(slots=["{{content}}<|eot_id|>"], tool_format="llama3"),
+ format_observation=StringFormatter(
+ slots=[
+ (
+ "<|start_header_id|>ipython<|end_header_id|>\n\n{{content}}<|eot_id|>"
+ "<|start_header_id|>assistant<|end_header_id|>\n\n"
+ )
+ ]
+ ),
+ format_tools=ToolFormatter(tool_format="llama3"),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ stop_words=["<|eot_id|>", "<|eom_id|>"],
+ replace_eos=True,
+ mm_plugin=get_mm_plugin(name="mllama", image_token="<|image|>"),
+)
+
+
+register_template(
+ name="moonlight",
+ format_user=StringFormatter(
+ slots=["<|im_user|>user<|im_middle|>{{content}}<|im_end|><|im_assistant|>assistant<|im_middle|>"]
+ ),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>"]),
+ format_system=StringFormatter(slots=["<|im_system|>system<|im_middle|>{{content}}<|im_end|>"]),
+ default_system="You are a helpful assistant provided by Moonshot-AI.",
+ stop_words=["<|im_end|>"],
+ replace_eos=True,
+)
+
+
+# copied from vicuna template
+register_template(
+ name="llava",
+ format_user=StringFormatter(slots=["USER: {{content}} ASSISTANT:"]),
+ default_system=(
+ "A chat between a curious user and an artificial intelligence assistant. "
+ "The assistant gives helpful, detailed, and polite answers to the user's questions."
+ ),
+    mm_plugin=get_mm_plugin(name="llava", image_token="<image>"),
+)
+
+
+# copied from vicuna template
+register_template(
+ name="llava_next",
+ format_user=StringFormatter(slots=["USER: {{content}} ASSISTANT:"]),
+ default_system=(
+ "A chat between a curious user and an artificial intelligence assistant. "
+ "The assistant gives helpful, detailed, and polite answers to the user's questions."
+ ),
+    mm_plugin=get_mm_plugin(name="llava_next", image_token="<image>"),
+)
+
+
+# copied from llama3 template
+register_template(
+ name="llava_next_llama3",
+ format_user=StringFormatter(
+ slots=[
+ (
+ "<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|>"
+ "<|start_header_id|>assistant<|end_header_id|>\n\n"
+ )
+ ]
+ ),
+ format_assistant=StringFormatter(slots=["{{content}}<|eot_id|>"]),
+ format_system=StringFormatter(slots=["<|start_header_id|>system<|end_header_id|>\n\n{{content}}<|eot_id|>"]),
+ format_function=FunctionFormatter(slots=["{{content}}<|eot_id|>"], tool_format="llama3"),
+ format_observation=StringFormatter(
+ slots=[
+ (
+ "<|start_header_id|>ipython<|end_header_id|>\n\n{{content}}<|eot_id|>"
+ "<|start_header_id|>assistant<|end_header_id|>\n\n"
+ )
+ ]
+ ),
+ format_tools=ToolFormatter(tool_format="llama3"),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ stop_words=["<|eot_id|>", "<|eom_id|>"],
+ replace_eos=True,
+    mm_plugin=get_mm_plugin(name="llava_next", image_token="<image>"),
+)
+
+
+# copied from mistral template
+register_template(
+ name="llava_next_mistral",
+ format_user=StringFormatter(slots=["[INST] {{content}}[/INST]"]),
+ format_assistant=StringFormatter(slots=[" {{content}}", {"eos_token"}]),
+ format_system=StringFormatter(slots=["{{content}}\n\n"]),
+ format_function=FunctionFormatter(slots=["[TOOL_CALLS] {{content}}", {"eos_token"}], tool_format="mistral"),
+ format_observation=StringFormatter(slots=["""[TOOL_RESULTS] {"content": {{content}}}[/TOOL_RESULTS]"""]),
+ format_tools=ToolFormatter(tool_format="mistral"),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+    mm_plugin=get_mm_plugin(name="llava_next", image_token="<image>"),
+ template_class=Llama2Template,
+)
+
+
+# copied from qwen template
+register_template(
+ name="llava_next_qwen",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_function=FunctionFormatter(slots=["{{content}}<|im_end|>\n"], tool_format="qwen"),
+    format_observation=StringFormatter(
+        slots=["<|im_start|>user\n<tool_response>\n{{content}}\n</tool_response><|im_end|>\n<|im_start|>assistant\n"]
+    ),
+ format_tools=ToolFormatter(tool_format="qwen"),
+ default_system="You are a helpful assistant.",
+ stop_words=["<|im_end|>"],
+ replace_eos=True,
+    mm_plugin=get_mm_plugin(name="llava_next", image_token="<image>"),
+)
+
+
+# copied from chatml template
+register_template(
+ name="llava_next_yi",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ stop_words=["<|im_end|>"],
+    mm_plugin=get_mm_plugin(name="llava_next", image_token="<image>"),
+)
+
+
+# copied from vicuna template
+register_template(
+ name="llava_next_video",
+ format_user=StringFormatter(slots=["USER: {{content}} ASSISTANT:"]),
+ default_system=(
+ "A chat between a curious user and an artificial intelligence assistant. "
+ "The assistant gives helpful, detailed, and polite answers to the user's questions."
+ ),
+    mm_plugin=get_mm_plugin(name="llava_next_video", image_token="<image>", video_token="<video>"),
+)
+
+
+# copied from mistral template
+register_template(
+ name="llava_next_video_mistral",
+ format_user=StringFormatter(slots=["[INST] {{content}}[/INST]"]),
+ format_assistant=StringFormatter(slots=[" {{content}}", {"eos_token"}]),
+ format_system=StringFormatter(slots=["{{content}}\n\n"]),
+ format_function=FunctionFormatter(slots=["[TOOL_CALLS] {{content}}", {"eos_token"}], tool_format="mistral"),
+ format_observation=StringFormatter(slots=["""[TOOL_RESULTS] {"content": {{content}}}[/TOOL_RESULTS]"""]),
+ format_tools=ToolFormatter(tool_format="mistral"),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+    mm_plugin=get_mm_plugin(name="llava_next_video", image_token="<image>", video_token="<video>"),
+ template_class=Llama2Template,
+)
+
+
+# copied from chatml template
+register_template(
+ name="llava_next_video_yi",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ stop_words=["<|im_end|>"],
+    mm_plugin=get_mm_plugin(name="llava_next_video", image_token="<image>", video_token="<video>"),
+)
+
+
+# copied from chatml template
+register_template(
+ name="marco",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_observation=StringFormatter(slots=["<|im_start|>tool\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ default_system=(
+ "你是一个经过良好训练的AI助手,你的名字是Marco-o1."
+ "由阿里国际数字商业集团的AI Business创造.\n## 重要!!!!!\n"
+        "当你回答问题时,你的思考应该在<Thought>内完成,<Output>内输出你的结果。\n"
+        "<Thought>应该尽可能是英文,但是有2个特例,一个是对原文中的引用,另一个是是数学应该使用markdown格式,<Output>内的输出需要遵循用户输入的语言。\n"
+ ),
+ stop_words=["<|im_end|>"],
+)
+
+
+# copied from qwen template
+register_template(
+ name="mimo",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_function=FunctionFormatter(slots=["{{content}}<|im_end|>\n"], tool_format="qwen"),
+    format_observation=StringFormatter(
+        slots=["<|im_start|>user\n<tool_response>\n{{content}}\n</tool_response><|im_end|>\n<|im_start|>assistant\n"]
+    ),
+ format_tools=ToolFormatter(tool_format="qwen"),
+ default_system="You are a helpful assistant.",
+ stop_words=["<|im_end|>"],
+ replace_eos=True,
+ template_class=ReasoningTemplate,
+)
+
+
+# copied from qwen2vl
+register_template(
+ name="mimo_vl",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_function=FunctionFormatter(slots=["{{content}}<|im_end|>\n"], tool_format="qwen"),
+    format_observation=StringFormatter(
+        slots=["<|im_start|>user\n<tool_response>\n{{content}}\n</tool_response><|im_end|>\n<|im_start|>assistant\n"]
+    ),
+ format_tools=ToolFormatter(tool_format="qwen"),
+ default_system="You are MiMo, an AI assistant developed by Xiaomi.",
+ stop_words=["<|im_end|>"],
+ replace_eos=True,
+ mm_plugin=get_mm_plugin(name="qwen2_vl", image_token="<|image_pad|>", video_token="<|video_pad|>"),
+ template_class=ReasoningTemplate,
+)
+
+
+# copied from chatml template
+register_template(
+ name="minicpm_v",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ stop_words=["<|im_end|>"],
+ default_system="You are a helpful assistant.",
+    mm_plugin=get_mm_plugin(name="minicpm_v", image_token="<image>", video_token="<video>"),
+)
+
+
+# copied from minicpm_v template
+register_template(
+ name="minicpm_o",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ stop_words=["<|im_end|>"],
+ default_system="You are Qwen, created by Alibaba Cloud. You are a helpful assistant.",
+    mm_plugin=get_mm_plugin(name="minicpm_v", image_token="<image>", video_token="<video>", audio_token="<audio>"),
+)
+
+
+# mistral tokenizer v3 tekken
+register_template(
+ name="ministral",
+ format_user=StringFormatter(slots=["[INST]{{content}}[/INST]"]),
+ format_system=StringFormatter(slots=["{{content}}\n\n"]),
+ format_function=FunctionFormatter(slots=["[TOOL_CALLS]{{content}}", {"eos_token"}], tool_format="mistral"),
+ format_observation=StringFormatter(slots=["""[TOOL_RESULTS]{"content": {{content}}}[/TOOL_RESULTS]"""]),
+ format_tools=ToolFormatter(tool_format="mistral"),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ template_class=Llama2Template,
+)
+
+
+# mistral tokenizer v3
+register_template(
+ name="mistral",
+ format_user=StringFormatter(slots=["[INST] {{content}}[/INST]"]),
+ format_assistant=StringFormatter(slots=[" {{content}}", {"eos_token"}]),
+ format_system=StringFormatter(slots=["{{content}}\n\n"]),
+ format_function=FunctionFormatter(slots=["[TOOL_CALLS] {{content}}", {"eos_token"}], tool_format="mistral"),
+ format_observation=StringFormatter(slots=["""[TOOL_RESULTS] {"content": {{content}}}[/TOOL_RESULTS]"""]),
+ format_tools=ToolFormatter(tool_format="mistral"),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ template_class=Llama2Template,
+)
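+
+
+# Illustrative example (not part of the registry): with the "mistral" template
+# above, a single-turn chat renders as
+#   "<s>[INST] Hi[/INST] Hello!</s>"
+# assuming "<s>"/"</s>" are the tokenizer's BOS/EOS tokens; the leading space
+# before the assistant reply comes from format_assistant's " {{content}}" slot.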
+
+
+# mistral tokenizer v7 tekken (copied from ministral)
+register_template(
+ name="mistral_small",
+ format_user=StringFormatter(slots=["[INST]{{content}}[/INST]"]),
+ format_system=StringFormatter(slots=["[SYSTEM_PROMPT]{{content}}[/SYSTEM_PROMPT]"]),
+ format_function=FunctionFormatter(slots=["[TOOL_CALLS]{{content}}", {"eos_token"}], tool_format="mistral"),
+ format_observation=StringFormatter(slots=["""[TOOL_RESULTS]{"content": {{content}}}[/TOOL_RESULTS]"""]),
+ format_tools=ToolFormatter(tool_format="mistral"),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ mm_plugin=get_mm_plugin(name="pixtral", image_token="[IMG]"),
+)
+
+
+register_template(
+ name="olmo",
+ format_user=StringFormatter(slots=["<|user|>\n{{content}}<|assistant|>\n"]),
+ format_prefix=EmptyFormatter(slots=[{"eos_token"}]),
+)
+
+
+register_template(
+ name="openchat",
+ format_user=StringFormatter(slots=["GPT4 Correct User: {{content}}", {"eos_token"}, "GPT4 Correct Assistant:"]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+)
+
+
+register_template(
+ name="openchat-3.6",
+ format_user=StringFormatter(
+ slots=[
+ (
+ "<|start_header_id|>GPT4 Correct User<|end_header_id|>\n\n{{content}}<|eot_id|>"
+ "<|start_header_id|>GPT4 Correct Assistant<|end_header_id|>\n\n"
+ )
+ ]
+ ),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ stop_words=["<|eot_id|>"],
+)
+
+
+# copied from chatml template
+register_template(
+ name="opencoder",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_observation=StringFormatter(slots=["<|im_start|>tool\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ default_system="You are OpenCoder, created by OpenCoder Team.",
+ stop_words=["<|im_end|>"],
+)
+
+
+register_template(
+ name="orion",
+ format_user=StringFormatter(slots=["Human: {{content}}\n\nAssistant: ", {"eos_token"}]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+)
+
+
+register_template(
+ name="paligemma",
+ format_user=StringFormatter(slots=["{{content}}\n"]),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ mm_plugin=get_mm_plugin(name="paligemma", image_token="<image>"),
+ template_class=Llama2Template,
+)
+
+
+# copied from gemma template
+register_template(
+ name="paligemma_chat",
+ format_user=StringFormatter(slots=["<start_of_turn>user\n{{content}}<end_of_turn>\n<start_of_turn>model\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<end_of_turn>\n"]),
+ format_observation=StringFormatter(
+ slots=["<start_of_turn>tool\n{{content}}<end_of_turn>\n<start_of_turn>model\n"]
+ ),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ stop_words=["<end_of_turn>"],
+ replace_eos=True,
+ mm_plugin=get_mm_plugin(name="paligemma", image_token="<image>"),
+ template_class=Llama2Template,
+)
+
+
+register_template(
+ name="phi",
+ format_user=StringFormatter(slots=["<|user|>\n{{content}}<|end|>\n<|assistant|>\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|end|>\n"]),
+ format_system=StringFormatter(slots=["<|system|>\n{{content}}<|end|>\n"]),
+ stop_words=["<|end|>"],
+ replace_eos=True,
+)
+
+
+register_template(
+ name="phi_small",
+ format_user=StringFormatter(slots=["<|user|>\n{{content}}<|end|>\n<|assistant|>\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|end|>\n"]),
+ format_system=StringFormatter(slots=["<|system|>\n{{content}}<|end|>\n"]),
+ format_prefix=EmptyFormatter(slots=[{"<|endoftext|>"}]),
+ stop_words=["<|end|>"],
+ replace_eos=True,
+)
+
+
+register_template(
+ name="phi4",
+ format_user=StringFormatter(
+ slots=["<|im_start|>user<|im_sep|>{{content}}<|im_end|><|im_start|>assistant<|im_sep|>"]
+ ),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>"]),
+ format_system=StringFormatter(slots=["<|im_start|>system<|im_sep|>{{content}}<|im_end|>"]),
+ stop_words=["<|im_end|>"],
+ replace_eos=True,
+)
+
+
+# copied from ministral template
+register_template(
+ name="pixtral",
+ format_user=StringFormatter(slots=["[INST]{{content}}[/INST]"]),
+ format_system=StringFormatter(slots=["{{content}}\n\n"]),
+ format_function=FunctionFormatter(slots=["[TOOL_CALLS]{{content}}", {"eos_token"}], tool_format="mistral"),
+ format_observation=StringFormatter(slots=["""[TOOL_RESULTS]{"content": {{content}}}[/TOOL_RESULTS]"""]),
+ format_tools=ToolFormatter(tool_format="mistral"),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ mm_plugin=get_mm_plugin(name="pixtral", image_token="[IMG]"),
+ template_class=Llama2Template,
+)
+
+
+# copied from chatml template
+register_template(
+ name="qwen",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_function=FunctionFormatter(slots=["{{content}}<|im_end|>\n"], tool_format="qwen"),
+ format_observation=StringFormatter(
+ slots=["<|im_start|>user\n<tool_response>\n{{content}}\n</tool_response><|im_end|>\n<|im_start|>assistant\n"]
+ ),
+ format_tools=ToolFormatter(tool_format="qwen"),
+ default_system="You are Qwen, created by Alibaba Cloud. You are a helpful assistant.",
+ stop_words=["<|im_end|>"],
+ replace_eos=True,
+)
+
+
+# copied from qwen template
+register_template(
+ name="qwen3",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_function=FunctionFormatter(slots=["{{content}}<|im_end|>\n"], tool_format="qwen"),
+ format_observation=StringFormatter(
+ slots=["<|im_start|>user\n<tool_response>\n{{content}}\n</tool_response><|im_end|>\n<|im_start|>assistant\n"]
+ ),
+ format_tools=ToolFormatter(tool_format="qwen"),
+ stop_words=["<|im_end|>"],
+ replace_eos=True,
+ template_class=ReasoningTemplate,
+)
+
+
+# copied from qwen template
+register_template(
+ name="qwen3_nothink",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_function=FunctionFormatter(slots=["{{content}}<|im_end|>\n"], tool_format="qwen"),
+ format_observation=StringFormatter(
+ slots=["<|im_start|>user\n<tool_response>\n{{content}}\n</tool_response><|im_end|>\n<|im_start|>assistant\n"]
+ ),
+ format_tools=ToolFormatter(tool_format="qwen"),
+ stop_words=["<|im_end|>"],
+ replace_eos=True,
+)
+
+
+# copied from chatml template
+register_template(
+ name="qwen2_audio",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ default_system="You are a helpful assistant.",
+ stop_words=["<|im_end|>"],
+ replace_eos=True,
+ mm_plugin=get_mm_plugin(name="qwen2_audio", audio_token="<|AUDIO|>"),
+)
+
+
+# copied from qwen template
+register_template(
+ name="qwen2_omni",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_function=FunctionFormatter(slots=["{{content}}<|im_end|>\n"], tool_format="qwen"),
+ format_observation=StringFormatter(
+ slots=["<|im_start|>user\n<tool_response>\n{{content}}\n</tool_response><|im_end|>\n<|im_start|>assistant\n"]
+ ),
+ format_tools=ToolFormatter(tool_format="qwen"),
+ default_system="You are a helpful assistant.",
+ stop_words=["<|im_end|>"],
+ replace_eos=True,
+ mm_plugin=get_mm_plugin(
+ name="qwen2_omni", audio_token="<|AUDIO|>", image_token="<|IMAGE|>", video_token="<|VIDEO|>"
+ ),
+)
+
+# copied from qwen template
+register_template(
+ name="qwen2_vl",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ format_function=FunctionFormatter(slots=["{{content}}<|im_end|>\n"], tool_format="qwen"),
+ format_observation=StringFormatter(
+ slots=["<|im_start|>user\n<tool_response>\n{{content}}\n</tool_response><|im_end|>\n<|im_start|>assistant\n"]
+ ),
+ format_tools=ToolFormatter(tool_format="qwen"),
+ default_system="You are a helpful assistant.",
+ stop_words=["<|im_end|>"],
+ replace_eos=True,
+ mm_plugin=get_mm_plugin(name="qwen2_vl", image_token="<|image_pad|>", video_token="<|video_pad|>"),
+)
+
+
+register_template(
+ name="sailor",
+ format_user=StringFormatter(slots=["<|im_start|>question\n{{content}}<|im_end|>\n<|im_start|>answer\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ default_system=(
+ "You are an AI assistant named Sailor created by Sea AI Lab. "
+ "Your answer should be friendly, unbiased, faithful, informative and detailed."
+ ),
+ stop_words=["<|im_end|>"],
+)
+
+
+register_template(
+ name="seed_coder",
+ format_user=StringFormatter(
+ slots=[{"bos_token"}, "user\n{{content}}", {"eos_token"}, {"bos_token"}, "assistant\n"]
+ ),
+ format_system=StringFormatter(slots=[{"bos_token"}, "system\n{{content}}", {"eos_token"}]),
+ default_system=(
+ "You are an AI programming assistant, utilizing the Seed-Coder model, developed by ByteDance Seed, "
+ "and you only answer questions related to computer science. For politically sensitive questions, "
+ "security and privacy issues, and other non-computer science questions, you will refuse to answer.\n\n"
+ ),
+)
+
+
+# copied from llama3 template
+register_template(
+ name="skywork_o1",
+ format_user=StringFormatter(
+ slots=[
+ (
+ "<|start_header_id|>user<|end_header_id|>\n\n{{content}}<|eot_id|>"
+ "<|start_header_id|>assistant<|end_header_id|>\n\n"
+ )
+ ]
+ ),
+ format_assistant=StringFormatter(slots=["{{content}}<|eot_id|>"]),
+ format_system=StringFormatter(slots=["<|start_header_id|>system<|end_header_id|>\n\n{{content}}<|eot_id|>"]),
+ format_function=FunctionFormatter(slots=["{{content}}<|eot_id|>"], tool_format="llama3"),
+ format_observation=StringFormatter(
+ slots=[
+ (
+ "<|start_header_id|>ipython<|end_header_id|>\n\n{{content}}<|eot_id|>"
+ "<|start_header_id|>assistant<|end_header_id|>\n\n"
+ )
+ ]
+ ),
+ format_tools=ToolFormatter(tool_format="llama3"),
+ format_prefix=EmptyFormatter(slots=[{"bos_token"}]),
+ default_system=(
+ "You are Skywork-o1, a thinking model developed by Skywork AI, specializing in solving complex problems "
+ "involving mathematics, coding, and logical reasoning through deep thought. When faced with a user's request, "
+ "you first engage in a lengthy and in-depth thinking process to explore possible solutions to the problem. "
+ "After completing your thoughts, you then provide a detailed explanation of the solution process "
+ "in your response."
+ ),
+ stop_words=["<|eot_id|>", "<|eom_id|>"],
+)
+
+
+register_template(
+ name="smollm",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ stop_words=["<|im_end|>"],
+)
+
+
+register_template(
+ name="smollm2",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ stop_words=["<|im_end|>"],
+ default_system="You are a helpful AI assistant named SmolLM, trained by Hugging Face.",
+)
+
+
+register_template(
+ name="solar",
+ format_user=StringFormatter(slots=["### User:\n{{content}}\n\n### Assistant:\n"]),
+ format_system=StringFormatter(slots=["### System:\n{{content}}\n\n"]),
+ efficient_eos=True,
+)
+
+
+register_template(
+ name="starchat",
+ format_user=StringFormatter(slots=["<|user|>\n{{content}}<|end|>\n<|assistant|>"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|end|>\n"]),
+ format_system=StringFormatter(slots=["<|system|>\n{{content}}<|end|>\n"]),
+ stop_words=["<|end|>"],
+)
+
+
+register_template(
+ name="telechat",
+ format_user=StringFormatter(slots=["<_user>{{content}}<_bot>"]),
+ format_system=StringFormatter(slots=["<_system>{{content}}<_end>"]),
+)
+
+
+register_template(
+ name="telechat2",
+ format_user=StringFormatter(slots=["<_user>{{content}}<_bot>"]),
+ format_system=StringFormatter(slots=["<_system>{{content}}"]),
+ default_system=(
+ "你是中国电信星辰语义大模型,英文名是TeleChat,你是由中电信人工智能科技有限公司和中国电信人工智能研究院(TeleAI)研发的人工智能助手。"
+ ),
+)
+
+
+register_template(
+ name="vicuna",
+ format_user=StringFormatter(slots=["USER: {{content}} ASSISTANT:"]),
+ default_system=(
+ "A chat between a curious user and an artificial intelligence assistant. "
+ "The assistant gives helpful, detailed, and polite answers to the user's questions."
+ ),
+ replace_jinja_template=True,
+)
+
+
+register_template(
+ name="video_llava",
+ format_user=StringFormatter(slots=["USER: {{content}} ASSISTANT:"]),
+ default_system=(
+ "A chat between a curious user and an artificial intelligence assistant. "
+ "The assistant gives helpful, detailed, and polite answers to the user's questions."
+ ),
+ mm_plugin=get_mm_plugin(name="video_llava", image_token="<image>", video_token="<video>"),
+)
+
+
+register_template(
+ name="xuanyuan",
+ format_user=StringFormatter(slots=["Human: {{content}} Assistant:"]),
+ default_system=(
+ "以下是用户和人工智能助手之间的对话。用户以Human开头,人工智能助手以Assistant开头,"
+ "会对人类提出的问题给出有帮助、高质量、详细和礼貌的回答,并且总是拒绝参与与不道德、"
+ "不安全、有争议、政治敏感等相关的话题、问题和指示。\n"
+ ),
+)
+
+
+register_template(
+ name="xverse",
+ format_user=StringFormatter(slots=["Human: {{content}}\n\nAssistant: "]),
+)
+
+
+register_template(
+ name="yayi",
+ format_user=StringFormatter(slots=[{"token": "<|Human|>"}, ":\n{{content}}\n\n", {"token": "<|YaYi|>"}, ":"]),
+ format_assistant=StringFormatter(slots=["{{content}}\n\n"]),
+ format_system=StringFormatter(slots=[{"token": "<|System|>"}, ":\n{{content}}\n\n"]),
+ default_system=(
+ "You are a helpful, respectful and honest assistant named YaYi "
+ "developed by Beijing Wenge Technology Co.,Ltd. "
+ "Always answer as helpfully as possible, while being safe. "
+ "Your answers should not include any harmful, unethical, "
+ "racist, sexist, toxic, dangerous, or illegal content. "
+ "Please ensure that your responses are socially unbiased and positive in nature.\n\n"
+ "If a question does not make any sense, or is not factually coherent, "
+ "explain why instead of answering something not correct. "
+ "If you don't know the answer to a question, please don't share false information."
+ ),
+ stop_words=["<|End|>"],
+)
+
+
+# copied from chatml template
+register_template(
+ name="yi",
+ format_user=StringFormatter(slots=["<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"]),
+ format_assistant=StringFormatter(slots=["{{content}}<|im_end|>\n"]),
+ format_system=StringFormatter(slots=["<|im_start|>system\n{{content}}<|im_end|>\n"]),
+ stop_words=["<|im_end|>"],
+)
+
+
+register_template(
+ name="yi_vl",
+ format_user=StringFormatter(slots=["### Human: {{content}}\n### Assistant:"]),
+ format_assistant=StringFormatter(slots=["{{content}}\n"]),
+ default_system=(
+ "This is a chat between an inquisitive human and an AI assistant. "
+ "Assume the role of the AI assistant. Read all the images carefully, "
+ "and respond to the human's questions with informative, helpful, detailed and polite answers. "
+ "这是一个好奇的人类和一个人工智能助手之间的对话。假设你扮演这个AI助手的角色。"
+ "仔细阅读所有的图像,并对人类的问题做出信息丰富、有帮助、详细的和礼貌的回答。\n\n"
+ ),
+ stop_words=["###"],
+ efficient_eos=True,
+ mm_plugin=get_mm_plugin(name="llava", image_token="<image>"),
+)
+
+
+register_template(
+ name="yuan",
+ format_user=StringFormatter(slots=["{{content}}", {"token": "<sep>"}]),
+ format_assistant=StringFormatter(slots=["{{content}}<eod>\n"]),
+ stop_words=["<eod>"],
+)
+
+
+register_template(
+ name="zephyr",
+ format_user=StringFormatter(slots=["<|user|>\n{{content}}", {"eos_token"}, "<|assistant|>\n"]),
+ format_system=StringFormatter(slots=["<|system|>\n{{content}}", {"eos_token"}]),
+ default_system="You are Zephyr, a helpful assistant.",
+)
+
+
+register_template(
+ name="ziya",
+ format_user=StringFormatter(slots=["<human>:{{content}}\n<bot>:"]),
+ format_assistant=StringFormatter(slots=["{{content}}\n"]),
+)
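The templates above build prompts from slot strings in which `{{content}}` is a placeholder. As a standalone sketch (not the library's `StringFormatter`, which also substitutes special-token dicts such as `{"bos_token"}`), a ChatML-style expansion works like this:

```python
def render_slot(slot: str, content: str) -> str:
    # Replace the {{content}} placeholder verbatim; token dicts are not handled here.
    return slot.replace("{{content}}", content)


# Slot strings copied from the chatml-style templates above.
user_slot = "<|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n"
assistant_slot = "{{content}}<|im_end|>\n"

prompt = render_slot(user_slot, "Hello") + render_slot(assistant_slot, "Hi!")
```

`prompt` then holds one full user/assistant turn in ChatML form, ending with the `<|im_end|>` stop word used by these templates.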
diff --git a/src/train_sft/src/llamafactory/data/tool_utils.py b/src/train_sft/src/llamafactory/data/tool_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..f90f47c11a6fa89dae01903ac9a561fb6cfa2c89
--- /dev/null
+++ b/src/train_sft/src/llamafactory/data/tool_utils.py
@@ -0,0 +1,365 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import re
+from abc import ABC, abstractmethod
+from dataclasses import dataclass
+from datetime import datetime
+from typing import Any, NamedTuple, Union
+
+from typing_extensions import override
+
+
+class FunctionCall(NamedTuple):
+ name: str
+ arguments: str
+
+
+DEFAULT_TOOL_PROMPT = (
+ "You have access to the following tools:\n{tool_text}"
+ "Use the following format if using a tool:\n"
+ "```\n"
+ "Action: tool name (one of [{tool_names}])\n"
+ "Action Input: the input to the tool, in a JSON format representing the kwargs "
+ """(e.g. ```{{"input": "hello world", "num_beams": 5}}```)\n"""
+ "```\n"
+)
+
+GLM4_TOOL_PROMPT = (
+ "你是一个名为 ChatGLM 的人工智能助手。你是基于智谱 AI 公司训练的语言模型 GLM-4 模型开发的,"
+ "你的任务是针对用户的问题和要求提供适当的答复和支持。\n\n# 可用工具{tool_text}"
+)
+
+GLM4_MOE_TOOL_PROMPT = (
+ "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\n"
+ "You are provided with function signatures within <tools></tools> XML tags:\n<tools>{tool_text}"
+ "\n</tools>\n\nFor each function call, output the function name and arguments within the following XML format:"
+ "\n<tool_call>{{function-name}}"
+ "\n<arg_key>{{arg-key-1}}</arg_key>"
+ "\n<arg_value>{{arg-value-1}}</arg_value>"
+ "\n<arg_key>{{arg-key-2}}</arg_key>"
+ "\n<arg_value>{{arg-value-2}}</arg_value>"
+ "\n...\n</tool_call>\n"
+)
+
+LLAMA3_TOOL_PROMPT = (
+ "Cutting Knowledge Date: December 2023\nToday Date: {date}\n\n"
+ "You have access to the following functions. To call a function, please respond with JSON for a function call. "
+ """Respond in the format {{"name": function name, "parameters": dictionary of argument name and its value}}. """
+ "Do not use variables.\n\n{tool_text}"
+)
+
+QWEN_TOOL_PROMPT = (
+ "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\n"
+ "You are provided with function signatures within <tools></tools> XML tags:\n<tools>{tool_text}"
+ "\n</tools>\n\nFor each function call, return a json object with function name and arguments within "
+ """<tool_call></tool_call> XML tags:\n<tool_call>\n{{"name": <function-name>, """
+ """"arguments": <args-json-object>}}\n</tool_call>"""
+)
+
+
+@dataclass
+class ToolUtils(ABC):
+ """Base class for tool utilities."""
+
+ @staticmethod
+ @abstractmethod
+ def tool_formatter(tools: list[dict[str, Any]]) -> str:
+ r"""Generate the system message describing all the available tools."""
+ ...
+
+ @staticmethod
+ @abstractmethod
+ def function_formatter(functions: list["FunctionCall"]) -> str:
+ r"""Generate the assistant message including all the tool calls."""
+ ...
+
+ @staticmethod
+ @abstractmethod
+ def tool_extractor(content: str) -> Union[str, list["FunctionCall"]]:
+ r"""Extract all the function calls from the assistant message.
+
+ It should be an inverse function of `function_formatter`.
+ """
+ ...
+
+
+class DefaultToolUtils(ToolUtils):
+ r"""Default tool using template."""
+
+ @override
+ @staticmethod
+ def tool_formatter(tools: list[dict[str, Any]]) -> str:
+ tool_text = ""
+ tool_names = []
+ for tool in tools:
+ tool = tool.get("function", "") if tool.get("type") == "function" else tool
+ param_text = ""
+ for name, param in tool["parameters"]["properties"].items():
+ required, enum, items = "", "", ""
+ if name in tool["parameters"].get("required", []):
+ required = ", required"
+
+ if param.get("enum", None):
+ enum = ", should be one of [{}]".format(", ".join(param["enum"]))
+
+ if param.get("items", None):
+ items = ", where each item should be {}".format(param["items"].get("type", ""))
+
+ param_text += " - {name} ({type}{required}): {desc}{enum}{items}\n".format(
+ name=name,
+ type=param.get("type", ""),
+ required=required,
+ desc=param.get("description", ""),
+ enum=enum,
+ items=items,
+ )
+
+ tool_text += "> Tool Name: {name}\nTool Description: {desc}\nTool Args:\n{args}\n".format(
+ name=tool["name"], desc=tool.get("description", ""), args=param_text
+ )
+ tool_names.append(tool["name"])
+
+ return DEFAULT_TOOL_PROMPT.format(tool_text=tool_text, tool_names=", ".join(tool_names))
+
+ @override
+ @staticmethod
+ def function_formatter(functions: list["FunctionCall"]) -> str:
+ return "\n".join([f"Action: {name}\nAction Input: {arguments}" for name, arguments in functions])
+
+ @override
+ @staticmethod
+ def tool_extractor(content: str) -> Union[str, list["FunctionCall"]]:
+ regex = re.compile(r"Action:\s*([a-zA-Z0-9_]+)\s*Action Input:\s*(.+?)(?=\s*Action:|\s*$)", re.DOTALL)
+ action_match: list[tuple[str, str]] = re.findall(regex, content)
+ if not action_match:
+ return content
+
+ results = []
+ for match in action_match:
+ tool_name = match[0].strip()
+ tool_input = match[1].strip().strip('"').strip("```")
+ try:
+ arguments = json.loads(tool_input)
+ results.append(FunctionCall(tool_name, json.dumps(arguments, ensure_ascii=False)))
+ except json.JSONDecodeError:
+ return content
+
+ return results
+
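`DefaultToolUtils` round-trips tool calls through the plain-text `Action:`/`Action Input:` convention. A self-contained sketch of that round trip (re-implemented here so it runs without the surrounding module):

```python
import json
import re
from typing import NamedTuple


class FunctionCall(NamedTuple):
    name: str
    arguments: str


def format_calls(functions: list[FunctionCall]) -> str:
    # Mirrors function_formatter: one Action/Action Input pair per call.
    return "\n".join(f"Action: {name}\nAction Input: {arguments}" for name, arguments in functions)


def extract_calls(content: str):
    # Mirrors tool_extractor: return the raw content when no well-formed call is found.
    regex = re.compile(r"Action:\s*([a-zA-Z0-9_]+)\s*Action Input:\s*(.+?)(?=\s*Action:|\s*$)", re.DOTALL)
    matches = re.findall(regex, content)
    if not matches:
        return content

    results = []
    for name, raw in matches:
        try:
            arguments = json.loads(raw.strip().strip('"').strip("`"))
        except json.JSONDecodeError:
            return content
        results.append(FunctionCall(name.strip(), json.dumps(arguments, ensure_ascii=False)))
    return results


call = FunctionCall("get_weather", '{"city": "Paris"}')
text = format_calls([call])
calls = extract_calls(text)
```

Because the arguments are parsed and re-serialized with `json.dumps`, the extractor is an inverse of the formatter for canonically formatted JSON.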
+
+class GLM4ToolUtils(ToolUtils):
+ r"""GLM-4 tool using template."""
+
+ @override
+ @staticmethod
+ def tool_formatter(tools: list[dict[str, Any]]) -> str:
+ tool_text = ""
+ for tool in tools:
+ tool = tool.get("function", "") if tool.get("type") == "function" else tool
+ tool_text += "\n\n## {name}\n\n{body}\n在调用上述函数时,请使用 Json 格式表示调用的参数。".format(
+ name=tool["name"], body=json.dumps(tool, indent=4, ensure_ascii=False)
+ )
+
+ return GLM4_TOOL_PROMPT.format(tool_text=tool_text)
+
+ @override
+ @staticmethod
+ def function_formatter(functions: list["FunctionCall"]) -> str:
+ if len(functions) > 1:
+ raise ValueError("GLM-4 does not support parallel functions.")
+
+ return f"{functions[0].name}\n{functions[0].arguments}"
+
+ @override
+ @staticmethod
+ def tool_extractor(content: str) -> Union[str, list["FunctionCall"]]:
+ if "\n" not in content:
+ return content
+
+ tool_name, tool_input = content.split("\n", maxsplit=1)
+ try:
+ arguments = json.loads(tool_input.strip())
+ except json.JSONDecodeError:
+ return content
+
+ return [FunctionCall(tool_name, json.dumps(arguments, ensure_ascii=False))]
+
+
+class Llama3ToolUtils(ToolUtils):
+ r"""Llama 3.x tool using template with `tools_in_user_message=False`.
+
+ Reference: https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1/#json-based-tool-calling
+ """
+
+ @override
+ @staticmethod
+ def tool_formatter(tools: list[dict[str, Any]]) -> str:
+ date = datetime.now().strftime("%d %b %Y")
+ tool_text = ""
+ for tool in tools:
+ wrapped_tool = tool if tool.get("type") == "function" else {"type": "function", "function": tool}
+ tool_text += json.dumps(wrapped_tool, indent=4, ensure_ascii=False) + "\n\n"
+
+ return LLAMA3_TOOL_PROMPT.format(date=date, tool_text=tool_text)
+
+ @override
+ @staticmethod
+ def function_formatter(functions: list["FunctionCall"]) -> str:
+ function_objects = [{"name": name, "parameters": json.loads(arguments)} for name, arguments in functions]
+ return json.dumps(function_objects[0] if len(function_objects) == 1 else function_objects, ensure_ascii=False)
+
+ @override
+ @staticmethod
+ def tool_extractor(content: str) -> Union[str, list["FunctionCall"]]:
+ try:
+ tools = json.loads(content.strip())
+ except json.JSONDecodeError:
+ return content
+
+ tools = [tools] if not isinstance(tools, list) else tools
+ try:
+ return [FunctionCall(tool["name"], json.dumps(tool["parameters"], ensure_ascii=False)) for tool in tools]
+ except KeyError:
+ return content
+
+
+class MistralToolUtils(ToolUtils):
+ r"""Mistral v0.3 tool using template."""
+
+ @override
+ @staticmethod
+ def tool_formatter(tools: list[dict[str, Any]]) -> str:
+ wrapped_tools = []
+ for tool in tools:
+ wrapped_tools.append(tool if tool.get("type") == "function" else {"type": "function", "function": tool})
+
+ return "[AVAILABLE_TOOLS] " + json.dumps(wrapped_tools, ensure_ascii=False) + "[/AVAILABLE_TOOLS]"
+
+ @override
+ @staticmethod
+ def function_formatter(functions: list["FunctionCall"]) -> str:
+ return json.dumps(
+ [{"name": name, "arguments": json.loads(arguments)} for name, arguments in functions], ensure_ascii=False
+ )
+
+ @override
+ @staticmethod
+ def tool_extractor(content: str) -> Union[str, list["FunctionCall"]]:
+ try:
+ tools = json.loads(content.strip())
+ except json.JSONDecodeError:
+ return content
+
+ tools = [tools] if not isinstance(tools, list) else tools
+ try:
+ return [FunctionCall(tool["name"], json.dumps(tool["arguments"], ensure_ascii=False)) for tool in tools]
+ except KeyError:
+ return content
+
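`MistralToolUtils` serializes calls as a plain JSON list and parses them back. A minimal standalone round trip of the same convention:

```python
import json

# function_formatter output: a JSON list of {"name", "arguments"} objects.
calls = [{"name": "search", "arguments": {"query": "llama"}}]
serialized = json.dumps(calls, ensure_ascii=False)

# tool_extractor input: parse, normalize to a list, re-serialize each arguments dict.
parsed = json.loads(serialized)
parsed = parsed if isinstance(parsed, list) else [parsed]
extracted = [(tool["name"], json.dumps(tool["arguments"], ensure_ascii=False)) for tool in parsed]
```

The normalization step matters because a single call may be emitted as a bare object rather than a one-element list.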
+
+class QwenToolUtils(ToolUtils):
+ r"""Qwen 2.5 tool using template."""
+
+ @override
+ @staticmethod
+ def tool_formatter(tools: list[dict[str, Any]]) -> str:
+ tool_text = ""
+ for tool in tools:
+ wrapped_tool = tool if tool.get("type") == "function" else {"type": "function", "function": tool}
+ tool_text += "\n" + json.dumps(wrapped_tool, ensure_ascii=False)
+
+ return QWEN_TOOL_PROMPT.format(tool_text=tool_text)
+
+ @override
+ @staticmethod
+ def function_formatter(functions: list["FunctionCall"]) -> str:
+ function_texts = [
+ json.dumps({"name": name, "arguments": json.loads(arguments)}, ensure_ascii=False)
+ for name, arguments in functions
+ ]
+ return "\n".join([f"<tool_call>\n{text}\n</tool_call>" for text in function_texts])
+
+ @override
+ @staticmethod
+ def tool_extractor(content: str) -> Union[str, list["FunctionCall"]]:
+ regex = re.compile(r"<tool_call>(.+?)</tool_call>(?=\s*<tool_call>|\s*$)", re.DOTALL)
+ tool_match: list[str] = re.findall(regex, content)
+ if not tool_match:
+ return content
+
+ results = []
+ for tool in tool_match:
+ try:
+ tool = json.loads(tool.strip())
+ except json.JSONDecodeError:
+ return content
+
+ if "name" not in tool or "arguments" not in tool:
+ return content
+
+ results.append(FunctionCall(tool["name"], json.dumps(tool["arguments"], ensure_ascii=False)))
+
+ return results
+
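`QwenToolUtils` follows the Qwen 2.5 convention of wrapping each call's JSON object in `<tool_call>...</tool_call>` XML tags. A standalone round trip of that convention (assuming the assistant message contains only tool-call blocks):

```python
import json
import re

# One <tool_call> block per call, each holding a JSON object.
payload = {"name": "get_time", "arguments": {"tz": "UTC"}}
message = "<tool_call>\n" + json.dumps(payload, ensure_ascii=False) + "\n</tool_call>"

# Non-greedy match so multiple blocks in one message are split correctly.
regex = re.compile(r"<tool_call>(.+?)</tool_call>", re.DOTALL)
extracted = [json.loads(match.strip()) for match in regex.findall(message)]
```

Any block whose body is not valid JSON would raise here; the library code instead falls back to returning the raw content.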
+
+class GLM4MOEToolUtils(QwenToolUtils):
+ r"""GLM-4-MOE tool using template."""
+
+ @override
+ @staticmethod
+ def tool_formatter(tools: list[dict[str, Any]]) -> str:
+ tool_text = ""
+ for tool in tools:
+ wrapped_tool = tool if tool.get("type") == "function" else {"type": "function", "function": tool}
+ tool_text += "\n" + json.dumps(wrapped_tool, ensure_ascii=False)
+
+ return GLM4_MOE_TOOL_PROMPT.format(tool_text=tool_text)
+
+ @override
+ @staticmethod
+ def function_formatter(functions: list["FunctionCall"]) -> str:
+ function_json = [
+ {"func_name": name, "func_key_values": json.loads(arguments)} for name, arguments in functions
+ ]
+ function_texts = []
+ for func in function_json:
+ prompt = "\n<tool_call>" + func["func_name"]
+ for key, value in func["func_key_values"].items():
+ prompt += "\n<arg_key>" + key + "</arg_key>"
+ if not isinstance(value, str):
+ value = json.dumps(value, ensure_ascii=False)
+ prompt += "\n<arg_value>" + value + "</arg_value>"
+ prompt += "\n</tool_call>"
+ function_texts.append(prompt)
+
+ return "\n".join(function_texts)
+
+
+TOOLS = {
+ "default": DefaultToolUtils(),
+ "glm4": GLM4ToolUtils(),
+ "llama3": Llama3ToolUtils(),
+ "mistral": MistralToolUtils(),
+ "qwen": QwenToolUtils(),
+ "glm4_moe": GLM4MOEToolUtils(),
+}
+
+
+def get_tool_utils(name: str) -> "ToolUtils":
+ tool_utils = TOOLS.get(name, None)
+ if tool_utils is None:
+ raise ValueError(f"Tool utils `{name}` not found.")
+
+ return tool_utils
diff --git a/src/train_sft/src/llamafactory/eval/__init__.py b/src/train_sft/src/llamafactory/eval/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/src/train_sft/src/llamafactory/eval/evaluator.py b/src/train_sft/src/llamafactory/eval/evaluator.py
new file mode 100644
index 0000000000000000000000000000000000000000..7729c59bf413cb66054a3e06f2e42d6794cb495d
--- /dev/null
+++ b/src/train_sft/src/llamafactory/eval/evaluator.py
@@ -0,0 +1,158 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# This code is inspired by Dan Hendrycks' test library.
+# https://github.com/hendrycks/test/blob/master/evaluate_flan.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# MIT License
+#
+# Copyright (c) 2020 Dan Hendrycks
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in all
+# copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+
+import json
+import os
+from typing import TYPE_CHECKING, Any, Optional
+
+import numpy as np
+import torch
+from datasets import load_dataset
+from tqdm import tqdm, trange
+from transformers.utils import cached_file
+
+from ..data import get_template_and_fix_tokenizer
+from ..extras.constants import CHOICES, SUBJECTS
+from ..hparams import get_eval_args
+from ..model import load_model, load_tokenizer
+from .template import get_eval_template
+
+
+if TYPE_CHECKING:
+ from numpy.typing import NDArray
+
+
+class Evaluator:
+ def __init__(self, args: Optional[dict[str, Any]] = None) -> None:
+ self.model_args, self.data_args, self.eval_args, finetuning_args = get_eval_args(args)
+ self.tokenizer = load_tokenizer(self.model_args)["tokenizer"]
+ self.tokenizer.padding_side = "right" # avoid overflow issue in batched inference for llama2
+ self.template = get_template_and_fix_tokenizer(self.tokenizer, self.data_args)
+ self.model = load_model(self.tokenizer, self.model_args, finetuning_args)
+ self.eval_template = get_eval_template(self.eval_args.lang)
+ self.choice_inputs = [self.tokenizer.encode(ch, add_special_tokens=False)[-1] for ch in CHOICES]
+
+ @torch.inference_mode()
+ def batch_inference(self, batch_input: dict[str, "torch.Tensor"]) -> list[str]:
+ logits = self.model(**batch_input).logits
+ lengths = torch.sum(batch_input["attention_mask"], dim=-1)
+ word_probs = torch.stack([logits[i, lengths[i] - 1] for i in range(len(lengths))], dim=0)
+ choice_probs = torch.nn.functional.softmax(word_probs[:, self.choice_inputs], dim=-1).detach()
+ return [chr(ord("A") + offset.item()) for offset in torch.argmax(choice_probs, dim=-1)]
+
+ def eval(self) -> None:
+ eval_task = self.eval_args.task.split("_")[0]
+ eval_split = self.eval_args.task.split("_")[1]
+
+ mapping = cached_file(
+ path_or_repo_id=os.path.join(self.eval_args.task_dir, eval_task),
+ filename="mapping.json",
+ cache_dir=self.model_args.cache_dir,
+ token=self.model_args.hf_hub_token,
+ )
+
+        with open(mapping, encoding="utf-8") as f:
+            categories: dict[str, dict[str, str]] = json.load(f)
+
+        category_corrects = {subj: np.array([], dtype="bool") for subj in SUBJECTS}
+        pbar = tqdm(categories.keys(), desc="Processing subjects", position=0)
+        results = {}
+        for subject in pbar:
+            dataset = load_dataset(
+                path=os.path.join(self.eval_args.task_dir, eval_task),
+                name=subject,
+                cache_dir=self.model_args.cache_dir,
+                download_mode=self.eval_args.download_mode,
+                token=self.model_args.hf_hub_token,
+                trust_remote_code=self.model_args.trust_remote_code,
+            )
+            pbar.set_postfix_str(categories[subject]["name"])
+            inputs, outputs, labels = [], [], []
+            for i in trange(len(dataset[eval_split]), desc="Formatting batches", position=1, leave=False):
+                support_set = (
+                    dataset["train"].shuffle().select(range(min(self.eval_args.n_shot, len(dataset["train"]))))
+                )
+                messages = self.eval_template.format_example(
+                    target_data=dataset[eval_split][i],
+                    support_set=support_set,
+                    subject_name=categories[subject]["name"],
+                )
+
+                input_ids, _ = self.template.encode_oneturn(tokenizer=self.tokenizer, messages=messages)
+                inputs.append({"input_ids": input_ids, "attention_mask": [1] * len(input_ids)})
+                labels.append(messages[-1]["content"])
+
+            for i in trange(
+                0, len(inputs), self.eval_args.batch_size, desc="Predicting batches", position=1, leave=False
+            ):
+                batch_input = self.tokenizer.pad(
+                    inputs[i : i + self.eval_args.batch_size], return_attention_mask=True, return_tensors="pt"
+                ).to(self.model.device)
+                preds = self.batch_inference(batch_input)
+                outputs += preds
+
+            corrects = np.array(outputs) == np.array(labels)
+            category_name = categories[subject]["category"]
+            category_corrects[category_name] = np.concatenate([category_corrects[category_name], corrects], axis=0)
+            category_corrects["Average"] = np.concatenate([category_corrects["Average"], corrects], axis=0)
+            results[subject] = {str(i): outputs[i] for i in range(len(outputs))}
+
+ pbar.close()
+ self._save_results(category_corrects, results)
+
+    def _save_results(self, category_corrects: dict[str, "NDArray"], results: dict[str, dict[str, str]]) -> None:
+ score_info = "\n".join(
+ [
+ f"{category_name:>15}: {100 * np.mean(category_correct):.2f}"
+ for category_name, category_correct in category_corrects.items()
+ if len(category_correct)
+ ]
+ )
+ print(score_info)
+ if self.eval_args.save_dir is not None:
+ os.makedirs(self.eval_args.save_dir, exist_ok=False)
+ with open(os.path.join(self.eval_args.save_dir, "results.json"), "w", encoding="utf-8", newline="\n") as f:
+ json.dump(results, f, indent=2)
+
+ with open(os.path.join(self.eval_args.save_dir, "results.log"), "w", encoding="utf-8", newline="\n") as f:
+ f.write(score_info)
+
+
+def run_eval() -> None:
+ Evaluator().eval()
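The `batch_inference` method above scores each multiple-choice question by reading the logits at the last prompt token and comparing them only at the token ids of the answer letters "A"–"D". A minimal pure-Python sketch of that selection step (the toy logits and token ids below are invented for illustration):

```python
def pick_choice(last_token_logits: list[float], choice_token_ids: list[int]) -> str:
    """Pick the answer letter whose token id carries the largest logit.

    Softmax is monotonic, so restricting the logits to the candidate
    tokens and taking the argmax selects the same answer as comparing
    the normalized probabilities, as the evaluator does.
    """
    choice_logits = [last_token_logits[tok] for tok in choice_token_ids]
    offset = max(range(len(choice_logits)), key=choice_logits.__getitem__)
    return chr(ord("A") + offset)


# Toy vocabulary of 10 tokens; suppose "A", "B", "C", "D" encode to ids 3, 5, 7, 9.
logits = [0.1] * 10
logits[3], logits[5], logits[7], logits[9] = 1.2, 3.4, 0.5, 2.0
print(pick_choice(logits, [3, 5, 7, 9]))  # "B": id 5 carries the largest logit
```

The real method does this per batch with tensor indexing (`word_probs[:, self.choice_inputs]`), but the per-example logic reduces to this argmax over four logits.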
diff --git a/src/train_sft/src/llamafactory/eval/template.py b/src/train_sft/src/llamafactory/eval/template.py
new file mode 100644
index 0000000000000000000000000000000000000000..5742469787a5001001a2702f183306bd2a312aef
--- /dev/null
+++ b/src/train_sft/src/llamafactory/eval/template.py
@@ -0,0 +1,79 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from dataclasses import dataclass
+
+from ..data import Role
+from ..extras.constants import CHOICES
+
+
+@dataclass
+class EvalTemplate:
+ system: str
+ choice: str
+ answer: str
+
+ def _parse_example(self, example: dict[str, str]) -> tuple[str, str]:
+ r"""Parse eval example.
+
+ input: a dict with keys {"question", "A", "B", "C", "D", "answer"}
+ output: a tuple of (prompt, response).
+ """
+ candidates = [self.choice.format(choice=ch, content=example[ch]) for ch in CHOICES if ch in example]
+ return "".join([example["question"]] + candidates + [self.answer]), example["answer"]
+
+ def format_example(
+ self, target_data: dict[str, str], support_set: list[dict[str, str]], subject_name: str
+ ) -> list[dict[str, str]]:
+ r"""Convert dataset examples to messages."""
+ messages = []
+ for k in range(len(support_set)):
+ prompt, response = self._parse_example(support_set[k])
+ messages.append({"role": Role.USER.value, "content": prompt})
+ messages.append({"role": Role.ASSISTANT.value, "content": response})
+
+ prompt, response = self._parse_example(target_data)
+ messages.append({"role": Role.USER.value, "content": prompt})
+ messages.append({"role": Role.ASSISTANT.value, "content": response})
+ messages[0]["content"] = self.system.format(subject=subject_name) + messages[0]["content"]
+ return messages
+
+
+eval_templates: dict[str, "EvalTemplate"] = {}
+
+
+def _register_eval_template(name: str, system: str, choice: str, answer: str) -> None:
+ eval_templates[name] = EvalTemplate(system=system, choice=choice, answer=answer)
+
+
+def get_eval_template(name: str) -> "EvalTemplate":
+ eval_template = eval_templates.get(name, None)
+ assert eval_template is not None, f"Template {name} does not exist."
+ return eval_template
+
+
+_register_eval_template(
+ name="en",
+ system="The following are multiple choice questions (with answers) about {subject}.\n\n",
+ choice="\n{choice}. {content}",
+ answer="\nAnswer:",
+)
+
+
+_register_eval_template(
+ name="zh",
+ system="以下是中国关于{subject}考试的单项选择题,请选出其中的正确答案。\n\n",
+ choice="\n{choice}. {content}",
+ answer="\n答案:",
+)
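`format_example` builds an n-shot prompt by concatenating solved support examples before the target question, each rendered by `_parse_example`. A standalone sketch of the rendering with the "en" template strings (the sample question is made up; `render_example` is our name, not a library function):

```python
CHOICES = ["A", "B", "C", "D"]


def render_example(example: dict[str, str]) -> tuple[str, str]:
    # Mirror EvalTemplate._parse_example with the "en" template strings:
    # each present choice becomes "\n{choice}. {content}", then "\nAnswer:".
    candidates = ["\n{}. {}".format(ch, example[ch]) for ch in CHOICES if ch in example]
    return "".join([example["question"]] + candidates + ["\nAnswer:"]), example["answer"]


example = {"question": "2 + 2 = ?", "A": "3", "B": "4", "C": "5", "D": "22", "answer": "B"}
prompt, answer = render_example(example)
print(prompt)
# 2 + 2 = ?
# A. 3
# B. 4
# C. 5
# D. 22
# Answer:
print(answer)  # "B"
```

Each support example contributes a (prompt, response) message pair, and the subject-specific system string is prepended to the very first user message.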
diff --git a/src/train_sft/src/llamafactory/extras/__init__.py b/src/train_sft/src/llamafactory/extras/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/src/train_sft/src/llamafactory/extras/constants.py b/src/train_sft/src/llamafactory/extras/constants.py
new file mode 100644
index 0000000000000000000000000000000000000000..f252a3a8964bdaf198e3a37acc8a37b69bf05ba8
--- /dev/null
+++ b/src/train_sft/src/llamafactory/extras/constants.py
@@ -0,0 +1,3464 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+from collections import OrderedDict, defaultdict
+from enum import Enum, unique
+from typing import Optional
+
+from peft.utils import SAFETENSORS_WEIGHTS_NAME as SAFE_ADAPTER_WEIGHTS_NAME
+from peft.utils import WEIGHTS_NAME as ADAPTER_WEIGHTS_NAME
+from transformers.utils import SAFE_WEIGHTS_INDEX_NAME, SAFE_WEIGHTS_NAME, WEIGHTS_INDEX_NAME, WEIGHTS_NAME
+
+
+AUDIO_PLACEHOLDER = os.getenv("AUDIO_PLACEHOLDER", "")
+
+CHECKPOINT_NAMES = {
+ SAFE_ADAPTER_WEIGHTS_NAME,
+ ADAPTER_WEIGHTS_NAME,
+ SAFE_WEIGHTS_INDEX_NAME,
+ SAFE_WEIGHTS_NAME,
+ WEIGHTS_INDEX_NAME,
+ WEIGHTS_NAME,
+}
+
+CHOICES = ["A", "B", "C", "D"]
+
+DATA_CONFIG = "dataset_info.json"
+
+DEFAULT_TEMPLATE = defaultdict(str)
+
+FILEEXT2TYPE = {
+ "arrow": "arrow",
+ "csv": "csv",
+ "json": "json",
+ "jsonl": "json",
+ "parquet": "parquet",
+ "txt": "text",
+}
+
+IGNORE_INDEX = -100
+
+IMAGE_PLACEHOLDER = os.getenv("IMAGE_PLACEHOLDER", "")
+
+LAYERNORM_NAMES = {"norm", "ln"}
+
+LLAMABOARD_CONFIG = "llamaboard_config.yaml"
+
+METHODS = ["full", "freeze", "lora"]
+
+MOD_SUPPORTED_MODELS = {"bloom", "falcon", "gemma", "llama", "mistral", "mixtral", "phi", "starcoder2"}
+
+MULTIMODAL_SUPPORTED_MODELS = set()
+
+PEFT_METHODS = {"lora"}
+
+RUNNING_LOG = "running_log.txt"
+
+SUBJECTS = ["Average", "STEM", "Social Sciences", "Humanities", "Other"]
+
+SUPPORTED_MODELS = OrderedDict()
+
+TRAINER_LOG = "trainer_log.jsonl"
+
+TRAINING_ARGS = "training_args.yaml"
+
+TRAINING_STAGES = {
+ "Supervised Fine-Tuning": "sft",
+ "Reward Modeling": "rm",
+ "PPO": "ppo",
+ "DPO": "dpo",
+ "KTO": "kto",
+ "Pre-Training": "pt",
+}
+
+STAGES_USE_PAIR_DATA = {"rm", "dpo"}
+
+SUPPORTED_CLASS_FOR_S2ATTN = {"llama"}
+
+SWANLAB_CONFIG = "swanlab_public_config.json"
+
+VIDEO_PLACEHOLDER = os.getenv("VIDEO_PLACEHOLDER", "")
+
+V_HEAD_WEIGHTS_NAME = "value_head.bin"
+
+V_HEAD_SAFE_WEIGHTS_NAME = "value_head.safetensors"
+
+
+class AttentionFunction(str, Enum):
+ AUTO = "auto"
+ DISABLED = "disabled"
+ SDPA = "sdpa"
+ FA2 = "fa2"
+
+
+class EngineName(str, Enum):
+ HF = "huggingface"
+ VLLM = "vllm"
+ SGLANG = "sglang"
+
+
+class DownloadSource(str, Enum):
+ DEFAULT = "hf"
+ MODELSCOPE = "ms"
+ OPENMIND = "om"
+
+
+@unique
+class QuantizationMethod(str, Enum):
+ r"""Borrowed from `transformers.utils.quantization_config.QuantizationMethod`."""
+
+ BNB = "bnb"
+ GPTQ = "gptq"
+ AWQ = "awq"
+ AQLM = "aqlm"
+ QUANTO = "quanto"
+ EETQ = "eetq"
+ HQQ = "hqq"
+ MXFP4 = "mxfp4"
+
+
+class RopeScaling(str, Enum):
+ LINEAR = "linear"
+ DYNAMIC = "dynamic"
+ YARN = "yarn"
+ LLAMA3 = "llama3"
+
+
+def register_model_group(
+ models: dict[str, dict[DownloadSource, str]],
+ template: Optional[str] = None,
+ multimodal: bool = False,
+) -> None:
+ for name, path in models.items():
+ SUPPORTED_MODELS[name] = path
+ if template is not None and (
+ any(suffix in name for suffix in ("-Chat", "-Distill", "-Instruct", "-Thinking")) or multimodal
+ ):
+ DEFAULT_TEMPLATE[name] = template
+
+ if multimodal:
+ MULTIMODAL_SUPPORTED_MODELS.add(name)
+
+
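`register_model_group` assigns the group's chat template only to checkpoints whose names mark them as conversational (or to any multimodal model); plain base checkpoints keep the empty-string default from the `defaultdict`. A small sketch of that suffix check (the function name and example names are ours, for illustration only):

```python
from typing import Optional

CHAT_MARKERS = ("-Chat", "-Distill", "-Instruct", "-Thinking")


def default_template(name: str, template: Optional[str], multimodal: bool = False) -> str:
    # Chat-style or multimodal checkpoints inherit the group template;
    # base models fall back to the empty-string default.
    if template is not None and (any(marker in name for marker in CHAT_MARKERS) or multimodal):
        return template
    return ""


print(default_template("Baichuan2-7B-Chat", "baichuan2"))  # "baichuan2"
print(default_template("Baichuan2-7B-Base", "baichuan2"))  # ""
print(default_template("Gemma-3-4B", "gemma3", multimodal=True))  # "gemma3"
```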
+register_model_group(
+ models={
+ "Aya-23-8B-Chat": {
+ DownloadSource.DEFAULT: "CohereForAI/aya-23-8B",
+ },
+ "Aya-23-35B-Chat": {
+ DownloadSource.DEFAULT: "CohereForAI/aya-23-35B",
+ },
+ },
+ template="cohere",
+)
+
+
+register_model_group(
+ models={
+ "Baichuan-7B-Base": {
+ DownloadSource.DEFAULT: "baichuan-inc/Baichuan-7B",
+ DownloadSource.MODELSCOPE: "baichuan-inc/baichuan-7B",
+ },
+ "Baichuan-13B-Base": {
+ DownloadSource.DEFAULT: "baichuan-inc/Baichuan-13B-Base",
+ DownloadSource.MODELSCOPE: "baichuan-inc/Baichuan-13B-Base",
+ },
+ "Baichuan-13B-Chat": {
+ DownloadSource.DEFAULT: "baichuan-inc/Baichuan-13B-Chat",
+ DownloadSource.MODELSCOPE: "baichuan-inc/Baichuan-13B-Chat",
+ },
+ },
+ template="baichuan",
+)
+
+
+register_model_group(
+ models={
+ "Baichuan2-7B-Base": {
+ DownloadSource.DEFAULT: "baichuan-inc/Baichuan2-7B-Base",
+ DownloadSource.MODELSCOPE: "baichuan-inc/Baichuan2-7B-Base",
+ },
+ "Baichuan2-13B-Base": {
+ DownloadSource.DEFAULT: "baichuan-inc/Baichuan2-13B-Base",
+ DownloadSource.MODELSCOPE: "baichuan-inc/Baichuan2-13B-Base",
+ DownloadSource.OPENMIND: "Baichuan/Baichuan2_13b_base_pt",
+ },
+ "Baichuan2-7B-Chat": {
+ DownloadSource.DEFAULT: "baichuan-inc/Baichuan2-7B-Chat",
+ DownloadSource.MODELSCOPE: "baichuan-inc/Baichuan2-7B-Chat",
+ DownloadSource.OPENMIND: "Baichuan/Baichuan2_7b_chat_pt",
+ },
+ "Baichuan2-13B-Chat": {
+ DownloadSource.DEFAULT: "baichuan-inc/Baichuan2-13B-Chat",
+ DownloadSource.MODELSCOPE: "baichuan-inc/Baichuan2-13B-Chat",
+ DownloadSource.OPENMIND: "Baichuan/Baichuan2_13b_chat_pt",
+ },
+ },
+ template="baichuan2",
+)
+
+
+register_model_group(
+ models={
+ "BLOOM-560M": {
+ DownloadSource.DEFAULT: "bigscience/bloom-560m",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/bloom-560m",
+ },
+ "BLOOM-3B": {
+ DownloadSource.DEFAULT: "bigscience/bloom-3b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/bloom-3b",
+ },
+ "BLOOM-7B1": {
+ DownloadSource.DEFAULT: "bigscience/bloom-7b1",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/bloom-7b1",
+ },
+ },
+)
+
+
+register_model_group(
+ models={
+ "BLOOMZ-560M": {
+ DownloadSource.DEFAULT: "bigscience/bloomz-560m",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/bloomz-560m",
+ },
+ "BLOOMZ-3B": {
+ DownloadSource.DEFAULT: "bigscience/bloomz-3b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/bloomz-3b",
+ },
+ "BLOOMZ-7B1-mt": {
+ DownloadSource.DEFAULT: "bigscience/bloomz-7b1-mt",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/bloomz-7b1-mt",
+ },
+ },
+)
+
+
+register_model_group(
+ models={
+ "BlueLM-7B-Base": {
+ DownloadSource.DEFAULT: "vivo-ai/BlueLM-7B-Base",
+ DownloadSource.MODELSCOPE: "vivo-ai/BlueLM-7B-Base",
+ },
+ "BlueLM-7B-Chat": {
+ DownloadSource.DEFAULT: "vivo-ai/BlueLM-7B-Chat",
+ DownloadSource.MODELSCOPE: "vivo-ai/BlueLM-7B-Chat",
+ },
+ },
+ template="bluelm",
+)
+
+
+register_model_group(
+ models={
+ "Breeze-7B": {
+ DownloadSource.DEFAULT: "MediaTek-Research/Breeze-7B-Base-v1_0",
+ },
+ "Breeze-7B-Instruct": {
+ DownloadSource.DEFAULT: "MediaTek-Research/Breeze-7B-Instruct-v1_0",
+ },
+ },
+ template="breeze",
+)
+
+
+register_model_group(
+ models={
+ "ChatGLM2-6B-Chat": {
+ DownloadSource.DEFAULT: "zai-org/chatglm2-6b",
+ DownloadSource.MODELSCOPE: "ZhipuAI/chatglm2-6b",
+ }
+ },
+ template="chatglm2",
+)
+
+
+register_model_group(
+ models={
+ "ChatGLM3-6B-Base": {
+ DownloadSource.DEFAULT: "zai-org/chatglm3-6b-base",
+ DownloadSource.MODELSCOPE: "ZhipuAI/chatglm3-6b-base",
+ },
+ "ChatGLM3-6B-Chat": {
+ DownloadSource.DEFAULT: "zai-org/chatglm3-6b",
+ DownloadSource.MODELSCOPE: "ZhipuAI/chatglm3-6b",
+ },
+ },
+ template="chatglm3",
+)
+
+
+register_model_group(
+ models={
+ "Chinese-Llama-2-1.3B": {
+ DownloadSource.DEFAULT: "hfl/chinese-llama-2-1.3b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/chinese-llama-2-1.3b",
+ },
+ "Chinese-Llama-2-7B": {
+ DownloadSource.DEFAULT: "hfl/chinese-llama-2-7b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/chinese-llama-2-7b",
+ },
+ "Chinese-Llama-2-13B": {
+ DownloadSource.DEFAULT: "hfl/chinese-llama-2-13b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/chinese-llama-2-13b",
+ },
+ "Chinese-Alpaca-2-1.3B-Chat": {
+ DownloadSource.DEFAULT: "hfl/chinese-alpaca-2-1.3b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/chinese-alpaca-2-1.3b",
+ },
+ "Chinese-Alpaca-2-7B-Chat": {
+ DownloadSource.DEFAULT: "hfl/chinese-alpaca-2-7b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/chinese-alpaca-2-7b",
+ },
+ "Chinese-Alpaca-2-13B-Chat": {
+ DownloadSource.DEFAULT: "hfl/chinese-alpaca-2-13b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/chinese-alpaca-2-13b",
+ },
+ },
+ template="llama2_zh",
+)
+
+
+register_model_group(
+ models={
+ "CodeGeeX4-9B-Chat": {
+ DownloadSource.DEFAULT: "zai-org/codegeex4-all-9b",
+ DownloadSource.MODELSCOPE: "ZhipuAI/codegeex4-all-9b",
+ },
+ },
+ template="codegeex4",
+)
+
+
+register_model_group(
+ models={
+ "CodeGemma-7B": {
+ DownloadSource.DEFAULT: "google/codegemma-7b",
+ },
+ "CodeGemma-7B-Instruct": {
+ DownloadSource.DEFAULT: "google/codegemma-7b-it",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/codegemma-7b-it",
+ },
+ "CodeGemma-1.1-2B": {
+ DownloadSource.DEFAULT: "google/codegemma-1.1-2b",
+ },
+ "CodeGemma-1.1-7B-Instruct": {
+ DownloadSource.DEFAULT: "google/codegemma-1.1-7b-it",
+ },
+ },
+ template="gemma",
+)
+
+
+register_model_group(
+ models={
+ "Codestral-22B-v0.1-Chat": {
+ DownloadSource.DEFAULT: "mistralai/Codestral-22B-v0.1",
+ DownloadSource.MODELSCOPE: "swift/Codestral-22B-v0.1",
+ },
+ },
+ template="mistral",
+)
+
+
+register_model_group(
+ models={
+ "CommandR-35B-Chat": {
+ DownloadSource.DEFAULT: "CohereForAI/c4ai-command-r-v01",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/c4ai-command-r-v01",
+ },
+ "CommandR-Plus-104B-Chat": {
+ DownloadSource.DEFAULT: "CohereForAI/c4ai-command-r-plus",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/c4ai-command-r-plus",
+ },
+ "CommandR-35B-4bit-Chat": {
+ DownloadSource.DEFAULT: "CohereForAI/c4ai-command-r-v01-4bit",
+ DownloadSource.MODELSCOPE: "mirror013/c4ai-command-r-v01-4bit",
+ },
+ "CommandR-Plus-104B-4bit-Chat": {
+ DownloadSource.DEFAULT: "CohereForAI/c4ai-command-r-plus-4bit",
+ },
+ },
+ template="cohere",
+)
+
+
+register_model_group(
+ models={
+ "DBRX-132B-Base": {
+ DownloadSource.DEFAULT: "databricks/dbrx-base",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/dbrx-base",
+ },
+ "DBRX-132B-Instruct": {
+ DownloadSource.DEFAULT: "databricks/dbrx-instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/dbrx-instruct",
+ },
+ },
+ template="dbrx",
+)
+
+
+register_model_group(
+ models={
+ "DeepSeek-LLM-7B-Base": {
+ DownloadSource.DEFAULT: "deepseek-ai/deepseek-llm-7b-base",
+ DownloadSource.MODELSCOPE: "deepseek-ai/deepseek-llm-7b-base",
+ },
+ "DeepSeek-LLM-67B-Base": {
+ DownloadSource.DEFAULT: "deepseek-ai/deepseek-llm-67b-base",
+ DownloadSource.MODELSCOPE: "deepseek-ai/deepseek-llm-67b-base",
+ },
+ "DeepSeek-LLM-7B-Chat": {
+ DownloadSource.DEFAULT: "deepseek-ai/deepseek-llm-7b-chat",
+ DownloadSource.MODELSCOPE: "deepseek-ai/deepseek-llm-7b-chat",
+ },
+ "DeepSeek-LLM-67B-Chat": {
+ DownloadSource.DEFAULT: "deepseek-ai/deepseek-llm-67b-chat",
+ DownloadSource.MODELSCOPE: "deepseek-ai/deepseek-llm-67b-chat",
+ },
+ "DeepSeek-Math-7B-Base": {
+ DownloadSource.DEFAULT: "deepseek-ai/deepseek-math-7b-base",
+ DownloadSource.MODELSCOPE: "deepseek-ai/deepseek-math-7b-base",
+ },
+ "DeepSeek-Math-7B-Instruct": {
+ DownloadSource.DEFAULT: "deepseek-ai/deepseek-math-7b-instruct",
+ DownloadSource.MODELSCOPE: "deepseek-ai/deepseek-math-7b-instruct",
+ },
+ "DeepSeek-MoE-16B-Base": {
+ DownloadSource.DEFAULT: "deepseek-ai/deepseek-moe-16b-base",
+ DownloadSource.MODELSCOPE: "deepseek-ai/deepseek-moe-16b-base",
+ },
+ "DeepSeek-MoE-16B-Chat": {
+ DownloadSource.DEFAULT: "deepseek-ai/deepseek-moe-16b-chat",
+ DownloadSource.MODELSCOPE: "deepseek-ai/deepseek-moe-16b-chat",
+ },
+ "DeepSeek-V2-16B-Base": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-V2-Lite",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-V2-Lite",
+ },
+ "DeepSeek-V2-236B-Base": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-V2",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-V2",
+ },
+ "DeepSeek-V2-16B-Chat": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-V2-Lite-Chat",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-V2-Lite-Chat",
+ },
+ "DeepSeek-V2-236B-Chat": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-V2-Chat",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-V2-Chat",
+ },
+ "DeepSeek-Coder-V2-16B-Base": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-Coder-V2-Lite-Base",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-Coder-V2-Lite-Base",
+ },
+ "DeepSeek-Coder-V2-236B-Base": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-Coder-V2-Base",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-Coder-V2-Base",
+ },
+ "DeepSeek-Coder-V2-16B-Instruct": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",
+ },
+ "DeepSeek-Coder-V2-236B-Instruct": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-Coder-V2-Instruct",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-Coder-V2-Instruct",
+ },
+ },
+ template="deepseek",
+)
+
+
+register_model_group(
+ models={
+ "DeepSeek-Coder-6.7B-Base": {
+ DownloadSource.DEFAULT: "deepseek-ai/deepseek-coder-6.7b-base",
+ DownloadSource.MODELSCOPE: "deepseek-ai/deepseek-coder-6.7b-base",
+ },
+ "DeepSeek-Coder-7B-Base": {
+ DownloadSource.DEFAULT: "deepseek-ai/deepseek-coder-7b-base-v1.5",
+ DownloadSource.MODELSCOPE: "deepseek-ai/deepseek-coder-7b-base-v1.5",
+ },
+ "DeepSeek-Coder-33B-Base": {
+ DownloadSource.DEFAULT: "deepseek-ai/deepseek-coder-33b-base",
+ DownloadSource.MODELSCOPE: "deepseek-ai/deepseek-coder-33b-base",
+ },
+ "DeepSeek-Coder-6.7B-Instruct": {
+ DownloadSource.DEFAULT: "deepseek-ai/deepseek-coder-6.7b-instruct",
+ DownloadSource.MODELSCOPE: "deepseek-ai/deepseek-coder-6.7b-instruct",
+ },
+ "DeepSeek-Coder-7B-Instruct": {
+ DownloadSource.DEFAULT: "deepseek-ai/deepseek-coder-7b-instruct-v1.5",
+ DownloadSource.MODELSCOPE: "deepseek-ai/deepseek-coder-7b-instruct-v1.5",
+ },
+ "DeepSeek-Coder-33B-Instruct": {
+ DownloadSource.DEFAULT: "deepseek-ai/deepseek-coder-33b-instruct",
+ DownloadSource.MODELSCOPE: "deepseek-ai/deepseek-coder-33b-instruct",
+ },
+ },
+ template="deepseekcoder",
+)
+
+
+register_model_group(
+ models={
+ "DeepSeek-V2-0628-236B-Chat": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-V2-Chat-0628",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-V2-Chat-0628",
+ },
+ "DeepSeek-V2.5-236B-Chat": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-V2.5",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-V2.5",
+ },
+ "DeepSeek-V2.5-1210-236B-Chat": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-V2.5-1210",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-V2.5-1210",
+ },
+ "DeepSeek-V3-671B-Base": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-V3-Base",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-V3-Base",
+ },
+ "DeepSeek-V3-671B-Chat": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-V3",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-V3",
+ },
+ "DeepSeek-V3-0324-671B-Chat": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-V3-0324",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-V3-0324",
+ },
+ },
+ template="deepseek3",
+)
+
+
+register_model_group(
+ models={
+ "DeepSeek-R1-1.5B-Distill": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
+ },
+ "DeepSeek-R1-7B-Distill": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
+ },
+ "DeepSeek-R1-8B-Distill": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
+ },
+ "DeepSeek-R1-14B-Distill": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B",
+ },
+ "DeepSeek-R1-32B-Distill": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
+ },
+ "DeepSeek-R1-70B-Distill": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
+ },
+ "DeepSeek-R1-671B-Chat-Zero": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-R1-Zero",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-R1-Zero",
+ },
+ "DeepSeek-R1-671B-Chat": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-R1",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-R1",
+ },
+ "DeepSeek-R1-0528-8B-Distill": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
+ },
+ "DeepSeek-R1-0528-671B-Chat": {
+ DownloadSource.DEFAULT: "deepseek-ai/DeepSeek-R1-0528",
+ DownloadSource.MODELSCOPE: "deepseek-ai/DeepSeek-R1-0528",
+ },
+ },
+ template="deepseekr1",
+)
+
+
+register_model_group(
+ models={
+ "Devstral-Small-2507-Instruct": {
+ DownloadSource.DEFAULT: "mistralai/Devstral-Small-2507",
+ DownloadSource.MODELSCOPE: "mistralai/Devstral-Small-2507",
+ },
+ },
+ template="mistral_small",
+)
+
+
+register_model_group(
+ models={
+ "EXAONE-3.0-7.8B-Instruct": {
+ DownloadSource.DEFAULT: "LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct",
+ },
+ },
+ template="exaone",
+)
+
+
+register_model_group(
+ models={
+ "Falcon-7B": {
+ DownloadSource.DEFAULT: "tiiuae/falcon-7b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/falcon-7b",
+ },
+ "Falcon-11B": {
+ DownloadSource.DEFAULT: "tiiuae/falcon-11B",
+ DownloadSource.MODELSCOPE: "tiiuae/falcon-11B",
+ },
+ "Falcon-40B": {
+ DownloadSource.DEFAULT: "tiiuae/falcon-40b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/falcon-40b",
+ },
+ "Falcon-180B": {
+ DownloadSource.DEFAULT: "tiiuae/falcon-180b",
+ DownloadSource.MODELSCOPE: "modelscope/falcon-180B",
+ },
+ "Falcon-7B-Instruct": {
+ DownloadSource.DEFAULT: "tiiuae/falcon-7b-instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/falcon-7b-instruct",
+ },
+ "Falcon-40B-Instruct": {
+ DownloadSource.DEFAULT: "tiiuae/falcon-40b-instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/falcon-40b-instruct",
+ },
+ "Falcon-180B-Chat": {
+ DownloadSource.DEFAULT: "tiiuae/falcon-180b-chat",
+ DownloadSource.MODELSCOPE: "modelscope/falcon-180B-chat",
+ },
+ },
+ template="falcon",
+)
+
+
+register_model_group(
+ models={
+ "Falcon-H1-0.5B-Base": {
+ DownloadSource.DEFAULT: "tiiuae/Falcon-H1-0.5B-Base",
+ DownloadSource.MODELSCOPE: "tiiuae/Falcon-H1-0.5B-Base",
+ },
+ "Falcon-H1-1.5B-Base": {
+ DownloadSource.DEFAULT: "tiiuae/Falcon-H1-1.5B-Base",
+ DownloadSource.MODELSCOPE: "tiiuae/Falcon-H1-1.5B-Base",
+ },
+ "Falcon-H1-1.5B-Deep-Base": {
+            DownloadSource.DEFAULT: "tiiuae/Falcon-H1-1.5B-Deep-Base",
+ DownloadSource.MODELSCOPE: "tiiuae/Falcon-H1-1.5B-Deep-Base",
+ },
+ "Falcon-H1-3B-Base": {
+ DownloadSource.DEFAULT: "tiiuae/Falcon-H1-3B-Base",
+ DownloadSource.MODELSCOPE: "tiiuae/Falcon-H1-3B-Base",
+ },
+ "Falcon-H1-7B-Base": {
+ DownloadSource.DEFAULT: "tiiuae/Falcon-H1-7B-Base",
+ DownloadSource.MODELSCOPE: "tiiuae/Falcon-H1-7B-Base",
+ },
+ "Falcon-H1-34B-Base": {
+ DownloadSource.DEFAULT: "tiiuae/Falcon-H1-34B-Base",
+ DownloadSource.MODELSCOPE: "tiiuae/Falcon-H1-34B-Base",
+ },
+ "Falcon-H1-0.5B-Instruct": {
+ DownloadSource.DEFAULT: "tiiuae/Falcon-H1-0.5B-Instruct",
+ DownloadSource.MODELSCOPE: "tiiuae/Falcon-H1-0.5B-Instruct",
+ },
+ "Falcon-H1-1.5B-Instruct": {
+ DownloadSource.DEFAULT: "tiiuae/Falcon-H1-1.5B-Instruct",
+ DownloadSource.MODELSCOPE: "tiiuae/Falcon-H1-1.5B-Instruct",
+ },
+ "Falcon-H1-1.5B-Deep-Instruct": {
+ DownloadSource.DEFAULT: "tiiuae/Falcon-H1-1.5B-Deep-Instruct",
+ DownloadSource.MODELSCOPE: "tiiuae/Falcon-H1-1.5B-Deep-Instruct",
+ },
+ "Falcon-H1-3B-Instruct": {
+ DownloadSource.DEFAULT: "tiiuae/Falcon-H1-3B-Instruct",
+ DownloadSource.MODELSCOPE: "tiiuae/Falcon-H1-3B-Instruct",
+ },
+ "Falcon-H1-7B-Instruct": {
+ DownloadSource.DEFAULT: "tiiuae/Falcon-H1-7B-Instruct",
+ DownloadSource.MODELSCOPE: "tiiuae/Falcon-H1-7B-Instruct",
+ },
+ "Falcon-H1-34B-Instruct": {
+ DownloadSource.DEFAULT: "tiiuae/Falcon-H1-34B-Instruct",
+ DownloadSource.MODELSCOPE: "tiiuae/Falcon-H1-34B-Instruct",
+ },
+ },
+ template="falcon_h1",
+)
+
+
+register_model_group(
+ models={
+ "Gemma-2B": {
+ DownloadSource.DEFAULT: "google/gemma-2b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/gemma-2b",
+ },
+ "Gemma-7B": {
+ DownloadSource.DEFAULT: "google/gemma-7b",
+            DownloadSource.MODELSCOPE: "AI-ModelScope/gemma-7b",
+ },
+ "Gemma-2B-Instruct": {
+ DownloadSource.DEFAULT: "google/gemma-2b-it",
+            DownloadSource.MODELSCOPE: "AI-ModelScope/gemma-2b-it",
+ },
+ "Gemma-7B-Instruct": {
+ DownloadSource.DEFAULT: "google/gemma-7b-it",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/gemma-7b-it",
+ },
+ "Gemma-1.1-2B-Instruct": {
+ DownloadSource.DEFAULT: "google/gemma-1.1-2b-it",
+ },
+ "Gemma-1.1-7B-Instruct": {
+ DownloadSource.DEFAULT: "google/gemma-1.1-7b-it",
+ },
+ },
+ template="gemma",
+)
+
+
+register_model_group(
+ models={
+ "Gemma-2-2B": {
+ DownloadSource.DEFAULT: "google/gemma-2-2b",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-2-2b",
+ },
+ "Gemma-2-9B": {
+ DownloadSource.DEFAULT: "google/gemma-2-9b",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-2-9b",
+ },
+ "Gemma-2-27B": {
+ DownloadSource.DEFAULT: "google/gemma-2-27b",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-2-27b",
+ },
+ "Gemma-2-2B-Instruct": {
+ DownloadSource.DEFAULT: "google/gemma-2-2b-it",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-2-2b-it",
+ DownloadSource.OPENMIND: "LlamaFactory/gemma-2-2b-it",
+ },
+ "Gemma-2-9B-Instruct": {
+ DownloadSource.DEFAULT: "google/gemma-2-9b-it",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-2-9b-it",
+ DownloadSource.OPENMIND: "LlamaFactory/gemma-2-9b-it",
+ },
+ "Gemma-2-27B-Instruct": {
+ DownloadSource.DEFAULT: "google/gemma-2-27b-it",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-2-27b-it",
+ },
+ "Gemma-3-1B": {
+ DownloadSource.DEFAULT: "google/gemma-3-1b-pt",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-3-1b-pt",
+ },
+ "Gemma-3-1B-Instruct": {
+ DownloadSource.DEFAULT: "google/gemma-3-1b-it",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-3-1b-it",
+ },
+ "MedGemma-27B-Instruct": {
+ DownloadSource.DEFAULT: "google/medgemma-27b-text-it",
+ DownloadSource.MODELSCOPE: "google/medgemma-27b-text-it",
+ },
+ },
+ template="gemma2",
+)
+
+
+register_model_group(
+ models={
+ "Gemma-3-4B": {
+ DownloadSource.DEFAULT: "google/gemma-3-4b-pt",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-3-4b-pt",
+ },
+ "Gemma-3-12B": {
+ DownloadSource.DEFAULT: "google/gemma-3-12b-pt",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-3-12b-pt",
+ },
+ "Gemma-3-27B": {
+ DownloadSource.DEFAULT: "google/gemma-3-27b-pt",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-3-27b-pt",
+ },
+ "Gemma-3-4B-Instruct": {
+ DownloadSource.DEFAULT: "google/gemma-3-4b-it",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-3-4b-it",
+ },
+ "Gemma-3-12B-Instruct": {
+ DownloadSource.DEFAULT: "google/gemma-3-12b-it",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-3-12b-it",
+ },
+ "Gemma-3-27B-Instruct": {
+ DownloadSource.DEFAULT: "google/gemma-3-27b-it",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-3-27b-it",
+ },
+ "MedGemma-4B": {
+ DownloadSource.DEFAULT: "google/medgemma-4b-pt",
+ DownloadSource.MODELSCOPE: "google/medgemma-4b-pt",
+ },
+ "MedGemma-4B-Instruct": {
+ DownloadSource.DEFAULT: "google/medgemma-4b-it",
+ DownloadSource.MODELSCOPE: "google/medgemma-4b-it",
+ },
+ },
+ template="gemma3",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "Gemma-3n-E2B": {
+ DownloadSource.DEFAULT: "google/gemma-3n-E2B",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-3n-E2B",
+ },
+ "Gemma-3n-E4B": {
+ DownloadSource.DEFAULT: "google/gemma-3n-E4B",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-3n-E4B",
+ },
+ "Gemma-3n-E2B-Instruct": {
+ DownloadSource.DEFAULT: "google/gemma-3n-E2B-it",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-3n-E2B-it",
+ },
+ "Gemma-3n-E4B-Instruct": {
+ DownloadSource.DEFAULT: "google/gemma-3n-E4B-it",
+ DownloadSource.MODELSCOPE: "LLM-Research/gemma-3n-E4B-it",
+ },
+ },
+ template="gemma3n",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "GLM-4-9B": {
+ DownloadSource.DEFAULT: "zai-org/glm-4-9b",
+ DownloadSource.MODELSCOPE: "ZhipuAI/glm-4-9b",
+ },
+ "GLM-4-9B-Chat": {
+ DownloadSource.DEFAULT: "zai-org/glm-4-9b-chat",
+ DownloadSource.MODELSCOPE: "ZhipuAI/glm-4-9b-chat",
+ DownloadSource.OPENMIND: "LlamaFactory/glm-4-9b-chat",
+ },
+ "GLM-4-9B-1M-Chat": {
+ DownloadSource.DEFAULT: "zai-org/glm-4-9b-chat-1m",
+ DownloadSource.MODELSCOPE: "ZhipuAI/glm-4-9b-chat-1m",
+ },
+ "GLM-4-0414-9B-Chat": {
+ DownloadSource.DEFAULT: "zai-org/GLM-4-9B-0414",
+ DownloadSource.MODELSCOPE: "ZhipuAI/GLM-4-9B-0414",
+ },
+ "GLM-4-0414-32B-Base": {
+ DownloadSource.DEFAULT: "zai-org/GLM-4-32B-Base-0414",
+ DownloadSource.MODELSCOPE: "ZhipuAI/GLM-4-32B-Base-0414",
+ },
+ "GLM-4-0414-32B-Chat": {
+ DownloadSource.DEFAULT: "zai-org/GLM-4-32B-0414",
+ DownloadSource.MODELSCOPE: "ZhipuAI/GLM-4-32B-0414",
+ },
+ },
+ template="glm4",
+)
+
+
+register_model_group(
+ models={
+ "GLM-4.1V-9B-Base": {
+ DownloadSource.DEFAULT: "zai-org/GLM-4.1V-9B-Base",
+ DownloadSource.MODELSCOPE: "ZhipuAI/GLM-4.1V-9B-Base",
+ },
+ "GLM-4.1V-9B-Thinking": {
+ DownloadSource.DEFAULT: "zai-org/GLM-4.1V-9B-Thinking",
+ DownloadSource.MODELSCOPE: "ZhipuAI/GLM-4.1V-9B-Thinking",
+ },
+ },
+ template="glm4v",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "GLM-4.5-Air-Base": {
+ DownloadSource.DEFAULT: "zai-org/GLM-4.5-Air-Base",
+ DownloadSource.MODELSCOPE: "ZhipuAI/GLM-4.5-Air-Base",
+ },
+ "GLM-4.5-Base": {
+ DownloadSource.DEFAULT: "zai-org/GLM-4.5-Base",
+ DownloadSource.MODELSCOPE: "ZhipuAI/GLM-4.5-Base",
+ },
+ "GLM-4.5-Air-Thinking": {
+ DownloadSource.DEFAULT: "zai-org/GLM-4.5-Air",
+ DownloadSource.MODELSCOPE: "ZhipuAI/GLM-4.5-Air",
+ },
+ "GLM-4.5-Thinking": {
+ DownloadSource.DEFAULT: "zai-org/GLM-4.5",
+ DownloadSource.MODELSCOPE: "ZhipuAI/GLM-4.5",
+ },
+ },
+ template="glm4_moe",
+)
+
+
+register_model_group(
+ models={
+        "GLM-4.5V-Thinking": {
+            DownloadSource.DEFAULT: "zai-org/GLM-4.5V",
+            DownloadSource.MODELSCOPE: "ZhipuAI/GLM-4.5V",
+        },
+ },
+ template="glm45v",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "GLM-Z1-0414-9B-Chat": {
+ DownloadSource.DEFAULT: "zai-org/GLM-Z1-9B-0414",
+ DownloadSource.MODELSCOPE: "ZhipuAI/GLM-Z1-9B-0414",
+ },
+ "GLM-Z1-0414-32B-Chat": {
+ DownloadSource.DEFAULT: "zai-org/GLM-Z1-32B-0414",
+ DownloadSource.MODELSCOPE: "ZhipuAI/GLM-Z1-32B-0414",
+ },
+ },
+ template="glmz1",
+)
+
+
+register_model_group(
+ models={
+ "GPT-2-Small": {
+ DownloadSource.DEFAULT: "openai-community/gpt2",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/gpt2",
+ },
+ "GPT-2-Medium": {
+ DownloadSource.DEFAULT: "openai-community/gpt2-medium",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/gpt2-medium",
+ },
+ "GPT-2-Large": {
+ DownloadSource.DEFAULT: "openai-community/gpt2-large",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/gpt2-large",
+ },
+ "GPT-2-XL": {
+ DownloadSource.DEFAULT: "openai-community/gpt2-xl",
+ DownloadSource.MODELSCOPE: "goodbai95/GPT2-xl",
+ },
+ },
+)
+
+
+register_model_group(
+ models={
+ "GPT-OSS-20B-Thinking": {
+ DownloadSource.DEFAULT: "openai/gpt-oss-20b",
+ DownloadSource.MODELSCOPE: "openai/gpt-oss-20b",
+ },
+ "GPT-OSS-120B-Thinking": {
+ DownloadSource.DEFAULT: "openai/gpt-oss-120b",
+ DownloadSource.MODELSCOPE: "openai/gpt-oss-120b",
+ },
+ },
+ template="gpt",
+)
+
+
+register_model_group(
+ models={
+ "Granite-3.0-1B-A400M-Base": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.0-1b-a400m-base",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.0-1b-a400m-base",
+ },
+ "Granite-3.0-3B-A800M-Base": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.0-3b-a800m-base",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.0-3b-a800m-base",
+ },
+ "Granite-3.0-2B-Base": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.0-2b-base",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.0-2b-base",
+ },
+ "Granite-3.0-8B-Base": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.0-8b-base",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.0-8b-base",
+ },
+ "Granite-3.0-1B-A400M-Instruct": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.0-1b-a400m-instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.0-1b-a400m-instruct",
+ },
+ "Granite-3.0-3B-A800M-Instruct": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.0-3b-a800m-instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.0-3b-a800m-instruct",
+ },
+ "Granite-3.0-2B-Instruct": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.0-2b-instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.0-2b-instruct",
+ },
+ "Granite-3.0-8B-Instruct": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.0-8b-instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.0-8b-instruct",
+ },
+ "Granite-3.1-1B-A400M-Base": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.1-1b-a400m-base",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.1-1b-a400m-base",
+ },
+ "Granite-3.1-3B-A800M-Base": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.1-3b-a800m-base",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.1-3b-a800m-base",
+ },
+ "Granite-3.1-2B-Base": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.1-2b-base",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.1-2b-base",
+ },
+ "Granite-3.1-8B-Base": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.1-8b-base",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.1-8b-base",
+ },
+ "Granite-3.1-1B-A400M-Instruct": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.1-1b-a400m-instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.1-1b-a400m-instruct",
+ },
+ "Granite-3.1-3B-A800M-Instruct": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.1-3b-a800m-instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.1-3b-a800m-instruct",
+ },
+ "Granite-3.1-2B-Instruct": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.1-2b-instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.1-2b-instruct",
+ },
+ "Granite-3.1-8B-Instruct": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.1-8b-instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.1-8b-instruct",
+ },
+ "Granite-3.2-2B-Instruct": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.2-2b-instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.2-2b-instruct",
+ },
+ "Granite-3.2-8B-Instruct": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.2-8b-instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.2-8b-instruct",
+ },
+ "Granite-3.3-2B-Base": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.3-2b-base",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.3-2b-base",
+ },
+ "Granite-3.3-8B-Base": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.3-8b-base",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.3-8b-base",
+ },
+ "Granite-3.3-2B-Instruct": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.3-2b-instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.3-2b-instruct",
+ },
+ "Granite-3.3-8B-Instruct": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-3.3-8b-instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-3.3-8b-instruct",
+ },
+ },
+ template="granite3",
+)
+
+
+register_model_group(
+ models={
+ "Granite-Vision-3.2-2B": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-vision-3.2-2b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/granite-vision-3.2-2b",
+ },
+ },
+ template="granite3_vision",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "Granite-4.0-tiny-preview": {
+ DownloadSource.DEFAULT: "ibm-granite/granite-4.0-tiny-preview",
+ DownloadSource.MODELSCOPE: "ibm-granite/granite-4.0-tiny-preview",
+ },
+ },
+ template="granite4",
+)
+
+
+register_model_group(
+ models={
+ "Hunyuan-7B-Instruct": {
+ DownloadSource.DEFAULT: "tencent/Hunyuan-7B-Instruct",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/Hunyuan-7B-Instruct",
+ },
+ },
+ template="hunyuan",
+)
+
+
+register_model_group(
+ models={
+ "Index-1.9B-Base": {
+ DownloadSource.DEFAULT: "IndexTeam/Index-1.9B",
+ DownloadSource.MODELSCOPE: "IndexTeam/Index-1.9B",
+ },
+ "Index-1.9B-Base-Pure": {
+ DownloadSource.DEFAULT: "IndexTeam/Index-1.9B-Pure",
+ DownloadSource.MODELSCOPE: "IndexTeam/Index-1.9B-Pure",
+ },
+ "Index-1.9B-Chat": {
+ DownloadSource.DEFAULT: "IndexTeam/Index-1.9B-Chat",
+ DownloadSource.MODELSCOPE: "IndexTeam/Index-1.9B-Chat",
+ },
+ "Index-1.9B-Character-Chat": {
+ DownloadSource.DEFAULT: "IndexTeam/Index-1.9B-Character",
+ DownloadSource.MODELSCOPE: "IndexTeam/Index-1.9B-Character",
+ },
+ "Index-1.9B-Chat-32K": {
+ DownloadSource.DEFAULT: "IndexTeam/Index-1.9B-32K",
+ DownloadSource.MODELSCOPE: "IndexTeam/Index-1.9B-32K",
+ },
+ },
+ template="index",
+)
+
+
+register_model_group(
+ models={
+ "InternLM-7B": {
+ DownloadSource.DEFAULT: "internlm/internlm-7b",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm-7b",
+ },
+ "InternLM-20B": {
+ DownloadSource.DEFAULT: "internlm/internlm-20b",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm-20b",
+ },
+ "InternLM-7B-Chat": {
+ DownloadSource.DEFAULT: "internlm/internlm-chat-7b",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm-chat-7b",
+ },
+ "InternLM-20B-Chat": {
+ DownloadSource.DEFAULT: "internlm/internlm-chat-20b",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm-chat-20b",
+ },
+ },
+ template="intern",
+)
+
+
+register_model_group(
+ models={
+ "InternLM2-7B": {
+ DownloadSource.DEFAULT: "internlm/internlm2-7b",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm2-7b",
+ },
+ "InternLM2-20B": {
+ DownloadSource.DEFAULT: "internlm/internlm2-20b",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm2-20b",
+ },
+ "InternLM2-7B-Chat": {
+ DownloadSource.DEFAULT: "internlm/internlm2-chat-7b",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm2-chat-7b",
+ },
+ "InternLM2-20B-Chat": {
+ DownloadSource.DEFAULT: "internlm/internlm2-chat-20b",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm2-chat-20b",
+ },
+ "InternLM2.5-1.8B": {
+ DownloadSource.DEFAULT: "internlm/internlm2_5-1_8b",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm2_5-1_8b",
+ DownloadSource.OPENMIND: "Intern/internlm2_5-1_8b",
+ },
+ "InternLM2.5-7B": {
+ DownloadSource.DEFAULT: "internlm/internlm2_5-7b",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm2_5-7b",
+ },
+ "InternLM2.5-20B": {
+ DownloadSource.DEFAULT: "internlm/internlm2_5-20b",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm2_5-20b",
+ DownloadSource.OPENMIND: "Intern/internlm2_5-20b",
+ },
+ "InternLM2.5-1.8B-Chat": {
+ DownloadSource.DEFAULT: "internlm/internlm2_5-1_8b-chat",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm2_5-1_8b-chat",
+ DownloadSource.OPENMIND: "Intern/internlm2_5-1_8b-chat",
+ },
+ "InternLM2.5-7B-Chat": {
+ DownloadSource.DEFAULT: "internlm/internlm2_5-7b-chat",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm2_5-7b-chat",
+ DownloadSource.OPENMIND: "Intern/internlm2_5-7b-chat",
+ },
+ "InternLM2.5-7B-1M-Chat": {
+ DownloadSource.DEFAULT: "internlm/internlm2_5-7b-chat-1m",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm2_5-7b-chat-1m",
+ DownloadSource.OPENMIND: "Intern/internlm2_5-7b-chat-1m",
+ },
+ "InternLM2.5-20B-Chat": {
+ DownloadSource.DEFAULT: "internlm/internlm2_5-20b-chat",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm2_5-20b-chat",
+ DownloadSource.OPENMIND: "Intern/internlm2_5-20b-chat",
+ },
+ "InternLM3-8B-Chat": {
+ DownloadSource.DEFAULT: "internlm/internlm3-8b-instruct",
+ DownloadSource.MODELSCOPE: "Shanghai_AI_Laboratory/internlm3-8b-instruct",
+ },
+ },
+ template="intern2",
+)
+
+
+register_model_group(
+ models={
+ "InternVL2.5-2B-MPO": {
+ DownloadSource.DEFAULT: "OpenGVLab/InternVL2_5-2B-MPO-hf",
+ DownloadSource.MODELSCOPE: "OpenGVLab/InternVL2_5-2B-MPO-hf",
+ },
+ "InternVL2.5-8B-MPO": {
+ DownloadSource.DEFAULT: "OpenGVLab/InternVL2_5-8B-MPO-hf",
+ DownloadSource.MODELSCOPE: "OpenGVLab/InternVL2_5-8B-MPO-hf",
+ },
+ "InternVL3-1B-hf": {
+ DownloadSource.DEFAULT: "OpenGVLab/InternVL3-1B-hf",
+ DownloadSource.MODELSCOPE: "OpenGVLab/InternVL3-1B-hf",
+ },
+ "InternVL3-2B-hf": {
+ DownloadSource.DEFAULT: "OpenGVLab/InternVL3-2B-hf",
+ DownloadSource.MODELSCOPE: "OpenGVLab/InternVL3-2B-hf",
+ },
+ "InternVL3-8B-hf": {
+ DownloadSource.DEFAULT: "OpenGVLab/InternVL3-8B-hf",
+ DownloadSource.MODELSCOPE: "OpenGVLab/InternVL3-8B-hf",
+ },
+ "InternVL3-14B-hf": {
+ DownloadSource.DEFAULT: "OpenGVLab/InternVL3-14B-hf",
+ DownloadSource.MODELSCOPE: "OpenGVLab/InternVL3-14B-hf",
+ },
+ "InternVL3-38B-hf": {
+ DownloadSource.DEFAULT: "OpenGVLab/InternVL3-38B-hf",
+ DownloadSource.MODELSCOPE: "OpenGVLab/InternVL3-38B-hf",
+ },
+ "InternVL3-78B-hf": {
+ DownloadSource.DEFAULT: "OpenGVLab/InternVL3-78B-hf",
+ DownloadSource.MODELSCOPE: "OpenGVLab/InternVL3-78B-hf",
+ },
+ },
+ template="intern_vl",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "Jamba-v0.1": {
+ DownloadSource.DEFAULT: "ai21labs/Jamba-v0.1",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/Jamba-v0.1",
+ }
+ },
+)
+
+
+register_model_group(
+ models={
+ "Keye-VL-8B-Chat": {
+ DownloadSource.DEFAULT: "Kwai-Keye/Keye-VL-8B-Preview",
+ DownloadSource.MODELSCOPE: "Kwai-Keye/Keye-VL-8B-Preview",
+ },
+ },
+ template="keye_vl",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "Kimi-Dev-72B-Instruct": {
+ DownloadSource.DEFAULT: "moonshotai/Kimi-Dev-72B",
+ DownloadSource.MODELSCOPE: "moonshotai/Kimi-Dev-72B",
+ },
+ },
+ template="qwen",
+)
+
+
+register_model_group(
+ models={
+ "Kimi-VL-A3B-Instruct": {
+ DownloadSource.DEFAULT: "moonshotai/Kimi-VL-A3B-Instruct",
+ DownloadSource.MODELSCOPE: "moonshotai/Kimi-VL-A3B-Instruct",
+ },
+ "Kimi-VL-A3B-Thinking": {
+ DownloadSource.DEFAULT: "moonshotai/Kimi-VL-A3B-Thinking",
+ DownloadSource.MODELSCOPE: "moonshotai/Kimi-VL-A3B-Thinking",
+ },
+ "Kimi-VL-A3B-Thinking-2506": {
+ DownloadSource.DEFAULT: "moonshotai/Kimi-VL-A3B-Thinking-2506",
+ DownloadSource.MODELSCOPE: "moonshotai/Kimi-VL-A3B-Thinking-2506",
+ },
+ },
+ template="kimi_vl",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "LingoWhale-8B": {
+ DownloadSource.DEFAULT: "deeplang-ai/LingoWhale-8B",
+ DownloadSource.MODELSCOPE: "DeepLang/LingoWhale-8B",
+ }
+ },
+)
+
+
+register_model_group(
+ models={
+ "Llama-7B": {
+ DownloadSource.DEFAULT: "huggyllama/llama-7b",
+ DownloadSource.MODELSCOPE: "skyline2006/llama-7b",
+ },
+ "Llama-13B": {
+ DownloadSource.DEFAULT: "huggyllama/llama-13b",
+ DownloadSource.MODELSCOPE: "skyline2006/llama-13b",
+ },
+ "Llama-30B": {
+ DownloadSource.DEFAULT: "huggyllama/llama-30b",
+ DownloadSource.MODELSCOPE: "skyline2006/llama-30b",
+ },
+ "Llama-65B": {
+ DownloadSource.DEFAULT: "huggyllama/llama-65b",
+ DownloadSource.MODELSCOPE: "skyline2006/llama-65b",
+ },
+    },
+)
+
+
+register_model_group(
+ models={
+ "Llama-2-7B": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-2-7b-hf",
+ DownloadSource.MODELSCOPE: "modelscope/Llama-2-7b-ms",
+ },
+ "Llama-2-13B": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-2-13b-hf",
+ DownloadSource.MODELSCOPE: "modelscope/Llama-2-13b-ms",
+ },
+ "Llama-2-70B": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-2-70b-hf",
+ DownloadSource.MODELSCOPE: "modelscope/Llama-2-70b-ms",
+ },
+ "Llama-2-7B-Chat": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-2-7b-chat-hf",
+ DownloadSource.MODELSCOPE: "modelscope/Llama-2-7b-chat-ms",
+ },
+ "Llama-2-13B-Chat": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-2-13b-chat-hf",
+ DownloadSource.MODELSCOPE: "modelscope/Llama-2-13b-chat-ms",
+ },
+ "Llama-2-70B-Chat": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-2-70b-chat-hf",
+ DownloadSource.MODELSCOPE: "modelscope/Llama-2-70b-chat-ms",
+ },
+ },
+ template="llama2",
+)
+
+
+register_model_group(
+ models={
+ "Llama-3-8B": {
+ DownloadSource.DEFAULT: "meta-llama/Meta-Llama-3-8B",
+ DownloadSource.MODELSCOPE: "LLM-Research/Meta-Llama-3-8B",
+ },
+ "Llama-3-70B": {
+ DownloadSource.DEFAULT: "meta-llama/Meta-Llama-3-70B",
+ DownloadSource.MODELSCOPE: "LLM-Research/Meta-Llama-3-70B",
+ },
+ "Llama-3-8B-Instruct": {
+ DownloadSource.DEFAULT: "meta-llama/Meta-Llama-3-8B-Instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Meta-Llama-3-8B-Instruct",
+ },
+ "Llama-3-70B-Instruct": {
+ DownloadSource.DEFAULT: "meta-llama/Meta-Llama-3-70B-Instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Meta-Llama-3-70B-Instruct",
+ },
+ "Llama-3-8B-Chinese-Chat": {
+ DownloadSource.DEFAULT: "shenzhi-wang/Llama3-8B-Chinese-Chat",
+ DownloadSource.MODELSCOPE: "LLM-Research/Llama3-8B-Chinese-Chat",
+ DownloadSource.OPENMIND: "LlamaFactory/Llama3-Chinese-8B-Instruct",
+ },
+ "Llama-3-70B-Chinese-Chat": {
+ DownloadSource.DEFAULT: "shenzhi-wang/Llama3-70B-Chinese-Chat",
+ },
+ "Llama-3.1-8B": {
+ DownloadSource.DEFAULT: "meta-llama/Meta-Llama-3.1-8B",
+ DownloadSource.MODELSCOPE: "LLM-Research/Meta-Llama-3.1-8B",
+ },
+ "Llama-3.1-70B": {
+ DownloadSource.DEFAULT: "meta-llama/Meta-Llama-3.1-70B",
+ DownloadSource.MODELSCOPE: "LLM-Research/Meta-Llama-3.1-70B",
+ },
+ "Llama-3.1-405B": {
+ DownloadSource.DEFAULT: "meta-llama/Meta-Llama-3.1-405B",
+ DownloadSource.MODELSCOPE: "LLM-Research/Meta-Llama-3.1-405B",
+ },
+ "Llama-3.1-8B-Instruct": {
+ DownloadSource.DEFAULT: "meta-llama/Meta-Llama-3.1-8B-Instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Meta-Llama-3.1-8B-Instruct",
+ },
+ "Llama-3.1-70B-Instruct": {
+ DownloadSource.DEFAULT: "meta-llama/Meta-Llama-3.1-70B-Instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Meta-Llama-3.1-70B-Instruct",
+ },
+ "Llama-3.1-405B-Instruct": {
+ DownloadSource.DEFAULT: "meta-llama/Meta-Llama-3.1-405B-Instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Meta-Llama-3.1-405B-Instruct",
+ },
+ "Llama-3.1-8B-Chinese-Chat": {
+ DownloadSource.DEFAULT: "shenzhi-wang/Llama3.1-8B-Chinese-Chat",
+ DownloadSource.MODELSCOPE: "XD_AI/Llama3.1-8B-Chinese-Chat",
+ },
+ "Llama-3.1-70B-Chinese-Chat": {
+ DownloadSource.DEFAULT: "shenzhi-wang/Llama3.1-70B-Chinese-Chat",
+ DownloadSource.MODELSCOPE: "XD_AI/Llama3.1-70B-Chinese-Chat",
+ },
+ "Llama-3.2-1B": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-3.2-1B",
+ DownloadSource.MODELSCOPE: "LLM-Research/Llama-3.2-1B",
+ },
+ "Llama-3.2-3B": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-3.2-3B",
+ DownloadSource.MODELSCOPE: "LLM-Research/Llama-3.2-3B",
+ },
+ "Llama-3.2-1B-Instruct": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-3.2-1B-Instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Llama-3.2-1B-Instruct",
+ },
+ "Llama-3.2-3B-Instruct": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-3.2-3B-Instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Llama-3.2-3B-Instruct",
+ },
+ "Llama-3.3-70B-Instruct": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-3.3-70B-Instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Llama-3.3-70B-Instruct",
+ },
+ },
+ template="llama3",
+)
+
+
+register_model_group(
+ models={
+ "Llama-3.2-11B-Vision": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-3.2-11B-Vision",
+ DownloadSource.MODELSCOPE: "LLM-Research/Llama-3.2-11B-Vision",
+ },
+ "Llama-3.2-11B-Vision-Instruct": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-3.2-11B-Vision-Instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Llama-3.2-11B-Vision-Instruct",
+ },
+ "Llama-3.2-90B-Vision": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-3.2-90B-Vision",
+ DownloadSource.MODELSCOPE: "LLM-Research/Llama-3.2-90B-Vision",
+ },
+ "Llama-3.2-90B-Vision-Instruct": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-3.2-90B-Vision-Instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Llama-3.2-90B-Vision-Instruct",
+ },
+ },
+ template="mllama",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "Llama-4-Scout-17B-16E": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-4-Scout-17B-16E",
+ DownloadSource.MODELSCOPE: "LLM-Research/Llama-4-Scout-17B-16E",
+ },
+ "Llama-4-Scout-17B-16E-Instruct": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-4-Scout-17B-16E-Instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Llama-4-Scout-17B-16E-Instruct",
+ },
+ "Llama-4-Maverick-17B-128E": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-4-Maverick-17B-128E",
+ DownloadSource.MODELSCOPE: "LLM-Research/Llama-4-Maverick-17B-128E",
+ },
+ "Llama-4-Maverick-17B-128E-Instruct": {
+ DownloadSource.DEFAULT: "meta-llama/Llama-4-Maverick-17B-128E-Instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Llama-4-Maverick-17B-128E-Instruct",
+ },
+ },
+ template="llama4",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "LLaVA-1.5-7B-Chat": {
+ DownloadSource.DEFAULT: "llava-hf/llava-1.5-7b-hf",
+ DownloadSource.MODELSCOPE: "swift/llava-1.5-7b-hf",
+ },
+ "LLaVA-1.5-13B-Chat": {
+ DownloadSource.DEFAULT: "llava-hf/llava-1.5-13b-hf",
+ DownloadSource.MODELSCOPE: "swift/llava-1.5-13b-hf",
+ },
+ },
+ template="llava",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "LLaVA-NeXT-7B-Chat": {
+ DownloadSource.DEFAULT: "llava-hf/llava-v1.6-vicuna-7b-hf",
+ DownloadSource.MODELSCOPE: "swift/llava-v1.6-vicuna-7b-hf",
+ },
+ "LLaVA-NeXT-13B-Chat": {
+ DownloadSource.DEFAULT: "llava-hf/llava-v1.6-vicuna-13b-hf",
+ DownloadSource.MODELSCOPE: "swift/llava-v1.6-vicuna-13b-hf",
+ },
+ },
+ template="llava_next",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "LLaVA-NeXT-Mistral-7B-Chat": {
+ DownloadSource.DEFAULT: "llava-hf/llava-v1.6-mistral-7b-hf",
+ DownloadSource.MODELSCOPE: "swift/llava-v1.6-mistral-7b-hf",
+ },
+ },
+ template="llava_next_mistral",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "LLaVA-NeXT-Llama3-8B-Chat": {
+ DownloadSource.DEFAULT: "llava-hf/llama3-llava-next-8b-hf",
+ DownloadSource.MODELSCOPE: "swift/llama3-llava-next-8b-hf",
+ },
+ },
+ template="llava_next_llama3",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "LLaVA-NeXT-34B-Chat": {
+ DownloadSource.DEFAULT: "llava-hf/llava-v1.6-34b-hf",
+ DownloadSource.MODELSCOPE: "LLM-Research/llava-v1.6-34b-hf",
+ },
+ },
+ template="llava_next_yi",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "LLaVA-NeXT-72B-Chat": {
+ DownloadSource.DEFAULT: "llava-hf/llava-next-72b-hf",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/llava-next-72b-hf",
+ },
+ "LLaVA-NeXT-110B-Chat": {
+ DownloadSource.DEFAULT: "llava-hf/llava-next-110b-hf",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/llava-next-110b-hf",
+ },
+ },
+ template="llava_next_qwen",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "LLaVA-NeXT-Video-7B-Chat": {
+ DownloadSource.DEFAULT: "llava-hf/LLaVA-NeXT-Video-7B-hf",
+ DownloadSource.MODELSCOPE: "swift/LLaVA-NeXT-Video-7B-hf",
+ },
+ "LLaVA-NeXT-Video-7B-DPO-Chat": {
+ DownloadSource.DEFAULT: "llava-hf/LLaVA-NeXT-Video-7B-DPO-hf",
+ DownloadSource.MODELSCOPE: "swift/LLaVA-NeXT-Video-7B-DPO-hf",
+ },
+ },
+ template="llava_next_video",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "LLaVA-NeXT-Video-7B-32k-Chat": {
+ DownloadSource.DEFAULT: "llava-hf/LLaVA-NeXT-Video-7B-32K-hf",
+ DownloadSource.MODELSCOPE: "swift/LLaVA-NeXT-Video-7B-32K-hf",
+ },
+ },
+ template="llava_next_video_mistral",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "LLaVA-NeXT-Video-34B-Chat": {
+ DownloadSource.DEFAULT: "llava-hf/LLaVA-NeXT-Video-34B-hf",
+ DownloadSource.MODELSCOPE: "swift/LLaVA-NeXT-Video-34B-hf",
+ },
+ "LLaVA-NeXT-Video-34B-DPO-Chat": {
+ DownloadSource.DEFAULT: "llava-hf/LLaVA-NeXT-Video-34B-DPO-hf",
+ },
+ },
+ template="llava_next_video_yi",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "Marco-o1-Chat": {
+ DownloadSource.DEFAULT: "AIDC-AI/Marco-o1",
+ DownloadSource.MODELSCOPE: "AIDC-AI/Marco-o1",
+ },
+ },
+ template="marco",
+)
+
+
+register_model_group(
+ models={
+ "MiMo-7B-Base": {
+ DownloadSource.DEFAULT: "XiaomiMiMo/MiMo-7B-Base",
+ DownloadSource.MODELSCOPE: "XiaomiMiMo/MiMo-7B-Base",
+ },
+ "MiMo-7B-Instruct": {
+ DownloadSource.DEFAULT: "XiaomiMiMo/MiMo-7B-SFT",
+ DownloadSource.MODELSCOPE: "XiaomiMiMo/MiMo-7B-SFT",
+ },
+ "MiMo-7B-Instruct-RL": {
+ DownloadSource.DEFAULT: "XiaomiMiMo/MiMo-7B-RL",
+ DownloadSource.MODELSCOPE: "XiaomiMiMo/MiMo-7B-RL",
+ },
+ "MiMo-7B-RL-ZERO": {
+ DownloadSource.DEFAULT: "XiaomiMiMo/MiMo-7B-RL-ZERO",
+ DownloadSource.MODELSCOPE: "XiaomiMiMo/MiMo-7B-RL-ZERO",
+ },
+ },
+ template="mimo",
+)
+
+
+register_model_group(
+ models={
+ "MiMo-7B-VL-Instruct": {
+ DownloadSource.DEFAULT: "XiaomiMiMo/MiMo-VL-7B-SFT",
+ DownloadSource.MODELSCOPE: "XiaomiMiMo/MiMo-VL-7B-SFT",
+ },
+ "MiMo-7B-VL-RL": {
+ DownloadSource.DEFAULT: "XiaomiMiMo/MiMo-VL-7B-RL",
+ DownloadSource.MODELSCOPE: "XiaomiMiMo/MiMo-VL-7B-RL",
+ },
+ },
+ template="mimo_vl",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "MiniCPM-2B-SFT-Chat": {
+ DownloadSource.DEFAULT: "openbmb/MiniCPM-2B-sft-bf16",
+ DownloadSource.MODELSCOPE: "OpenBMB/miniCPM-bf16",
+ },
+ "MiniCPM-2B-DPO-Chat": {
+ DownloadSource.DEFAULT: "openbmb/MiniCPM-2B-dpo-bf16",
+ DownloadSource.MODELSCOPE: "OpenBMB/MiniCPM-2B-dpo-bf16",
+ },
+ },
+ template="cpm",
+)
+
+
+register_model_group(
+ models={
+ "MiniCPM3-4B-Chat": {
+ DownloadSource.DEFAULT: "openbmb/MiniCPM3-4B",
+ DownloadSource.MODELSCOPE: "OpenBMB/MiniCPM3-4B",
+ DownloadSource.OPENMIND: "LlamaFactory/MiniCPM3-4B",
+ },
+ },
+ template="cpm3",
+)
+
+
+register_model_group(
+ models={
+ "MiniCPM4-0.5B-Chat": {
+ DownloadSource.DEFAULT: "openbmb/MiniCPM4-0.5B",
+ DownloadSource.MODELSCOPE: "OpenBMB/MiniCPM4-0.5B",
+ },
+ "MiniCPM4-8B-Chat": {
+ DownloadSource.DEFAULT: "openbmb/MiniCPM4-8B",
+ DownloadSource.MODELSCOPE: "OpenBMB/MiniCPM4-8B",
+ },
+ },
+ template="cpm4",
+)
+
+
+register_model_group(
+ models={
+ "MiniCPM-o-2_6": {
+ DownloadSource.DEFAULT: "openbmb/MiniCPM-o-2_6",
+ DownloadSource.MODELSCOPE: "OpenBMB/MiniCPM-o-2_6",
+ },
+ },
+ template="minicpm_o",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "MiniCPM-V-2_6": {
+ DownloadSource.DEFAULT: "openbmb/MiniCPM-V-2_6",
+ DownloadSource.MODELSCOPE: "OpenBMB/MiniCPM-V-2_6",
+ },
+ },
+ template="minicpm_v",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "MiniCPM-V-4": {
+ DownloadSource.DEFAULT: "openbmb/MiniCPM-V-4",
+ DownloadSource.MODELSCOPE: "OpenBMB/MiniCPM-V-4",
+ },
+ },
+ template="minicpm_v",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "Ministral-8B-Instruct-2410": {
+ DownloadSource.DEFAULT: "mistralai/Ministral-8B-Instruct-2410",
+ DownloadSource.MODELSCOPE: "mistralai/Ministral-8B-Instruct-2410",
+ },
+ "Mistral-Nemo-Base-2407": {
+ DownloadSource.DEFAULT: "mistralai/Mistral-Nemo-Base-2407",
+ DownloadSource.MODELSCOPE: "LLM-Research/Mistral-Nemo-Base-2407",
+ },
+ "Mistral-Nemo-Instruct-2407": {
+ DownloadSource.DEFAULT: "mistralai/Mistral-Nemo-Instruct-2407",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/Mistral-Nemo-Instruct-2407",
+ },
+ },
+ template="ministral",
+)
+
+
+register_model_group(
+ models={
+ "Mistral-7B-v0.1": {
+ DownloadSource.DEFAULT: "mistralai/Mistral-7B-v0.1",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/Mistral-7B-v0.1",
+ },
+ "Mistral-7B-v0.2": {
+ DownloadSource.DEFAULT: "alpindale/Mistral-7B-v0.2-hf",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/Mistral-7B-v0.2-hf",
+ },
+ "Mistral-7B-v0.3": {
+ DownloadSource.DEFAULT: "mistralai/Mistral-7B-v0.3",
+ DownloadSource.MODELSCOPE: "LLM-Research/mistral-7b-v0.3",
+ },
+ "Mistral-7B-Instruct-v0.1": {
+ DownloadSource.DEFAULT: "mistralai/Mistral-7B-Instruct-v0.1",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/Mistral-7B-Instruct-v0.1",
+ },
+ "Mistral-7B-Instruct-v0.2": {
+ DownloadSource.DEFAULT: "mistralai/Mistral-7B-Instruct-v0.2",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/Mistral-7B-Instruct-v0.2",
+ },
+ "Mistral-7B-Instruct-v0.3": {
+ DownloadSource.DEFAULT: "mistralai/Mistral-7B-Instruct-v0.3",
+ DownloadSource.MODELSCOPE: "LLM-Research/Mistral-7B-Instruct-v0.3",
+ },
+ },
+ template="mistral",
+)
+
+
+register_model_group(
+ models={
+ "Mistral-Small-24B-Base-2501": {
+ DownloadSource.DEFAULT: "mistralai/Mistral-Small-24B-Base-2501",
+ DownloadSource.MODELSCOPE: "mistralai/Mistral-Small-24B-Base-2501",
+ },
+ "Mistral-Small-24B-Instruct-2501": {
+ DownloadSource.DEFAULT: "mistralai/Mistral-Small-24B-Instruct-2501",
+ DownloadSource.MODELSCOPE: "mistralai/Mistral-Small-24B-Instruct-2501",
+ },
+ },
+ template="mistral_small",
+)
+
+
+register_model_group(
+ models={
+ "Mistral-Small-3.1-24B-Base": {
+ DownloadSource.DEFAULT: "mistralai/Mistral-Small-3.1-24B-Base-2503",
+ DownloadSource.MODELSCOPE: "mistralai/Mistral-Small-3.1-24B-Base-2503",
+ },
+ "Mistral-Small-3.1-24B-Instruct": {
+ DownloadSource.DEFAULT: "mistralai/Mistral-Small-3.1-24B-Instruct-2503",
+ DownloadSource.MODELSCOPE: "mistralai/Mistral-Small-3.1-24B-Instruct-2503",
+ },
+ "Mistral-Small-3.2-24B-Instruct": {
+ DownloadSource.DEFAULT: "mistralai/Mistral-Small-3.2-24B-Instruct-2506",
+ DownloadSource.MODELSCOPE: "mistralai/Mistral-Small-3.2-24B-Instruct-2506",
+ },
+ },
+ template="mistral_small",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "Mixtral-8x7B-v0.1": {
+ DownloadSource.DEFAULT: "mistralai/Mixtral-8x7B-v0.1",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/Mixtral-8x7B-v0.1",
+ },
+ "Mixtral-8x22B-v0.1": {
+ DownloadSource.DEFAULT: "mistralai/Mixtral-8x22B-v0.1",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/Mixtral-8x22B-v0.1",
+ },
+ "Mixtral-8x7B-v0.1-Instruct": {
+ DownloadSource.DEFAULT: "mistralai/Mixtral-8x7B-Instruct-v0.1",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/Mixtral-8x7B-Instruct-v0.1",
+ },
+ "Mixtral-8x22B-v0.1-Instruct": {
+ DownloadSource.DEFAULT: "mistralai/Mixtral-8x22B-Instruct-v0.1",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/Mixtral-8x22B-Instruct-v0.1",
+ },
+ },
+ template="mistral",
+)
+
+
+register_model_group(
+ models={
+ "Moonlight-16B-A3B": {
+ DownloadSource.DEFAULT: "moonshotai/Moonlight-16B-A3B",
+ DownloadSource.MODELSCOPE: "moonshotai/Moonlight-16B-A3B",
+ },
+ "Moonlight-16B-A3B-Instruct": {
+ DownloadSource.DEFAULT: "moonshotai/Moonlight-16B-A3B-Instruct",
+ DownloadSource.MODELSCOPE: "moonshotai/Moonlight-16B-A3B-Instruct",
+ },
+ },
+ template="moonlight",
+)
+
+
+register_model_group(
+ models={
+ "OLMo-1B": {
+ DownloadSource.DEFAULT: "allenai/OLMo-1B-hf",
+ },
+ "OLMo-7B": {
+ DownloadSource.DEFAULT: "allenai/OLMo-7B-hf",
+ },
+ "OLMo-7B-Chat": {
+ DownloadSource.DEFAULT: "ssec-uw/OLMo-7B-Instruct-hf",
+ },
+ "OLMo-1.7-7B": {
+ DownloadSource.DEFAULT: "allenai/OLMo-1.7-7B-hf",
+ },
+ },
+)
+
+
+register_model_group(
+ models={
+ "OpenChat3.5-7B-Chat": {
+ DownloadSource.DEFAULT: "openchat/openchat-3.5-0106",
+ DownloadSource.MODELSCOPE: "xcwzxcwz/openchat-3.5-0106",
+ }
+ },
+ template="openchat",
+)
+
+
+register_model_group(
+ models={
+ "OpenChat3.6-8B-Chat": {
+ DownloadSource.DEFAULT: "openchat/openchat-3.6-8b-20240522",
+ }
+ },
+ template="openchat-3.6",
+)
+
+
+register_model_group(
+ models={
+ "OpenCoder-1.5B-Base": {
+ DownloadSource.DEFAULT: "infly/OpenCoder-1.5B-Base",
+ DownloadSource.MODELSCOPE: "infly/OpenCoder-1.5B-Base",
+ },
+ "OpenCoder-8B-Base": {
+ DownloadSource.DEFAULT: "infly/OpenCoder-8B-Base",
+ DownloadSource.MODELSCOPE: "infly/OpenCoder-8B-Base",
+ },
+ "OpenCoder-1.5B-Instruct": {
+ DownloadSource.DEFAULT: "infly/OpenCoder-1.5B-Instruct",
+ DownloadSource.MODELSCOPE: "infly/OpenCoder-1.5B-Instruct",
+ },
+ "OpenCoder-8B-Instruct": {
+ DownloadSource.DEFAULT: "infly/OpenCoder-8B-Instruct",
+ DownloadSource.MODELSCOPE: "infly/OpenCoder-8B-Instruct",
+ },
+ },
+ template="opencoder",
+)
+
+
+register_model_group(
+ models={
+ "Orion-14B-Base": {
+ DownloadSource.DEFAULT: "OrionStarAI/Orion-14B-Base",
+ DownloadSource.MODELSCOPE: "OrionStarAI/Orion-14B-Base",
+ },
+ "Orion-14B-Chat": {
+ DownloadSource.DEFAULT: "OrionStarAI/Orion-14B-Chat",
+ DownloadSource.MODELSCOPE: "OrionStarAI/Orion-14B-Chat",
+ },
+ "Orion-14B-Long-Chat": {
+ DownloadSource.DEFAULT: "OrionStarAI/Orion-14B-LongChat",
+ DownloadSource.MODELSCOPE: "OrionStarAI/Orion-14B-LongChat",
+ },
+ "Orion-14B-RAG-Chat": {
+ DownloadSource.DEFAULT: "OrionStarAI/Orion-14B-Chat-RAG",
+ DownloadSource.MODELSCOPE: "OrionStarAI/Orion-14B-Chat-RAG",
+ },
+ "Orion-14B-Plugin-Chat": {
+ DownloadSource.DEFAULT: "OrionStarAI/Orion-14B-Chat-Plugin",
+ DownloadSource.MODELSCOPE: "OrionStarAI/Orion-14B-Chat-Plugin",
+ },
+ },
+ template="orion",
+)
+
+
+register_model_group(
+ models={
+ "PaliGemma-3B-pt-224": {
+ DownloadSource.DEFAULT: "google/paligemma-3b-pt-224",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/paligemma-3b-pt-224",
+ },
+ "PaliGemma-3B-pt-448": {
+ DownloadSource.DEFAULT: "google/paligemma-3b-pt-448",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/paligemma-3b-pt-448",
+ },
+ "PaliGemma-3B-pt-896": {
+ DownloadSource.DEFAULT: "google/paligemma-3b-pt-896",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/paligemma-3b-pt-896",
+ },
+ "PaliGemma-3B-mix-224": {
+ DownloadSource.DEFAULT: "google/paligemma-3b-mix-224",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/paligemma-3b-mix-224",
+ },
+ "PaliGemma-3B-mix-448": {
+ DownloadSource.DEFAULT: "google/paligemma-3b-mix-448",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/paligemma-3b-mix-448",
+ },
+ },
+ template="paligemma",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "PaliGemma2-3B-pt-224": {
+ DownloadSource.DEFAULT: "google/paligemma2-3b-pt-224",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/paligemma2-3b-pt-224",
+ },
+ "PaliGemma2-3B-pt-448": {
+ DownloadSource.DEFAULT: "google/paligemma2-3b-pt-448",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/paligemma2-3b-pt-448",
+ },
+ "PaliGemma2-3B-pt-896": {
+ DownloadSource.DEFAULT: "google/paligemma2-3b-pt-896",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/paligemma2-3b-pt-896",
+ },
+ "PaliGemma2-10B-pt-224": {
+ DownloadSource.DEFAULT: "google/paligemma2-10b-pt-224",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/paligemma2-10b-pt-224",
+ },
+ "PaliGemma2-10B-pt-448": {
+ DownloadSource.DEFAULT: "google/paligemma2-10b-pt-448",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/paligemma2-10b-pt-448",
+ },
+ "PaliGemma2-10B-pt-896": {
+ DownloadSource.DEFAULT: "google/paligemma2-10b-pt-896",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/paligemma2-10b-pt-896",
+ },
+ "PaliGemma2-28B-pt-224": {
+ DownloadSource.DEFAULT: "google/paligemma2-28b-pt-224",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/paligemma2-28b-pt-224",
+ },
+ "PaliGemma2-28B-pt-448": {
+ DownloadSource.DEFAULT: "google/paligemma2-28b-pt-448",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/paligemma2-28b-pt-448",
+ },
+ "PaliGemma2-28B-pt-896": {
+ DownloadSource.DEFAULT: "google/paligemma2-28b-pt-896",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/paligemma2-28b-pt-896",
+ },
+ "PaliGemma2-3B-mix-224": {
+ DownloadSource.DEFAULT: "google/paligemma2-3b-mix-224",
+ DownloadSource.MODELSCOPE: "mlx-community/paligemma2-3b-mix-224-bf16",
+ },
+ "PaliGemma2-3B-mix-448": {
+ DownloadSource.DEFAULT: "google/paligemma2-3b-mix-448",
+ DownloadSource.MODELSCOPE: "mlx-community/paligemma2-3b-mix-448-bf16",
+ },
+ "PaliGemma2-10B-mix-224": {
+ DownloadSource.DEFAULT: "google/paligemma2-10b-mix-224",
+ DownloadSource.MODELSCOPE: "mlx-community/paligemma2-10b-mix-224-bf16",
+ },
+ "PaliGemma2-10B-mix-448": {
+ DownloadSource.DEFAULT: "google/paligemma2-10b-mix-448",
+ DownloadSource.MODELSCOPE: "mlx-community/paligemma2-10b-mix-448-bf16",
+ },
+ "PaliGemma2-28B-mix-224": {
+ DownloadSource.DEFAULT: "google/paligemma2-28b-mix-224",
+ DownloadSource.MODELSCOPE: "mlx-community/paligemma2-28b-mix-224-bf16",
+ },
+ "PaliGemma2-28B-mix-448": {
+ DownloadSource.DEFAULT: "google/paligemma2-28b-mix-448",
+ DownloadSource.MODELSCOPE: "mlx-community/paligemma2-28b-mix-448-bf16",
+ },
+ },
+ template="paligemma",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "Phi-1.5-1.3B": {
+ DownloadSource.DEFAULT: "microsoft/phi-1_5",
+ DownloadSource.MODELSCOPE: "allspace/PHI_1-5",
+ },
+ "Phi-2-2.7B": {
+ DownloadSource.DEFAULT: "microsoft/phi-2",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/phi-2",
+ },
+ }
+)
+
+
+register_model_group(
+ models={
+ "Phi-3-4B-4k-Instruct": {
+ DownloadSource.DEFAULT: "microsoft/Phi-3-mini-4k-instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Phi-3-mini-4k-instruct",
+ },
+ "Phi-3-4B-128k-Instruct": {
+ DownloadSource.DEFAULT: "microsoft/Phi-3-mini-128k-instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Phi-3-mini-128k-instruct",
+ },
+ "Phi-3-14B-8k-Instruct": {
+ DownloadSource.DEFAULT: "microsoft/Phi-3-medium-4k-instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Phi-3-medium-4k-instruct",
+ },
+ "Phi-3-14B-128k-Instruct": {
+ DownloadSource.DEFAULT: "microsoft/Phi-3-medium-128k-instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Phi-3-medium-128k-instruct",
+ },
+ "Phi-3.5-4B-instruct": {
+ DownloadSource.DEFAULT: "microsoft/Phi-3.5-mini-instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Phi-3.5-mini-instruct",
+ },
+ "Phi-3.5-MoE-42B-A6.6B-instruct": {
+ DownloadSource.DEFAULT: "microsoft/Phi-3.5-MoE-instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Phi-3.5-MoE-instruct",
+ },
+ },
+ template="phi",
+)
+
+
+register_model_group(
+ models={
+ "Phi-3-7B-8k-Instruct": {
+ DownloadSource.DEFAULT: "microsoft/Phi-3-small-8k-instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Phi-3-small-8k-instruct",
+ },
+ "Phi-3-7B-128k-Instruct": {
+ DownloadSource.DEFAULT: "microsoft/Phi-3-small-128k-instruct",
+ DownloadSource.MODELSCOPE: "LLM-Research/Phi-3-small-128k-instruct",
+ },
+ },
+ template="phi_small",
+)
+
+
+register_model_group(
+ models={
+ "Phi-4-14B-Instruct": {
+ DownloadSource.DEFAULT: "microsoft/phi-4",
+ DownloadSource.MODELSCOPE: "LLM-Research/phi-4",
+ },
+ },
+ template="phi4",
+)
+
+
+register_model_group(
+ models={
+ "Pixtral-12B": {
+ DownloadSource.DEFAULT: "mistral-community/pixtral-12b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/pixtral-12b",
+ }
+ },
+ template="pixtral",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "Qwen-1.8B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-1_8B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-1_8B",
+ },
+ "Qwen-7B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-7B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-7B",
+ },
+ "Qwen-14B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-14B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-14B",
+ },
+ "Qwen-72B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-72B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-72B",
+ },
+ "Qwen-1.8B-Chat": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-1_8B-Chat",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-1_8B-Chat",
+ },
+ "Qwen-7B-Chat": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-7B-Chat",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-7B-Chat",
+ },
+ "Qwen-14B-Chat": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-14B-Chat",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-14B-Chat",
+ },
+ "Qwen-72B-Chat": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-72B-Chat",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-72B-Chat",
+ },
+ "Qwen-1.8B-Chat-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-1_8B-Chat-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-1_8B-Chat-Int8",
+ },
+ "Qwen-1.8B-Chat-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-1_8B-Chat-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-1_8B-Chat-Int4",
+ },
+ "Qwen-7B-Chat-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-7B-Chat-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-7B-Chat-Int8",
+ },
+ "Qwen-7B-Chat-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-7B-Chat-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-7B-Chat-Int4",
+ },
+ "Qwen-14B-Chat-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-14B-Chat-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-14B-Chat-Int8",
+ },
+ "Qwen-14B-Chat-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-14B-Chat-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-14B-Chat-Int4",
+ },
+ "Qwen-72B-Chat-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-72B-Chat-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-72B-Chat-Int8",
+ },
+ "Qwen-72B-Chat-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen-72B-Chat-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen-72B-Chat-Int4",
+ },
+ },
+ template="qwen",
+)
+
+
+register_model_group(
+ models={
+ "Qwen1.5-0.5B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-0.5B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-0.5B",
+ },
+ "Qwen1.5-1.8B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-1.8B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-1.8B",
+ },
+ "Qwen1.5-4B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-4B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-4B",
+ },
+ "Qwen1.5-7B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-7B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-7B",
+ },
+ "Qwen1.5-14B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-14B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-14B",
+ },
+ "Qwen1.5-32B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-32B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-32B",
+ },
+ "Qwen1.5-72B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-72B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-72B",
+ },
+ "Qwen1.5-110B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-110B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-110B",
+ },
+ "Qwen1.5-MoE-A2.7B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-MoE-A2.7B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-MoE-A2.7B",
+ },
+ "Qwen1.5-0.5B-Chat": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-0.5B-Chat",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-0.5B-Chat",
+ },
+ "Qwen1.5-1.8B-Chat": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-1.8B-Chat",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-1.8B-Chat",
+ },
+ "Qwen1.5-4B-Chat": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-4B-Chat",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-4B-Chat",
+ },
+ "Qwen1.5-7B-Chat": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-7B-Chat",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-7B-Chat",
+ },
+ "Qwen1.5-14B-Chat": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-14B-Chat",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-14B-Chat",
+ },
+ "Qwen1.5-32B-Chat": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-32B-Chat",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-32B-Chat",
+ },
+ "Qwen1.5-72B-Chat": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-72B-Chat",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-72B-Chat",
+ },
+ "Qwen1.5-110B-Chat": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-110B-Chat",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-110B-Chat",
+ },
+ "Qwen1.5-MoE-A2.7B-Chat": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-MoE-A2.7B-Chat",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-MoE-A2.7B-Chat",
+ },
+ "Qwen1.5-0.5B-Chat-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-0.5B-Chat-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-0.5B-Chat-GPTQ-Int8",
+ },
+ "Qwen1.5-0.5B-Chat-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-0.5B-Chat-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-0.5B-Chat-AWQ",
+ },
+ "Qwen1.5-1.8B-Chat-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-1.8B-Chat-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-1.8B-Chat-GPTQ-Int8",
+ },
+ "Qwen1.5-1.8B-Chat-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-1.8B-Chat-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-1.8B-Chat-AWQ",
+ },
+ "Qwen1.5-4B-Chat-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-4B-Chat-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-4B-Chat-GPTQ-Int8",
+ },
+ "Qwen1.5-4B-Chat-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-4B-Chat-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-4B-Chat-AWQ",
+ },
+ "Qwen1.5-7B-Chat-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-7B-Chat-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-7B-Chat-GPTQ-Int8",
+ },
+ "Qwen1.5-7B-Chat-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-7B-Chat-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-7B-Chat-AWQ",
+ },
+ "Qwen1.5-14B-Chat-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-14B-Chat-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-14B-Chat-GPTQ-Int8",
+ },
+ "Qwen1.5-14B-Chat-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-14B-Chat-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-14B-Chat-AWQ",
+ },
+ "Qwen1.5-32B-Chat-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-32B-Chat-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-32B-Chat-AWQ",
+ },
+ "Qwen1.5-72B-Chat-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-72B-Chat-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-72B-Chat-GPTQ-Int8",
+ },
+ "Qwen1.5-72B-Chat-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-72B-Chat-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-72B-Chat-AWQ",
+ },
+ "Qwen1.5-110B-Chat-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-110B-Chat-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-110B-Chat-AWQ",
+ },
+ "Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen1.5-MoE-A2.7B-Chat-GPTQ-Int4",
+ },
+ "CodeQwen1.5-7B": {
+ DownloadSource.DEFAULT: "Qwen/CodeQwen1.5-7B",
+ DownloadSource.MODELSCOPE: "Qwen/CodeQwen1.5-7B",
+ },
+ "CodeQwen1.5-7B-Chat": {
+ DownloadSource.DEFAULT: "Qwen/CodeQwen1.5-7B-Chat",
+ DownloadSource.MODELSCOPE: "Qwen/CodeQwen1.5-7B-Chat",
+ },
+ "CodeQwen1.5-7B-Chat-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/CodeQwen1.5-7B-Chat-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/CodeQwen1.5-7B-Chat-AWQ",
+ },
+ },
+ template="qwen",
+)
+
+
+register_model_group(
+ models={
+ "Qwen2-0.5B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-0.5B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-0.5B",
+ },
+ "Qwen2-1.5B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-1.5B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-1.5B",
+ },
+ "Qwen2-7B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-7B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-7B",
+ },
+ "Qwen2-72B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-72B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-72B",
+ },
+ "Qwen2-MoE-57B-A14B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-57B-A14B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-57B-A14B",
+ },
+ "Qwen2-0.5B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-0.5B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-0.5B-Instruct",
+ DownloadSource.OPENMIND: "LlamaFactory/Qwen2-0.5B-Instruct",
+ },
+ "Qwen2-1.5B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-1.5B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-1.5B-Instruct",
+ DownloadSource.OPENMIND: "LlamaFactory/Qwen2-1.5B-Instruct",
+ },
+ "Qwen2-7B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-7B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-7B-Instruct",
+ DownloadSource.OPENMIND: "LlamaFactory/Qwen2-7B-Instruct",
+ },
+ "Qwen2-72B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-72B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-72B-Instruct",
+ },
+ "Qwen2-MoE-57B-A14B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-57B-A14B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-57B-A14B-Instruct",
+ },
+ "Qwen2-0.5B-Instruct-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-0.5B-Instruct-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-0.5B-Instruct-GPTQ-Int8",
+ },
+ "Qwen2-0.5B-Instruct-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-0.5B-Instruct-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-0.5B-Instruct-GPTQ-Int4",
+ },
+ "Qwen2-0.5B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-0.5B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-0.5B-Instruct-AWQ",
+ },
+ "Qwen2-1.5B-Instruct-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-1.5B-Instruct-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-1.5B-Instruct-GPTQ-Int8",
+ },
+ "Qwen2-1.5B-Instruct-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-1.5B-Instruct-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-1.5B-Instruct-GPTQ-Int4",
+ },
+ "Qwen2-1.5B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-1.5B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-1.5B-Instruct-AWQ",
+ },
+ "Qwen2-7B-Instruct-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-7B-Instruct-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-7B-Instruct-GPTQ-Int8",
+ },
+ "Qwen2-7B-Instruct-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-7B-Instruct-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-7B-Instruct-GPTQ-Int4",
+ },
+ "Qwen2-7B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-7B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-7B-Instruct-AWQ",
+ },
+ "Qwen2-72B-Instruct-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-72B-Instruct-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-72B-Instruct-GPTQ-Int8",
+ },
+ "Qwen2-72B-Instruct-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-72B-Instruct-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-72B-Instruct-GPTQ-Int4",
+ },
+ "Qwen2-72B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-72B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-72B-Instruct-AWQ",
+ },
+ "Qwen2-57B-A14B-Instruct-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-57B-A14B-Instruct-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-57B-A14B-Instruct-GPTQ-Int4",
+ },
+ "Qwen2-Math-1.5B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-Math-1.5B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-Math-1.5B",
+ },
+ "Qwen2-Math-7B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-Math-7B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-Math-7B",
+ },
+ "Qwen2-Math-72B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-Math-72B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-Math-72B",
+ },
+ "Qwen2-Math-1.5B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-Math-1.5B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-Math-1.5B-Instruct",
+ },
+ "Qwen2-Math-7B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-Math-7B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-Math-7B-Instruct",
+ },
+ "Qwen2-Math-72B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-Math-72B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-Math-72B-Instruct",
+ },
+ },
+ template="qwen",
+)
+
+
+register_model_group(
+ models={
+ "Qwen2.5-0.5B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-0.5B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-0.5B",
+ },
+ "Qwen2.5-1.5B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-1.5B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-1.5B",
+ },
+ "Qwen2.5-3B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-3B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-3B",
+ },
+ "Qwen2.5-7B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-7B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-7B",
+ },
+ "Qwen2.5-14B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-14B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-14B",
+ },
+ "Qwen2.5-32B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-32B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-32B",
+ },
+ "Qwen2.5-72B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-72B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-72B",
+ },
+ "Qwen2.5-0.5B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-0.5B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-0.5B-Instruct",
+ },
+ "Qwen2.5-1.5B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-1.5B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-1.5B-Instruct",
+ },
+ "Qwen2.5-3B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-3B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-3B-Instruct",
+ },
+ "Qwen2.5-7B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-7B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-7B-Instruct",
+ },
+ "Qwen2.5-14B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-14B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-14B-Instruct",
+ },
+ "Qwen2.5-32B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-32B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-32B-Instruct",
+ },
+ "Qwen2.5-72B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-72B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-72B-Instruct",
+ },
+ "Qwen2.5-7B-Instruct-1M": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-7B-Instruct-1M",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-7B-Instruct-1M",
+ },
+ "Qwen2.5-14B-Instruct-1M": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-14B-Instruct-1M",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-14B-Instruct-1M",
+ },
+ "Qwen2.5-0.5B-Instruct-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-0.5B-Instruct-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-0.5B-Instruct-GPTQ-Int8",
+ },
+ "Qwen2.5-0.5B-Instruct-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-0.5B-Instruct-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-0.5B-Instruct-GPTQ-Int4",
+ },
+ "Qwen2.5-0.5B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-0.5B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-0.5B-Instruct-AWQ",
+ },
+ "Qwen2.5-1.5B-Instruct-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-1.5B-Instruct-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-1.5B-Instruct-GPTQ-Int8",
+ },
+ "Qwen2.5-1.5B-Instruct-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-1.5B-Instruct-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-1.5B-Instruct-GPTQ-Int4",
+ },
+ "Qwen2.5-1.5B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-1.5B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-1.5B-Instruct-AWQ",
+ },
+ "Qwen2.5-3B-Instruct-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-3B-Instruct-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-3B-Instruct-GPTQ-Int8",
+ },
+ "Qwen2.5-3B-Instruct-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-3B-Instruct-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-3B-Instruct-GPTQ-Int4",
+ },
+ "Qwen2.5-3B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-3B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-3B-Instruct-AWQ",
+ },
+ "Qwen2.5-7B-Instruct-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-7B-Instruct-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-7B-Instruct-GPTQ-Int8",
+ },
+ "Qwen2.5-7B-Instruct-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4",
+ },
+ "Qwen2.5-7B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-7B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-7B-Instruct-AWQ",
+ },
+ "Qwen2.5-14B-Instruct-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-14B-Instruct-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-14B-Instruct-GPTQ-Int8",
+ },
+ "Qwen2.5-14B-Instruct-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-14B-Instruct-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-14B-Instruct-GPTQ-Int4",
+ },
+ "Qwen2.5-14B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-14B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-14B-Instruct-AWQ",
+ },
+ "Qwen2.5-32B-Instruct-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-32B-Instruct-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-32B-Instruct-GPTQ-Int8",
+ },
+ "Qwen2.5-32B-Instruct-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-32B-Instruct-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-32B-Instruct-GPTQ-Int4",
+ },
+ "Qwen2.5-32B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-32B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-32B-Instruct-AWQ",
+ },
+ "Qwen2.5-72B-Instruct-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-72B-Instruct-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-72B-Instruct-GPTQ-Int8",
+ },
+ "Qwen2.5-72B-Instruct-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4",
+ },
+ "Qwen2.5-72B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-72B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-72B-Instruct-AWQ",
+ },
+ "Qwen2.5-Coder-0.5B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Coder-0.5B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Coder-0.5B",
+ },
+ "Qwen2.5-Coder-1.5B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Coder-1.5B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Coder-1.5B",
+ },
+ "Qwen2.5-Coder-3B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Coder-3B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Coder-3B",
+ },
+ "Qwen2.5-Coder-7B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Coder-7B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Coder-7B",
+ },
+ "Qwen2.5-Coder-14B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Coder-14B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Coder-14B",
+ },
+ "Qwen2.5-Coder-32B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Coder-32B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Coder-32B",
+ },
+ "Qwen2.5-Coder-0.5B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Coder-0.5B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Coder-0.5B-Instruct",
+ },
+ "Qwen2.5-Coder-1.5B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Coder-1.5B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Coder-1.5B-Instruct",
+ },
+ "Qwen2.5-Coder-3B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Coder-3B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Coder-3B-Instruct",
+ },
+ "Qwen2.5-Coder-7B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Coder-7B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Coder-7B-Instruct",
+ },
+ "Qwen2.5-Coder-14B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Coder-14B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Coder-14B-Instruct",
+ },
+ "Qwen2.5-Coder-32B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Coder-32B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Coder-32B-Instruct",
+ },
+ "Qwen2.5-Math-1.5B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Math-1.5B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Math-1.5B",
+ },
+ "Qwen2.5-Math-7B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Math-7B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Math-7B",
+ },
+ "Qwen2.5-Math-72B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Math-72B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Math-72B",
+ },
+ "Qwen2.5-Math-1.5B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Math-1.5B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Coder-1.5B-Instruct",
+ },
+ "Qwen2.5-Math-7B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Math-7B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Coder-7B-Instruct",
+ },
+ "Qwen2.5-Math-72B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Math-72B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Coder-72B-Instruct",
+ },
+ "QwQ-32B-Preview-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/QwQ-32B-Preview",
+ DownloadSource.MODELSCOPE: "Qwen/QwQ-32B-Preview",
+ },
+ "QwQ-32B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/QwQ-32B",
+ DownloadSource.MODELSCOPE: "Qwen/QwQ-32B",
+ },
+ },
+ template="qwen",
+)
+
+
+register_model_group(
+ models={
+ "Qwen3-0.6B-Base": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-0.6B-Base",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-0.6B-Base",
+ },
+ "Qwen3-1.7B-Base": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-1.7B-Base",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-1.7B-Base",
+ },
+ "Qwen3-4B-Base": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-4B-Base",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-4B-Base",
+ },
+ "Qwen3-8B-Base": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-8B-Base",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-8B-Base",
+ },
+ "Qwen3-14B-Base": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-14B-Base",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-14B-Base",
+ },
+ "Qwen3-30B-A3B-Base": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-30B-A3B-Base",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-30B-A3B-Base",
+ },
+ "Qwen3-0.6B-Thinking": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-0.6B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-0.6B",
+ },
+ "Qwen3-1.7B-Thinking": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-1.7B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-1.7B",
+ },
+ "Qwen3-4B-Thinking": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-4B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-4B",
+ },
+ "Qwen3-4B-Thinking-2507": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-4B-Thinking-2507",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-4B-Thinking-2507",
+ },
+ "Qwen3-8B-Thinking": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-8B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-8B",
+ },
+ "Qwen3-14B-Thinking": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-14B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-14B",
+ },
+ "Qwen3-32B-Thinking": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-32B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-32B",
+ },
+ "Qwen3-30B-A3B-Thinking": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-30B-A3B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-30B-A3B",
+ },
+ "Qwen3-30B-A3B-Thinking-2507": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-30B-A3B-Thinking-2507",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-30B-A3B-Thinking-2507",
+ },
+ "Qwen3-235B-A22B-Thinking": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-235B-A22B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-235B-A22B",
+ },
+ "Qwen3-235B-A22B-Thinking-2507": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-235B-A22B-Thinking-2507",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-235B-A22B-Thinking-2507",
+ },
+ "Qwen3-0.6B-Thinking-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-0.6B-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-0.6B-GPTQ-Int8",
+ },
+ "Qwen3-1.7B-Thinking-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-1.7B-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-1.7B-GPTQ-Int8",
+ },
+ "Qwen3-4B-Thinking-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-4B-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-4B-AWQ",
+ },
+ "Qwen3-8B-Thinking-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-8B-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-8B-AWQ",
+ },
+ "Qwen3-14B-Thinking-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-14B-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-14B-AWQ",
+ },
+ "Qwen3-32B-Thinking-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-32B-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-32B-AWQ",
+ },
+ "Qwen3-30B-A3B-Thinking-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-30B-A3B-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-30B-A3B-GPTQ-Int4",
+ },
+ "Qwen3-235B-A22B-Thinking-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-235B-A22B-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-235B-A22B-GPTQ-Int4",
+ },
+ },
+ template="qwen3",
+)
+
+
+register_model_group(
+ models={
+ "Qwen3-4B-Instruct-2507": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-4B-Instruct-2507",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-4B-Instruct-2507",
+ },
+ "Qwen3-30B-A3B-Instruct-2507": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-30B-A3B-Instruct-2507",
+ },
+ "Qwen3-235B-A22B-Instruct-2507": {
+ DownloadSource.DEFAULT: "Qwen/Qwen3-235B-A22B-Instruct-2507",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen3-235B-A22B-Instruct-2507",
+ },
+ },
+ template="qwen3_nothink",
+)
+
+
+register_model_group(
+ models={
+ "Qwen2-Audio-7B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-Audio-7B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-Audio-7B",
+ },
+ "Qwen2-Audio-7B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-Audio-7B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-Audio-7B-Instruct",
+ },
+ },
+ template="qwen2_audio",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "Qwen2.5-Omni-3B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Omni-3B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Omni-3B",
+ },
+ "Qwen2.5-Omni-7B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Omni-7B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Omni-7B",
+ },
+ "Qwen2.5-Omni-7B-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Omni-7B-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Omni-7B-GPTQ-Int4",
+ },
+ "Qwen2.5-Omni-7B-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-Omni-7B-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-Omni-7B-AWQ",
+ },
+ },
+ template="qwen2_omni",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "Qwen2-VL-2B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-VL-2B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-VL-2B",
+ },
+ "Qwen2-VL-7B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-VL-7B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-VL-7B",
+ },
+ "Qwen2-VL-72B": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-VL-72B",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-VL-72B",
+ },
+ "Qwen2-VL-2B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-VL-2B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-VL-2B-Instruct",
+ DownloadSource.OPENMIND: "LlamaFactory/Qwen2-VL-2B-Instruct",
+ },
+ "Qwen2-VL-7B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-VL-7B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-VL-7B-Instruct",
+ DownloadSource.OPENMIND: "LlamaFactory/Qwen2-VL-7B-Instruct",
+ },
+ "Qwen2-VL-72B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-VL-72B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-VL-72B-Instruct",
+ },
+ "Qwen2-VL-2B-Instruct-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-VL-2B-Instruct-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-VL-2B-Instruct-GPTQ-Int8",
+ },
+ "Qwen2-VL-2B-Instruct-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-VL-2B-Instruct-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-VL-2B-Instruct-GPTQ-Int4",
+ },
+ "Qwen2-VL-2B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-VL-2B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-VL-2B-Instruct-AWQ",
+ },
+ "Qwen2-VL-7B-Instruct-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int8",
+ },
+ "Qwen2-VL-7B-Instruct-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int4",
+ },
+ "Qwen2-VL-7B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-VL-7B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-VL-7B-Instruct-AWQ",
+ },
+ "Qwen2-VL-72B-Instruct-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int8",
+ },
+ "Qwen2-VL-72B-Instruct-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int4",
+ },
+ "Qwen2-VL-72B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2-VL-72B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2-VL-72B-Instruct-AWQ",
+ },
+ "QVQ-72B-Preview": {
+ DownloadSource.DEFAULT: "Qwen/QVQ-72B-Preview",
+ DownloadSource.MODELSCOPE: "Qwen/QVQ-72B-Preview",
+ },
+ "Qwen2.5-VL-3B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-VL-3B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-VL-3B-Instruct",
+ },
+ "Qwen2.5-VL-7B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-VL-7B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-VL-7B-Instruct",
+ },
+ "Qwen2.5-VL-32B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-VL-32B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-VL-32B-Instruct",
+ },
+ "Qwen2.5-VL-72B-Instruct": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-VL-72B-Instruct",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-VL-72B-Instruct",
+ },
+ "Qwen2.5-VL-3B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-VL-3B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-VL-3B-Instruct-AWQ",
+ },
+ "Qwen2.5-VL-7B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-VL-7B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-VL-7B-Instruct-AWQ",
+ },
+ "Qwen2.5-VL-72B-Instruct-AWQ": {
+ DownloadSource.DEFAULT: "Qwen/Qwen2.5-VL-72B-Instruct-AWQ",
+ DownloadSource.MODELSCOPE: "Qwen/Qwen2.5-VL-72B-Instruct-AWQ",
+ },
+ },
+ template="qwen2_vl",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "Seed-Coder-8B-Base": {
+ DownloadSource.DEFAULT: "ByteDance-Seed/Seed-Coder-8B-Base",
+ },
+ "Seed-Coder-8B-Instruct": {
+ DownloadSource.DEFAULT: "ByteDance-Seed/Seed-Coder-8B-Instruct",
+ },
+ "Seed-Coder-8B-Instruct-Reasoning": {
+ DownloadSource.DEFAULT: "ByteDance-Seed/Seed-Coder-8B-Reasoning-bf16",
+ },
+ },
+ template="seed_coder",
+)
+
+
+register_model_group(
+ models={
+ "Skywork-13B-Base": {
+ DownloadSource.DEFAULT: "Skywork/Skywork-13B-base",
+ DownloadSource.MODELSCOPE: "skywork/Skywork-13B-base",
+ }
+ }
+)
+
+
+register_model_group(
+ models={
+ "Skywork-o1-Open-Llama-3.1-8B": {
+ DownloadSource.DEFAULT: "Skywork/Skywork-o1-Open-Llama-3.1-8B",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/Skywork-o1-Open-Llama-3.1-8B",
+ }
+ },
+ template="skywork_o1",
+)
+
+
+register_model_group(
+ models={
+ "SmolLM-135M": {
+ DownloadSource.DEFAULT: "HuggingFaceTB/SmolLM-135M",
+ DownloadSource.MODELSCOPE: "HuggingFaceTB/SmolLM-135M",
+ },
+ "SmolLM-360M": {
+ DownloadSource.DEFAULT: "HuggingFaceTB/SmolLM-360M",
+ DownloadSource.MODELSCOPE: "HuggingFaceTB/SmolLM-360M",
+ },
+ "SmolLM-1.7B": {
+ DownloadSource.DEFAULT: "HuggingFaceTB/SmolLM-1.7B",
+ DownloadSource.MODELSCOPE: "HuggingFaceTB/SmolLM-1.7B",
+ },
+ "SmolLM-135M-Instruct": {
+ DownloadSource.DEFAULT: "HuggingFaceTB/SmolLM-135M-Instruct",
+ DownloadSource.MODELSCOPE: "HuggingFaceTB/SmolLM-135M-Instruct",
+ },
+ "SmolLM-360M-Instruct": {
+ DownloadSource.DEFAULT: "HuggingFaceTB/SmolLM-360M-Instruct",
+ DownloadSource.MODELSCOPE: "HuggingFaceTB/SmolLM-360M-Instruct",
+ },
+ "SmolLM-1.7B-Instruct": {
+ DownloadSource.DEFAULT: "HuggingFaceTB/SmolLM-1.7B-Instruct",
+ DownloadSource.MODELSCOPE: "HuggingFaceTB/SmolLM-1.7B-Instruct",
+ },
+ },
+ template="smollm",
+)
+
+
+register_model_group(
+ models={
+ "SmolLM2-135M": {
+ DownloadSource.DEFAULT: "HuggingFaceTB/SmolLM2-135M",
+ DownloadSource.MODELSCOPE: "HuggingFaceTB/SmolLM2-135M",
+ },
+ "SmolLM2-360M": {
+ DownloadSource.DEFAULT: "HuggingFaceTB/SmolLM2-360M",
+ DownloadSource.MODELSCOPE: "HuggingFaceTB/SmolLM2-360M",
+ },
+ "SmolLM2-1.7B": {
+ DownloadSource.DEFAULT: "HuggingFaceTB/SmolLM2-1.7B",
+ DownloadSource.MODELSCOPE: "HuggingFaceTB/SmolLM2-1.7B",
+ },
+ "SmolLM2-135M-Instruct": {
+ DownloadSource.DEFAULT: "HuggingFaceTB/SmolLM2-135M-Instruct",
+ DownloadSource.MODELSCOPE: "HuggingFaceTB/SmolLM2-135M-Instruct",
+ },
+ "SmolLM2-360M-Instruct": {
+ DownloadSource.DEFAULT: "HuggingFaceTB/SmolLM2-360M-Instruct",
+ DownloadSource.MODELSCOPE: "HuggingFaceTB/SmolLM2-360M-Instruct",
+ },
+ "SmolLM2-1.7B-Instruct": {
+ DownloadSource.DEFAULT: "HuggingFaceTB/SmolLM2-1.7B-Instruct",
+ DownloadSource.MODELSCOPE: "HuggingFaceTB/SmolLM2-1.7B-Instruct",
+ },
+ },
+ template="smollm2",
+)
+
+
+register_model_group(
+ models={
+ "SOLAR-10.7B-v1.0": {
+ DownloadSource.DEFAULT: "upstage/SOLAR-10.7B-v1.0",
+ },
+ "SOLAR-10.7B-Instruct-v1.0": {
+ DownloadSource.DEFAULT: "upstage/SOLAR-10.7B-Instruct-v1.0",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/SOLAR-10.7B-Instruct-v1.0",
+ },
+ },
+ template="solar",
+)
+
+
+register_model_group(
+ models={
+ "StarCoder2-3B": {
+ DownloadSource.DEFAULT: "bigcode/starcoder2-3b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/starcoder2-3b",
+ },
+ "StarCoder2-7B": {
+ DownloadSource.DEFAULT: "bigcode/starcoder2-7b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/starcoder2-7b",
+ },
+ "StarCoder2-15B": {
+ DownloadSource.DEFAULT: "bigcode/starcoder2-15b",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/starcoder2-15b",
+ },
+ }
+)
+
+
+register_model_group(
+ models={
+ "TeleChat-1B-Chat": {
+ DownloadSource.DEFAULT: "Tele-AI/TeleChat-1B",
+ DownloadSource.MODELSCOPE: "TeleAI/TeleChat-1B",
+ },
+ "TeleChat-7B-Chat": {
+ DownloadSource.DEFAULT: "Tele-AI/telechat-7B",
+ DownloadSource.MODELSCOPE: "TeleAI/telechat-7B",
+ DownloadSource.OPENMIND: "TeleAI/TeleChat-7B-pt",
+ },
+ "TeleChat-12B-Chat": {
+ DownloadSource.DEFAULT: "Tele-AI/TeleChat-12B-v2",
+ DownloadSource.MODELSCOPE: "TeleAI/TeleChat-12B-v2",
+ DownloadSource.OPENMIND: "TeleAI/TeleChat-12B-pt",
+ },
+ "TeleChat-52B-Chat": {
+ DownloadSource.DEFAULT: "Tele-AI/TeleChat-52B",
+ },
+ },
+ template="telechat",
+)
+
+
+register_model_group(
+ models={
+ "TeleChat2-3B-Chat": {
+ DownloadSource.DEFAULT: "Tele-AI/TeleChat2-3B",
+ DownloadSource.MODELSCOPE: "TeleAI/TeleChat2-3B",
+ },
+ "TeleChat2-7B-Chat": {
+ DownloadSource.DEFAULT: "Tele-AI/TeleChat2-7B",
+ DownloadSource.MODELSCOPE: "TeleAI/TeleChat2-7B",
+ },
+ "TeleChat2-35B-Chat": {
+ DownloadSource.MODELSCOPE: "TeleAI/TeleChat2-35B-Nov",
+ },
+ "TeleChat2-115B-Chat": {
+ DownloadSource.DEFAULT: "Tele-AI/TeleChat2-115B",
+ DownloadSource.MODELSCOPE: "TeleAI/TeleChat2-115B",
+ },
+ },
+ template="telechat2",
+)
+
+
+register_model_group(
+ models={
+ "Vicuna-v1.5-7B-Chat": {
+ DownloadSource.DEFAULT: "lmsys/vicuna-7b-v1.5",
+ DownloadSource.MODELSCOPE: "Xorbits/vicuna-7b-v1.5",
+ },
+ "Vicuna-v1.5-13B-Chat": {
+ DownloadSource.DEFAULT: "lmsys/vicuna-13b-v1.5",
+ DownloadSource.MODELSCOPE: "Xorbits/vicuna-13b-v1.5",
+ },
+ },
+ template="vicuna",
+)
+
+
+register_model_group(
+ models={
+ "Video-LLaVA-7B-Chat": {
+ DownloadSource.DEFAULT: "LanguageBind/Video-LLaVA-7B-hf",
+ },
+ },
+ template="video_llava",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "XuanYuan-6B": {
+ DownloadSource.DEFAULT: "Duxiaoman-DI/XuanYuan-6B",
+ DownloadSource.MODELSCOPE: "Duxiaoman-DI/XuanYuan-6B",
+ },
+ "XuanYuan-70B": {
+ DownloadSource.DEFAULT: "Duxiaoman-DI/XuanYuan-70B",
+ DownloadSource.MODELSCOPE: "Duxiaoman-DI/XuanYuan-70B",
+ },
+ "XuanYuan2-70B": {
+ DownloadSource.DEFAULT: "Duxiaoman-DI/XuanYuan2-70B",
+ DownloadSource.MODELSCOPE: "Duxiaoman-DI/XuanYuan2-70B",
+ },
+ "XuanYuan-6B-Chat": {
+ DownloadSource.DEFAULT: "Duxiaoman-DI/XuanYuan-6B-Chat",
+ DownloadSource.MODELSCOPE: "Duxiaoman-DI/XuanYuan-6B-Chat",
+ },
+ "XuanYuan-70B-Chat": {
+ DownloadSource.DEFAULT: "Duxiaoman-DI/XuanYuan-70B-Chat",
+ DownloadSource.MODELSCOPE: "Duxiaoman-DI/XuanYuan-70B-Chat",
+ },
+ "XuanYuan2-70B-Chat": {
+ DownloadSource.DEFAULT: "Duxiaoman-DI/XuanYuan2-70B-Chat",
+ DownloadSource.MODELSCOPE: "Duxiaoman-DI/XuanYuan2-70B-Chat",
+ },
+ "XuanYuan-6B-Chat-8bit": {
+ DownloadSource.DEFAULT: "Duxiaoman-DI/XuanYuan-6B-Chat-8bit",
+ DownloadSource.MODELSCOPE: "Duxiaoman-DI/XuanYuan-6B-Chat-8bit",
+ },
+ "XuanYuan-6B-Chat-4bit": {
+ DownloadSource.DEFAULT: "Duxiaoman-DI/XuanYuan-6B-Chat-4bit",
+ DownloadSource.MODELSCOPE: "Duxiaoman-DI/XuanYuan-6B-Chat-4bit",
+ },
+ "XuanYuan-70B-Chat-8bit": {
+ DownloadSource.DEFAULT: "Duxiaoman-DI/XuanYuan-70B-Chat-8bit",
+ DownloadSource.MODELSCOPE: "Duxiaoman-DI/XuanYuan-70B-Chat-8bit",
+ },
+ "XuanYuan-70B-Chat-4bit": {
+ DownloadSource.DEFAULT: "Duxiaoman-DI/XuanYuan-70B-Chat-4bit",
+ DownloadSource.MODELSCOPE: "Duxiaoman-DI/XuanYuan-70B-Chat-4bit",
+ },
+ "XuanYuan2-70B-Chat-8bit": {
+ DownloadSource.DEFAULT: "Duxiaoman-DI/XuanYuan2-70B-Chat-8bit",
+ DownloadSource.MODELSCOPE: "Duxiaoman-DI/XuanYuan2-70B-Chat-8bit",
+ },
+ "XuanYuan2-70B-Chat-4bit": {
+ DownloadSource.DEFAULT: "Duxiaoman-DI/XuanYuan2-70B-Chat-4bit",
+ DownloadSource.MODELSCOPE: "Duxiaoman-DI/XuanYuan2-70B-Chat-4bit",
+ },
+ },
+ template="xuanyuan",
+)
+
+
+register_model_group(
+ models={
+ "XVERSE-7B": {
+ DownloadSource.DEFAULT: "xverse/XVERSE-7B",
+ DownloadSource.MODELSCOPE: "xverse/XVERSE-7B",
+ },
+ "XVERSE-13B": {
+ DownloadSource.DEFAULT: "xverse/XVERSE-13B",
+ DownloadSource.MODELSCOPE: "xverse/XVERSE-13B",
+ },
+ "XVERSE-65B": {
+ DownloadSource.DEFAULT: "xverse/XVERSE-65B",
+ DownloadSource.MODELSCOPE: "xverse/XVERSE-65B",
+ },
+ "XVERSE-65B-2": {
+ DownloadSource.DEFAULT: "xverse/XVERSE-65B-2",
+ DownloadSource.MODELSCOPE: "xverse/XVERSE-65B-2",
+ },
+ "XVERSE-7B-Chat": {
+ DownloadSource.DEFAULT: "xverse/XVERSE-7B-Chat",
+ DownloadSource.MODELSCOPE: "xverse/XVERSE-7B-Chat",
+ },
+ "XVERSE-13B-Chat": {
+ DownloadSource.DEFAULT: "xverse/XVERSE-13B-Chat",
+ DownloadSource.MODELSCOPE: "xverse/XVERSE-13B-Chat",
+ },
+ "XVERSE-65B-Chat": {
+ DownloadSource.DEFAULT: "xverse/XVERSE-65B-Chat",
+ DownloadSource.MODELSCOPE: "xverse/XVERSE-65B-Chat",
+ },
+ "XVERSE-MoE-A4.2B": {
+ DownloadSource.DEFAULT: "xverse/XVERSE-MoE-A4.2B",
+ DownloadSource.MODELSCOPE: "xverse/XVERSE-MoE-A4.2B",
+ },
+ "XVERSE-7B-Chat-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "xverse/XVERSE-7B-Chat-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "xverse/XVERSE-7B-Chat-GPTQ-Int8",
+ },
+ "XVERSE-7B-Chat-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "xverse/XVERSE-7B-Chat-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "xverse/XVERSE-7B-Chat-GPTQ-Int4",
+ },
+ "XVERSE-13B-Chat-GPTQ-Int8": {
+ DownloadSource.DEFAULT: "xverse/XVERSE-13B-Chat-GPTQ-Int8",
+ DownloadSource.MODELSCOPE: "xverse/XVERSE-13B-Chat-GPTQ-Int8",
+ },
+ "XVERSE-13B-Chat-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "xverse/XVERSE-13B-Chat-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "xverse/XVERSE-13B-Chat-GPTQ-Int4",
+ },
+ "XVERSE-65B-Chat-GPTQ-Int4": {
+ DownloadSource.DEFAULT: "xverse/XVERSE-65B-Chat-GPTQ-Int4",
+ DownloadSource.MODELSCOPE: "xverse/XVERSE-65B-Chat-GPTQ-Int4",
+ },
+ },
+ template="xverse",
+)
+
+
+register_model_group(
+ models={
+ "Yayi-7B": {
+ DownloadSource.DEFAULT: "wenge-research/yayi-7b-llama2",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/yayi-7b-llama2",
+ },
+ "Yayi-13B": {
+ DownloadSource.DEFAULT: "wenge-research/yayi-13b-llama2",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/yayi-13b-llama2",
+ },
+ },
+ template="yayi",
+)
+
+
+register_model_group(
+ models={
+ "Yi-6B": {
+ DownloadSource.DEFAULT: "01-ai/Yi-6B",
+ DownloadSource.MODELSCOPE: "01ai/Yi-6B",
+ },
+ "Yi-9B": {
+ DownloadSource.DEFAULT: "01-ai/Yi-9B",
+ DownloadSource.MODELSCOPE: "01ai/Yi-9B",
+ },
+ "Yi-34B": {
+ DownloadSource.DEFAULT: "01-ai/Yi-34B",
+ DownloadSource.MODELSCOPE: "01ai/Yi-34B",
+ },
+ "Yi-6B-Chat": {
+ DownloadSource.DEFAULT: "01-ai/Yi-6B-Chat",
+ DownloadSource.MODELSCOPE: "01ai/Yi-6B-Chat",
+ },
+ "Yi-34B-Chat": {
+ DownloadSource.DEFAULT: "01-ai/Yi-34B-Chat",
+ DownloadSource.MODELSCOPE: "01ai/Yi-34B-Chat",
+ },
+ "Yi-6B-Chat-8bits": {
+ DownloadSource.DEFAULT: "01-ai/Yi-6B-Chat-8bits",
+ DownloadSource.MODELSCOPE: "01ai/Yi-6B-Chat-8bits",
+ },
+ "Yi-6B-Chat-4bits": {
+ DownloadSource.DEFAULT: "01-ai/Yi-6B-Chat-4bits",
+ DownloadSource.MODELSCOPE: "01ai/Yi-6B-Chat-4bits",
+ },
+ "Yi-34B-Chat-8bits": {
+ DownloadSource.DEFAULT: "01-ai/Yi-34B-Chat-8bits",
+ DownloadSource.MODELSCOPE: "01ai/Yi-34B-Chat-8bits",
+ },
+ "Yi-34B-Chat-4bits": {
+ DownloadSource.DEFAULT: "01-ai/Yi-34B-Chat-4bits",
+ DownloadSource.MODELSCOPE: "01ai/Yi-34B-Chat-4bits",
+ },
+ "Yi-1.5-6B": {
+ DownloadSource.DEFAULT: "01-ai/Yi-1.5-6B",
+ DownloadSource.MODELSCOPE: "01ai/Yi-1.5-6B",
+ },
+ "Yi-1.5-9B": {
+ DownloadSource.DEFAULT: "01-ai/Yi-1.5-9B",
+ DownloadSource.MODELSCOPE: "01ai/Yi-1.5-9B",
+ },
+ "Yi-1.5-34B": {
+ DownloadSource.DEFAULT: "01-ai/Yi-1.5-34B",
+ DownloadSource.MODELSCOPE: "01ai/Yi-1.5-34B",
+ },
+ "Yi-1.5-6B-Chat": {
+ DownloadSource.DEFAULT: "01-ai/Yi-1.5-6B-Chat",
+ DownloadSource.MODELSCOPE: "01ai/Yi-1.5-6B-Chat",
+ DownloadSource.OPENMIND: "LlamaFactory/Yi-1.5-6B-Chat",
+ },
+ "Yi-1.5-9B-Chat": {
+ DownloadSource.DEFAULT: "01-ai/Yi-1.5-9B-Chat",
+ DownloadSource.MODELSCOPE: "01ai/Yi-1.5-9B-Chat",
+ },
+ "Yi-1.5-34B-Chat": {
+ DownloadSource.DEFAULT: "01-ai/Yi-1.5-34B-Chat",
+ DownloadSource.MODELSCOPE: "01ai/Yi-1.5-34B-Chat",
+ },
+ "Yi-Coder-1.5B": {
+ DownloadSource.DEFAULT: "01-ai/Yi-Coder-1.5B",
+ DownloadSource.MODELSCOPE: "01ai/Yi-Coder-1.5B",
+ },
+ "Yi-Coder-9B": {
+ DownloadSource.DEFAULT: "01-ai/Yi-Coder-9B",
+ DownloadSource.MODELSCOPE: "01ai/Yi-Coder-9B",
+ },
+ "Yi-Coder-1.5B-Chat": {
+ DownloadSource.DEFAULT: "01-ai/Yi-Coder-1.5B-Chat",
+ DownloadSource.MODELSCOPE: "01ai/Yi-Coder-1.5B-Chat",
+ },
+ "Yi-Coder-9B-Chat": {
+ DownloadSource.DEFAULT: "01-ai/Yi-Coder-9B-Chat",
+ DownloadSource.MODELSCOPE: "01ai/Yi-Coder-9B-Chat",
+ },
+ },
+ template="yi",
+)
+
+
+register_model_group(
+ models={
+ "Yi-VL-6B-Chat": {
+ DownloadSource.DEFAULT: "BUAADreamer/Yi-VL-6B-hf",
+ },
+ "Yi-VL-34B-Chat": {
+ DownloadSource.DEFAULT: "BUAADreamer/Yi-VL-34B-hf",
+ },
+ },
+ template="yi_vl",
+ multimodal=True,
+)
+
+
+register_model_group(
+ models={
+ "Yuan2-2B-Chat": {
+ DownloadSource.DEFAULT: "IEITYuan/Yuan2-2B-hf",
+ DownloadSource.MODELSCOPE: "YuanLLM/Yuan2.0-2B-hf",
+ },
+ "Yuan2-51B-Chat": {
+ DownloadSource.DEFAULT: "IEITYuan/Yuan2-51B-hf",
+ DownloadSource.MODELSCOPE: "YuanLLM/Yuan2.0-51B-hf",
+ },
+ "Yuan2-102B-Chat": {
+ DownloadSource.DEFAULT: "IEITYuan/Yuan2-102B-hf",
+ DownloadSource.MODELSCOPE: "YuanLLM/Yuan2.0-102B-hf",
+ },
+ },
+ template="yuan",
+)
+
+
+register_model_group(
+ models={
+ "Zephyr-7B-Alpha-Chat": {
+ DownloadSource.DEFAULT: "HuggingFaceH4/zephyr-7b-alpha",
+ DownloadSource.MODELSCOPE: "AI-ModelScope/zephyr-7b-alpha",
+ },
+ "Zephyr-7B-Beta-Chat": {
+ DownloadSource.DEFAULT: "HuggingFaceH4/zephyr-7b-beta",
+ DownloadSource.MODELSCOPE: "modelscope/zephyr-7b-beta",
+ },
+ "Zephyr-141B-ORPO-Chat": {
+ DownloadSource.DEFAULT: "HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
+ },
+ },
+ template="zephyr",
+)
diff --git a/src/train_sft/src/llamafactory/extras/env.py b/src/train_sft/src/llamafactory/extras/env.py
new file mode 100644
index 0000000000000000000000000000000000000000..527e40bb3b3d2ad56088cd65ab7bd2f929e368a5
--- /dev/null
+++ b/src/train_sft/src/llamafactory/extras/env.py
@@ -0,0 +1,93 @@
+# Copyright 2025 HuggingFace Inc. and the LlamaFactory team.
+#
+# This code is inspired by HuggingFace's transformers library.
+# https://github.com/huggingface/transformers/blob/v4.40.0/src/transformers/commands/env.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import platform
+
+import accelerate
+import datasets
+import peft
+import torch
+import transformers
+import trl
+from transformers.utils import is_torch_cuda_available, is_torch_npu_available
+
+
+VERSION = "0.9.4.dev0"
+
+
+def print_env() -> None:
+ info = {
+ "`llamafactory` version": VERSION,
+ "Platform": platform.platform(),
+ "Python version": platform.python_version(),
+ "PyTorch version": torch.__version__,
+ "Transformers version": transformers.__version__,
+ "Datasets version": datasets.__version__,
+ "Accelerate version": accelerate.__version__,
+ "PEFT version": peft.__version__,
+ "TRL version": trl.__version__,
+ }
+
+ if is_torch_cuda_available():
+ info["PyTorch version"] += " (GPU)"
+ info["GPU type"] = torch.cuda.get_device_name()
+ info["GPU number"] = torch.cuda.device_count()
+ info["GPU memory"] = f"{torch.cuda.mem_get_info()[1] / (1024**3):.2f}GB"
+
+ if is_torch_npu_available():
+ info["PyTorch version"] += " (NPU)"
+ info["NPU type"] = torch.npu.get_device_name()
+ info["CANN version"] = torch.version.cann
+
+ try:
+ import deepspeed # type: ignore
+ deepspeed.ops.op_builder.CPUAdamBuilder().load()
+
+ info["DeepSpeed version"] = deepspeed.__version__
+ except Exception:
+ pass
+
+ try:
+ import bitsandbytes # type: ignore
+
+ info["Bitsandbytes version"] = bitsandbytes.__version__
+ except Exception:
+ pass
+
+ try:
+ import vllm
+
+ info["vLLM version"] = vllm.__version__
+ except Exception:
+ pass
+
+ try:
+ import subprocess
+
+ commit_info = subprocess.run(["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True)
+ commit_hash = commit_info.stdout.strip()
+ info["Git commit"] = commit_hash
+ except Exception:
+ pass
+
+ if os.path.exists("data"):
+ info["Default data directory"] = "detected"
+ else:
+ info["Default data directory"] = "not detected"
+
+ print("\n" + "\n".join([f"- {key}: {value}" for key, value in info.items()]) + "\n")
diff --git a/src/train_sft/src/llamafactory/extras/logging.py b/src/train_sft/src/llamafactory/extras/logging.py
new file mode 100644
index 0000000000000000000000000000000000000000..f234a807ff346dd672fccf7199f62f880f0e1147
--- /dev/null
+++ b/src/train_sft/src/llamafactory/extras/logging.py
@@ -0,0 +1,159 @@
+# Copyright 2025 Optuna, HuggingFace Inc. and the LlamaFactory team.
+#
+# This code is inspired by HuggingFace's transformers library.
+# https://github.com/huggingface/transformers/blob/v4.40.0/src/transformers/utils/logging.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+import os
+import sys
+import threading
+from concurrent.futures import ThreadPoolExecutor
+from functools import lru_cache
+from typing import Optional
+
+from .constants import RUNNING_LOG
+
+
+_thread_lock = threading.RLock()
+_default_handler: Optional["logging.Handler"] = None
+_default_log_level: "logging._Level" = logging.INFO
+
+
+class LoggerHandler(logging.Handler):
+ r"""Redirect the logging output to the logging file for LLaMA Board."""
+
+ def __init__(self, output_dir: str) -> None:
+ super().__init__()
+ self._formatter = logging.Formatter(
+ fmt="[%(levelname)s|%(asctime)s] %(filename)s:%(lineno)s >> %(message)s",
+ datefmt="%Y-%m-%d %H:%M:%S",
+ )
+ self.setLevel(logging.INFO)
+ os.makedirs(output_dir, exist_ok=True)
+ self.running_log = os.path.join(output_dir, RUNNING_LOG)
+ if os.path.exists(self.running_log):
+ os.remove(self.running_log)
+
+ self.thread_pool = ThreadPoolExecutor(max_workers=1)
+
+ def _write_log(self, log_entry: str) -> None:
+ with open(self.running_log, "a", encoding="utf-8") as f:
+ f.write(log_entry + "\n")
+
+ def emit(self, record) -> None:
+ if record.name == "httpx":
+ return
+
+ log_entry = self._formatter.format(record)
+ self.thread_pool.submit(self._write_log, log_entry)
+
+ def close(self) -> None:
+ self.thread_pool.shutdown(wait=True)
+ return super().close()
+
+
+class _Logger(logging.Logger):
+ r"""A logger that supports rank0 logging."""
+
+ def info_rank0(self, *args, **kwargs) -> None:
+ self.info(*args, **kwargs)
+
+ def warning_rank0(self, *args, **kwargs) -> None:
+ self.warning(*args, **kwargs)
+
+ def warning_rank0_once(self, *args, **kwargs) -> None:
+ self.warning(*args, **kwargs)
+
+
+def _get_default_logging_level() -> "logging._Level":
+ r"""Return the default logging level."""
+ env_level_str = os.getenv("LLAMAFACTORY_VERBOSITY", None)
+ if env_level_str:
+ if env_level_str.upper() in logging._nameToLevel:
+ return logging._nameToLevel[env_level_str.upper()]
+ else:
+ raise ValueError(f"Unknown logging level: {env_level_str}.")
+
+ return _default_log_level
+
+
+def _get_library_name() -> str:
+ return __name__.split(".")[0]
+
+
+def _get_library_root_logger() -> "_Logger":
+ return logging.getLogger(_get_library_name())
+
+
+def _configure_library_root_logger() -> None:
+ r"""Configure root logger using a stdout stream handler with an explicit format."""
+ global _default_handler
+
+ with _thread_lock:
+ if _default_handler: # already configured
+ return
+
+ formatter = logging.Formatter(
+ fmt="[%(levelname)s|%(asctime)s] %(name)s:%(lineno)s >> %(message)s",
+ datefmt="%Y-%m-%d %H:%M:%S",
+ )
+ _default_handler = logging.StreamHandler(sys.stdout)
+ _default_handler.setFormatter(formatter)
+ library_root_logger = _get_library_root_logger()
+ library_root_logger.addHandler(_default_handler)
+ library_root_logger.setLevel(_get_default_logging_level())
+ library_root_logger.propagate = False
+
+
+def get_logger(name: Optional[str] = None) -> "_Logger":
+ r"""Return a logger with the specified name. It is not supposed to be accessed externally."""
+ if name is None:
+ name = _get_library_name()
+
+ _configure_library_root_logger()
+ return logging.getLogger(name)
+
+
+def add_handler(handler: "logging.Handler") -> None:
+ r"""Add a handler to the root logger."""
+ _configure_library_root_logger()
+ _get_library_root_logger().addHandler(handler)
+
+
+def remove_handler(handler: logging.Handler) -> None:
+ r"""Remove a handler from the root logger."""
+ _configure_library_root_logger()
+ _get_library_root_logger().removeHandler(handler)
+
+
+def info_rank0(self: "logging.Logger", *args, **kwargs) -> None:
+ if int(os.getenv("LOCAL_RANK", "0")) == 0:
+ self.info(*args, **kwargs)
+
+
+def warning_rank0(self: "logging.Logger", *args, **kwargs) -> None:
+ if int(os.getenv("LOCAL_RANK", "0")) == 0:
+ self.warning(*args, **kwargs)
+
+
+@lru_cache(None)
+def warning_rank0_once(self: "logging.Logger", *args, **kwargs) -> None:
+ if int(os.getenv("LOCAL_RANK", "0")) == 0:
+ self.warning(*args, **kwargs)
+
+
+logging.Logger.info_rank0 = info_rank0
+logging.Logger.warning_rank0 = warning_rank0
+logging.Logger.warning_rank0_once = warning_rank0_once
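The `info_rank0`/`warning_rank0` helpers above gate every call on the `LOCAL_RANK` environment variable so that only one process per node writes logs. A self-contained sketch of the same gating pattern (the `Capture` handler and logger name are illustrative, not part of the library):

```python
import logging
import os


def info_rank0(logger: logging.Logger, *args, **kwargs) -> None:
    # Emit only in the process whose LOCAL_RANK is 0 (or unset).
    if int(os.getenv("LOCAL_RANK", "0")) == 0:
        logger.info(*args, **kwargs)


records: list = []


class Capture(logging.Handler):
    # Collect formatted messages so the gating is observable.
    def emit(self, record: logging.LogRecord) -> None:
        records.append(record.getMessage())


log = logging.getLogger("rank0_demo")
log.setLevel(logging.INFO)
log.addHandler(Capture())

os.environ["LOCAL_RANK"] = "1"
info_rank0(log, "suppressed on non-zero ranks")
os.environ["LOCAL_RANK"] = "0"
info_rank0(log, "emitted on rank 0")

print(records)  # only the rank-0 message survives
```

Under `torchrun`, every worker inherits its own `LOCAL_RANK`, so the same call site stays quiet everywhere except the first device on each node.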
diff --git a/src/train_sft/src/llamafactory/extras/misc.py b/src/train_sft/src/llamafactory/extras/misc.py
new file mode 100644
index 0000000000000000000000000000000000000000..65eedd8008c2a4657b2423286807f41ffb509ef1
--- /dev/null
+++ b/src/train_sft/src/llamafactory/extras/misc.py
@@ -0,0 +1,330 @@
+# Copyright 2025 HuggingFace Inc. and the LlamaFactory team.
+#
+# This code is inspired by HuggingFace's PEFT library.
+# https://github.com/huggingface/peft/blob/v0.10.0/src/peft/peft_model.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import gc
+import os
+import socket
+from typing import TYPE_CHECKING, Any, Literal, Optional, Union
+
+import torch
+import torch.distributed as dist
+import transformers.dynamic_module_utils
+from huggingface_hub.utils import WeakFileLock
+from transformers import InfNanRemoveLogitsProcessor, LogitsProcessorList
+from transformers.dynamic_module_utils import get_relative_imports
+from transformers.utils import (
+ is_torch_bf16_gpu_available,
+ is_torch_cuda_available,
+ is_torch_mps_available,
+ is_torch_npu_available,
+ is_torch_xpu_available,
+)
+from transformers.utils.versions import require_version
+
+from . import logging
+
+
+_is_fp16_available = is_torch_npu_available() or is_torch_cuda_available()
+try:
+ _is_bf16_available = is_torch_bf16_gpu_available() or (is_torch_npu_available() and torch.npu.is_bf16_supported())
+except Exception:
+ _is_bf16_available = False
+
+
+if TYPE_CHECKING:
+ from numpy.typing import NDArray
+
+ from ..hparams import ModelArguments
+
+
+logger = logging.get_logger(__name__)
+
+
+class AverageMeter:
+ r"""Compute and store the average and current value."""
+
+ def __init__(self):
+ self.reset()
+
+ def reset(self):
+ self.val = 0
+ self.avg = 0
+ self.sum = 0
+ self.count = 0
+
+ def update(self, val, n=1):
+ self.val = val
+ self.sum += val * n
+ self.count += n
+ self.avg = self.sum / self.count
+
+
+def check_version(requirement: str, mandatory: bool = False) -> None:
+ r"""Optionally check the package version."""
+ if is_env_enabled("DISABLE_VERSION_CHECK") and not mandatory:
+ logger.warning_rank0_once("Version checking has been disabled, may lead to unexpected behaviors.")
+ return
+
+ if "gptqmodel" in requirement or "autoawq" in requirement:
+ pip_command = f"pip install {requirement} --no-build-isolation"
+ else:
+ pip_command = f"pip install {requirement}"
+
+ if mandatory:
+ hint = f"To fix: run `{pip_command}`."
+ else:
+ hint = f"To fix: run `{pip_command}` or set `DISABLE_VERSION_CHECK=1` to skip this check."
+
+ require_version(requirement, hint)
+
+
+def check_dependencies() -> None:
+ r"""Check the version of the required packages."""
+ check_version("transformers>=4.49.0,<=4.55.0")
+ check_version("datasets>=2.16.0,<=3.6.0")
+ check_version("accelerate>=1.3.0,<=1.7.0")
+ check_version("peft>=0.14.0,<=0.15.2")
+ check_version("trl>=0.8.6,<=0.9.6")
+
+
+def calculate_tps(dataset: list[dict[str, Any]], metrics: dict[str, float], stage: Literal["sft", "rm"]) -> float:
+ r"""Calculate effective tokens per second."""
+ effective_token_num = 0
+ for data in dataset:
+ if stage == "sft":
+ effective_token_num += len(data["input_ids"])
+ elif stage == "rm":
+ effective_token_num += len(data["chosen_input_ids"]) + len(data["rejected_input_ids"])
+
+ result = effective_token_num * metrics["epoch"] / metrics["train_runtime"]
+ return result / dist.get_world_size() if dist.is_initialized() else result
+
+
+def count_parameters(model: "torch.nn.Module") -> tuple[int, int]:
+ r"""Return the number of trainable parameters and number of all parameters in the model."""
+ trainable_params, all_param = 0, 0
+ for param in model.parameters():
+ num_params = param.numel()
+ # if using DS Zero 3 and the weights are initialized empty
+ if num_params == 0 and hasattr(param, "ds_numel"):
+ num_params = param.ds_numel
+
+ # Due to the design of 4bit linear layers from bitsandbytes, multiply the number of parameters by itemsize
+ if param.__class__.__name__ == "Params4bit":
+ if hasattr(param, "quant_storage") and hasattr(param.quant_storage, "itemsize"):
+ num_bytes = param.quant_storage.itemsize
+ elif hasattr(param, "element_size"): # for older pytorch version
+ num_bytes = param.element_size()
+ else:
+ num_bytes = 1
+
+ num_params = num_params * 2 * num_bytes
+
+ all_param += num_params
+ if param.requires_grad:
+ trainable_params += num_params
+
+ return trainable_params, all_param
+
+
+def get_current_device() -> "torch.device":
+ r"""Get the current available device."""
+ if is_torch_xpu_available():
+ device = "xpu:{}".format(os.getenv("LOCAL_RANK", "0"))
+ elif is_torch_npu_available():
+ device = "npu:{}".format(os.getenv("LOCAL_RANK", "0"))
+ elif is_torch_mps_available():
+ device = "mps:{}".format(os.getenv("LOCAL_RANK", "0"))
+ elif is_torch_cuda_available():
+ device = "cuda:{}".format(os.getenv("LOCAL_RANK", "0"))
+ else:
+ device = "cpu"
+
+ return torch.device(device)
+
+
+def get_device_count() -> int:
+ r"""Get the number of available devices."""
+ if is_torch_xpu_available():
+ return torch.xpu.device_count()
+ elif is_torch_npu_available():
+ return torch.npu.device_count()
+ elif is_torch_mps_available():
+ return torch.mps.device_count()
+ elif is_torch_cuda_available():
+ return torch.cuda.device_count()
+ else:
+ return 0
+
+
+def get_logits_processor() -> "LogitsProcessorList":
+ r"""Get logits processor that removes NaN and Inf logits."""
+ logits_processor = LogitsProcessorList()
+ logits_processor.append(InfNanRemoveLogitsProcessor())
+ return logits_processor
+
+
+def get_current_memory() -> tuple[int, int]:
+ r"""Get the available and total memory for the current device (in Bytes)."""
+ if is_torch_xpu_available():
+ return torch.xpu.mem_get_info()
+ elif is_torch_npu_available():
+ return torch.npu.mem_get_info()
+ elif is_torch_mps_available():
+ return torch.mps.current_allocated_memory(), torch.mps.recommended_max_memory()
+ elif is_torch_cuda_available():
+ return torch.cuda.mem_get_info()
+ else:
+ return 0, -1
+
+
+def get_peak_memory() -> tuple[int, int]:
+ r"""Get the peak memory usage (allocated, reserved) for the current device (in Bytes)."""
+ if is_torch_xpu_available():
+ return torch.xpu.max_memory_allocated(), torch.xpu.max_memory_reserved()
+ elif is_torch_npu_available():
+ return torch.npu.max_memory_allocated(), torch.npu.max_memory_reserved()
+ elif is_torch_mps_available():
+ return torch.mps.current_allocated_memory(), -1
+ elif is_torch_cuda_available():
+ return torch.cuda.max_memory_allocated(), torch.cuda.max_memory_reserved()
+ else:
+ return 0, -1
+
+
+def has_tokenized_data(path: "os.PathLike") -> bool:
+ r"""Check if the path has a tokenized dataset."""
+ return os.path.isdir(path) and len(os.listdir(path)) > 0
+
+
+def infer_optim_dtype(model_dtype: Optional["torch.dtype"]) -> "torch.dtype":
+ r"""Infer the optimal dtype according to the model_dtype and device compatibility."""
+ if _is_bf16_available and (model_dtype == torch.bfloat16 or model_dtype is None):
+ return torch.bfloat16
+ elif _is_fp16_available:
+ return torch.float16
+ else:
+ return torch.float32
+
+
+def is_accelerator_available() -> bool:
+ r"""Check if the accelerator is available."""
+ return (
+ is_torch_xpu_available() or is_torch_npu_available() or is_torch_mps_available() or is_torch_cuda_available()
+ )
+
+
+def is_env_enabled(env_var: str, default: str = "0") -> bool:
+ r"""Check if the environment variable is enabled."""
+ return os.getenv(env_var, default).lower() in ["true", "y", "1"]
+
+
+def numpify(inputs: Union["NDArray", "torch.Tensor"]) -> "NDArray":
+ r"""Cast a torch tensor or a numpy array to a numpy array."""
+ if isinstance(inputs, torch.Tensor):
+ inputs = inputs.cpu()
+ if inputs.dtype == torch.bfloat16: # numpy does not support bfloat16 until 1.21.4
+ inputs = inputs.to(torch.float32)
+
+ inputs = inputs.numpy()
+
+ return inputs
+
+
+def skip_check_imports() -> None:
+ r"""Avoid flash attention import error in custom model files."""
+ if not is_env_enabled("FORCE_CHECK_IMPORTS"):
+ transformers.dynamic_module_utils.check_imports = get_relative_imports
+
+
+def torch_gc() -> None:
+ r"""Collect the device memory."""
+ gc.collect()
+ if is_torch_xpu_available():
+ torch.xpu.empty_cache()
+ elif is_torch_npu_available():
+ torch.npu.empty_cache()
+ elif is_torch_mps_available():
+ torch.mps.empty_cache()
+ elif is_torch_cuda_available():
+ torch.cuda.empty_cache()
+
+
+def try_download_model_from_other_hub(model_args: "ModelArguments") -> str:
+ if (not use_modelscope() and not use_openmind()) or os.path.exists(model_args.model_name_or_path):
+ return model_args.model_name_or_path
+
+ if use_modelscope():
+ check_version("modelscope>=1.14.0", mandatory=True)
+ from modelscope import snapshot_download # type: ignore
+ from modelscope.hub.api import HubApi # type: ignore
+
+ if model_args.ms_hub_token:
+ api = HubApi()
+ api.login(model_args.ms_hub_token)
+
+ revision = "master" if model_args.model_revision == "main" else model_args.model_revision
+ with WeakFileLock(os.path.abspath(os.path.expanduser("~/.cache/llamafactory/modelscope.lock"))):
+ model_path = snapshot_download(
+ model_args.model_name_or_path,
+ revision=revision,
+ cache_dir=model_args.cache_dir,
+ )
+
+ return model_path
+
+ if use_openmind():
+ check_version("openmind>=0.8.0", mandatory=True)
+ from openmind.utils.hub import snapshot_download # type: ignore
+
+ with WeakFileLock(os.path.abspath(os.path.expanduser("~/.cache/llamafactory/openmind.lock"))):
+ model_path = snapshot_download(
+ model_args.model_name_or_path,
+ revision=model_args.model_revision,
+ cache_dir=model_args.cache_dir,
+ )
+
+ return model_path
+
+
+def use_modelscope() -> bool:
+ return is_env_enabled("USE_MODELSCOPE_HUB")
+
+
+def use_openmind() -> bool:
+ return is_env_enabled("USE_OPENMIND_HUB")
+
+
+def use_ray() -> bool:
+ return is_env_enabled("USE_RAY")
+
+
+def find_available_port() -> int:
+ r"""Find an available port on the local machine."""
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ sock.bind(("", 0))
+ port = sock.getsockname()[1]
+ sock.close()
+ return port
+
+
+def fix_proxy(ipv6_enabled: bool = False) -> None:
+ r"""Fix proxy settings for gradio ui."""
+ os.environ["no_proxy"] = "localhost,127.0.0.1,0.0.0.0"
+ if ipv6_enabled:
+ os.environ.pop("http_proxy", None)
+ os.environ.pop("HTTP_PROXY", None)
diff --git a/src/train_sft/src/llamafactory/extras/packages.py b/src/train_sft/src/llamafactory/extras/packages.py
new file mode 100644
index 0000000000000000000000000000000000000000..6b70f4ac75566f152e468ef1ba71737b98aacb6d
--- /dev/null
+++ b/src/train_sft/src/llamafactory/extras/packages.py
@@ -0,0 +1,103 @@
+# Copyright 2025 HuggingFace Inc. and the LlamaFactory team.
+#
+# This code is inspired by HuggingFace's transformers library.
+# https://github.com/huggingface/transformers/blob/v4.40.0/src/transformers/utils/import_utils.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import importlib.metadata
+import importlib.util
+from functools import lru_cache
+from typing import TYPE_CHECKING
+
+from packaging import version
+
+
+if TYPE_CHECKING:
+ from packaging.version import Version
+
+
+def _is_package_available(name: str) -> bool:
+ return importlib.util.find_spec(name) is not None
+
+
+def _get_package_version(name: str) -> "Version":
+ try:
+ return version.parse(importlib.metadata.version(name))
+ except Exception:
+ return version.parse("0.0.0")
+
+
+def is_pyav_available():
+ return _is_package_available("av")
+
+
+def is_librosa_available():
+ return _is_package_available("librosa")
+
+
+def is_fastapi_available():
+ return _is_package_available("fastapi")
+
+
+def is_galore_available():
+ return _is_package_available("galore_torch")
+
+
+def is_apollo_available():
+ return _is_package_available("apollo_torch")
+
+
+def is_gradio_available():
+ return _is_package_available("gradio")
+
+
+def is_matplotlib_available():
+ return _is_package_available("matplotlib")
+
+
+def is_pillow_available():
+ return _is_package_available("PIL")
+
+
+def is_ray_available():
+ return _is_package_available("ray")
+
+
+def is_requests_available():
+ return _is_package_available("requests")
+
+
+def is_rouge_available():
+ return _is_package_available("rouge_chinese")
+
+
+def is_starlette_available():
+ return _is_package_available("sse_starlette")
+
+
+@lru_cache
+def is_transformers_version_greater_than(content: str):
+ return _get_package_version("transformers") >= version.parse(content)
+
+
+def is_uvicorn_available():
+ return _is_package_available("uvicorn")
+
+
+def is_vllm_available():
+ return _is_package_available("vllm")
+
+
+def is_sglang_available():
+ return _is_package_available("sglang")
diff --git a/src/train_sft/src/llamafactory/extras/ploting.py b/src/train_sft/src/llamafactory/extras/ploting.py
new file mode 100644
index 0000000000000000000000000000000000000000..be89bcc5cb30429b60e13e71eba465f83de90e74
--- /dev/null
+++ b/src/train_sft/src/llamafactory/extras/ploting.py
@@ -0,0 +1,95 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import math
+import os
+from typing import Any
+
+from transformers.trainer import TRAINER_STATE_NAME
+
+from . import logging
+from .packages import is_matplotlib_available
+
+
+if is_matplotlib_available():
+ import matplotlib.figure
+ import matplotlib.pyplot as plt
+
+
+logger = logging.get_logger(__name__)
+
+
+def smooth(scalars: list[float]) -> list[float]:
+ r"""EMA implementation according to TensorBoard."""
+ if len(scalars) == 0:
+ return []
+
+ last = scalars[0]
+ smoothed = []
+ weight = 1.8 * (1 / (1 + math.exp(-0.05 * len(scalars))) - 0.5) # a sigmoid function
+ for next_val in scalars:
+ smoothed_val = last * weight + (1 - weight) * next_val
+ smoothed.append(smoothed_val)
+ last = smoothed_val
+ return smoothed
+
+
+def gen_loss_plot(trainer_log: list[dict[str, Any]]) -> "matplotlib.figure.Figure":
+ r"""Plot loss curves in LlamaBoard."""
+ plt.close("all")
+ plt.switch_backend("agg")
+ fig = plt.figure()
+ ax = fig.add_subplot(111)
+ steps, losses = [], []
+ for log in trainer_log:
+ if log.get("loss", None):
+ steps.append(log["current_steps"])
+ losses.append(log["loss"])
+
+ ax.plot(steps, losses, color="#1f77b4", alpha=0.4, label="original")
+ ax.plot(steps, smooth(losses), color="#1f77b4", label="smoothed")
+ ax.legend()
+ ax.set_xlabel("step")
+ ax.set_ylabel("loss")
+ return fig
+
+
+def plot_loss(save_dictionary: str, keys: list[str] = ["loss"]) -> None:
+ r"""Plot loss curves and saves the image."""
+ plt.switch_backend("agg")
+ with open(os.path.join(save_dictionary, TRAINER_STATE_NAME), encoding="utf-8") as f:
+ data = json.load(f)
+
+ for key in keys:
+ steps, metrics = [], []
+ for i in range(len(data["log_history"])):
+ if key in data["log_history"][i]:
+ steps.append(data["log_history"][i]["step"])
+ metrics.append(data["log_history"][i][key])
+
+ if len(metrics) == 0:
+ logger.warning_rank0(f"No metric {key} to plot.")
+ continue
+
+ plt.figure()
+ plt.plot(steps, metrics, color="#1f77b4", alpha=0.4, label="original")
+ plt.plot(steps, smooth(metrics), color="#1f77b4", label="smoothed")
+ plt.title(f"training {key} of {save_dictionary}")
+ plt.xlabel("step")
+ plt.ylabel(key)
+ plt.legend()
+ figure_path = os.path.join(save_dictionary, "training_{}.png".format(key.replace("/", "_")))
+ plt.savefig(figure_path, format="png", dpi=100)
+ print("Figure saved at:", figure_path)
diff --git a/src/train_sft/src/llamafactory/hparams/__init__.py b/src/train_sft/src/llamafactory/hparams/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..9bcc4295ce20c431f8db209a40cfc585ae90139f
--- /dev/null
+++ b/src/train_sft/src/llamafactory/hparams/__init__.py
@@ -0,0 +1,37 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from .data_args import DataArguments
+from .evaluation_args import EvaluationArguments
+from .finetuning_args import FinetuningArguments
+from .generating_args import GeneratingArguments
+from .model_args import ModelArguments
+from .parser import get_eval_args, get_infer_args, get_ray_args, get_train_args, read_args
+from .training_args import RayArguments, TrainingArguments
+
+
+__all__ = [
+ "DataArguments",
+ "EvaluationArguments",
+ "FinetuningArguments",
+ "GeneratingArguments",
+ "ModelArguments",
+ "RayArguments",
+ "TrainingArguments",
+ "get_eval_args",
+ "get_infer_args",
+ "get_ray_args",
+ "get_train_args",
+ "read_args",
+]
diff --git a/src/train_sft/src/llamafactory/hparams/data_args.py b/src/train_sft/src/llamafactory/hparams/data_args.py
new file mode 100644
index 0000000000000000000000000000000000000000..662cdf03bcc6a4b1ecd64e11eab7aba733c7d2b7
--- /dev/null
+++ b/src/train_sft/src/llamafactory/hparams/data_args.py
@@ -0,0 +1,207 @@
+# Copyright 2025 HuggingFace Inc. and the LlamaFactory team.
+#
+# This code is inspired by HuggingFace's transformers library.
+# https://github.com/huggingface/transformers/blob/v4.40.0/examples/pytorch/language-modeling/run_clm.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from dataclasses import asdict, dataclass, field
+from typing import Any, Literal, Optional
+
+
+@dataclass
+class DataArguments:
+ r"""Arguments pertaining to what data we are going to input our model for training and evaluation."""
+
+ template: Optional[str] = field(
+ default=None,
+ metadata={"help": "Which template to use for constructing prompts in training and inference."},
+ )
+ dataset: Optional[str] = field(
+ default=None,
+ metadata={"help": "The name of dataset(s) to use for training. Use commas to separate multiple datasets."},
+ )
+ eval_dataset: Optional[str] = field(
+ default=None,
+ metadata={"help": "The name of dataset(s) to use for evaluation. Use commas to separate multiple datasets."},
+ )
+ dataset_dir: str = field(
+ default="data",
+ metadata={"help": "Path to the folder containing the datasets."},
+ )
+ media_dir: Optional[str] = field(
+ default=None,
+ metadata={"help": "Path to the folder containing the images, videos or audios. Defaults to `dataset_dir`."},
+ )
+ cutoff_len: int = field(
+ default=2048,
+ metadata={"help": "The cutoff length of the tokenized inputs in the dataset."},
+ )
+ train_on_prompt: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to disable the mask on the prompt."},
+ )
+ mask_history: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to mask the history and train on the last turn only."},
+ )
+ streaming: bool = field(
+ default=False,
+ metadata={"help": "Enable dataset streaming."},
+ )
+ buffer_size: int = field(
+ default=16384,
+ metadata={"help": "Size of the buffer to randomly sample examples from in dataset streaming."},
+ )
+ mix_strategy: Literal["concat", "interleave_under", "interleave_over"] = field(
+ default="concat",
+ metadata={"help": "Strategy to use in dataset mixing (concat/interleave) (undersampling/oversampling)."},
+ )
+ interleave_probs: Optional[str] = field(
+ default=None,
+ metadata={"help": "Probabilities to sample data from datasets. Use commas to separate multiple datasets."},
+ )
+ overwrite_cache: bool = field(
+ default=False,
+ metadata={"help": "Overwrite the cached training and evaluation sets."},
+ )
+ preprocessing_batch_size: int = field(
+ default=1000,
+ metadata={"help": "The number of examples in one group in pre-processing."},
+ )
+ preprocessing_num_workers: Optional[int] = field(
+ default=None,
+ metadata={"help": "The number of processes to use for the pre-processing."},
+ )
+ max_samples: Optional[int] = field(
+ default=None,
+ metadata={"help": "For debugging purposes, truncate the number of examples for each dataset."},
+ )
+ eval_num_beams: Optional[int] = field(
+ default=None,
+ metadata={"help": "Number of beams to use for evaluation. This argument will be passed to `model.generate`"},
+ )
+ ignore_pad_token_for_loss: bool = field(
+ default=True,
+ metadata={"help": "Whether or not to ignore the tokens corresponding to the pad label in loss computation."},
+ )
+ val_size: float = field(
+ default=0.0,
+ metadata={"help": "Size of the validation set, should be an integer or a float in range `[0,1)`."},
+ )
+ eval_on_each_dataset: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to evaluate on each dataset separately."},
+ )
+ packing: Optional[bool] = field(
+ default=None,
+ metadata={"help": "Enable sequences packing in training. Will automatically enable in pre-training."},
+ )
+ neat_packing: bool = field(
+ default=False,
+ metadata={"help": "Enable sequence packing without cross-attention."},
+ )
+ tool_format: Optional[str] = field(
+ default=None,
+ metadata={"help": "Tool format to use for constructing function calling examples."},
+ )
+ default_system: Optional[str] = field(
+ default=None,
+ metadata={"help": "Override the default system message in the template."},
+ )
+ enable_thinking: Optional[bool] = field(
+ default=True,
+ metadata={"help": "Whether or not to enable thinking mode for reasoning models."},
+ )
+ mask_thinking: bool = field(
+ default=False,
+ metadata={"help": "Whether to mask the thinking/CoT part in loss computation. When True, only trains on the answer part while keeping CoT in inference."},
+ )
+ tokenized_path: Optional[str] = field(
+ default=None,
+ metadata={
+ "help": (
+ "Path to save or load the tokenized datasets. "
+ "If tokenized_path not exists, it will save the tokenized datasets. "
+ "If tokenized_path exists, it will load the tokenized datasets."
+ )
+ },
+ )
+ data_shared_file_system: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to use a shared file system for the datasets."},
+ )
+ # Data augmentation arguments
+ enable_data_augmentation: bool = field(
+ default=False,
+ metadata={"help": "Whether to enable data augmentation for group layout dataset during training."},
+ )
+ augmentation_ratio: float = field(
+ default=0.85,
+ metadata={"help": "Probability of applying data augmentation to each sample (0.0-1.0)."},
+ )
+ augmentation_types: Optional[str] = field(
+ default="random_group_shuffle,random_object_shuffle,geometric_rotation,bounds_offset,position_perturbation",
+ metadata={"help": "Comma-separated list of augmentation types to apply."},
+ )
+ augmentation_seed: int = field(
+ default=42,
+ metadata={"help": "Random seed for data augmentation reproducibility."},
+ )
+
+ def __post_init__(self):
+ def split_arg(arg):
+ if isinstance(arg, str):
+ return [item.strip() for item in arg.split(",")]
+ return arg
+
+ self.dataset = split_arg(self.dataset)
+ self.eval_dataset = split_arg(self.eval_dataset)
+
+ if self.media_dir is None:
+ self.media_dir = self.dataset_dir
+
+ if self.dataset is None and self.val_size > 1e-6:
+ raise ValueError("Cannot specify `val_size` if `dataset` is None.")
+
+ if self.eval_dataset is not None and self.val_size > 1e-6:
+ raise ValueError("Cannot specify `val_size` if `eval_dataset` is not None.")
+
+ if self.interleave_probs is not None:
+ if self.mix_strategy == "concat":
+ raise ValueError("`interleave_probs` is only valid for interleaved mixing.")
+
+ self.interleave_probs = list(map(float, split_arg(self.interleave_probs)))
+ if self.dataset is not None and len(self.dataset) != len(self.interleave_probs):
+ raise ValueError("The length of dataset and interleave probs should be identical.")
+
+ if self.eval_dataset is not None and len(self.eval_dataset) != len(self.interleave_probs):
+ raise ValueError("The length of eval dataset and interleave probs should be identical.")
+
+ if self.streaming and self.val_size > 1e-6 and self.val_size < 1:
+ raise ValueError("Streaming mode should have an integer val size.")
+
+ if self.streaming and self.max_samples is not None:
+ raise ValueError("`max_samples` is incompatible with `streaming`.")
+
+ if self.mask_history and self.train_on_prompt:
+ raise ValueError("`mask_history` is incompatible with `train_on_prompt`.")
+
+ if self.neat_packing:
+ self.packing = True
+
+ if self.packing:
+ self.cutoff_len -= 1 # avoid pad_to_multiple_of; needs improvement
+
+ def to_dict(self) -> dict[str, Any]:
+ return asdict(self)
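`__post_init__` above normalizes the comma-separated `dataset`, `eval_dataset` and `interleave_probs` strings with a small helper before validating their lengths. A sketch of that normalization, matching the inner `split_arg` function:

```python
def split_arg(arg):
    # Comma-separated strings become stripped lists; None (and lists) pass through.
    if isinstance(arg, str):
        return [item.strip() for item in arg.split(",")]
    return arg


print(split_arg("alpaca_en, identity ,glaive_toolcall_en"))
print(split_arg(None))  # None stays None, so unset arguments survive unchanged
interleave_probs = list(map(float, split_arg("0.5, 0.3, 0.2")))
print(interleave_probs)  # [0.5, 0.3, 0.2]
```

After this step, `len(self.dataset)` can be compared directly against `len(self.interleave_probs)`, which is what the subsequent validation relies on.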
diff --git a/src/train_sft/src/llamafactory/hparams/evaluation_args.py b/src/train_sft/src/llamafactory/hparams/evaluation_args.py
new file mode 100644
index 0000000000000000000000000000000000000000..d92e8b1eaec9a93d120728db9cb9c59e125055f3
--- /dev/null
+++ b/src/train_sft/src/llamafactory/hparams/evaluation_args.py
@@ -0,0 +1,60 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+from dataclasses import dataclass, field
+from typing import Literal, Optional
+
+from datasets import DownloadMode
+
+
+@dataclass
+class EvaluationArguments:
+ r"""Arguments pertaining to specify the evaluation parameters."""
+
+ task: str = field(
+ metadata={"help": "Name of the evaluation task."},
+ )
+ task_dir: str = field(
+ default="evaluation",
+ metadata={"help": "Path to the folder containing the evaluation datasets."},
+ )
+ batch_size: int = field(
+ default=4,
+ metadata={"help": "The batch size per GPU for evaluation."},
+ )
+ seed: int = field(
+ default=42,
+ metadata={"help": "Random seed to be used with data loaders."},
+ )
+ lang: Literal["en", "zh"] = field(
+ default="en",
+ metadata={"help": "Language used at evaluation."},
+ )
+ n_shot: int = field(
+ default=5,
+ metadata={"help": "Number of examplars for few-shot learning."},
+ )
+ save_dir: Optional[str] = field(
+ default=None,
+ metadata={"help": "Path to save the evaluation results."},
+ )
+ download_mode: DownloadMode = field(
+ default=DownloadMode.REUSE_DATASET_IF_EXISTS,
+ metadata={"help": "Download mode used for the evaluation datasets."},
+ )
+
+ def __post_init__(self):
+ if self.save_dir is not None and os.path.exists(self.save_dir):
+ raise ValueError("`save_dir` already exists, use another one.")
diff --git a/src/train_sft/src/llamafactory/hparams/finetuning_args.py b/src/train_sft/src/llamafactory/hparams/finetuning_args.py
new file mode 100644
index 0000000000000000000000000000000000000000..43596864ca050e4013121dc0852eb8f4d144d9bf
--- /dev/null
+++ b/src/train_sft/src/llamafactory/hparams/finetuning_args.py
@@ -0,0 +1,516 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from dataclasses import asdict, dataclass, field
+from typing import Any, Literal, Optional
+
+
+@dataclass
+class FreezeArguments:
+ r"""Arguments pertaining to the freeze (partial-parameter) training."""
+
+ freeze_trainable_layers: int = field(
+ default=2,
+ metadata={
+ "help": (
+ "The number of trainable layers for freeze (partial-parameter) fine-tuning. "
+ "Positive numbers mean the last n layers are set as trainable, "
+ "negative numbers mean the first n layers are set as trainable."
+ )
+ },
+ )
+ freeze_trainable_modules: str = field(
+ default="all",
+ metadata={
+ "help": (
+ "Name(s) of trainable modules for freeze (partial-parameter) fine-tuning. "
+ "Use commas to separate multiple modules. "
+ "Use `all` to specify all the available modules."
+ )
+ },
+ )
+ freeze_extra_modules: Optional[str] = field(
+ default=None,
+ metadata={
+ "help": (
+ "Name(s) of modules apart from hidden layers to be set as trainable "
+ "for freeze (partial-parameter) fine-tuning. "
+ "Use commas to separate multiple modules."
+ )
+ },
+ )
+
+
+@dataclass
+class LoraArguments:
+ r"""Arguments pertaining to the LoRA training."""
+
+ additional_target: Optional[str] = field(
+ default=None,
+ metadata={
+ "help": (
+ "Name(s) of modules apart from LoRA layers to be set as trainable "
+ "and saved in the final checkpoint. "
+ "Use commas to separate multiple modules."
+ )
+ },
+ )
+ lora_alpha: Optional[int] = field(
+ default=None,
+ metadata={"help": "The scale factor for LoRA fine-tuning (default: lora_rank * 2)."},
+ )
+ lora_dropout: float = field(
+ default=0.0,
+ metadata={"help": "Dropout rate for the LoRA fine-tuning."},
+ )
+ lora_rank: int = field(
+ default=8,
+ metadata={"help": "The intrinsic dimension for LoRA fine-tuning."},
+ )
+ lora_target: str = field(
+ default="all",
+ metadata={
+ "help": (
+ "Name(s) of target modules to apply LoRA. "
+ "Use commas to separate multiple modules. "
+ "Use `all` to specify all the linear modules."
+ )
+ },
+ )
+ loraplus_lr_ratio: Optional[float] = field(
+ default=None,
+ metadata={"help": "LoRA plus learning rate ratio (lr_B / lr_A)."},
+ )
+ loraplus_lr_embedding: float = field(
+ default=1e-6,
+ metadata={"help": "LoRA plus learning rate for lora embedding layers."},
+ )
+ use_rslora: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to use the rank stabilization scaling factor for LoRA layer."},
+ )
+ use_dora: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to use the weight-decomposed lora method (DoRA)."},
+ )
+ pissa_init: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to initialize a PiSSA adapter."},
+ )
+ pissa_iter: int = field(
+ default=16,
+ metadata={"help": "The number of iteration steps performed by FSVD in PiSSA. Use -1 to disable it."},
+ )
+ pissa_convert: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to convert the PiSSA adapter to a normal LoRA adapter."},
+ )
+ create_new_adapter: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to create a new adapter with randomly initialized weight."},
+ )
+
+
+@dataclass
+class RLHFArguments:
+ r"""Arguments pertaining to the PPO, DPO and KTO training."""
+
+ pref_beta: float = field(
+ default=0.1,
+ metadata={"help": "The beta parameter in the preference loss."},
+ )
+ pref_ftx: float = field(
+ default=0.0,
+ metadata={"help": "The supervised fine-tuning loss coefficient in DPO training."},
+ )
+ pref_loss: Literal["sigmoid", "hinge", "ipo", "kto_pair", "orpo", "simpo"] = field(
+ default="sigmoid",
+ metadata={"help": "The type of DPO loss to use."},
+ )
+ dpo_label_smoothing: float = field(
+ default=0.0,
+ metadata={"help": "The robust DPO label smoothing parameter in cDPO that should be between 0 and 0.5."},
+ )
+ kto_chosen_weight: float = field(
+ default=1.0,
+ metadata={"help": "The weight factor of the desirable losses in KTO training."},
+ )
+ kto_rejected_weight: float = field(
+ default=1.0,
+ metadata={"help": "The weight factor of the undesirable losses in KTO training."},
+ )
+ simpo_gamma: float = field(
+ default=0.5,
+ metadata={"help": "The target reward margin term in SimPO loss."},
+ )
+ ppo_buffer_size: int = field(
+ default=1,
+ metadata={"help": "The number of mini-batches to make experience buffer in a PPO optimization step."},
+ )
+ ppo_epochs: int = field(
+ default=4,
+ metadata={"help": "The number of epochs to perform in a PPO optimization step."},
+ )
+ ppo_score_norm: bool = field(
+ default=False,
+ metadata={"help": "Use score normalization in PPO training."},
+ )
+ ppo_target: float = field(
+ default=6.0,
+ metadata={"help": "Target KL value for adaptive KL control in PPO training."},
+ )
+ ppo_whiten_rewards: bool = field(
+ default=False,
+ metadata={"help": "Whiten the rewards before compute advantages in PPO training."},
+ )
+ ref_model: Optional[str] = field(
+ default=None,
+ metadata={"help": "Path to the reference model used for the PPO or DPO training."},
+ )
+ ref_model_adapters: Optional[str] = field(
+ default=None,
+ metadata={"help": "Path to the adapters of the reference model."},
+ )
+ ref_model_quantization_bit: Optional[int] = field(
+ default=None,
+ metadata={"help": "The number of bits to quantize the reference model."},
+ )
+ reward_model: Optional[str] = field(
+ default=None,
+ metadata={"help": "Path to the reward model used for the PPO training."},
+ )
+ reward_model_adapters: Optional[str] = field(
+ default=None,
+ metadata={"help": "Path to the adapters of the reward model."},
+ )
+ reward_model_quantization_bit: Optional[int] = field(
+ default=None,
+ metadata={"help": "The number of bits to quantize the reward model."},
+ )
+ reward_model_type: Literal["lora", "full", "api"] = field(
+ default="lora",
+ metadata={"help": "The type of the reward model in PPO training. Lora model only supports lora training."},
+ )
+ ld_alpha: Optional[float] = field(
+ default=None,
+ metadata={
+ "help": (
+ "Alpha parameter from the LD-DPO paper, which controls the weighting of"
+ " the verbose token log-probabilities in responses."
+ )
+ },
+ )
+
+
+@dataclass
+class GaloreArguments:
+ r"""Arguments pertaining to the GaLore algorithm."""
+
+ use_galore: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to use the gradient low-Rank projection (GaLore)."},
+ )
+ galore_target: str = field(
+ default="all",
+ metadata={
+ "help": (
+ "Name(s) of modules to apply GaLore. Use commas to separate multiple modules. "
+ "Use `all` to specify all the linear modules."
+ )
+ },
+ )
+ galore_rank: int = field(
+ default=16,
+ metadata={"help": "The rank of GaLore gradients."},
+ )
+ galore_update_interval: int = field(
+ default=200,
+ metadata={"help": "Number of steps to update the GaLore projection."},
+ )
+ galore_scale: float = field(
+ default=2.0,
+ metadata={"help": "GaLore scaling coefficient."},
+ )
+ galore_proj_type: Literal["std", "reverse_std", "right", "left", "full"] = field(
+ default="std",
+ metadata={"help": "Type of GaLore projection."},
+ )
+ galore_layerwise: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to enable layer-wise update to further save memory."},
+ )
+
+
+@dataclass
+class ApolloArguments:
+ r"""Arguments pertaining to the APOLLO algorithm."""
+
+ use_apollo: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to use the APOLLO optimizer."},
+ )
+ apollo_target: str = field(
+ default="all",
+ metadata={
+ "help": (
+ "Name(s) of modules to apply APOLLO. Use commas to separate multiple modules. "
+ "Use `all` to specify all the linear modules."
+ )
+ },
+ )
+ apollo_rank: int = field(
+ default=16,
+ metadata={"help": "The rank of APOLLO gradients."},
+ )
+ apollo_update_interval: int = field(
+ default=200,
+ metadata={"help": "Number of steps to update the APOLLO projection."},
+ )
+ apollo_scale: float = field(
+ default=32.0,
+ metadata={"help": "APOLLO scaling coefficient."},
+ )
+ apollo_proj: Literal["svd", "random"] = field(
+ default="random",
+ metadata={"help": "Type of APOLLO low-rank projection algorithm (svd or random)."},
+ )
+ apollo_proj_type: Literal["std", "right", "left"] = field(
+ default="std",
+ metadata={"help": "Type of APOLLO projection."},
+ )
+ apollo_scale_type: Literal["channel", "tensor"] = field(
+ default="channel",
+ metadata={"help": "Type of APOLLO scaling (channel or tensor)."},
+ )
+ apollo_layerwise: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to enable layer-wise update to further save memory."},
+ )
+ apollo_scale_front: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to use the norm-growth limiter in front of gradient scaling."},
+ )
+
+
+@dataclass
+class BAdamArgument:
+ r"""Arguments pertaining to the BAdam optimizer."""
+
+ use_badam: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to use the BAdam optimizer."},
+ )
+ badam_mode: Literal["layer", "ratio"] = field(
+ default="layer",
+ metadata={"help": "Whether to use layer-wise or ratio-wise BAdam optimizer."},
+ )
+ badam_start_block: Optional[int] = field(
+ default=None,
+ metadata={"help": "The starting block index for layer-wise BAdam."},
+ )
+ badam_switch_mode: Optional[Literal["ascending", "descending", "random", "fixed"]] = field(
+ default="ascending",
+ metadata={"help": "the strategy of picking block to update for layer-wise BAdam."},
+ )
+ badam_switch_interval: Optional[int] = field(
+ default=50,
+ metadata={
+ "help": "Number of steps to update the block for layer-wise BAdam. Use -1 to disable the block update."
+ },
+ )
+ badam_update_ratio: float = field(
+ default=0.05,
+ metadata={"help": "The ratio of the update for ratio-wise BAdam."},
+ )
+ badam_mask_mode: Literal["adjacent", "scatter"] = field(
+ default="adjacent",
+ metadata={
+ "help": (
+ "The mode of the mask for BAdam optimizer. "
+ "`adjacent` means that the trainable parameters are adjacent to each other, "
+ "`scatter` means that trainable parameters are randomly choosed from the weight."
+ )
+ },
+ )
+ badam_verbose: int = field(
+ default=0,
+ metadata={
+ "help": (
+ "The verbosity level of BAdam optimizer. "
+ "0 for no print, 1 for print the block prefix, 2 for print trainable parameters."
+ )
+ },
+ )
+
+
+@dataclass
+class SwanLabArguments:
+ use_swanlab: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to use the SwanLab (an experiment tracking and visualization tool)."},
+ )
+ swanlab_project: Optional[str] = field(
+ default="llamafactory",
+ metadata={"help": "The project name in SwanLab."},
+ )
+ swanlab_workspace: Optional[str] = field(
+ default=None,
+ metadata={"help": "The workspace name in SwanLab."},
+ )
+ swanlab_run_name: Optional[str] = field(
+ default=None,
+ metadata={"help": "The experiment name in SwanLab."},
+ )
+ swanlab_mode: Literal["cloud", "local"] = field(
+ default="cloud",
+ metadata={"help": "The mode of SwanLab."},
+ )
+ swanlab_api_key: Optional[str] = field(
+ default=None,
+ metadata={"help": "The API key for SwanLab."},
+ )
+ swanlab_logdir: Optional[str] = field(
+ default=None,
+ metadata={"help": "The log directory for SwanLab."},
+ )
+ swanlab_lark_webhook_url: Optional[str] = field(
+ default=None,
+ metadata={"help": "The Lark(飞书) webhook URL for SwanLab."},
+ )
+ swanlab_lark_secret: Optional[str] = field(
+ default=None,
+ metadata={"help": "The Lark(飞书) secret for SwanLab."},
+ )
+
+
+@dataclass
+class FinetuningArguments(
+ SwanLabArguments, BAdamArgument, ApolloArguments, GaloreArguments, RLHFArguments, LoraArguments, FreezeArguments
+):
+ r"""Arguments pertaining to which techniques we are going to fine-tuning with."""
+
+ pure_bf16: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to train model in purely bf16 precision (without AMP)."},
+ )
+ stage: Literal["pt", "sft", "rm", "ppo", "dpo", "kto"] = field(
+ default="sft",
+ metadata={"help": "Which stage will be performed in training."},
+ )
+ finetuning_type: Literal["lora", "freeze", "full"] = field(
+ default="lora",
+ metadata={"help": "Which fine-tuning method to use."},
+ )
+ use_llama_pro: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to make only the parameters in the expanded blocks trainable."},
+ )
+ use_adam_mini: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to use the Adam-mini optimizer."},
+ )
+ use_muon: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to use the Muon optimizer."},
+ )
+ freeze_vision_tower: bool = field(
+ default=True,
+ metadata={"help": "Whether ot not to freeze the vision tower in MLLM training."},
+ )
+ freeze_multi_modal_projector: bool = field(
+ default=True,
+ metadata={"help": "Whether or not to freeze the multi modal projector in MLLM training."},
+ )
+ freeze_language_model: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to freeze the language model in MLLM training."},
+ )
+ compute_accuracy: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to compute the token-level accuracy at evaluation."},
+ )
+ disable_shuffling: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to disable the shuffling of the training set."},
+ )
+ early_stopping_steps: Optional[int] = field(
+ default=None,
+ metadata={"help": "Number of steps to stop training if the `metric_for_best_model` does not improve."},
+ )
+ plot_loss: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to save the training loss curves."},
+ )
+ include_effective_tokens_per_second: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to compute effective tokens per second."},
+ )
+
+ def __post_init__(self):
+ def split_arg(arg):
+ if isinstance(arg, str):
+ return [item.strip() for item in arg.split(",")]
+ return arg
+
+ self.freeze_trainable_modules: list[str] = split_arg(self.freeze_trainable_modules)
+ self.freeze_extra_modules: Optional[list[str]] = split_arg(self.freeze_extra_modules)
+ self.lora_alpha: int = self.lora_alpha or self.lora_rank * 2
+ self.lora_target: list[str] = split_arg(self.lora_target)
+ self.additional_target: Optional[list[str]] = split_arg(self.additional_target)
+ self.galore_target: list[str] = split_arg(self.galore_target)
+ self.apollo_target: list[str] = split_arg(self.apollo_target)
+ self.use_ref_model = self.stage == "dpo" and self.pref_loss not in ["orpo", "simpo"]
+
+ assert self.finetuning_type in ["lora", "freeze", "full"], "Invalid fine-tuning method."
+ assert self.ref_model_quantization_bit in [None, 8, 4], "We only accept 4-bit or 8-bit quantization."
+ assert self.reward_model_quantization_bit in [None, 8, 4], "We only accept 4-bit or 8-bit quantization."
+
+ if self.stage == "ppo" and self.reward_model is None:
+ raise ValueError("`reward_model` is necessary for PPO training.")
+
+ if self.stage == "ppo" and self.reward_model_type == "lora" and self.finetuning_type != "lora":
+ raise ValueError("`reward_model_type` cannot be lora for Freeze/Full PPO training.")
+
+ if self.stage == "dpo" and self.pref_loss != "sigmoid" and self.dpo_label_smoothing > 1e-6:
+ raise ValueError("`dpo_label_smoothing` is only valid for sigmoid loss function.")
+
+ if self.use_llama_pro and self.finetuning_type == "full":
+ raise ValueError("`use_llama_pro` is only valid for Freeze or LoRA training.")
+
+ if self.finetuning_type == "lora" and (self.use_galore or self.use_apollo or self.use_badam):
+ raise ValueError("Cannot use LoRA with GaLore, APOLLO or BAdam together.")
+
+ if int(self.use_galore) + int(self.use_apollo) + int(self.use_badam) > 1:
+ raise ValueError("Cannot use GaLore, APOLLO or BAdam together.")
+
+ if self.pissa_init and (self.stage in ["ppo", "kto"] or self.use_ref_model):
+ raise ValueError("Cannot use PiSSA for current training stage.")
+
+ if self.finetuning_type != "lora":
+ if self.loraplus_lr_ratio is not None:
+ raise ValueError("`loraplus_lr_ratio` is only valid for LoRA training.")
+
+ if self.use_rslora:
+ raise ValueError("`use_rslora` is only valid for LoRA training.")
+
+ if self.use_dora:
+ raise ValueError("`use_dora` is only valid for LoRA training.")
+
+ if self.pissa_init:
+ raise ValueError("`pissa_init` is only valid for LoRA training.")
+
+ def to_dict(self) -> dict[str, Any]:
+ args = asdict(self)
+ args = {k: f"<{k.upper()}>" if k.endswith("api_key") else v for k, v in args.items()}
+ return args
diff --git a/src/train_sft/src/llamafactory/hparams/generating_args.py b/src/train_sft/src/llamafactory/hparams/generating_args.py
new file mode 100644
index 0000000000000000000000000000000000000000..7eacb14763ac972a08ea59b74a799e704e4f9bc9
--- /dev/null
+++ b/src/train_sft/src/llamafactory/hparams/generating_args.py
@@ -0,0 +1,83 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from dataclasses import asdict, dataclass, field
+from typing import Any
+
+from transformers import GenerationConfig
+
+
+@dataclass
+class GeneratingArguments:
+ r"""Arguments pertaining to specify the decoding parameters."""
+
+ do_sample: bool = field(
+ default=True,
+ metadata={"help": "Whether or not to use sampling, use greedy decoding otherwise."},
+ )
+ temperature: float = field(
+ default=0.95,
+ metadata={"help": "The value used to modulate the next token probabilities."},
+ )
+ top_p: float = field(
+ default=0.7,
+ metadata={
+ "help": (
+ "The smallest set of most probable tokens with probabilities that add up to top_p or higher are kept."
+ )
+ },
+ )
+ top_k: int = field(
+ default=50,
+ metadata={"help": "The number of highest probability vocabulary tokens to keep for top-k filtering."},
+ )
+ num_beams: int = field(
+ default=1,
+ metadata={"help": "Number of beams for beam search. 1 means no beam search."},
+ )
+ max_length: int = field(
+ default=1024,
+ metadata={"help": "The maximum length the generated tokens can have. It can be overridden by max_new_tokens."},
+ )
+ max_new_tokens: int = field(
+ default=1024,
+ metadata={"help": "The maximum numbers of tokens to generate, ignoring the number of tokens in the prompt."},
+ )
+ repetition_penalty: float = field(
+ default=1.0,
+ metadata={"help": "The parameter for repetition penalty. 1.0 means no penalty."},
+ )
+ length_penalty: float = field(
+ default=1.0,
+ metadata={"help": "Exponential penalty to the length that is used with beam-based generation."},
+ )
+ skip_special_tokens: bool = field(
+ default=True,
+ metadata={"help": "Whether or not to remove special tokens in the decoding."},
+ )
+
+ def to_dict(self, obey_generation_config: bool = False) -> dict[str, Any]:
+ args = asdict(self)
+ if args.get("max_new_tokens", -1) > 0:
+ args.pop("max_length", None)
+ else:
+ args.pop("max_new_tokens", None)
+
+ if obey_generation_config:
+ generation_config = GenerationConfig()
+ for key in list(args.keys()):
+ if not hasattr(generation_config, key):
+ args.pop(key)
+
+ return args
diff --git a/src/train_sft/src/llamafactory/hparams/model_args.py b/src/train_sft/src/llamafactory/hparams/model_args.py
new file mode 100644
index 0000000000000000000000000000000000000000..d2f7cc52c9bfe59ee0ca390044c465ed138b5921
--- /dev/null
+++ b/src/train_sft/src/llamafactory/hparams/model_args.py
@@ -0,0 +1,435 @@
+# Copyright 2025 HuggingFace Inc. and the LlamaFactory team.
+#
+# This code is inspired by the HuggingFace's transformers library.
+# https://github.com/huggingface/transformers/blob/v4.40.0/examples/pytorch/language-modeling/run_clm.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+from dataclasses import asdict, dataclass, field, fields
+from typing import Any, Literal, Optional, Union
+
+import torch
+from transformers.training_args import _convert_str_dict
+from typing_extensions import Self
+
+from ..extras.constants import AttentionFunction, EngineName, QuantizationMethod, RopeScaling
+
+
+@dataclass
+class BaseModelArguments:
+ r"""Arguments pertaining to the model."""
+
+ model_name_or_path: Optional[str] = field(
+ default=None,
+ metadata={
+ "help": "Path to the model weight or identifier from huggingface.co/models or modelscope.cn/models."
+ },
+ )
+ adapter_name_or_path: Optional[str] = field(
+ default=None,
+ metadata={
+ "help": (
+ "Path to the adapter weight or identifier from huggingface.co/models. "
+ "Use commas to separate multiple adapters."
+ )
+ },
+ )
+ adapter_folder: Optional[str] = field(
+ default=None,
+ metadata={"help": "The folder containing the adapter weights to load."},
+ )
+ cache_dir: Optional[str] = field(
+ default=None,
+ metadata={"help": "Where to store the pre-trained models downloaded from huggingface.co or modelscope.cn."},
+ )
+ use_fast_tokenizer: bool = field(
+ default=True,
+ metadata={"help": "Whether or not to use one of the fast tokenizer (backed by the tokenizers library)."},
+ )
+ resize_vocab: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to resize the tokenizer vocab and the embedding layers."},
+ )
+ split_special_tokens: bool = field(
+ default=False,
+ metadata={"help": "Whether or not the special tokens should be split during the tokenization process."},
+ )
+ add_tokens: Optional[str] = field(
+ default=None,
+ metadata={
+ "help": "Non-special tokens to be added into the tokenizer. Use commas to separate multiple tokens."
+ },
+ )
+ add_special_tokens: Optional[str] = field(
+ default=None,
+ metadata={"help": "Special tokens to be added into the tokenizer. Use commas to separate multiple tokens."},
+ )
+ model_revision: str = field(
+ default="main",
+ metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."},
+ )
+ low_cpu_mem_usage: bool = field(
+ default=True,
+ metadata={"help": "Whether or not to use memory-efficient model loading."},
+ )
+ rope_scaling: Optional[RopeScaling] = field(
+ default=None,
+ metadata={"help": "Which scaling strategy should be adopted for the RoPE embeddings."},
+ )
+ flash_attn: AttentionFunction = field(
+ default=AttentionFunction.AUTO,
+ metadata={"help": "Enable FlashAttention for faster training and inference."},
+ )
+ shift_attn: bool = field(
+ default=False,
+ metadata={"help": "Enable shift short attention (S^2-Attn) proposed by LongLoRA."},
+ )
+ mixture_of_depths: Optional[Literal["convert", "load"]] = field(
+ default=None,
+ metadata={"help": "Convert the model to mixture-of-depths (MoD) or load the MoD model."},
+ )
+ use_unsloth: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to use unsloth's optimization for the LoRA training."},
+ )
+ use_unsloth_gc: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to use unsloth's gradient checkpointing (no need to install unsloth)."},
+ )
+ enable_liger_kernel: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to enable liger kernel for faster training."},
+ )
+ moe_aux_loss_coef: Optional[float] = field(
+ default=None,
+ metadata={"help": "Coefficient of the auxiliary router loss in mixture-of-experts model."},
+ )
+ disable_gradient_checkpointing: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to disable gradient checkpointing."},
+ )
+ use_reentrant_gc: bool = field(
+ default=True,
+ metadata={"help": "Whether or not to use reentrant gradient checkpointing."},
+ )
+ upcast_layernorm: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to upcast the layernorm weights in fp32."},
+ )
+ upcast_lmhead_output: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to upcast the output of lm_head in fp32."},
+ )
+ train_from_scratch: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to randomly initialize the model weights."},
+ )
+ infer_backend: EngineName = field(
+ default=EngineName.HF,
+ metadata={"help": "Backend engine used at inference."},
+ )
+ offload_folder: str = field(
+ default="offload",
+ metadata={"help": "Path to offload model weights."},
+ )
+ use_cache: bool = field(
+ default=True,
+ metadata={"help": "Whether or not to use KV cache in generation."},
+ )
+ infer_dtype: Literal["auto", "float16", "bfloat16", "float32"] = field(
+ default="auto",
+ metadata={"help": "Data type for model weights and activations at inference."},
+ )
+ hf_hub_token: Optional[str] = field(
+ default=None,
+ metadata={"help": "Auth token to log in with Hugging Face Hub."},
+ )
+ ms_hub_token: Optional[str] = field(
+ default=None,
+ metadata={"help": "Auth token to log in with ModelScope Hub."},
+ )
+ om_hub_token: Optional[str] = field(
+ default=None,
+ metadata={"help": "Auth token to log in with Modelers Hub."},
+ )
+ print_param_status: bool = field(
+ default=False,
+ metadata={"help": "For debugging purposes, print the status of the parameters in the model."},
+ )
+ trust_remote_code: bool = field(
+ default=False,
+ metadata={"help": "Whether to trust the execution of code from datasets/models defined on the Hub or not."},
+ )
+
+ def __post_init__(self):
+ if self.model_name_or_path is None:
+ raise ValueError("Please provide `model_name_or_path`.")
+
+ if self.split_special_tokens and self.use_fast_tokenizer:
+ raise ValueError("`split_special_tokens` is only supported for slow tokenizers.")
+
+ if self.adapter_name_or_path is not None: # support merging multiple lora weights
+ self.adapter_name_or_path = [path.strip() for path in self.adapter_name_or_path.split(",")]
+
+ if self.add_tokens is not None: # support multiple tokens
+ self.add_tokens = [token.strip() for token in self.add_tokens.split(",")]
+
+ if self.add_special_tokens is not None: # support multiple special tokens
+ self.add_special_tokens = [token.strip() for token in self.add_special_tokens.split(",")]
+
+
+@dataclass
+class QuantizationArguments:
+ r"""Arguments pertaining to the quantization method."""
+
+ quantization_method: QuantizationMethod = field(
+ default=QuantizationMethod.BNB,
+ metadata={"help": "Quantization method to use for on-the-fly quantization."},
+ )
+ quantization_bit: Optional[int] = field(
+ default=None,
+ metadata={"help": "The number of bits to quantize the model using on-the-fly quantization."},
+ )
+ quantization_type: Literal["fp4", "nf4"] = field(
+ default="nf4",
+ metadata={"help": "Quantization data type to use in bitsandbytes int4 training."},
+ )
+ double_quantization: bool = field(
+ default=True,
+ metadata={"help": "Whether or not to use double quantization in bitsandbytes int4 training."},
+ )
+ quantization_device_map: Optional[Literal["auto"]] = field(
+ default=None,
+ metadata={"help": "Device map used to infer the 4-bit quantized model, needs bitsandbytes>=0.43.0."},
+ )
+
+
+@dataclass
+class ProcessorArguments:
+ r"""Arguments pertaining to the image processor."""
+
+ image_max_pixels: int = field(
+ default=768 * 768,
+ metadata={"help": "The maximum number of pixels of image inputs."},
+ )
+ image_min_pixels: int = field(
+ default=32 * 32,
+ metadata={"help": "The minimum number of pixels of image inputs."},
+ )
+ image_do_pan_and_scan: bool = field(
+ default=False,
+ metadata={"help": "Use pan and scan to process image for gemma3."},
+ )
+ crop_to_patches: bool = field(
+ default=False,
+ metadata={"help": "Whether to crop the image to patches for internvl."},
+ )
+ video_max_pixels: int = field(
+ default=256 * 256,
+ metadata={"help": "The maximum number of pixels of video inputs."},
+ )
+ video_min_pixels: int = field(
+ default=16 * 16,
+ metadata={"help": "The minimum number of pixels of video inputs."},
+ )
+ video_fps: float = field(
+ default=2.0,
+ metadata={"help": "The frames to sample per second for video inputs."},
+ )
+ video_maxlen: int = field(
+ default=128,
+ metadata={"help": "The maximum number of sampled frames for video inputs."},
+ )
+ use_audio_in_video: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to use audio in video inputs."},
+ )
+ audio_sampling_rate: int = field(
+ default=16000,
+ metadata={"help": "The sampling rate of audio inputs."},
+ )
+
+ def __post_init__(self):
+ if self.image_max_pixels < self.image_min_pixels:
+ raise ValueError("`image_max_pixels` cannot be smaller than `image_min_pixels`.")
+
+ if self.video_max_pixels < self.video_min_pixels:
+ raise ValueError("`video_max_pixels` cannot be smaller than `video_min_pixels`.")
+
+
+@dataclass
+class ExportArguments:
+ r"""Arguments pertaining to the model export."""
+
+ export_dir: Optional[str] = field(
+ default=None,
+ metadata={"help": "Path to the directory to save the exported model."},
+ )
+ export_size: int = field(
+ default=5,
+ metadata={"help": "The file shard size (in GB) of the exported model."},
+ )
+ export_device: Literal["cpu", "auto"] = field(
+ default="cpu",
+ metadata={"help": "The device used in model export, use `auto` to accelerate exporting."},
+ )
+ export_quantization_bit: Optional[int] = field(
+ default=None,
+ metadata={"help": "The number of bits to quantize the exported model."},
+ )
+ export_quantization_dataset: Optional[str] = field(
+ default=None,
+ metadata={"help": "Path to the dataset or dataset name to use in quantizing the exported model."},
+ )
+ export_quantization_nsamples: int = field(
+ default=128,
+ metadata={"help": "The number of samples used for quantization."},
+ )
+ export_quantization_maxlen: int = field(
+ default=1024,
+ metadata={"help": "The maximum length of the model inputs used for quantization."},
+ )
+ export_legacy_format: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to save the `.bin` files instead of `.safetensors`."},
+ )
+ export_hub_model_id: Optional[str] = field(
+ default=None,
+ metadata={"help": "The name of the repository if push the model to the Hugging Face hub."},
+ )
+
+ def __post_init__(self):
+ if self.export_quantization_bit is not None and self.export_quantization_dataset is None:
+ raise ValueError("Quantization dataset is necessary for exporting.")
+
+
+@dataclass
+class VllmArguments:
+ r"""Arguments pertaining to the vLLM worker."""
+
+ vllm_maxlen: int = field(
+ default=4096,
+ metadata={"help": "Maximum sequence (prompt + response) length of the vLLM engine."},
+ )
+ vllm_gpu_util: float = field(
+ default=0.7,
+ metadata={"help": "The fraction of GPU memory in (0,1) to be used for the vLLM engine."},
+ )
+ vllm_enforce_eager: bool = field(
+ default=False,
+ metadata={"help": "Whether or not to disable CUDA graph in the vLLM engine."},
+ )
+ vllm_max_lora_rank: int = field(
+ default=32,
+ metadata={"help": "Maximum rank of all LoRAs in the vLLM engine."},
+ )
+ vllm_config: Optional[Union[dict, str]] = field(
+ default=None,
+ metadata={"help": "Config to initialize the vllm engine. Please use JSON strings."},
+ )
+
+ def __post_init__(self):
+ if isinstance(self.vllm_config, str) and self.vllm_config.startswith("{"):
+ self.vllm_config = _convert_str_dict(json.loads(self.vllm_config))
+
+
+@dataclass
+class SGLangArguments:
+ r"""Arguments pertaining to the SGLang worker."""
+
+ sglang_maxlen: int = field(
+ default=4096,
+ metadata={"help": "Maximum sequence (prompt + response) length of the SGLang engine."},
+ )
+ sglang_mem_fraction: float = field(
+ default=0.7,
+ metadata={"help": "The memory fraction (0-1) to be used for the SGLang engine."},
+ )
+ sglang_tp_size: int = field(
+ default=-1,
+ metadata={"help": "Tensor parallel size for the SGLang engine."},
+ )
+ sglang_config: Optional[Union[dict, str]] = field(
+ default=None,
+ metadata={"help": "Config to initialize the SGLang engine. Please use JSON strings."},
+ )
+ sglang_lora_backend: Literal["triton", "flashinfer"] = field(
+ default="triton",
+ metadata={
+ "help": (
+ "The backend for running GEMM kernels for LoRA modules. "
+ "The Triton LoRA backend is recommended for better performance and stability."
+ )
+ },
+ )
+
+ def __post_init__(self):
+ if isinstance(self.sglang_config, str) and self.sglang_config.startswith("{"):
+ self.sglang_config = _convert_str_dict(json.loads(self.sglang_config))
+
+
+@dataclass
+class ModelArguments(
+ SGLangArguments, VllmArguments, ExportArguments, ProcessorArguments, QuantizationArguments, BaseModelArguments
+):
+ r"""Arguments pertaining to which model/config/tokenizer we are going to fine-tune or infer.
+
+ The rightmost base class is displayed first.
+ """
+
+ compute_dtype: Optional[torch.dtype] = field(
+ default=None,
+ init=False,
+ metadata={"help": "Torch data type for computing model outputs, derived from `fp/bf16`. Do not specify it."},
+ )
+ device_map: Optional[Union[str, dict[str, Any]]] = field(
+ default=None,
+ init=False,
+ metadata={"help": "Device map for model placement, derived from training stage. Do not specify it."},
+ )
+ model_max_length: Optional[int] = field(
+ default=None,
+ init=False,
+ metadata={"help": "The maximum input length for model, derived from `cutoff_len`. Do not specify it."},
+ )
+ block_diag_attn: bool = field(
+ default=False,
+ init=False,
+ metadata={"help": "Whether use block diag attention or not, derived from `neat_packing`. Do not specify it."},
+ )
+
+ def __post_init__(self):
+ BaseModelArguments.__post_init__(self)
+ ProcessorArguments.__post_init__(self)
+ ExportArguments.__post_init__(self)
+ VllmArguments.__post_init__(self)
+ SGLangArguments.__post_init__(self)
+
+ @classmethod
+ def copyfrom(cls, source: "Self", **kwargs) -> "Self":
+ init_args, lazy_args = {}, {}
+ for attr in fields(source):
+ if attr.init:
+ init_args[attr.name] = getattr(source, attr.name)
+ else:
+ lazy_args[attr.name] = getattr(source, attr.name)
+
+ init_args.update(kwargs)
+ result = cls(**init_args)
+ for name, value in lazy_args.items():
+ setattr(result, name, value)
+
+ return result
+
+ def to_dict(self) -> dict[str, Any]:
+ args = asdict(self)
+ args = {k: f"<{k.upper()}>" if k.endswith("token") else v for k, v in args.items()}
+ return args
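`ModelArguments.copyfrom` has to handle the `init=False` fields (`compute_dtype`, `device_map`, etc.): they cannot be passed to the constructor, so they are re-applied after the copy is built. A runnable sketch of the pattern, using a hypothetical two-field stand-in rather than the real class:

```python
from dataclasses import dataclass, field, fields


@dataclass
class _Args:  # hypothetical stand-in for ModelArguments
    model_name_or_path: str = "demo"
    compute_dtype: str = field(default="auto", init=False)  # derived field, not settable via __init__


def copyfrom(source, **kwargs):
    init_args, lazy_args = {}, {}
    for attr in fields(source):
        # Split fields by whether the constructor accepts them.
        (init_args if attr.init else lazy_args)[attr.name] = getattr(source, attr.name)

    init_args.update(kwargs)               # apply overrides
    result = type(source)(**init_args)     # build via __init__ with init-able fields only
    for name, value in lazy_args.items():  # then restore the derived fields
        setattr(result, name, value)

    return result


src = _Args()
src.compute_dtype = "bfloat16"
copy = copyfrom(src, model_name_or_path="other")
print(copy.model_name_or_path, copy.compute_dtype)  # other bfloat16
```

Passing an `init=False` field to `__init__` would raise a `TypeError`, which is why the two-phase copy is needed.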
diff --git a/src/train_sft/src/llamafactory/hparams/parser.py b/src/train_sft/src/llamafactory/hparams/parser.py
new file mode 100644
index 0000000000000000000000000000000000000000..9b43198bd6eb873a1238a0240c2fff7aabf47e14
--- /dev/null
+++ b/src/train_sft/src/llamafactory/hparams/parser.py
@@ -0,0 +1,474 @@
+# Copyright 2025 HuggingFace Inc. and the LlamaFactory team.
+#
+# This code is inspired by the HuggingFace's transformers library.
+# https://github.com/huggingface/transformers/blob/v4.40.0/examples/pytorch/language-modeling/run_clm.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import sys
+from pathlib import Path
+from typing import Any, Optional, Union
+
+import torch
+import transformers
+from omegaconf import OmegaConf
+from transformers import HfArgumentParser
+from transformers.integrations import is_deepspeed_zero3_enabled
+from transformers.trainer_utils import get_last_checkpoint
+from transformers.training_args import ParallelMode
+from transformers.utils import is_torch_bf16_gpu_available, is_torch_npu_available
+
+from ..extras import logging
+from ..extras.constants import CHECKPOINT_NAMES, EngineName
+from ..extras.misc import check_dependencies, check_version, get_current_device, is_env_enabled
+from .data_args import DataArguments
+from .evaluation_args import EvaluationArguments
+from .finetuning_args import FinetuningArguments
+from .generating_args import GeneratingArguments
+from .model_args import ModelArguments
+from .training_args import RayArguments, TrainingArguments
+
+
+logger = logging.get_logger(__name__)
+
+check_dependencies()
+
+
+_TRAIN_ARGS = [ModelArguments, DataArguments, TrainingArguments, FinetuningArguments, GeneratingArguments]
+_TRAIN_CLS = tuple[ModelArguments, DataArguments, TrainingArguments, FinetuningArguments, GeneratingArguments]
+_INFER_ARGS = [ModelArguments, DataArguments, FinetuningArguments, GeneratingArguments]
+_INFER_CLS = tuple[ModelArguments, DataArguments, FinetuningArguments, GeneratingArguments]
+_EVAL_ARGS = [ModelArguments, DataArguments, EvaluationArguments, FinetuningArguments]
+_EVAL_CLS = tuple[ModelArguments, DataArguments, EvaluationArguments, FinetuningArguments]
+
+
+def read_args(args: Optional[Union[dict[str, Any], list[str]]] = None) -> Union[dict[str, Any], list[str]]:
+ r"""Get arguments from the command line or a config file."""
+ if args is not None:
+ return args
+
+ if sys.argv[1].endswith(".yaml") or sys.argv[1].endswith(".yml"):
+ override_config = OmegaConf.from_cli(sys.argv[2:])
+ dict_config = OmegaConf.load(Path(sys.argv[1]).absolute())
+ return OmegaConf.to_container(OmegaConf.merge(dict_config, override_config))
+ elif sys.argv[1].endswith(".json"):
+ override_config = OmegaConf.from_cli(sys.argv[2:])
+ dict_config = OmegaConf.load(Path(sys.argv[1]).absolute())
+ return OmegaConf.to_container(OmegaConf.merge(dict_config, override_config))
+ else:
+ return sys.argv[1:]
+
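In `read_args`, `OmegaConf.merge(dict_config, override_config)` gives the CLI `key=value` overrides precedence over the config file. A minimal sketch of that precedence with plain dicts (the config values here are hypothetical, not from the source):

```python
# Minimal sketch of the precedence implemented by
# OmegaConf.merge(dict_config, override_config): later (CLI) values win.
file_config = {"model_name_or_path": "demo-model", "learning_rate": 1e-4}  # e.g. loaded from cfg.yaml
cli_overrides = {"learning_rate": 5e-5}  # parsed from `key=value` tokens after the file path

merged = {**file_config, **cli_overrides}
print(merged)  # {'model_name_or_path': 'demo-model', 'learning_rate': 5e-05}
```

This lets a single YAML file serve as the base recipe while individual runs tweak a few hyperparameters from the command line.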
+
+def _parse_args(
+ parser: "HfArgumentParser", args: Optional[Union[dict[str, Any], list[str]]] = None, allow_extra_keys: bool = False
+) -> tuple[Any]:
+ args = read_args(args)
+ if isinstance(args, dict):
+ return parser.parse_dict(args, allow_extra_keys=allow_extra_keys)
+
+ (*parsed_args, unknown_args) = parser.parse_args_into_dataclasses(args=args, return_remaining_strings=True)
+
+ if unknown_args and not allow_extra_keys:
+ print(parser.format_help())
+ print(f"Got unknown args, potentially deprecated arguments: {unknown_args}")
+ raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {unknown_args}")
+
+ return tuple(parsed_args)
+
+
+def _set_transformers_logging() -> None:
+ if os.getenv("LLAMAFACTORY_VERBOSITY", "INFO") in ["DEBUG", "INFO"]:
+ transformers.utils.logging.set_verbosity_info()
+ transformers.utils.logging.enable_default_handler()
+ transformers.utils.logging.enable_explicit_format()
+
+
+def _set_env_vars() -> None:
+ if is_torch_npu_available():
+ # avoid JIT compile on NPU devices, see https://zhuanlan.zhihu.com/p/660875458
+ torch.npu.set_compile_mode(jit_compile=is_env_enabled("NPU_JIT_COMPILE"))
+ # avoid use fork method on NPU devices, see https://github.com/hiyouga/LLaMA-Factory/issues/7447
+ os.environ["VLLM_WORKER_MULTIPROC_METHOD"] = "spawn"
+
+
+def _verify_model_args(
+ model_args: "ModelArguments",
+ data_args: "DataArguments",
+ finetuning_args: "FinetuningArguments",
+) -> None:
+ if model_args.adapter_name_or_path is not None and finetuning_args.finetuning_type != "lora":
+ raise ValueError("Adapter is only valid for the LoRA method.")
+
+ if model_args.quantization_bit is not None:
+ if finetuning_args.finetuning_type != "lora":
+ raise ValueError("Quantization is only compatible with the LoRA method.")
+
+ if finetuning_args.pissa_init:
+ raise ValueError("Please use scripts/pissa_init.py to initialize PiSSA for a quantized model.")
+
+ if model_args.resize_vocab:
+ raise ValueError("Cannot resize embedding layers of a quantized model.")
+
+ if model_args.adapter_name_or_path is not None and finetuning_args.create_new_adapter:
+ raise ValueError("Cannot create new adapter upon a quantized model.")
+
+ if model_args.adapter_name_or_path is not None and len(model_args.adapter_name_or_path) != 1:
+ raise ValueError("Quantized model only accepts a single adapter. Merge them first.")
+
+ if data_args.template == "yi" and model_args.use_fast_tokenizer:
+ logger.warning_rank0("We should use slow tokenizer for the Yi models. Change `use_fast_tokenizer` to False.")
+ model_args.use_fast_tokenizer = False
+
+
+def _check_extra_dependencies(
+ model_args: "ModelArguments",
+ finetuning_args: "FinetuningArguments",
+ training_args: Optional["TrainingArguments"] = None,
+) -> None:
+ if model_args.use_unsloth:
+ check_version("unsloth", mandatory=True)
+
+ if model_args.enable_liger_kernel:
+ check_version("liger-kernel", mandatory=True)
+
+ if model_args.mixture_of_depths is not None:
+ check_version("mixture-of-depth>=1.1.6", mandatory=True)
+
+ if model_args.infer_backend == EngineName.VLLM:
+ check_version("vllm>=0.4.3,<=0.10.0")
+ check_version("vllm", mandatory=True)
+ elif model_args.infer_backend == EngineName.SGLANG:
+ check_version("sglang>=0.4.5")
+ check_version("sglang", mandatory=True)
+
+ if finetuning_args.use_galore:
+ check_version("galore_torch", mandatory=True)
+
+ if finetuning_args.use_apollo:
+ check_version("apollo_torch", mandatory=True)
+
+ if finetuning_args.use_badam:
+ check_version("badam>=1.2.1", mandatory=True)
+
+ if finetuning_args.use_adam_mini:
+ check_version("adam-mini", mandatory=True)
+
+ if finetuning_args.use_swanlab:
+ check_version("swanlab", mandatory=True)
+
+ if finetuning_args.plot_loss:
+ check_version("matplotlib", mandatory=True)
+
+ if training_args is not None:
+ if training_args.deepspeed:
+ # pin deepspeed version < 0.17 because of https://github.com/deepspeedai/DeepSpeed/issues/7347
+ check_version("deepspeed>=0.10.0,<=0.16.9", mandatory=True)
+
+ if training_args.predict_with_generate:
+ check_version("jieba", mandatory=True)
+ check_version("nltk", mandatory=True)
+ check_version("rouge_chinese", mandatory=True)
+
+
+def _parse_train_args(args: Optional[Union[dict[str, Any], list[str]]] = None) -> _TRAIN_CLS:
+ parser = HfArgumentParser(_TRAIN_ARGS)
+ allow_extra_keys = is_env_enabled("ALLOW_EXTRA_ARGS")
+ return _parse_args(parser, args, allow_extra_keys=allow_extra_keys)
+
+
+def _parse_infer_args(args: Optional[Union[dict[str, Any], list[str]]] = None) -> _INFER_CLS:
+ parser = HfArgumentParser(_INFER_ARGS)
+ allow_extra_keys = is_env_enabled("ALLOW_EXTRA_ARGS")
+ return _parse_args(parser, args, allow_extra_keys=allow_extra_keys)
+
+
+def _parse_eval_args(args: Optional[Union[dict[str, Any], list[str]]] = None) -> _EVAL_CLS:
+ parser = HfArgumentParser(_EVAL_ARGS)
+ allow_extra_keys = is_env_enabled("ALLOW_EXTRA_ARGS")
+ return _parse_args(parser, args, allow_extra_keys=allow_extra_keys)
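All three parsers follow the same pattern: an `HfArgumentParser` built over a tuple of argument dataclasses, with unknown keys tolerated only when the `ALLOW_EXTRA_ARGS` environment flag is enabled. As a rough illustration of the dict-parsing branch of that pattern, here is a dependency-free sketch (`parse_dict` and `ModelArgs` are hypothetical stand-ins, not the transformers implementation):

```python
from dataclasses import dataclass, fields

@dataclass
class ModelArgs:
    """Hypothetical stand-in for one of the argument dataclasses."""
    model_name_or_path: str = ""
    use_fast_tokenizer: bool = True

def parse_dict(cls, args, allow_extra_keys=False):
    """Mimic the strict/lenient key handling of HfArgumentParser.parse_dict."""
    known = {f.name for f in fields(cls)}
    extra = set(args) - known
    if extra and not allow_extra_keys:  # strict mode: reject unknown keys
        raise ValueError(f"Some keys are not used: {sorted(extra)}")
    return cls(**{k: v for k, v in args.items() if k in known})

# Lenient mode, as enabled by ALLOW_EXTRA_ARGS: unknown keys are dropped.
args = parse_dict(ModelArgs, {"model_name_or_path": "my-model", "unused": 1}, allow_extra_keys=True)
```

In strict mode (the default), the same call would raise on the `"unused"` key instead of silently dropping it.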
+
+
+def get_ray_args(args: Optional[Union[dict[str, Any], list[str]]] = None) -> RayArguments:
+ parser = HfArgumentParser(RayArguments)
+ (ray_args,) = _parse_args(parser, args, allow_extra_keys=True)
+ return ray_args
+
+
+def get_train_args(args: Optional[Union[dict[str, Any], list[str]]] = None) -> _TRAIN_CLS:
+ model_args, data_args, training_args, finetuning_args, generating_args = _parse_train_args(args)
+
+ # Setup logging
+ if training_args.should_log:
+ _set_transformers_logging()
+
+ # Check arguments
+ if finetuning_args.stage != "sft":
+ if training_args.predict_with_generate:
+ raise ValueError("`predict_with_generate` can only be set to True at the SFT stage.")
+
+ if data_args.neat_packing:
+ raise ValueError("`neat_packing` can only be set to True at the SFT stage.")
+
+ if data_args.train_on_prompt or data_args.mask_history:
+ raise ValueError("`train_on_prompt` and `mask_history` can only be set to True at the SFT stage.")
+
+ if finetuning_args.stage == "sft" and training_args.do_predict and not training_args.predict_with_generate:
+ raise ValueError("Please enable `predict_with_generate` to save model predictions.")
+
+ if finetuning_args.stage in ["rm", "ppo"] and training_args.load_best_model_at_end:
+ raise ValueError("RM and PPO stages do not support `load_best_model_at_end`.")
+
+ if finetuning_args.stage == "ppo":
+ if not training_args.do_train:
+ raise ValueError("PPO training does not support evaluation, use the SFT stage to evaluate models.")
+
+ if model_args.shift_attn:
+ raise ValueError("PPO training is incompatible with S^2-Attn.")
+
+ if finetuning_args.reward_model_type == "lora" and model_args.use_unsloth:
+ raise ValueError("Unsloth does not support lora reward model.")
+
+ if training_args.report_to and training_args.report_to[0] not in ["wandb", "tensorboard"]:
+ raise ValueError("PPO only accepts wandb or tensorboard logger.")
+
+ if training_args.parallel_mode == ParallelMode.NOT_DISTRIBUTED:
+ raise ValueError("Please launch distributed training with `llamafactory-cli` or `torchrun`.")
+
+ if training_args.deepspeed and training_args.parallel_mode != ParallelMode.DISTRIBUTED:
+ raise ValueError("Please use `FORCE_TORCHRUN=1` to launch DeepSpeed training.")
+
+ if training_args.max_steps == -1 and data_args.streaming:
+ raise ValueError("Please specify `max_steps` in streaming mode.")
+
+ if training_args.do_train and data_args.dataset is None:
+ raise ValueError("Please specify dataset for training.")
+
+ if (training_args.do_eval or training_args.do_predict) and (
+ data_args.eval_dataset is None and data_args.val_size < 1e-6
+ ):
+ raise ValueError("Please specify dataset for evaluation.")
+
+ if training_args.predict_with_generate:
+ if is_deepspeed_zero3_enabled():
+ raise ValueError("`predict_with_generate` is incompatible with DeepSpeed ZeRO-3.")
+
+ if data_args.eval_dataset is None:
+ raise ValueError("Cannot use `predict_with_generate` if `eval_dataset` is None.")
+
+ if finetuning_args.compute_accuracy:
+ raise ValueError("Cannot use `predict_with_generate` and `compute_accuracy` together.")
+
+ if training_args.do_train and model_args.quantization_device_map == "auto":
+ raise ValueError("Cannot use device map for quantized models in training.")
+
+ if finetuning_args.pissa_init and is_deepspeed_zero3_enabled():
+ raise ValueError("Please use scripts/pissa_init.py to initialize PiSSA in DeepSpeed ZeRO-3.")
+
+ if finetuning_args.pure_bf16:
+ if not (is_torch_bf16_gpu_available() or (is_torch_npu_available() and torch.npu.is_bf16_supported())):
+ raise ValueError("This device does not support `pure_bf16`.")
+
+ if is_deepspeed_zero3_enabled():
+ raise ValueError("`pure_bf16` is incompatible with DeepSpeed ZeRO-3.")
+
+ if training_args.parallel_mode == ParallelMode.DISTRIBUTED:
+ if finetuning_args.use_galore and finetuning_args.galore_layerwise:
+ raise ValueError("Distributed training does not support layer-wise GaLore.")
+
+ if finetuning_args.use_apollo and finetuning_args.apollo_layerwise:
+ raise ValueError("Distributed training does not support layer-wise APOLLO.")
+
+ if finetuning_args.use_badam:
+ if finetuning_args.badam_mode == "ratio":
+ raise ValueError("Ratio-based BAdam does not yet support distributed training, use layer-wise BAdam.")
+ elif not is_deepspeed_zero3_enabled():
+ raise ValueError("Layer-wise BAdam only supports DeepSpeed ZeRO-3 training.")
+
+ if training_args.deepspeed is not None and (finetuning_args.use_galore or finetuning_args.use_apollo):
+ raise ValueError("GaLore and APOLLO are not compatible with DeepSpeed yet.")
+
+ if model_args.infer_backend != EngineName.HF:
+ raise ValueError("vLLM/SGLang backend is only available for API, CLI and Web.")
+
+ if model_args.use_unsloth and is_deepspeed_zero3_enabled():
+ raise ValueError("Unsloth is incompatible with DeepSpeed ZeRO-3.")
+
+ _set_env_vars()
+ _verify_model_args(model_args, data_args, finetuning_args)
+ _check_extra_dependencies(model_args, finetuning_args, training_args)
+
+ if (
+ training_args.do_train
+ and finetuning_args.finetuning_type == "lora"
+ and model_args.quantization_bit is None
+ and model_args.resize_vocab
+ and finetuning_args.additional_target is None
+ ):
+ logger.warning_rank0(
+ "Remember to add embedding layers to `additional_target` to make the added tokens trainable."
+ )
+
+ if training_args.do_train and model_args.quantization_bit is not None and (not model_args.upcast_layernorm):
+ logger.warning_rank0("We recommend enabling `upcast_layernorm` in quantized training.")
+
+ if training_args.do_train and (not training_args.fp16) and (not training_args.bf16):
+ logger.warning_rank0("We recommend enabling mixed precision training.")
+
+ if (
+ training_args.do_train
+ and (finetuning_args.use_galore or finetuning_args.use_apollo)
+ and not finetuning_args.pure_bf16
+ ):
+ logger.warning_rank0(
+ "Using GaLore or APOLLO with mixed precision training may significantly increase GPU memory usage."
+ )
+
+ if (not training_args.do_train) and model_args.quantization_bit is not None:
+ logger.warning_rank0("Evaluating the model in 4/8-bit mode may result in lower scores.")
+
+ if (not training_args.do_train) and finetuning_args.stage == "dpo" and finetuning_args.ref_model is None:
+ logger.warning_rank0("Specify `ref_model` for computing rewards at evaluation.")
+
+ # Post-process training arguments
+ training_args.generation_max_length = training_args.generation_max_length or data_args.cutoff_len
+ training_args.generation_num_beams = data_args.eval_num_beams or training_args.generation_num_beams
+ training_args.remove_unused_columns = False # important for multimodal dataset
+
+ if finetuning_args.finetuning_type == "lora":
+ # https://github.com/huggingface/transformers/blob/v4.50.0/src/transformers/trainer.py#L782
+ training_args.label_names = training_args.label_names or ["labels"]
+
+ if "swanlab" in training_args.report_to and finetuning_args.use_swanlab:
+ training_args.report_to.remove("swanlab")
+
+ if (
+ training_args.parallel_mode == ParallelMode.DISTRIBUTED
+ and training_args.ddp_find_unused_parameters is None
+ and finetuning_args.finetuning_type == "lora"
+ ):
+ logger.info_rank0("Set `ddp_find_unused_parameters` to False in DDP training since LoRA is enabled.")
+ training_args.ddp_find_unused_parameters = False
+
+ if finetuning_args.stage in ["rm", "ppo"] and finetuning_args.finetuning_type in ["full", "freeze"]:
+ can_resume_from_checkpoint = False
+ if training_args.resume_from_checkpoint is not None:
+ logger.warning_rank0("Cannot resume from checkpoint in current stage.")
+ training_args.resume_from_checkpoint = None
+ else:
+ can_resume_from_checkpoint = True
+
+ if (
+ training_args.resume_from_checkpoint is None
+ and training_args.do_train
+ and os.path.isdir(training_args.output_dir)
+ and not training_args.overwrite_output_dir
+ and can_resume_from_checkpoint
+ ):
+ last_checkpoint = get_last_checkpoint(training_args.output_dir)
+ if last_checkpoint is None and any(
+ os.path.isfile(os.path.join(training_args.output_dir, name)) for name in CHECKPOINT_NAMES
+ ):
+ raise ValueError("Output directory already exists and is not empty. Please set `overwrite_output_dir`.")
+
+ if last_checkpoint is not None:
+ training_args.resume_from_checkpoint = last_checkpoint
+ logger.info_rank0(f"Resuming training from {training_args.resume_from_checkpoint}.")
+ logger.info_rank0("Change `output_dir` or use `overwrite_output_dir` to avoid this behavior.")
+
+ if (
+ finetuning_args.stage in ["rm", "ppo"]
+ and finetuning_args.finetuning_type == "lora"
+ and training_args.resume_from_checkpoint is not None
+ ):
+ logger.warning_rank0(
+ f"Add {training_args.resume_from_checkpoint} to `adapter_name_or_path` to resume training from checkpoint."
+ )
+
+ # Post-process model arguments
+ if training_args.bf16 or finetuning_args.pure_bf16:
+ model_args.compute_dtype = torch.bfloat16
+ elif training_args.fp16:
+ model_args.compute_dtype = torch.float16
+
+ model_args.device_map = {"": get_current_device()}
+ model_args.model_max_length = data_args.cutoff_len
+ model_args.block_diag_attn = data_args.neat_packing
+ data_args.packing = data_args.packing if data_args.packing is not None else finetuning_args.stage == "pt"
+
+ # Log on each process the small summary
+ logger.info(
+ f"Process rank: {training_args.process_index}, "
+ f"world size: {training_args.world_size}, device: {training_args.device}, "
+ f"distributed training: {training_args.parallel_mode == ParallelMode.DISTRIBUTED}, "
+ f"compute dtype: {str(model_args.compute_dtype)}"
+ )
+ transformers.set_seed(training_args.seed)
+
+ return model_args, data_args, training_args, finetuning_args, generating_args
+
+
+def get_infer_args(args: Optional[Union[dict[str, Any], list[str]]] = None) -> _INFER_CLS:
+ model_args, data_args, finetuning_args, generating_args = _parse_infer_args(args)
+
+ # Setup logging
+ _set_transformers_logging()
+
+ # Check arguments
+ if model_args.infer_backend == "vllm":
+ if finetuning_args.stage != "sft":
+ raise ValueError("vLLM engine only supports auto-regressive models.")
+
+ if model_args.quantization_bit is not None:
+ raise ValueError("vLLM engine does not support bnb quantization (GPTQ and AWQ are supported).")
+
+ if model_args.rope_scaling is not None:
+ raise ValueError("vLLM engine does not support RoPE scaling.")
+
+ if model_args.adapter_name_or_path is not None and len(model_args.adapter_name_or_path) != 1:
+ raise ValueError("vLLM only accepts a single adapter. Merge them first.")
+
+ _set_env_vars()
+ _verify_model_args(model_args, data_args, finetuning_args)
+ _check_extra_dependencies(model_args, finetuning_args)
+
+ # Post-process model arguments
+ if model_args.export_dir is not None and model_args.export_device == "cpu":
+ model_args.device_map = {"": torch.device("cpu")}
+ if data_args.cutoff_len != DataArguments().cutoff_len: # override cutoff_len if it is not default
+ model_args.model_max_length = data_args.cutoff_len
+ else:
+ model_args.device_map = "auto"
+
+ return model_args, data_args, finetuning_args, generating_args
+
+
+def get_eval_args(args: Optional[Union[dict[str, Any], list[str]]] = None) -> _EVAL_CLS:
+ model_args, data_args, eval_args, finetuning_args = _parse_eval_args(args)
+
+ # Setup logging
+ _set_transformers_logging()
+
+ # Check arguments
+ if model_args.infer_backend != EngineName.HF:
+ raise ValueError("vLLM/SGLang backend is only available for API, CLI and Web.")
+
+ _set_env_vars()
+ _verify_model_args(model_args, data_args, finetuning_args)
+ _check_extra_dependencies(model_args, finetuning_args)
+
+ model_args.device_map = "auto"
+
+ transformers.set_seed(eval_args.seed)
+
+ return model_args, data_args, eval_args, finetuning_args
diff --git a/src/train_sft/src/llamafactory/hparams/training_args.py b/src/train_sft/src/llamafactory/hparams/training_args.py
new file mode 100644
index 0000000000000000000000000000000000000000..38cdf6af0e01d5bff29760d63d62409ec09b10b7
--- /dev/null
+++ b/src/train_sft/src/llamafactory/hparams/training_args.py
@@ -0,0 +1,86 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+from dataclasses import dataclass, field
+from typing import Literal, Optional, Union
+
+from transformers import Seq2SeqTrainingArguments
+from transformers.training_args import _convert_str_dict
+
+from ..extras.misc import use_ray
+
+
+@dataclass
+class RayArguments:
+ r"""Arguments pertaining to the Ray training."""
+
+ ray_run_name: Optional[str] = field(
+ default=None,
+ metadata={"help": "The training results will be saved at `/ray_run_name`."},
+ )
+ ray_storage_path: str = field(
+ default="./saves",
+ metadata={"help": "The storage path to save training results to."},
+ )
+ ray_storage_filesystem: Optional[Literal["s3", "gs", "gcs"]] = field(
+ default=None,
+ metadata={"help": "The storage filesystem to use. If not specified, the local filesystem will be used."},
+ )
+ ray_num_workers: int = field(
+ default=1,
+ metadata={"help": "The number of workers for Ray training. Default is 1 worker."},
+ )
+ resources_per_worker: Union[dict, str] = field(
+ default_factory=lambda: {"GPU": 1},
+ metadata={"help": "The resources per worker for Ray training. Default is to use 1 GPU per worker."},
+ )
+ placement_strategy: Literal["SPREAD", "PACK", "STRICT_SPREAD", "STRICT_PACK"] = field(
+ default="PACK",
+ metadata={"help": "The placement strategy for Ray training. Default is PACK."},
+ )
+ ray_init_kwargs: Optional[Union[dict, str]] = field(
+ default=None,
+ metadata={"help": "The arguments to pass to ray.init for Ray training. Default is None."},
+ )
+
+ def __post_init__(self):
+ self.use_ray = use_ray()
+ if isinstance(self.resources_per_worker, str) and self.resources_per_worker.startswith("{"):
+ self.resources_per_worker = _convert_str_dict(json.loads(self.resources_per_worker))
+
+ if isinstance(self.ray_init_kwargs, str) and self.ray_init_kwargs.startswith("{"):
+ self.ray_init_kwargs = _convert_str_dict(json.loads(self.ray_init_kwargs))
+
+ if self.ray_storage_filesystem is not None:
+ if self.ray_storage_filesystem not in ["s3", "gs", "gcs"]:
+ raise ValueError(
+ f"ray_storage_filesystem must be one of ['s3', 'gs', 'gcs'], got {self.ray_storage_filesystem}."
+ )
+
+ import pyarrow.fs as fs
+
+ if self.ray_storage_filesystem == "s3":
+ self.ray_storage_filesystem = fs.S3FileSystem()
+ elif self.ray_storage_filesystem in ("gs", "gcs"):
+ self.ray_storage_filesystem = fs.GcsFileSystem()
+
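`resources_per_worker` and `ray_init_kwargs` accept either a dict or a JSON object string, decoded in `__post_init__` via transformers' internal `_convert_str_dict`. A minimal standalone sketch of that decoding step (`parse_resources` is an illustrative helper, not part of the codebase):

```python
import json

def parse_resources(value):
    """Accept a dict as-is, or decode a JSON object string like '{"GPU": 1}'."""
    if isinstance(value, str) and value.startswith("{"):
        value = json.loads(value)
    if not isinstance(value, dict):
        raise TypeError("resources_per_worker must be a dict or a JSON object string")
    return value

# A CLI-style string value decodes to the same spec as the dict default.
res = parse_resources('{"GPU": 1, "CPU": 4}')
```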
+
+@dataclass
+class TrainingArguments(RayArguments, Seq2SeqTrainingArguments):
+ r"""Arguments pertaining to the trainer."""
+
+ def __post_init__(self):
+ Seq2SeqTrainingArguments.__post_init__(self)
+ RayArguments.__post_init__(self)
diff --git a/src/train_sft/src/llamafactory/improved_system_prompt.py b/src/train_sft/src/llamafactory/improved_system_prompt.py
new file mode 100644
index 0000000000000000000000000000000000000000..9327fe53f2e2da0a1cec16702189db800b42a225
--- /dev/null
+++ b/src/train_sft/src/llamafactory/improved_system_prompt.py
@@ -0,0 +1,175 @@
+"""
+Improved system prompt for 3D indoor scene layout generation.
+
+This file contains the enhanced system prompt, which incorporates best practices
+from expert layout design methodologies.
+"""
+
+IMPROVED_SYSTEM_PROMPT = """You are an expert AI Interior Architect. Your task is to generate a complete 3D indoor scene layout based on the user's design brief.
+
+You FIRST reason about the design process in a detailed internal monologue, and THEN you provide the final layout as a JSON object.
+The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE put in <answer> </answer>.
+
+---
+
+### **Contents for the `<think>` tag: The Design Monologue**
+
+Your internal monologue is your Chain-of-Thought (CoT). It must follow a rigorous **Structured Hierarchical Spatial Reasoning (SHSR)** process, simulating a realistic, iterative design workflow. Structure your reasoning as follows:
+
+**Step 1: Scene Foundation & Inventory**
+- **Deconstruct the Brief:** Start by analyzing the user's request to identify the room type, required objects, desired style/mood, and any constraints.
+- **Define Room Envelope:** Establish the room's boundaries and dimensions.
+- **Object Manifest:** List all the objects you will place, along with their determined dimensions (Width x Height x Depth).
+
+**Step 2: Initial Group-Oriented Layout (First Draft)**
+- **Group-by-Group Design:** Do not place objects randomly. Instead, tackle the layout one functional group at a time (e.g., 'Dining Area', 'Seating Area').
+- **Anchor and Arrange:** For each group, first identify its primary "anchor" object (e.g., the sofa, the dining table). Place this anchor to define the group's location. Then, arrange the other objects within the group relative to the anchor, ensuring functional relationships.
+- **Acknowledge First Draft Status:** This initial layout is your first pass and may have room for improvement.
+
+**Step 3: Critical Review & Iterative Refinement**
+- **Self-Critique:** After the initial layout is complete, critically review your own work from a designer's perspective. Identify subtle flaws related to circulation paths, visual balance, ergonomic clearances, or functional conflicts between groups.
+- **Justified Corrections:** In a step-by-step manner, describe the refinements you are making. For each correction, state the problem you identified and the specific adjustment (e.g., moving or rotating an object) you are applying to fix it. Justify each refinement with a clear design principle (e.g., "To improve flow, I am shifting the coffee table 0.5m away from the sofa.").
+
+**Step 4: Final Cohesion and Verification**
+- **Inter-Group Analysis:** Once all refinements are done, conclude your reasoning by analyzing the final relationships between the groups. Explain how their placement creates a cohesive, harmonious, and functional overall scene.
+- **Final Check:** Verify that the final layout is free of object collisions, all items are within the room boundaries, and all aspects of the user's brief have been successfully met.
+
+---
+
+### **Contents for the `<answer>` tag: The Final JSON Layout**
+
+- The final answer is the complete 3D layout as a single, valid JSON object.
+- This JSON is the **only** content allowed within the `<answer>` tag. No explanations or surrounding text.
+
+### **JSON Schema for the `<answer>` tag:**
+{
+  "room_type": "<room type>",
+  "room_envelope": {
+    "bounds_top": [
+      [<x>, <y>, <z>],
+      // ... list of [x, y, z] coordinates for each corner of the ceiling polygon
+    ],
+    "bounds_bottom": [
+      [<x>, <y>, <z>],
+      // ... list of [x, y, z] coordinates for each corner of the floor polygon
+    ]
+  },
+  "groups": [
+    {
+      "group_name": "<group name>",
+      "group_type": "<group type>",
+      "description": "<group description>",
+      "objects": [
+        {
+          "desc": "<object description>",
+          "size": [<width>, <height>, <depth>],
+          "pos": [<x>, <y>, <z>],
+          "rot": [<qx>, <qy>, <qz>, <qw>]
+        }
+        // ... other objects in this group
+      ]
+    }
+    // ... other groups in the room
+  ]
+}"""
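The prompt's "Final Check" demands a collision-free, in-bounds layout, which can be spot-checked programmatically on the emitted JSON. Below is a minimal sketch assuming axis-aligned bounding boxes centered at `pos` with full extents `size`; it ignores `rot`, which a complete validator would have to account for:

```python
def aabb_overlap(pos_a, size_a, pos_b, size_b):
    """True if two axis-aligned boxes (center `pos`, full extents `size`) intersect on every axis."""
    return all(
        abs(pa - pb) * 2 < (sa + sb)
        for pa, sa, pb, sb in zip(pos_a, size_a, pos_b, size_b)
    )

def find_collisions(objects):
    """Return index pairs of mutually overlapping objects in a flat object list."""
    pairs = []
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            if aabb_overlap(objects[i]["pos"], objects[i]["size"],
                            objects[j]["pos"], objects[j]["size"]):
                pairs.append((i, j))
    return pairs

# A toy scene: the coffee table intersects the sofa; the armchair is clear.
scene = [
    {"pos": [0.0, 0.4, 0.0], "size": [2.0, 0.8, 0.9]},  # sofa
    {"pos": [0.5, 0.2, 0.3], "size": [1.0, 0.4, 0.6]},  # coffee table
    {"pos": [3.0, 0.4, 3.0], "size": [0.8, 0.8, 0.8]},  # armchair
]
```

The strict inequality treats boxes that merely touch as non-colliding, which matches the usual reading of "free of object collisions".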
+
+# Original system prompt for comparison
+ORIGINAL_SYSTEM_PROMPT = """You are an expert AI Interior Architect. Your task is to generate a complete 3D indoor scene layout based on the user's design brief.
+
+You FIRST think about the reasoning process as an internal monologue and then provide the final answer.
+The reasoning process MUST BE enclosed within <think> </think> tags. The final answer MUST BE put in <answer> </answer>.
+
+**Contents for the `<think>` tag:**
+- Your internal monologue should be a detailed, step-by-step reasoning path in plain text.
+- This path must demonstrate your Structured Hierarchical Spatial Reasoning (SHSR) process, explaining how you arrive at the final design from the user's brief.
+
+**Contents for the `<answer>` tag:**
+- The final answer is the complete 3D layout as a single, valid JSON object.
+- This JSON is the only content allowed within the <answer> tag.
+
+**Core Principles & Technical Standards (to be reflected in your reasoning and final JSON):**
+- **Methodology:** SHSR
+- **Coordinate System:** Right-handed, Y-up. Units in Meters.
+- **Rotation:** Quaternions [x, y, z, w].
+- **Validation:** The layout must be free of object collisions and boundary violations.
+
+**JSON Schema for the `<answer>` tag:**
+{
+  "room_type": "<room type>",
+  "room_envelope": {
+    "bounds_top": [
+      [<x>, <y>, <z>],
+      // ... list of [x, y, z] coordinates for each corner of the ceiling polygon
+    ],
+    "bounds_bottom": [
+      [<x>, <y>, <z>],
+      // ... list of [x, y, z] coordinates for each corner of the floor polygon
+    ]
+  },
+  "groups": [
+    {
+      "group_name": "<group name>",
+      "group_type": "<group type>",
+      "description": "<group description>",
+      "objects": [
+        {
+          "desc": "<object description>",
+          "size": [<width>, <height>, <depth>],
+          "pos": [<x>, <y>, <z>],
+          "rot": [<qx>, <qy>, <qz>, <qw>]
+        }
+        // ... other objects in this group
+      ]
+    }
+    // ... other groups in the room
+  ]
+}"""
+
+def get_improved_prompt():
+ """
+ Returns the improved system prompt with enhanced design principles.
+
+ Returns:
+ str: The enhanced system prompt string
+ """
+ return IMPROVED_SYSTEM_PROMPT
+
+def get_original_prompt():
+ """
+ Returns the original system prompt for comparison.
+
+ Returns:
+ str: The original system prompt string
+ """
+ return ORIGINAL_SYSTEM_PROMPT
+
+def compare_prompts():
+ """
+ Print a comparison of key differences between original and improved prompts.
+ """
+ print("=== PROMPT COMPARISON ===\n")
+
+ print("🔍 KEY IMPROVEMENTS IN THE ENHANCED PROMPT:")
+ print("1. ✅ Added a 4-step structured reasoning framework (SHSR)")
+ print("2. ✅ Emphasized anchor objects and functional grouping")
+ print("3. ✅ Added an explicit self-critique and iterative refinement step")
+ print("4. ✅ Required inter-group cohesion analysis")
+ print("5. ✅ Included a final collision and boundary verification step")
+
+ print("\n📊 ORIGINAL PROMPT LIMITATIONS:")
+ print("1. ❌ Vague reasoning guidance")
+ print("2. ❌ No specific layout heuristics")
+ print("3. ❌ Missing design prioritization")
+ print("4. ❌ No explicit constraint checking")
+ print("5. ❌ Limited spatial relationship guidance")
+
+if __name__ == "__main__":
+ print("3D Layout Generation System Prompts")
+ print("=" * 40)
+ compare_prompts()
+
+ print(f"\n📝 Improved prompt length: {len(IMPROVED_SYSTEM_PROMPT)} characters")
+ print(f"📝 Original prompt length: {len(ORIGINAL_SYSTEM_PROMPT)} characters")
+ print(f"📈 Enhancement: +{len(IMPROVED_SYSTEM_PROMPT) - len(ORIGINAL_SYSTEM_PROMPT)} characters")
\ No newline at end of file
diff --git a/src/train_sft/src/llamafactory/launcher.py b/src/train_sft/src/llamafactory/launcher.py
new file mode 100644
index 0000000000000000000000000000000000000000..169b042ab023308bd3d25bd03e6658e3e0f4b9da
--- /dev/null
+++ b/src/train_sft/src/llamafactory/launcher.py
@@ -0,0 +1,23 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from llamafactory.train.tuner import run_exp # use absolute import
+
+
+def launch():
+ run_exp()
+
+
+if __name__ == "__main__":
+ launch()
diff --git a/src/train_sft/src/llamafactory/third_party/__init__.py b/src/train_sft/src/llamafactory/third_party/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/src/train_sft/src/llamafactory/third_party/muon/__init__.py b/src/train_sft/src/llamafactory/third_party/muon/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..afa615d0df2163ea271fff8e128aa8c81224dd8f
--- /dev/null
+++ b/src/train_sft/src/llamafactory/third_party/muon/__init__.py
@@ -0,0 +1,18 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from .muon import Muon
+
+
+__all__ = ["Muon"]
diff --git a/src/train_sft/src/llamafactory/third_party/muon/muon.py b/src/train_sft/src/llamafactory/third_party/muon/muon.py
new file mode 100644
index 0000000000000000000000000000000000000000..d7482c36b8196cf1dfaa8b0ae441b1cfd7ae5dc3
--- /dev/null
+++ b/src/train_sft/src/llamafactory/third_party/muon/muon.py
@@ -0,0 +1,226 @@
+# Copyright 2025 Moonshot AI and the LlamaFactory team.
+#
+# This code is based on the MoonshotAI's Moonlight library.
+# https://github.com/MoonshotAI/Moonlight/blob/master/examples/toy_train.py
+# and the Keller Jordan's Muon library.
+# https://github.com/KellerJordan/Muon/blob/master/muon.py
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# MIT License
+#
+# Copyright (c) 2025 Moonshot AI
+# Copyright (c) 2024 Keller Jordan
+#
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to deal
+# in the Software without restriction, including without limitation the rights
+# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+# copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+#
+# The above copyright notice and this permission notice shall be included in all
+# copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+# SOFTWARE.
+
+import math
+
+import torch
+
+
+def zeropower_via_newtonschulz5(G: "torch.Tensor", steps: int) -> "torch.Tensor":
+ """Newton-Schulz iteration to compute the zeroth power / orthogonalization of G.
+
+ We opt to use a quintic iteration whose coefficients are selected to maximize the slope at zero.
+ For the purpose of minimizing steps, it turns out to be empirically effective to keep increasing
+ the slope at zero even beyond the point where the iteration no longer converges all the way to
+ one everywhere on the interval. This iteration therefore does not produce UV^T but rather something
+ like US'V^T where S' is diagonal with S_{ii}' ~ Uniform(0.5, 1.5), which turns out not to hurt model
+ performance at all relative to UV^T, where USV^T = G is the SVD.
+ """
+ assert len(G.shape) == 2
+ a, b, c = (3.4445, -4.7750, 2.0315)
+ X = G.bfloat16()
+ if G.size(0) > G.size(1):
+ X = X.T
+ # Ensure spectral norm is at most 1
+ X = X / (X.norm() + 1e-7)
+ # Perform the NS iterations
+ for _ in range(steps):
+ A = X @ X.T
+ B = b * A + c * A @ A # adapted from suggestion by @jxbz, @leloykun, and @YouJiacheng
+ X = a * X + B @ X
+
+ if G.size(0) > G.size(1):
+ X = X.T
+ return X
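To see what the iteration does to a matrix's spectrum, here is a float64 NumPy re-derivation of the same quintic map (the original runs in bfloat16 on torch tensors; NumPy is assumed to be available). Per the docstring, the singular values should land in a band around 1 rather than exactly at 1:

```python
import numpy as np

def ns_orthogonalize(G, steps=5):
    """Pure-NumPy sketch of the quintic Newton-Schulz iteration above (float64, no bf16)."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (np.linalg.norm(G) + 1e-7)  # Frobenius norm bounds the spectral norm by 1
    transpose = G.shape[0] > G.shape[1]
    if transpose:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transpose else X

# Build a matrix with known singular values spread over an order of magnitude.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))
G = U @ np.diag([1.0, 0.5, 0.1]) @ V.T

# After 5 steps, all singular values are compressed toward 1.
sv = np.linalg.svd(ns_orthogonalize(G), compute_uv=False)
```

Because the map acts on each singular value independently (it is a polynomial in `X @ X.T`), the widely spread inputs (1.0, 0.5, 0.1) all end up in a narrow band around 1 without any SVD being computed inside the iteration.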
+
+
+class Muon(torch.optim.Optimizer):
+ """Muon - MomentUm Orthogonalized by Newton-schulz.
+
+ Muon internally runs standard SGD-momentum, and then performs an orthogonalization post-
+ processing step, in which each 2D parameter's update is replaced with the nearest orthogonal
+ matrix. To efficiently orthogonalize each update, we use a Newton-Schulz iteration, which has
+ the advantage that it can be stably run in bfloat16 on the GPU.
+
+ Some warnings:
+ - We believe this optimizer is unlikely to work well for training with small batch size.
+ - We believe it may not work well for finetuning pretrained models, but we haven't tested this.
+
+ Arguments:
+ muon_params: The parameters to be optimized by Muon (2D weight matrices only).
+ lr: The learning rate. The updates will have spectral norm of `lr`.
+ wd: The weight decay, applied to both the Muon and the AdamW parameter groups.
+ momentum: The momentum used by the internal SGD. (0.95 is a good default)
+ nesterov: Whether to use Nesterov-style momentum in the internal SGD. (recommended)
+ ns_steps: The number of Newton-Schulz iterations to run. (5 is usually enough)
+ adamw_params: The parameters to be optimized by AdamW instead of Muon, e.g.
+ {0, 1}-D parameters and the embedding or lm_head weights.
+ adamw_betas: The betas for the internal AdamW.
+ adamw_eps: The epsilon for the internal AdamW.
+ """
+
+ def __init__(
+ self,
+ lr=1e-3,
+ wd=0.1,
+ muon_params=None,
+ momentum=0.95,
+ nesterov=True,
+ ns_steps=5,
+ adamw_params=None,
+ adamw_betas=(0.9, 0.95),
+ adamw_eps=1e-8,
+ ):
+ defaults = dict(
+ lr=lr,
+ wd=wd,
+ momentum=momentum,
+ nesterov=nesterov,
+ ns_steps=ns_steps,
+ adamw_betas=adamw_betas,
+ adamw_eps=adamw_eps,
+ )
+
+ params = list(muon_params)
+ adamw_params = list(adamw_params) if adamw_params is not None else []
+ params.extend(adamw_params)
+ super().__init__(params, defaults)
+ # Sort parameters into those for which we will use Muon, and those for which we will not
+ for p in muon_params:
+ # Muon handles only 2D parameters; embeddings, heads and {0, 1}-D tensors belong in `adamw_params`
+ assert p.ndim == 2, p.ndim
+ self.state[p]["use_muon"] = True
+ for p in adamw_params:
+ # Do not use Muon for parameters in adamw_params
+ self.state[p]["use_muon"] = False
+
+ def adjust_lr_for_muon(self, lr: float, param_shape: list[int]) -> float:
+ A, B = param_shape[:2]
+ # We adjust the learning rate based on the shape of the parameter matrix,
+ # as described in the paper
+ adjusted_ratio = 0.2 * math.sqrt(max(A, B))
+ adjusted_lr = lr * adjusted_ratio
+ return adjusted_lr
+
+ def step(self, closure=None):
+ """Perform a single optimization step.
+
+ Args:
+ closure (Callable, optional): A closure that reevaluates the model
+ and returns the loss.
+ """
+ loss = None
+ if closure is not None:
+ with torch.enable_grad():
+ loss = closure()
+
+ for group in self.param_groups:
+ # Muon loop
+ params = [p for p in group["params"] if self.state[p]["use_muon"]]
+ lr = group["lr"]
+ wd = group["wd"]
+ momentum = group["momentum"]
+
+ # generate weight updates in distributed fashion
+ for p in params:
+ # sanity check: skip parameters without gradients
+ g = p.grad
+ if g is None:
+ continue
+ if g.ndim > 2:
+ g = g.view(g.size(0), -1)
+
+ # calc update
+ state = self.state[p]
+ if "momentum_buffer" not in state:
+ state["momentum_buffer"] = torch.zeros_like(g)
+ buf = state["momentum_buffer"]
+ buf.mul_(momentum).add_(g)
+ if group["nesterov"]:
+ g = g.add(buf, alpha=momentum)
+ else:
+ g = buf
+ u = zeropower_via_newtonschulz5(g, steps=group["ns_steps"])
+
+ # scale update
+ adjusted_lr = self.adjust_lr_for_muon(lr, p.shape)
+
+ # apply weight decay
+ p.data.mul_(1 - lr * wd)
+
+ # apply update
+ p.data.add_(u, alpha=-adjusted_lr)
+
+ # Adam backup
+ params = [p for p in group["params"] if not self.state[p]["use_muon"]]
+ lr = group["lr"]
+ beta1, beta2 = group["adamw_betas"]
+ eps = group["adamw_eps"]
+ weight_decay = group["wd"]
+
+ for p in params:
+ g = p.grad
+ if g is None:
+ continue
+ state = self.state[p]
+ if "step" not in state:
+ state["step"] = 0
+ state["moment1"] = torch.zeros_like(g)
+ state["moment2"] = torch.zeros_like(g)
+ state["step"] += 1
+ step = state["step"]
+ buf1 = state["moment1"]
+ buf2 = state["moment2"]
+ buf1.lerp_(g, 1 - beta1)
+ buf2.lerp_(g.square(), 1 - beta2)
+
+ g = buf1 / (eps + buf2.sqrt())
+
+ bias_correction1 = 1 - beta1**step
+ bias_correction2 = 1 - beta2**step
+ scale = bias_correction1 / bias_correction2**0.5
+ p.data.mul_(1 - lr * weight_decay)
+ p.data.add_(g, alpha=-lr / scale)
+
+ return loss
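The per-matrix learning-rate scaling performed by `adjust_lr_for_muon` can be checked in isolation. A minimal, torch-free sketch of the rule used above (the standalone function name mirrors the method but is my own):

```python
import math

def adjust_lr_for_muon(lr: float, param_shape) -> float:
    # Mirrors the method above: scale lr by 0.2 * sqrt(max(A, B)),
    # so larger matrices receive proportionally larger updates.
    A, B = param_shape[:2]
    return lr * 0.2 * math.sqrt(max(A, B))

# A 1024x4096 projection matrix gets roughly a 12.8x larger step than the base lr.
print(adjust_lr_for_muon(1e-3, (1024, 4096)))
```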
diff --git a/src/train_sft/src/llamafactory/train/__init__.py b/src/train_sft/src/llamafactory/train/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/src/train_sft/src/llamafactory/train/callbacks.py b/src/train_sft/src/llamafactory/train/callbacks.py
new file mode 100644
index 0000000000000000000000000000000000000000..82a37a7602feb2b068e5ad2fa578d95a7f1ca5c7
--- /dev/null
+++ b/src/train_sft/src/llamafactory/train/callbacks.py
@@ -0,0 +1,385 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import os
+import signal
+import sys
+import time
+from concurrent.futures import ThreadPoolExecutor
+from datetime import timedelta
+from typing import TYPE_CHECKING, Any, Optional
+
+import torch
+import transformers
+from peft import PeftModel
+from transformers import PreTrainedModel, ProcessorMixin, TrainerCallback
+from transformers.trainer_utils import PREFIX_CHECKPOINT_DIR, has_length
+from transformers.utils import (
+ SAFE_WEIGHTS_NAME,
+ WEIGHTS_NAME,
+ is_safetensors_available,
+)
+from typing_extensions import override
+
+from ..extras import logging
+from ..extras.constants import TRAINER_LOG, V_HEAD_SAFE_WEIGHTS_NAME, V_HEAD_WEIGHTS_NAME
+from ..extras.misc import get_peak_memory, is_env_enabled, use_ray
+
+
+if is_safetensors_available():
+ from safetensors import safe_open
+ from safetensors.torch import save_file
+
+
+if TYPE_CHECKING:
+ from transformers import TrainerControl, TrainerState, TrainingArguments
+ from trl import AutoModelForCausalLMWithValueHead
+
+ from ..hparams import DataArguments, FinetuningArguments, GeneratingArguments, ModelArguments
+
+
+logger = logging.get_logger(__name__)
+
+
+def fix_valuehead_checkpoint(
+ model: "AutoModelForCausalLMWithValueHead", output_dir: str, safe_serialization: bool
+) -> None:
+ r"""Fix the valuehead checkpoint files.
+
+ The model is already unwrapped.
+
+ There are three cases:
+ 1. full tuning without ds_zero3: state_dict = {"model.layers.*": ..., "v_head.summary.*": ...}
+ 2. lora tuning without ds_zero3: state_dict = {"v_head.summary.*": ...}
+ 3. under deepspeed zero3: state_dict = {"pretrained_model.model.layers.*": ..., "v_head.summary.*": ...}
+
+ We assume `stage3_gather_16bit_weights_on_model_save=true`.
+ """
+ if not isinstance(model.pretrained_model, (PreTrainedModel, PeftModel)):
+ return
+
+ if safe_serialization:
+ path_to_checkpoint = os.path.join(output_dir, SAFE_WEIGHTS_NAME)
+ with safe_open(path_to_checkpoint, framework="pt", device="cpu") as f:
+ state_dict: dict[str, torch.Tensor] = {key: f.get_tensor(key).clone() for key in f.keys()}
+ else:
+ path_to_checkpoint = os.path.join(output_dir, WEIGHTS_NAME)
+ state_dict: dict[str, torch.Tensor] = torch.load(path_to_checkpoint, map_location="cpu", weights_only=True)
+
+ os.remove(path_to_checkpoint)
+ decoder_state_dict, v_head_state_dict = {}, {}
+ for name, param in state_dict.items():
+ if name.startswith("v_head."):
+ v_head_state_dict[name] = param
+ else:
+ decoder_state_dict[name.replace("pretrained_model.", "", 1)] = param
+
+ model.pretrained_model.save_pretrained(
+ output_dir, state_dict=decoder_state_dict or None, safe_serialization=safe_serialization
+ )
+
+ if safe_serialization:
+ save_file(v_head_state_dict, os.path.join(output_dir, V_HEAD_SAFE_WEIGHTS_NAME), metadata={"format": "pt"})
+ else:
+ torch.save(v_head_state_dict, os.path.join(output_dir, V_HEAD_WEIGHTS_NAME))
+
+ logger.info_rank0(f"Value head model saved at: {output_dir}")
+
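The key-splitting logic in `fix_valuehead_checkpoint` can be illustrated with a plain dictionary. A toy sketch, where the key names are hypothetical stand-ins for real checkpoint entries:

```python
def split_valuehead_state_dict(state_dict):
    # Separate value-head tensors from decoder tensors, stripping the
    # "pretrained_model." prefix that DeepSpeed ZeRO-3 checkpoints carry.
    decoder, v_head = {}, {}
    for name, param in state_dict.items():
        if name.startswith("v_head."):
            v_head[name] = param
        else:
            decoder[name.replace("pretrained_model.", "", 1)] = param
    return decoder, v_head

decoder, v_head = split_valuehead_state_dict({
    "pretrained_model.model.layers.0.mlp.weight": 1,
    "v_head.summary.weight": 2,
})
```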
+
+class FixValueHeadModelCallback(TrainerCallback):
+ r"""A callback for fixing the checkpoint for valuehead models."""
+
+ @override
+ def on_save(self, args: "TrainingArguments", state: "TrainerState", control: "TrainerControl", **kwargs):
+ if args.should_save:
+ output_dir = os.path.join(args.output_dir, f"{PREFIX_CHECKPOINT_DIR}-{state.global_step}")
+ fix_valuehead_checkpoint(
+ model=kwargs.pop("model"), output_dir=output_dir, safe_serialization=args.save_safetensors
+ )
+
+
+class SaveProcessorCallback(TrainerCallback):
+ r"""A callback for saving the processor."""
+
+ def __init__(self, processor: "ProcessorMixin") -> None:
+ self.processor = processor
+
+ @override
+ def on_save(self, args: "TrainingArguments", state: "TrainerState", control: "TrainerControl", **kwargs):
+ if args.should_save:
+ output_dir = os.path.join(args.output_dir, f"{PREFIX_CHECKPOINT_DIR}-{state.global_step}")
+ self.processor.save_pretrained(output_dir)
+
+ @override
+ def on_train_end(self, args: "TrainingArguments", state: "TrainerState", control: "TrainerControl", **kwargs):
+ if args.should_save:
+ self.processor.save_pretrained(args.output_dir)
+
+
+class PissaConvertCallback(TrainerCallback):
+ r"""A callback for converting the PiSSA adapter to a normal one."""
+
+ @override
+ def on_train_begin(self, args: "TrainingArguments", state: "TrainerState", control: "TrainerControl", **kwargs):
+ if args.should_save:
+ model = kwargs.pop("model")
+ pissa_init_dir = os.path.join(args.output_dir, "pissa_init")
+ logger.info_rank0(f"Initial PiSSA adapter will be saved at: {pissa_init_dir}.")
+ if isinstance(model, PeftModel):
+ init_lora_weights = getattr(model.peft_config["default"], "init_lora_weights")
+ setattr(model.peft_config["default"], "init_lora_weights", True)
+ model.save_pretrained(pissa_init_dir, safe_serialization=args.save_safetensors)
+ setattr(model.peft_config["default"], "init_lora_weights", init_lora_weights)
+
+ @override
+ def on_train_end(self, args: "TrainingArguments", state: "TrainerState", control: "TrainerControl", **kwargs):
+ if args.should_save:
+ model = kwargs.pop("model")
+ pissa_init_dir = os.path.join(args.output_dir, "pissa_init")
+ pissa_backup_dir = os.path.join(args.output_dir, "pissa_backup")
+ pissa_convert_dir = os.path.join(args.output_dir, "pissa_converted")
+ logger.info_rank0(f"Converted PiSSA adapter will be saved at: {pissa_convert_dir}.")
+ # 1. save a pissa backup with init_lora_weights: True
+ # 2. save a converted lora with init_lora_weights: pissa
+ # 3. load the pissa backup with init_lora_weights: True
+ # 4. delete the initial adapter and change init_lora_weights to pissa
+ if isinstance(model, PeftModel):
+ init_lora_weights = getattr(model.peft_config["default"], "init_lora_weights")
+ setattr(model.peft_config["default"], "init_lora_weights", True)
+ model.save_pretrained(pissa_backup_dir, safe_serialization=args.save_safetensors)
+ setattr(model.peft_config["default"], "init_lora_weights", init_lora_weights)
+ model.save_pretrained(
+ pissa_convert_dir,
+ safe_serialization=args.save_safetensors,
+ path_initial_model_for_weight_conversion=pissa_init_dir,
+ )
+ model.load_adapter(pissa_backup_dir, "default", is_trainable=True)
+ model.set_adapter("default")
+ setattr(model.peft_config["default"], "init_lora_weights", init_lora_weights)
+
+
+class LogCallback(TrainerCallback):
+ r"""A callback for logging training and evaluation status."""
+
+ def __init__(self) -> None:
+ # Progress
+ self.start_time = 0
+ self.cur_steps = 0
+ self.max_steps = 0
+ self.elapsed_time = ""
+ self.remaining_time = ""
+ self.thread_pool: Optional[ThreadPoolExecutor] = None
+ # Status
+ self.aborted = False
+ self.do_train = False
+ # Web UI
+ self.webui_mode = is_env_enabled("LLAMABOARD_ENABLED")
+ if self.webui_mode and not use_ray():
+ signal.signal(signal.SIGABRT, self._set_abort)
+ self.logger_handler = logging.LoggerHandler(os.getenv("LLAMABOARD_WORKDIR"))
+ logging.add_handler(self.logger_handler)
+ transformers.logging.add_handler(self.logger_handler)
+
+ def _set_abort(self, signum, frame) -> None:
+ self.aborted = True
+
+ def _reset(self, max_steps: int = 0) -> None:
+ self.start_time = time.time()
+ self.cur_steps = 0
+ self.max_steps = max_steps
+ self.elapsed_time = ""
+ self.remaining_time = ""
+
+ def _timing(self, cur_steps: int) -> None:
+ cur_time = time.time()
+ elapsed_time = cur_time - self.start_time
+ avg_time_per_step = elapsed_time / cur_steps if cur_steps != 0 else 0
+ remaining_time = (self.max_steps - cur_steps) * avg_time_per_step
+ self.cur_steps = cur_steps
+ self.elapsed_time = str(timedelta(seconds=int(elapsed_time)))
+ self.remaining_time = str(timedelta(seconds=int(remaining_time)))
+
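The bookkeeping in `_timing` reduces to a moving-average ETA: extrapolate the average seconds per completed step over the remaining steps. A standalone sketch (the function name is my own):

```python
from datetime import timedelta

def progress_times(start: float, now: float, cur_steps: int, max_steps: int):
    # Average seconds per completed step, extrapolated over remaining steps.
    elapsed = now - start
    avg = elapsed / cur_steps if cur_steps else 0
    remaining = (max_steps - cur_steps) * avg
    return str(timedelta(seconds=int(elapsed))), str(timedelta(seconds=int(remaining)))
```

For example, 40 of 100 steps in 120 seconds yields an elapsed time of 0:02:00 and an ETA of 0:03:00.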
+ def _write_log(self, output_dir: str, logs: dict[str, Any]) -> None:
+ with open(os.path.join(output_dir, TRAINER_LOG), "a", encoding="utf-8") as f:
+ f.write(json.dumps(logs) + "\n")
+
+ def _create_thread_pool(self, output_dir: str) -> None:
+ os.makedirs(output_dir, exist_ok=True)
+ self.thread_pool = ThreadPoolExecutor(max_workers=1)
+
+ def _close_thread_pool(self) -> None:
+ if self.thread_pool is not None:
+ self.thread_pool.shutdown(wait=True)
+ self.thread_pool = None
+
+ @override
+ def on_init_end(self, args: "TrainingArguments", state: "TrainerState", control: "TrainerControl", **kwargs):
+ if (
+ args.should_save
+ and os.path.exists(os.path.join(args.output_dir, TRAINER_LOG))
+ and args.overwrite_output_dir
+ ):
+ logger.warning_rank0_once("Previous trainer log in this folder will be deleted.")
+ os.remove(os.path.join(args.output_dir, TRAINER_LOG))
+
+ @override
+ def on_train_begin(self, args: "TrainingArguments", state: "TrainerState", control: "TrainerControl", **kwargs):
+ if args.should_save:
+ self.do_train = True
+ self._reset(max_steps=state.max_steps)
+ self._create_thread_pool(output_dir=args.output_dir)
+
+ @override
+ def on_train_end(self, args: "TrainingArguments", state: "TrainerState", control: "TrainerControl", **kwargs):
+ self._close_thread_pool()
+
+ @override
+ def on_substep_end(self, args: "TrainingArguments", state: "TrainerState", control: "TrainerControl", **kwargs):
+ if self.aborted:
+ control.should_epoch_stop = True
+ control.should_training_stop = True
+
+ @override
+ def on_step_end(self, args: "TrainingArguments", state: "TrainerState", control: "TrainerControl", **kwargs):
+ if self.aborted:
+ control.should_epoch_stop = True
+ control.should_training_stop = True
+
+ @override
+ def on_evaluate(self, args: "TrainingArguments", state: "TrainerState", control: "TrainerControl", **kwargs):
+ if not self.do_train:
+ self._close_thread_pool()
+
+ @override
+ def on_predict(self, args: "TrainingArguments", state: "TrainerState", control: "TrainerControl", **kwargs):
+ if not self.do_train:
+ self._close_thread_pool()
+
+ @override
+ def on_log(self, args: "TrainingArguments", state: "TrainerState", control: "TrainerControl", **kwargs):
+ if not args.should_save:
+ return
+
+ self._timing(cur_steps=state.global_step)
+ logs = dict(
+ current_steps=self.cur_steps,
+ total_steps=self.max_steps,
+ loss=state.log_history[-1].get("loss"),
+ eval_loss=state.log_history[-1].get("eval_loss"),
+ predict_loss=state.log_history[-1].get("predict_loss"),
+ reward=state.log_history[-1].get("reward"),
+ accuracy=state.log_history[-1].get("rewards/accuracies"),
+ lr=state.log_history[-1].get("learning_rate"),
+ epoch=state.log_history[-1].get("epoch"),
+ percentage=round(self.cur_steps / self.max_steps * 100, 2) if self.max_steps != 0 else 100,
+ elapsed_time=self.elapsed_time,
+ remaining_time=self.remaining_time,
+ )
+ if state.num_input_tokens_seen:
+ logs["throughput"] = round(state.num_input_tokens_seen / (time.time() - self.start_time), 2)
+ logs["total_tokens"] = state.num_input_tokens_seen
+
+ if is_env_enabled("RECORD_VRAM"):
+ vram_allocated, vram_reserved = get_peak_memory()
+ logs["vram_allocated"] = round(vram_allocated / (1024**3), 2)
+ logs["vram_reserved"] = round(vram_reserved / (1024**3), 2)
+
+ logs = {k: v for k, v in logs.items() if v is not None}
+ if self.webui_mode and all(key in logs for key in ("loss", "lr", "epoch")):
+ log_str = f"'loss': {logs['loss']:.4f}, 'learning_rate': {logs['lr']:2.4e}, 'epoch': {logs['epoch']:.2f}"
+ for extra_key in ("reward", "accuracy", "throughput"):
+ if logs.get(extra_key):
+ log_str += f", '{extra_key}': {logs[extra_key]:.2f}"
+
+ logger.info_rank0("{" + log_str + "}")
+
+ if self.thread_pool is not None:
+ self.thread_pool.submit(self._write_log, args.output_dir, logs)
+
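The None-filtering and the compact Web-UI log string assembled in `on_log` behave like this (the metric values are made up for illustration):

```python
logs = {"loss": 0.5321, "lr": 1.2e-4, "epoch": 1.25, "eval_loss": None}
# Drop absent metrics, then build the string printed in webui mode.
logs = {k: v for k, v in logs.items() if v is not None}
log_str = f"'loss': {logs['loss']:.4f}, 'learning_rate': {logs['lr']:2.4e}, 'epoch': {logs['epoch']:.2f}"
print("{" + log_str + "}")
```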
+ @override
+ def on_prediction_step(
+ self, args: "TrainingArguments", state: "TrainerState", control: "TrainerControl", **kwargs
+ ):
+ if self.do_train:
+ return
+
+ if self.aborted:
+ sys.exit(0)
+
+ if not args.should_save:
+ return
+
+ eval_dataloader = kwargs.pop("eval_dataloader", None)
+ if has_length(eval_dataloader):
+ if self.max_steps == 0:
+ self._reset(max_steps=len(eval_dataloader))
+ self._create_thread_pool(output_dir=args.output_dir)
+
+ self._timing(cur_steps=self.cur_steps + 1)
+ if self.cur_steps % 5 == 0 and self.thread_pool is not None:
+ logs = dict(
+ current_steps=self.cur_steps,
+ total_steps=self.max_steps,
+ percentage=round(self.cur_steps / self.max_steps * 100, 2) if self.max_steps != 0 else 100,
+ elapsed_time=self.elapsed_time,
+ remaining_time=self.remaining_time,
+ )
+ self.thread_pool.submit(self._write_log, args.output_dir, logs)
+
+
+class ReporterCallback(TrainerCallback):
+ r"""A callback for reporting training status to external logger."""
+
+ def __init__(
+ self,
+ model_args: "ModelArguments",
+ data_args: "DataArguments",
+ finetuning_args: "FinetuningArguments",
+ generating_args: "GeneratingArguments",
+ ) -> None:
+ self.model_args = model_args
+ self.data_args = data_args
+ self.finetuning_args = finetuning_args
+ self.generating_args = generating_args
+ os.environ["WANDB_PROJECT"] = os.getenv("WANDB_PROJECT", "llamafactory")
+
+ @override
+ def on_train_begin(self, args: "TrainingArguments", state: "TrainerState", control: "TrainerControl", **kwargs):
+ if not state.is_world_process_zero:
+ return
+
+ if "wandb" in args.report_to:
+ import wandb
+
+ wandb.config.update(
+ {
+ "model_args": self.model_args.to_dict(),
+ "data_args": self.data_args.to_dict(),
+ "finetuning_args": self.finetuning_args.to_dict(),
+ "generating_args": self.generating_args.to_dict(),
+ }
+ )
+
+ if self.finetuning_args.use_swanlab:
+ import swanlab # type: ignore
+
+ swanlab.config.update(
+ {
+ "model_args": self.model_args.to_dict(),
+ "data_args": self.data_args.to_dict(),
+ "finetuning_args": self.finetuning_args.to_dict(),
+ "generating_args": self.generating_args.to_dict(),
+ }
+ )
diff --git a/src/train_sft/src/llamafactory/train/test_utils.py b/src/train_sft/src/llamafactory/train/test_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..6e4c4ffc287b33f4928ddd3f16cb0015f839eb2a
--- /dev/null
+++ b/src/train_sft/src/llamafactory/train/test_utils.py
@@ -0,0 +1,119 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from typing import TYPE_CHECKING, Optional, Union
+
+import torch
+from peft import PeftModel
+from transformers import AutoModelForCausalLM
+from trl import AutoModelForCausalLMWithValueHead
+
+from ..data import get_dataset, get_template_and_fix_tokenizer
+from ..extras.misc import get_current_device
+from ..hparams import get_infer_args, get_train_args
+from ..model import load_model, load_tokenizer
+
+
+if TYPE_CHECKING:
+ from peft import LoraModel
+ from transformers import PreTrainedModel
+
+ from ..data.data_utils import DatasetModule
+
+
+def compare_model(model_a: "torch.nn.Module", model_b: "torch.nn.Module", diff_keys: list[str] = []) -> None:
+ state_dict_a = model_a.state_dict()
+ state_dict_b = model_b.state_dict()
+ assert set(state_dict_a.keys()) == set(state_dict_b.keys())
+ for name in state_dict_a.keys():
+ if any(key in name for key in diff_keys):
+ assert not torch.allclose(state_dict_a[name], state_dict_b[name], rtol=1e-4, atol=1e-5)
+ else:
+ assert torch.allclose(state_dict_a[name], state_dict_b[name], rtol=1e-4, atol=1e-5)
+
+
+def check_lora_model(model: "LoraModel") -> tuple[set[str], set[str]]:
+ linear_modules, extra_modules = set(), set()
+ for name, param in model.named_parameters():
+ if any(module in name for module in ["lora_A", "lora_B"]):
+ linear_modules.add(name.split(".lora_", maxsplit=1)[0].split(".")[-1])
+ assert param.requires_grad is True
+ assert param.dtype == torch.float32
+ elif "modules_to_save" in name:
+ extra_modules.add(name.split(".modules_to_save", maxsplit=1)[0].split(".")[-1])
+ assert param.requires_grad is True
+ assert param.dtype == torch.float32
+ else:
+ assert param.requires_grad is False
+ assert param.dtype == torch.float16
+
+ return linear_modules, extra_modules
+
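The module-name extraction used in `check_lora_model` can be verified on a typical PEFT parameter name (the example name is illustrative):

```python
name = "base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.weight"
# Take everything before ".lora_", then keep the last dotted component.
module = name.split(".lora_", maxsplit=1)[0].split(".")[-1]
print(module)
```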
+
+def load_train_model(add_valuehead: bool = False, **kwargs) -> "PreTrainedModel":
+ model_args, _, _, finetuning_args, _ = get_train_args(kwargs)
+ tokenizer = load_tokenizer(model_args)["tokenizer"]
+ return load_model(tokenizer, model_args, finetuning_args, is_trainable=True, add_valuehead=add_valuehead)
+
+
+def load_infer_model(add_valuehead: bool = False, **kwargs) -> "PreTrainedModel":
+ model_args, _, finetuning_args, _ = get_infer_args(kwargs)
+ tokenizer = load_tokenizer(model_args)["tokenizer"]
+ return load_model(tokenizer, model_args, finetuning_args, is_trainable=False, add_valuehead=add_valuehead)
+
+
+def load_reference_model(
+ model_path: str,
+ lora_path: Optional[str] = None,
+ use_lora: bool = False,
+ use_pissa: bool = False,
+ is_trainable: bool = False,
+ add_valuehead: bool = False,
+) -> Union["PreTrainedModel", "LoraModel"]:
+ current_device = get_current_device()
+ if add_valuehead:
+ model: AutoModelForCausalLMWithValueHead = AutoModelForCausalLMWithValueHead.from_pretrained(
+ model_path, torch_dtype=torch.float16, device_map=current_device
+ )
+ if not is_trainable:
+ model.v_head = model.v_head.to(torch.float16)
+
+ return model
+
+ model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, device_map=current_device)
+ if use_lora or use_pissa:
+ model = PeftModel.from_pretrained(
+ model, lora_path, subfolder="pissa_init" if use_pissa else None, is_trainable=is_trainable
+ )
+ for param in filter(lambda p: p.requires_grad, model.parameters()):
+ param.data = param.data.to(torch.float32)
+
+ return model
+
+
+def load_dataset_module(**kwargs) -> "DatasetModule":
+ model_args, data_args, training_args, _, _ = get_train_args(kwargs)
+ tokenizer_module = load_tokenizer(model_args)
+ template = get_template_and_fix_tokenizer(tokenizer_module["tokenizer"], data_args)
+ dataset_module = get_dataset(template, model_args, data_args, training_args, kwargs["stage"], **tokenizer_module)
+ return dataset_module
+
+
+def patch_valuehead_model() -> None:
+ def post_init(self: "AutoModelForCausalLMWithValueHead", state_dict: dict[str, "torch.Tensor"]) -> None:
+ state_dict = {k[7:]: state_dict[k] for k in state_dict.keys() if k.startswith("v_head.")}
+ self.v_head.load_state_dict(state_dict, strict=False)
+ del state_dict
+
+ AutoModelForCausalLMWithValueHead.post_init = post_init
diff --git a/src/train_sft/src/llamafactory/train/trainer_utils.py b/src/train_sft/src/llamafactory/train/trainer_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..7d35bbeb6b535a7f69d9eae2b5d576a14506cf92
--- /dev/null
+++ b/src/train_sft/src/llamafactory/train/trainer_utils.py
@@ -0,0 +1,731 @@
+# Copyright 2025 HuggingFace Inc. and the LlamaFactory team.
+#
+# This code is inspired by the original GaLore's implementation: https://github.com/jiaweizzhao/GaLore
+# and the original LoRA+'s implementation: https://github.com/nikhil-ghosh-berkeley/loraplus
+# and the original BAdam's implementation: https://github.com/Ledzy/BAdam
+# and the HuggingFace's TRL library: https://github.com/huggingface/trl
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import os
+from collections.abc import Mapping
+from pathlib import Path
+from typing import TYPE_CHECKING, Any, Callable, Optional, Union
+
+import torch
+from transformers import Trainer
+from transformers.integrations import is_deepspeed_zero3_enabled
+from transformers.modeling_utils import is_fsdp_enabled
+from transformers.optimization import get_scheduler
+from transformers.pytorch_utils import ALL_LAYERNORM_LAYERS
+from transformers.trainer_pt_utils import get_parameter_names
+from typing_extensions import override
+
+from ..extras import logging
+from ..extras.constants import IGNORE_INDEX, SWANLAB_CONFIG
+from ..extras.packages import is_apollo_available, is_galore_available, is_ray_available
+from ..hparams import FinetuningArguments, ModelArguments
+from ..model import find_all_linear_modules, load_model, load_tokenizer, load_valuehead_params
+
+
+if is_galore_available():
+ from galore_torch import GaLoreAdafactor, GaLoreAdamW, GaLoreAdamW8bit # type: ignore
+
+
+if is_apollo_available():
+ from apollo_torch import APOLLOAdamW # type: ignore
+
+
+if is_ray_available():
+ import ray
+ from ray.train import RunConfig, ScalingConfig
+ from ray.train.torch import TorchTrainer
+
+
+if TYPE_CHECKING:
+ from transformers import PreTrainedModel, TrainerCallback, TrainerState
+ from trl import AutoModelForCausalLMWithValueHead
+
+ from ..hparams import DataArguments, RayArguments, TrainingArguments
+
+
+logger = logging.get_logger(__name__)
+
+
+class DummyOptimizer(torch.optim.Optimizer):
+ r"""A dummy optimizer used for the GaLore or APOLLO algorithm."""
+
+ def __init__(
+ self, lr: float = 1e-3, optimizer_dict: Optional[dict["torch.nn.Parameter", "torch.optim.Optimizer"]] = None
+ ) -> None:
+ dummy_tensor = torch.randn(1, 1)
+ self.optimizer_dict = optimizer_dict
+ super().__init__([dummy_tensor], {"lr": lr})
+
+ @override
+ def zero_grad(self, set_to_none: bool = True) -> None:
+ pass
+
+ @override
+ def step(self, closure: Optional[Callable[[], float]] = None) -> Optional[float]:
+ pass
+
+
+def create_modelcard_and_push(
+ trainer: "Trainer",
+ model_args: "ModelArguments",
+ data_args: "DataArguments",
+ training_args: "TrainingArguments",
+ finetuning_args: "FinetuningArguments",
+) -> None:
+ kwargs = {
+ "tasks": "text-generation",
+ "finetuned_from": model_args.model_name_or_path,
+ "tags": ["llama-factory", finetuning_args.finetuning_type],
+ }
+ if data_args.dataset is not None:
+ kwargs["dataset"] = data_args.dataset
+
+ if model_args.use_unsloth:
+ kwargs["tags"] = kwargs["tags"] + ["unsloth"]
+
+ if not training_args.do_train:
+ pass
+ elif training_args.push_to_hub:
+ trainer.push_to_hub(**kwargs)
+ else:
+ trainer.create_model_card(license="other", **kwargs) # prevent from connecting to hub
+
+
+def create_ref_model(
+ model_args: "ModelArguments", finetuning_args: "FinetuningArguments", add_valuehead: bool = False
+) -> Optional[Union["PreTrainedModel", "AutoModelForCausalLMWithValueHead"]]:
+ r"""Create reference model for PPO/DPO training. Evaluation mode is not supported.
+
+ The valuehead parameter is randomly initialized since it is useless for PPO training.
+ """
+ if finetuning_args.ref_model is not None:
+ ref_model_args = ModelArguments.copyfrom(
+ model_args,
+ model_name_or_path=finetuning_args.ref_model,
+ adapter_name_or_path=finetuning_args.ref_model_adapters,
+ quantization_bit=finetuning_args.ref_model_quantization_bit,
+ )
+ ref_finetuning_args = FinetuningArguments()
+ tokenizer = load_tokenizer(ref_model_args)["tokenizer"]
+ ref_model = load_model(
+ tokenizer, ref_model_args, ref_finetuning_args, is_trainable=False, add_valuehead=add_valuehead
+ )
+ logger.info_rank0(f"Created reference model from {finetuning_args.ref_model}")
+ else:
+ if finetuning_args.finetuning_type == "lora":
+ ref_model = None
+ else:
+ ref_model_args = ModelArguments.copyfrom(model_args)
+ ref_finetuning_args = FinetuningArguments()
+ tokenizer = load_tokenizer(ref_model_args)["tokenizer"]
+ ref_model = load_model(
+ tokenizer, ref_model_args, ref_finetuning_args, is_trainable=False, add_valuehead=add_valuehead
+ )
+ logger.info_rank0("Created reference model from the model itself.")
+
+ return ref_model
+
+
+def create_reward_model(
+ model: "AutoModelForCausalLMWithValueHead", model_args: "ModelArguments", finetuning_args: "FinetuningArguments"
+) -> Optional["AutoModelForCausalLMWithValueHead"]:
+ r"""Create reward model for PPO training."""
+ if finetuning_args.reward_model_type == "api":
+ assert finetuning_args.reward_model.startswith("http"), "Please provide a full URL."
+ logger.info_rank0(f"Use reward server {finetuning_args.reward_model}")
+ return finetuning_args.reward_model
+ elif finetuning_args.reward_model_type == "lora":
+ model.pretrained_model.load_adapter(finetuning_args.reward_model, "reward")
+ for name, param in model.named_parameters(): # https://github.com/huggingface/peft/issues/1090
+ if "default" in name:
+ param.data = param.data.to(torch.float32) # trainable params should be in fp32
+ vhead_params = load_valuehead_params(finetuning_args.reward_model, model_args)
+ assert vhead_params is not None, "Reward model is not correctly loaded."
+ model.register_buffer("reward_head_weight", vhead_params["v_head.summary.weight"], persistent=False)
+ model.register_buffer("reward_head_bias", vhead_params["v_head.summary.bias"], persistent=False)
+ model.register_buffer(
+ "default_head_weight", torch.zeros_like(vhead_params["v_head.summary.weight"]), persistent=False
+ )
+ model.register_buffer(
+ "default_head_bias", torch.zeros_like(vhead_params["v_head.summary.bias"]), persistent=False
+ )
+ logger.info_rank0(f"Loaded adapter weights of reward model from {finetuning_args.reward_model}")
+ return None
+ else:
+ reward_model_args = ModelArguments.copyfrom(
+ model_args,
+ model_name_or_path=finetuning_args.reward_model,
+ adapter_name_or_path=finetuning_args.reward_model_adapters,
+ quantization_bit=finetuning_args.reward_model_quantization_bit,
+ )
+ reward_finetuning_args = FinetuningArguments()
+ tokenizer = load_tokenizer(reward_model_args)["tokenizer"]
+ reward_model = load_model(
+ tokenizer, reward_model_args, reward_finetuning_args, is_trainable=False, add_valuehead=True
+ )
+ logger.info_rank0(f"Loaded full weights of reward model from {finetuning_args.reward_model}")
+ logger.warning_rank0("Please ensure the ppo model and reward model share the SAME tokenizer and vocabulary.")
+ return reward_model
+
+
+def _get_decay_parameter_names(model: "PreTrainedModel") -> list[str]:
+ r"""Return a list of names of parameters with weight decay. (weights in non-layernorm layers)."""
+ decay_parameters = get_parameter_names(model, ALL_LAYERNORM_LAYERS)
+ decay_parameters = [name for name in decay_parameters if "bias" not in name]
+ return decay_parameters
+
+
+def _create_galore_optimizer(
+ model: "PreTrainedModel",
+ training_args: "TrainingArguments",
+ finetuning_args: "FinetuningArguments",
+) -> "torch.optim.Optimizer":
+ if len(finetuning_args.galore_target) == 1 and finetuning_args.galore_target[0] == "all":
+ galore_targets = find_all_linear_modules(model, finetuning_args.freeze_vision_tower)
+ else:
+ galore_targets = finetuning_args.galore_target
+
+ galore_params: list[torch.nn.Parameter] = []
+ for name, module in model.named_modules():
+ if isinstance(module, torch.nn.Linear) and any(target in name for target in galore_targets):
+ for param in module.parameters():
+ if param.requires_grad and len(param.shape) > 1:
+ galore_params.append(param)
+
+ galore_kwargs = {
+ "rank": finetuning_args.galore_rank,
+ "update_proj_gap": finetuning_args.galore_update_interval,
+ "scale": finetuning_args.galore_scale,
+ "proj_type": finetuning_args.galore_proj_type,
+ }
+
+ id_galore_params = {id(param) for param in galore_params}
+ decay_params, nodecay_params = [], [] # they are non-galore parameters
+ trainable_params: list[torch.nn.Parameter] = [] # galore_params + decay_params + nodecay_params
+ decay_param_names = _get_decay_parameter_names(model)
+ for name, param in model.named_parameters():
+ if param.requires_grad:
+ trainable_params.append(param)
+ if id(param) not in id_galore_params:
+ if name in decay_param_names:
+ decay_params.append(param)
+ else:
+ nodecay_params.append(param)
+
+ _, optim_kwargs = Trainer.get_optimizer_cls_and_kwargs(training_args)
+
+ if training_args.optim == "adamw_torch":
+ optim_class = GaLoreAdamW
+ elif training_args.optim in ["adamw_bnb_8bit", "adamw_8bit", "paged_adamw_8bit"]:
+ optim_class = GaLoreAdamW8bit
+ elif training_args.optim == "adafactor":
+ optim_class = GaLoreAdafactor
+ else:
+ raise NotImplementedError(f"Unknown optim: {training_args.optim}.")
+
+ if finetuning_args.galore_layerwise:
+ logger.warning_rank0("The displayed gradient norm will be all zeros in layerwise GaLore.")
+ if training_args.gradient_accumulation_steps != 1:
+ raise ValueError("Per-layer GaLore does not support gradient accumulation.")
+
+ optimizer_dict: dict[torch.Tensor, torch.optim.Optimizer] = {}
+ for param in nodecay_params:
+ param_groups = [dict(params=[param], weight_decay=0.0)]
+ optimizer_dict[param] = optim_class(param_groups, **optim_kwargs)
+ for param in decay_params:
+ param_groups = [dict(params=[param], weight_decay=training_args.weight_decay)]
+ optimizer_dict[param] = optim_class(param_groups, **optim_kwargs)
+ for param in galore_params: # galore params have weight decay
+ param_groups = [dict(params=[param], weight_decay=training_args.weight_decay, **galore_kwargs)]
+ optimizer_dict[param] = optim_class(param_groups, **optim_kwargs)
+
+ def optimizer_hook(param: "torch.nn.Parameter"):
+ if param.grad is not None:
+ optimizer_dict[param].step()
+ optimizer_dict[param].zero_grad()
+
+ for param in trainable_params:
+ param.register_post_accumulate_grad_hook(optimizer_hook)
+
+ optimizer = DummyOptimizer(lr=training_args.learning_rate, optimizer_dict=optimizer_dict)
+ else:
+ param_groups = [
+ dict(params=nodecay_params, weight_decay=0.0),
+ dict(params=decay_params, weight_decay=training_args.weight_decay),
+ dict(params=galore_params, weight_decay=training_args.weight_decay, **galore_kwargs),
+ ]
+ optimizer = optim_class(param_groups, **optim_kwargs)
+
+ logger.info_rank0(
+ f"Using GaLore optimizer with args: {galore_kwargs}. "
+ "It may cause the training to hang at startup; please wait patiently."
+ )
+ return optimizer
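The parameter bucketing above — ≥2-D weights inside targeted linear modules go to GaLore, and every other trainable parameter is split by weight-decay eligibility — can be sketched in plain Python. This is a standalone illustration, not part of the patch; `partition_params` and its `(name, ndim)` pairs are hypothetical stand-ins for iterating `model.named_parameters()`:

```python
def partition_params(named_params, galore_targets, decay_names):
    """Split parameters into galore / decay / no-decay buckets.

    named_params: iterable of (name, ndim) pairs standing in for
    model.named_parameters(); only >=2-D params inside targeted
    modules qualify for GaLore's low-rank projected updates.
    """
    galore, decay, nodecay = [], [], []
    for name, ndim in named_params:
        if any(target in name for target in galore_targets) and ndim > 1:
            galore.append(name)
        elif name in decay_names:
            decay.append(name)
        else:
            nodecay.append(name)
    return galore, decay, nodecay
```

The three buckets map directly onto the three `param_groups` built in the non-layerwise branch.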
+
+
+def _create_apollo_optimizer(
+ model: "PreTrainedModel",
+ training_args: "TrainingArguments",
+ finetuning_args: "FinetuningArguments",
+) -> "torch.optim.Optimizer":
+ if len(finetuning_args.apollo_target) == 1 and finetuning_args.apollo_target[0] == "all":
+ apollo_targets = find_all_linear_modules(model, finetuning_args.freeze_vision_tower)
+ else:
+ apollo_targets = finetuning_args.apollo_target
+
+ apollo_params: list[torch.nn.Parameter] = []
+ for name, module in model.named_modules():
+ if isinstance(module, torch.nn.Linear) and any(target in name for target in apollo_targets):
+ for param in module.parameters():
+ if param.requires_grad and len(param.shape) > 1:
+ apollo_params.append(param)
+
+ apollo_kwargs = {
+ "rank": finetuning_args.apollo_rank,
+ "proj": finetuning_args.apollo_proj,
+ "proj_type": finetuning_args.apollo_proj_type,
+ "update_proj_gap": finetuning_args.apollo_update_interval,
+ "scale": finetuning_args.apollo_scale,
+ "scale_type": finetuning_args.apollo_scale_type,
+ "scale_front": finetuning_args.apollo_scale_front,
+ }
+
+ id_apollo_params = {id(param) for param in apollo_params}
+ decay_params, nodecay_params = [], [] # they are non-apollo parameters
+ trainable_params: list[torch.nn.Parameter] = [] # apollo_params + decay_params + nodecay_params
+ decay_param_names = _get_decay_parameter_names(model)
+ for name, param in model.named_parameters():
+ if param.requires_grad:
+ trainable_params.append(param)
+ if id(param) not in id_apollo_params:
+ if name in decay_param_names:
+ decay_params.append(param)
+ else:
+ nodecay_params.append(param)
+
+ _, optim_kwargs = Trainer.get_optimizer_cls_and_kwargs(training_args)
+
+ if training_args.optim == "adamw_torch":
+ optim_class = APOLLOAdamW
+ else:
+ raise NotImplementedError(f"Unknown optim: {training_args.optim}.")
+
+ if finetuning_args.apollo_layerwise:
+ logger.warning_rank0("The displayed gradient norm will be all zeros in layerwise APOLLO.")
+ if training_args.gradient_accumulation_steps != 1:
+ raise ValueError("Per-layer APOLLO does not support gradient accumulation.")
+
+ optimizer_dict: dict[torch.Tensor, torch.optim.Optimizer] = {}
+ for param in nodecay_params:
+ param_groups = [dict(params=[param], weight_decay=0.0)]
+ optimizer_dict[param] = optim_class(param_groups, **optim_kwargs)
+ for param in decay_params:
+ param_groups = [dict(params=[param], weight_decay=training_args.weight_decay)]
+ optimizer_dict[param] = optim_class(param_groups, **optim_kwargs)
+ for param in apollo_params: # apollo params have weight decay
+ param_groups = [dict(params=[param], weight_decay=training_args.weight_decay, **apollo_kwargs)]
+ optimizer_dict[param] = optim_class(param_groups, **optim_kwargs)
+
+ def optimizer_hook(param: "torch.nn.Parameter"):
+ if param.grad is not None:
+ optimizer_dict[param].step()
+ optimizer_dict[param].zero_grad()
+
+ for param in trainable_params:
+ param.register_post_accumulate_grad_hook(optimizer_hook)
+
+ optimizer = DummyOptimizer(lr=training_args.learning_rate, optimizer_dict=optimizer_dict)
+ else:
+ param_groups = [
+ dict(params=nodecay_params, weight_decay=0.0),
+ dict(params=decay_params, weight_decay=training_args.weight_decay),
+ dict(params=apollo_params, weight_decay=training_args.weight_decay, **apollo_kwargs),
+ ]
+ optimizer = optim_class(param_groups, **optim_kwargs)
+
+ logger.info_rank0(f"Using APOLLO optimizer with args: {apollo_kwargs}.")
+ return optimizer
+
+
+def _create_loraplus_optimizer(
+ model: "PreTrainedModel",
+ training_args: "TrainingArguments",
+ finetuning_args: "FinetuningArguments",
+) -> "torch.optim.Optimizer":
+ default_lr = training_args.learning_rate
+ loraplus_lr = training_args.learning_rate * finetuning_args.loraplus_lr_ratio
+ embedding_lr = finetuning_args.loraplus_lr_embedding
+
+ decay_param_names = _get_decay_parameter_names(model)
+ param_dict: dict[str, list[torch.nn.Parameter]] = {
+ "lora_a": [],
+ "lora_b": [],
+ "lora_b_nodecay": [],
+ "embedding": [],
+ }
+ for name, param in model.named_parameters():
+ if param.requires_grad:
+ if "lora_embedding_B" in name:
+ param_dict["embedding"].append(param)
+ elif "lora_B" in name or param.ndim == 1:
+ if name in decay_param_names:
+ param_dict["lora_b"].append(param)
+ else:
+ param_dict["lora_b_nodecay"].append(param)
+ else:
+ param_dict["lora_a"].append(param)
+
+ optim_class, optim_kwargs = Trainer.get_optimizer_cls_and_kwargs(training_args)
+ param_groups = [
+ dict(params=param_dict["lora_a"], lr=default_lr, weight_decay=training_args.weight_decay),
+ dict(params=param_dict["lora_b"], lr=loraplus_lr, weight_decay=training_args.weight_decay),
+ dict(params=param_dict["lora_b_nodecay"], lr=loraplus_lr, weight_decay=0.0),
+ dict(params=param_dict["embedding"], lr=embedding_lr, weight_decay=training_args.weight_decay),
+ ]
+ optimizer = optim_class(param_groups, **optim_kwargs)
+ logger.info_rank0(f"Using LoRA+ optimizer with loraplus lr ratio {finetuning_args.loraplus_lr_ratio:.2f}.")
+ return optimizer
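The LoRA+ grouping above routes each trainable parameter by name: embedding B matrices get `loraplus_lr_embedding`, B matrices (and 1-D params) get the scaled learning rate with or without decay, and A matrices keep the default rate. A minimal sketch of just the routing rule (hypothetical helper, mirroring the `if/elif` chain in the loop):

```python
def route_lora_param(name, ndim, decay_names):
    """Return the LoRA+ parameter-group key for a named parameter.

    Order matters: embedding B matrices first, then B matrices and
    1-D params (split by weight-decay eligibility), then A matrices.
    """
    if "lora_embedding_B" in name:
        return "embedding"
    if "lora_B" in name or ndim == 1:
        return "lora_b" if name in decay_names else "lora_b_nodecay"
    return "lora_a"
```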
+
+
+def _create_badam_optimizer(
+ model: "PreTrainedModel",
+ training_args: "TrainingArguments",
+ finetuning_args: "FinetuningArguments",
+) -> "torch.optim.Optimizer":
+ decay_params, nodecay_params = [], []
+ decay_param_names = _get_decay_parameter_names(model)
+ for name, param in model.named_parameters():
+ if param.requires_grad:
+ if name in decay_param_names:
+ decay_params.append(param)
+ else:
+ nodecay_params.append(param)
+
+ optim_class, optim_kwargs = Trainer.get_optimizer_cls_and_kwargs(training_args)
+ param_groups = [
+ dict(params=nodecay_params, weight_decay=0.0),
+ dict(params=decay_params, weight_decay=training_args.weight_decay),
+ ]
+
+ if finetuning_args.badam_mode == "layer":
+ from badam import BlockOptimizer # type: ignore
+
+ base_optimizer = optim_class(param_groups, **optim_kwargs)
+ optimizer = BlockOptimizer(
+ base_optimizer=base_optimizer,
+ named_parameters_list=list(model.named_parameters()),
+ block_prefix_list=None,
+ switch_block_every=finetuning_args.badam_switch_interval,
+ start_block=finetuning_args.badam_start_block,
+ switch_mode=finetuning_args.badam_switch_mode,
+ verbose=finetuning_args.badam_verbose,
+ ds_zero3_enabled=is_deepspeed_zero3_enabled(),
+ )
+ logger.info_rank0(
+ f"Using BAdam optimizer with layer-wise update, switch mode is {finetuning_args.badam_switch_mode}, "
+ f"switch block every {finetuning_args.badam_switch_interval} steps, "
+ f"default start block is {finetuning_args.badam_start_block}"
+ )
+
+ elif finetuning_args.badam_mode == "ratio":
+ from badam import BlockOptimizerRatio # type: ignore
+
+ assert finetuning_args.badam_update_ratio > 1e-6
+ optimizer = BlockOptimizerRatio(
+ param_groups=param_groups,
+ named_parameters_list=list(model.named_parameters()),
+ update_ratio=finetuning_args.badam_update_ratio,
+ mask_mode=finetuning_args.badam_mask_mode,
+ verbose=finetuning_args.badam_verbose,
+ include_embedding=False,
+ **optim_kwargs,
+ )
+ logger.info_rank0(
+ f"Using BAdam optimizer with ratio-based update, update ratio is {finetuning_args.badam_update_ratio}, "
+ f"mask mode is {finetuning_args.badam_mask_mode}"
+ )
+
+ return optimizer
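In layer mode, BAdam rotates which block of layers is being optimized every `badam_switch_interval` steps. A pure-Python sketch of that rotation (a hypothetical helper for intuition only — the real `BlockOptimizer` also supports a random switch mode and handles DeepSpeed ZeRO-3):

```python
def badam_active_block(step, num_blocks, switch_interval, mode="ascending"):
    """Index of the block a layer-wise BAdam run updates at a step,
    assuming a deterministic ascending or descending switch order."""
    idx = (step // switch_interval) % num_blocks
    if mode == "descending":
        idx = num_blocks - 1 - idx
    return idx
```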
+
+
+def _create_adam_mini_optimizer(
+ model: "PreTrainedModel",
+ training_args: "TrainingArguments",
+) -> "torch.optim.Optimizer":
+ from adam_mini import Adam_mini # type: ignore
+
+ hidden_size = getattr(model.config, "hidden_size", None)
+ num_q_head = getattr(model.config, "num_attention_heads", None)
+ num_kv_head = getattr(model.config, "num_key_value_heads", None)
+
+ optimizer = Adam_mini(
+ named_parameters=model.named_parameters(),
+ lr=training_args.learning_rate,
+ betas=(training_args.adam_beta1, training_args.adam_beta2),
+ eps=training_args.adam_epsilon,
+ weight_decay=training_args.weight_decay,
+ model_sharding=is_fsdp_enabled() or is_deepspeed_zero3_enabled(),
+ dim=hidden_size,
+ n_heads=num_q_head,
+ n_kv_heads=num_kv_head,
+ )
+ logger.info_rank0("Using Adam-mini optimizer.")
+ return optimizer
+
+
+def _create_muon_optimizer(
+ model: "PreTrainedModel",
+ training_args: "TrainingArguments",
+) -> "torch.optim.Optimizer":
+ from ..third_party.muon import Muon
+
+ muon_params, adamw_params = [], []
+ for name, param in model.named_parameters():
+ if param.requires_grad:
+ # Use Muon for 2D parameters that aren't embeddings or heads
+ if param.ndim == 2 and "embed" not in name and "lm_head" not in name:
+ muon_params.append(param)
+ else:
+ adamw_params.append(param)
+
+ optimizer = Muon(
+ lr=training_args.learning_rate,
+ wd=training_args.weight_decay,
+ muon_params=muon_params,
+ adamw_params=adamw_params,
+ adamw_betas=(training_args.adam_beta1, training_args.adam_beta2),
+ adamw_eps=training_args.adam_epsilon,
+ )
+ logger.info_rank0(
+ f"Using Muon optimizer with {len(muon_params)} Muon params and {len(adamw_params)} AdamW params."
+ )
+ return optimizer
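The Muon/AdamW split above reduces to a simple predicate on each parameter's name and rank; pulled out as a standalone function (illustrative only, matching the loop's condition):

```python
def is_muon_param(name, ndim):
    """Muon handles 2-D weight matrices; embeddings and the LM head
    stay on AdamW, as do all 1-D parameters (biases, norms)."""
    return ndim == 2 and "embed" not in name and "lm_head" not in name
```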
+
+
+def create_custom_optimizer(
+ model: "PreTrainedModel",
+ training_args: "TrainingArguments",
+ finetuning_args: "FinetuningArguments",
+) -> Optional["torch.optim.Optimizer"]:
+ if finetuning_args.use_galore:
+ return _create_galore_optimizer(model, training_args, finetuning_args)
+
+ if finetuning_args.use_apollo:
+ return _create_apollo_optimizer(model, training_args, finetuning_args)
+
+ if finetuning_args.loraplus_lr_ratio is not None:
+ return _create_loraplus_optimizer(model, training_args, finetuning_args)
+
+ if finetuning_args.use_badam:
+ return _create_badam_optimizer(model, training_args, finetuning_args)
+
+ if finetuning_args.use_adam_mini:
+ return _create_adam_mini_optimizer(model, training_args)
+
+ if finetuning_args.use_muon:
+ return _create_muon_optimizer(model, training_args)
+
+
+def create_custom_scheduler(
+ training_args: "TrainingArguments",
+ num_training_steps: int,
+ optimizer: Optional["torch.optim.Optimizer"] = None,
+) -> None:
+ if training_args.lr_scheduler_type == "warmup_stable_decay":
+ num_warmup_steps = training_args.get_warmup_steps(num_training_steps)
+ remaining_steps = num_training_steps - num_warmup_steps
+ num_stable_steps = remaining_steps // 3 # use 1/3 for stable by default
+ num_decay_steps = remaining_steps - num_stable_steps
+ scheduler_kwargs = training_args.lr_scheduler_kwargs or {}
+ default_kwargs = {
+ "num_stable_steps": num_stable_steps,
+ "num_decay_steps": num_decay_steps,
+ }
+ for key, value in default_kwargs.items():
+ if key not in scheduler_kwargs:
+ scheduler_kwargs[key] = value
+
+ training_args.lr_scheduler_kwargs = scheduler_kwargs
+
+ if optimizer is not None and isinstance(optimizer, DummyOptimizer):
+ optimizer_dict = optimizer.optimizer_dict
+ scheduler_dict: dict[torch.nn.Parameter, torch.optim.lr_scheduler.LRScheduler] = {}
+
+ for param in optimizer_dict.keys():
+ scheduler_dict[param] = get_scheduler(
+ training_args.lr_scheduler_type,
+ optimizer=optimizer_dict[param],
+ num_warmup_steps=training_args.get_warmup_steps(num_training_steps),
+ num_training_steps=num_training_steps,
+ scheduler_specific_kwargs=training_args.lr_scheduler_kwargs,
+ )
+
+ def scheduler_hook(param: "torch.nn.Parameter"):
+ scheduler_dict[param].step()
+
+ for param in optimizer_dict.keys():
+ param.register_post_accumulate_grad_hook(scheduler_hook)
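The `warmup_stable_decay` branch above splits the post-warmup steps one third stable, two thirds decay, unless the user overrides those kwargs. The arithmetic as a standalone sketch (hypothetical helper name):

```python
def wsd_schedule_steps(num_training_steps, num_warmup_steps):
    """Default warmup-stable-decay split: after warmup, one third of
    the remaining steps hold peak LR and the remainder decay."""
    remaining = num_training_steps - num_warmup_steps
    num_stable = remaining // 3
    num_decay = remaining - num_stable
    return num_stable, num_decay
```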
+
+
+def get_batch_logps(
+ logits: "torch.Tensor",
+ labels: "torch.Tensor",
+ label_pad_token_id: int = IGNORE_INDEX,
+ ld_alpha: Optional[float] = None,
+) -> tuple["torch.Tensor", "torch.Tensor"]:
+ r"""Compute the log probabilities of the given labels under the given logits.
+
+ Returns:
+ logps: A tensor of shape (batch_size,) containing the sum of log probabilities.
+ valid_length: A tensor of shape (batch_size,) containing the number of non-masked tokens.
+
+ """
+ if logits.shape[:-1] != labels.shape:
+ raise ValueError("Logits (batchsize x seqlen) and labels must have the same shape.")
+
+ labels = labels[:, 1:].clone()
+ logits = logits[:, :-1, :]
+ loss_mask = labels != label_pad_token_id
+ labels[labels == label_pad_token_id] = 0 # dummy token
+ per_token_logps = torch.gather(logits.log_softmax(-1), dim=2, index=labels.unsqueeze(2)).squeeze(2)
+
+ valid_length = loss_mask.sum(-1)
+ if ld_alpha is not None:
+ num_examples = labels.shape[0] // 2
+ chosen_lengths = valid_length[:num_examples]
+ rejected_lengths = valid_length[num_examples:]
+ min_lengths = torch.min(chosen_lengths, rejected_lengths)
+ start_positions = torch.argmax(loss_mask.int(), dim=1)
+ public_lengths = start_positions + torch.cat([min_lengths, min_lengths], dim=0)
+
+ seq_len = labels.shape[-1]
+ position_ids = torch.arange(seq_len, device=per_token_logps.device).expand_as(per_token_logps)
+
+ ld_mask = position_ids < public_lengths.unsqueeze(1)
+ front_mask = (ld_mask * loss_mask).float()
+ rear_mask = (~ld_mask * loss_mask).float()
+
+ front_logps = (per_token_logps * front_mask).sum(-1)
+ rear_logps = (per_token_logps * rear_mask).sum(-1)
+ logps = front_logps + ld_alpha * rear_logps
+ else:
+ logps = (per_token_logps * loss_mask).sum(-1)
+
+ return logps, valid_length
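The core of `get_batch_logps` — shift labels by one, mask padding, gather the gold-token log-probabilities, and sum — can be replicated in pure Python for a tiny vocabulary. This is a sketch of the non-length-desensitized path only (`batch_logps` is a hypothetical name; the torch version vectorizes the same computation with `log_softmax` and `gather`):

```python
import math

IGNORE_INDEX = -100  # assumed to match the IGNORE_INDEX constant used above


def batch_logps(logits, labels):
    """Sum of gold-token log-probs and count of non-masked tokens,
    per sequence. logits: [batch][seq][vocab], labels: [batch][seq]."""
    logps, lengths = [], []
    for seq_logits, seq_labels in zip(logits, labels):
        total, valid = 0.0, 0
        # position t predicts token t+1, hence the one-step offset
        for scores, label in zip(seq_logits[:-1], seq_labels[1:]):
            if label == IGNORE_INDEX:
                continue
            log_z = math.log(sum(math.exp(s) for s in scores))
            total += scores[label] - log_z  # log-softmax at the gold index
            valid += 1
        logps.append(total)
        lengths.append(valid)
    return logps, lengths
```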
+
+
+def nested_detach(
+ tensors: Union["torch.Tensor", list["torch.Tensor"], tuple["torch.Tensor"], dict[str, "torch.Tensor"]],
+ clone: bool = False,
+):
+ r"""Detach `tensors` (even if it's a nested list/tuple/dict of tensors)."""
+ if isinstance(tensors, (list, tuple)):
+ return type(tensors)(nested_detach(t, clone=clone) for t in tensors)
+ elif isinstance(tensors, Mapping):
+ return type(tensors)({k: nested_detach(t, clone=clone) for k, t in tensors.items()})
+
+ if isinstance(tensors, torch.Tensor):
+ if clone:
+ return tensors.detach().clone()
+ else:
+ return tensors.detach()
+ else:
+ return tensors
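The traversal pattern in `nested_detach` — recurse through lists, tuples, and mappings while preserving the container type, acting only on leaves — generalizes to any leaf function. A container-only sketch (hypothetical `nested_apply`, no torch required):

```python
def nested_apply(fn, obj):
    """Apply fn to every leaf of a nested list/tuple/dict structure,
    rebuilding each container with its original type."""
    if isinstance(obj, (list, tuple)):
        return type(obj)(nested_apply(fn, o) for o in obj)
    if isinstance(obj, dict):
        return type(obj)({k: nested_apply(fn, o) for k, o in obj.items()})
    return fn(obj)
```

Like the original, this rebuilds plain tuples via `type(obj)(generator)`, so namedtuples would need special handling.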
+
+
+def get_swanlab_callback(finetuning_args: "FinetuningArguments") -> "TrainerCallback":
+ r"""Get the callback for logging to SwanLab."""
+ import swanlab # type: ignore
+ from swanlab.integration.transformers import SwanLabCallback # type: ignore
+
+ if finetuning_args.swanlab_api_key is not None:
+ swanlab.login(api_key=finetuning_args.swanlab_api_key)
+
+ if finetuning_args.swanlab_lark_webhook_url is not None:
+ from swanlab.plugin.notification import LarkCallback # type: ignore
+
+ lark_callback = LarkCallback(
+ webhook_url=finetuning_args.swanlab_lark_webhook_url,
+ secret=finetuning_args.swanlab_lark_secret,
+ )
+ swanlab.register_callbacks([lark_callback])
+
+ class SwanLabCallbackExtension(SwanLabCallback):
+ def setup(self, args: "TrainingArguments", state: "TrainerState", model: "PreTrainedModel", **kwargs):
+ if not state.is_world_process_zero:
+ return
+
+ super().setup(args, state, model, **kwargs)
+ try:
+ if hasattr(self, "_swanlab"):
+ swanlab_public_config = self._swanlab.get_run().public.json()
+ else: # swanlab <= 0.4.9
+ swanlab_public_config = self._experiment.get_run().public.json()
+ except Exception:
+ swanlab_public_config = {}
+
+ with open(os.path.join(args.output_dir, SWANLAB_CONFIG), "w") as f:
+ f.write(json.dumps(swanlab_public_config, indent=2))
+
+ swanlab_callback = SwanLabCallbackExtension(
+ project=finetuning_args.swanlab_project,
+ workspace=finetuning_args.swanlab_workspace,
+ experiment_name=finetuning_args.swanlab_run_name,
+ mode=finetuning_args.swanlab_mode,
+ config={"Framework": "🦙LlamaFactory"},
+ logdir=finetuning_args.swanlab_logdir,
+ tags=["🦙LlamaFactory"],
+ )
+ return swanlab_callback
+
+
+def get_ray_trainer(
+ training_function: Callable,
+ train_loop_config: dict[str, Any],
+ ray_args: "RayArguments",
+) -> "TorchTrainer":
+ if not ray_args.use_ray:
+ raise ValueError("Ray was not enabled. Please set `USE_RAY=1` to enable ray.")
+
+ if ray_args.ray_init_kwargs is not None:
+ ray.init(**ray_args.ray_init_kwargs)
+
+ if ray_args.ray_storage_filesystem is not None:
+ # this means we are using s3/gcs
+ storage_path = ray_args.ray_storage_path
+ else:
+ storage_path = Path(ray_args.ray_storage_path).absolute().as_posix()
+
+ trainer = TorchTrainer(
+ training_function,
+ train_loop_config=train_loop_config,
+ scaling_config=ScalingConfig(
+ num_workers=ray_args.ray_num_workers,
+ resources_per_worker=ray_args.resources_per_worker,
+ placement_strategy=ray_args.placement_strategy,
+ use_gpu=True,
+ ),
+ run_config=RunConfig(
+ name=ray_args.ray_run_name,
+ storage_filesystem=ray_args.ray_storage_filesystem,
+ storage_path=storage_path,
+ ),
+ )
+ return trainer
diff --git a/src/train_sft/src/llamafactory/train/tuner.py b/src/train_sft/src/llamafactory/train/tuner.py
new file mode 100644
index 0000000000000000000000000000000000000000..2d27179c2643fc9b6409257924d2c7a6502ec427
--- /dev/null
+++ b/src/train_sft/src/llamafactory/train/tuner.py
@@ -0,0 +1,233 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import shutil
+from typing import TYPE_CHECKING, Any, Optional
+
+import torch
+import torch.distributed as dist
+from transformers import EarlyStoppingCallback, PreTrainedModel
+
+from ..data import get_template_and_fix_tokenizer
+from ..extras import logging
+from ..extras.constants import V_HEAD_SAFE_WEIGHTS_NAME, V_HEAD_WEIGHTS_NAME
+from ..extras.misc import infer_optim_dtype
+from ..extras.packages import is_ray_available
+from ..hparams import get_infer_args, get_ray_args, get_train_args, read_args
+from ..model import load_model, load_tokenizer
+from .callbacks import LogCallback, PissaConvertCallback, ReporterCallback
+from .dpo import run_dpo
+from .kto import run_kto
+from .ppo import run_ppo
+from .pt import run_pt
+from .rm import run_rm
+from .sft import run_sft
+from .trainer_utils import get_ray_trainer, get_swanlab_callback
+
+
+if is_ray_available():
+ import ray
+ from ray.train.huggingface.transformers import RayTrainReportCallback
+
+
+if TYPE_CHECKING:
+ from transformers import TrainerCallback
+
+
+logger = logging.get_logger(__name__)
+
+
+def _training_function(config: dict[str, Any]) -> None:
+ args = config.get("args")
+ callbacks: list[Any] = config.get("callbacks")
+ model_args, data_args, training_args, finetuning_args, generating_args = get_train_args(args)
+
+ # ============================================================================
+ # 🔍 Print all training arguments (added at the user's request)
+ # ============================================================================
+ logger.info("=" * 80)
+ logger.info("🚀 GROUP LAYOUT SFT training arguments overview")
+ logger.info("=" * 80)
+
+ logger.info("\n📋 MODEL ARGUMENTS:")
+ for key, value in vars(model_args).items():
+ logger.info(f" {key:30} = {value}")
+
+ logger.info("\n📋 DATA ARGUMENTS:")
+ for key, value in vars(data_args).items():
+ logger.info(f" {key:30} = {value}")
+
+ logger.info("\n📋 TRAINING ARGUMENTS:")
+ for key, value in vars(training_args).items():
+ logger.info(f" {key:30} = {value}")
+
+ logger.info("\n📋 FINETUNING ARGUMENTS:")
+ for key, value in vars(finetuning_args).items():
+ logger.info(f" {key:30} = {value}")
+
+ logger.info("\n📋 GENERATING ARGUMENTS:")
+ for key, value in vars(generating_args).items():
+ logger.info(f" {key:30} = {value}")
+
+ logger.info("\n📋 ORIGINAL CLI ARGS:")
+ logger.info(f" {args}")
+
+ logger.info("=" * 80)
+ logger.info("🎯 Argument check complete, starting training...")
+ logger.info("=" * 80)
+ # ============================================================================
+
+ callbacks.append(LogCallback())
+ if finetuning_args.pissa_convert:
+ callbacks.append(PissaConvertCallback())
+
+ if finetuning_args.use_swanlab:
+ callbacks.append(get_swanlab_callback(finetuning_args))
+
+ if finetuning_args.early_stopping_steps is not None:
+ callbacks.append(EarlyStoppingCallback(early_stopping_patience=finetuning_args.early_stopping_steps))
+
+ callbacks.append(ReporterCallback(model_args, data_args, finetuning_args, generating_args)) # add to last
+
+ if finetuning_args.stage == "pt":
+ run_pt(model_args, data_args, training_args, finetuning_args, callbacks)
+ elif finetuning_args.stage == "sft":
+ run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
+ elif finetuning_args.stage == "rm":
+ run_rm(model_args, data_args, training_args, finetuning_args, callbacks)
+ elif finetuning_args.stage == "ppo":
+ run_ppo(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
+ elif finetuning_args.stage == "dpo":
+ run_dpo(model_args, data_args, training_args, finetuning_args, callbacks)
+ elif finetuning_args.stage == "kto":
+ run_kto(model_args, data_args, training_args, finetuning_args, callbacks)
+ else:
+ raise ValueError(f"Unknown task: {finetuning_args.stage}.")
+
+ if is_ray_available() and ray.is_initialized():
+ return # if ray is initialized, it will destroy the process group on return
+
+ try:
+ if dist.is_initialized():
+ dist.destroy_process_group()
+ except Exception as e:
+ logger.warning(f"Failed to destroy process group: {e}.")
+
+
+def run_exp(args: Optional[dict[str, Any]] = None, callbacks: Optional[list["TrainerCallback"]] = None) -> None:
+ args = read_args(args)
+ if "-h" in args or "--help" in args:
+ get_train_args(args)
+
+ ray_args = get_ray_args(args)
+ callbacks = callbacks or []
+ if ray_args.use_ray:
+ callbacks.append(RayTrainReportCallback())
+ trainer = get_ray_trainer(
+ training_function=_training_function,
+ train_loop_config={"args": args, "callbacks": callbacks},
+ ray_args=ray_args,
+ )
+ trainer.fit()
+ else:
+ _training_function(config={"args": args, "callbacks": callbacks})
+
+
+def export_model(args: Optional[dict[str, Any]] = None) -> None:
+ model_args, data_args, finetuning_args, _ = get_infer_args(args)
+
+ if model_args.export_dir is None:
+ raise ValueError("Please specify `export_dir` to save model.")
+
+ if model_args.adapter_name_or_path is not None and model_args.export_quantization_bit is not None:
+ raise ValueError("Please merge adapters before quantizing the model.")
+
+ tokenizer_module = load_tokenizer(model_args)
+ tokenizer = tokenizer_module["tokenizer"]
+ processor = tokenizer_module["processor"]
+ template = get_template_and_fix_tokenizer(tokenizer, data_args)
+ model = load_model(tokenizer, model_args, finetuning_args) # must after fixing tokenizer to resize vocab
+
+ if getattr(model, "quantization_method", None) is not None and model_args.adapter_name_or_path is not None:
+ raise ValueError("Cannot merge adapters to a quantized model.")
+
+ if not isinstance(model, PreTrainedModel):
+ raise ValueError("The model is not a `PreTrainedModel`, export aborted.")
+
+ if getattr(model, "quantization_method", None) is not None: # quantized model adopts float16 type
+ setattr(model.config, "torch_dtype", torch.float16)
+ else:
+ if model_args.infer_dtype == "auto":
+ output_dtype = getattr(model.config, "torch_dtype", torch.float32)
+ if output_dtype == torch.float32: # if infer_dtype is auto, try using half precision first
+ output_dtype = infer_optim_dtype(torch.bfloat16)
+ else:
+ output_dtype = getattr(torch, model_args.infer_dtype)
+
+ setattr(model.config, "torch_dtype", output_dtype)
+ model = model.to(output_dtype)
+ logger.info_rank0(f"Convert model dtype to: {output_dtype}.")
+
+ model.save_pretrained(
+ save_directory=model_args.export_dir,
+ max_shard_size=f"{model_args.export_size}GB",
+ safe_serialization=(not model_args.export_legacy_format),
+ )
+ if model_args.export_hub_model_id is not None:
+ model.push_to_hub(
+ model_args.export_hub_model_id,
+ token=model_args.hf_hub_token,
+ max_shard_size=f"{model_args.export_size}GB",
+ safe_serialization=(not model_args.export_legacy_format),
+ )
+
+ if finetuning_args.stage == "rm":
+ if model_args.adapter_name_or_path is not None:
+ vhead_path = model_args.adapter_name_or_path[-1]
+ else:
+ vhead_path = model_args.model_name_or_path
+
+ if os.path.exists(os.path.join(vhead_path, V_HEAD_SAFE_WEIGHTS_NAME)):
+ shutil.copy(
+ os.path.join(vhead_path, V_HEAD_SAFE_WEIGHTS_NAME),
+ os.path.join(model_args.export_dir, V_HEAD_SAFE_WEIGHTS_NAME),
+ )
+ logger.info_rank0(f"Copied valuehead to {model_args.export_dir}.")
+ elif os.path.exists(os.path.join(vhead_path, V_HEAD_WEIGHTS_NAME)):
+ shutil.copy(
+ os.path.join(vhead_path, V_HEAD_WEIGHTS_NAME),
+ os.path.join(model_args.export_dir, V_HEAD_WEIGHTS_NAME),
+ )
+ logger.info_rank0(f"Copied valuehead to {model_args.export_dir}.")
+
+ try:
+ tokenizer.padding_side = "left" # restore padding side
+ tokenizer.init_kwargs["padding_side"] = "left"
+ tokenizer.save_pretrained(model_args.export_dir)
+ if model_args.export_hub_model_id is not None:
+ tokenizer.push_to_hub(model_args.export_hub_model_id, token=model_args.hf_hub_token)
+
+ if processor is not None:
+ processor.save_pretrained(model_args.export_dir)
+ if model_args.export_hub_model_id is not None:
+ processor.push_to_hub(model_args.export_hub_model_id, token=model_args.hf_hub_token)
+
+ except Exception as e:
+ logger.warning_rank0(f"Cannot save tokenizer, please copy the files manually: {e}.")
+
+ ollama_modelfile = os.path.join(model_args.export_dir, "Modelfile")
+ with open(ollama_modelfile, "w", encoding="utf-8") as f:
+ f.write(template.get_ollama_modelfile(tokenizer))
+ logger.info_rank0(f"Ollama modelfile saved in {ollama_modelfile}")
diff --git a/src/train_sft/src/llamafactory/webui/__init__.py b/src/train_sft/src/llamafactory/webui/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/src/train_sft/src/llamafactory/webui/chatter.py b/src/train_sft/src/llamafactory/webui/chatter.py
new file mode 100644
index 0000000000000000000000000000000000000000..feeeaf3d82213bfde26cd4cc320c1bf091f63498
--- /dev/null
+++ b/src/train_sft/src/llamafactory/webui/chatter.py
@@ -0,0 +1,246 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import os
+from collections.abc import Generator
+from contextlib import contextmanager
+from typing import TYPE_CHECKING, Any, Optional
+
+from transformers.utils import is_torch_npu_available
+
+from ..chat import ChatModel
+from ..data import Role
+from ..extras.constants import PEFT_METHODS
+from ..extras.misc import torch_gc
+from ..extras.packages import is_gradio_available
+from .common import get_save_dir, load_config
+from .locales import ALERTS
+
+
+if TYPE_CHECKING:
+ from ..chat import BaseEngine
+ from .manager import Manager
+
+
+if is_gradio_available():
+ import gradio as gr
+
+
+def _escape_html(text: str) -> str:
+ r"""Escape HTML characters."""
+ return text.replace("<", "&lt;").replace(">", "&gt;")
+
+
+def _format_response(text: str, lang: str, escape_html: bool, thought_words: tuple[str, str]) -> str:
+ r"""Post-process the response text.
+
+ Based on: https://huggingface.co/spaces/Lyte/DeepSeek-R1-Distill-Qwen-1.5B-Demo-GGUF/blob/main/app.py
+ """
+ if thought_words[0] not in text:
+ return _escape_html(text) if escape_html else text
+
+ text = text.replace(thought_words[0], "")
+ result = text.split(thought_words[1], maxsplit=1)
+ if len(result) == 1:
+ summary = ALERTS["info_thinking"][lang]
+ thought, answer = text, ""
+ else:
+ summary = ALERTS["info_thought"][lang]
+ thought, answer = result
+
+ if escape_html:
+ thought, answer = _escape_html(thought), _escape_html(answer)
+
+ return (
+ f"<details open><summary class='thinking-summary'><span>{summary}</span></summary>\n\n"
+ f"<div class='thinking-container'>\n{thought}\n</div>\n</details>{answer}"
+ )
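The parsing logic of `_format_response` — strip the opening thought tag, split once on the closing tag, and treat an unterminated block as "still thinking" — can be isolated as a small function. A sketch under the assumption that `thought_words` defaults to `<think>`/`</think>` tags (the actual pair is supplied by the caller; `split_thought` is a hypothetical name):

```python
def split_thought(text, thought_words=("<think>", "</think>")):
    """Return (thought, answer); thought is None when the text contains
    no thought block, and answer is "" while the block is unterminated."""
    if thought_words[0] not in text:
        return None, text
    text = text.replace(thought_words[0], "")
    parts = text.split(thought_words[1], maxsplit=1)
    if len(parts) == 1:
        return text, ""  # still thinking: no closing tag yet
    return parts[0], parts[1]
```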
+
+
+@contextmanager
+def update_attr(obj: Any, name: str, value: Any):
+ old_value = getattr(obj, name, None)
+ setattr(obj, name, value)
+ yield
+ setattr(obj, name, old_value)
+
+
+class WebChatModel(ChatModel):
+ def __init__(self, manager: "Manager", demo_mode: bool = False, lazy_init: bool = True) -> None:
+ self.manager = manager
+ self.demo_mode = demo_mode
+ self.engine: Optional[BaseEngine] = None
+
+ if not lazy_init: # read arguments from command line
+ super().__init__()
+
+ if demo_mode and os.getenv("DEMO_MODEL") and os.getenv("DEMO_TEMPLATE"): # load demo model
+ model_name_or_path = os.getenv("DEMO_MODEL")
+ template = os.getenv("DEMO_TEMPLATE")
+ infer_backend = os.getenv("DEMO_BACKEND", "huggingface")
+ super().__init__(
+ dict(model_name_or_path=model_name_or_path, template=template, infer_backend=infer_backend)
+ )
+
+ @property
+ def loaded(self) -> bool:
+ return self.engine is not None
+
+ def load_model(self, data) -> Generator[str, None, None]:
+ get = lambda elem_id: data[self.manager.get_elem_by_id(elem_id)]
+ lang, model_name, model_path = get("top.lang"), get("top.model_name"), get("top.model_path")
+ finetuning_type, checkpoint_path = get("top.finetuning_type"), get("top.checkpoint_path")
+ user_config = load_config()
+
+ error = ""
+ if self.loaded:
+ error = ALERTS["err_exists"][lang]
+ elif not model_name:
+ error = ALERTS["err_no_model"][lang]
+ elif not model_path:
+ error = ALERTS["err_no_path"][lang]
+ elif self.demo_mode:
+ error = ALERTS["err_demo"][lang]
+
+ try:
+ json.loads(get("infer.extra_args"))
+ except json.JSONDecodeError:
+ error = ALERTS["err_json_schema"][lang]
+
+ if error:
+ gr.Warning(error)
+ yield error
+ return
+
+ yield ALERTS["info_loading"][lang]
+ args = dict(
+ model_name_or_path=model_path,
+ cache_dir=user_config.get("cache_dir", None),
+ finetuning_type=finetuning_type,
+ template=get("top.template"),
+ rope_scaling=get("top.rope_scaling") if get("top.rope_scaling") != "none" else None,
+ flash_attn="fa2" if get("top.booster") == "flashattn2" else "auto",
+ use_unsloth=(get("top.booster") == "unsloth"),
+ enable_liger_kernel=(get("top.booster") == "liger_kernel"),
+ infer_backend=get("infer.infer_backend"),
+ infer_dtype=get("infer.infer_dtype"),
+ trust_remote_code=True,
+ )
+ args.update(json.loads(get("infer.extra_args")))
+
+ # checkpoints
+ if checkpoint_path:
+ if finetuning_type in PEFT_METHODS: # list
+ args["adapter_name_or_path"] = ",".join(
+ [get_save_dir(model_name, finetuning_type, adapter) for adapter in checkpoint_path]
+ )
+ else: # str
+ args["model_name_or_path"] = get_save_dir(model_name, finetuning_type, checkpoint_path)
+
+ # quantization
+ if get("top.quantization_bit") != "none":
+ args["quantization_bit"] = int(get("top.quantization_bit"))
+ args["quantization_method"] = get("top.quantization_method")
+ args["double_quantization"] = not is_torch_npu_available()
+
+ super().__init__(args)
+ yield ALERTS["info_loaded"][lang]
+
+ def unload_model(self, data) -> Generator[str, None, None]:
+ lang = data[self.manager.get_elem_by_id("top.lang")]
+
+ if self.demo_mode:
+ gr.Warning(ALERTS["err_demo"][lang])
+ yield ALERTS["err_demo"][lang]
+ return
+
+ yield ALERTS["info_unloading"][lang]
+ self.engine = None
+ torch_gc()
+ yield ALERTS["info_unloaded"][lang]
+
+ @staticmethod
+ def append(
+ chatbot: list[dict[str, str]],
+ messages: list[dict[str, str]],
+ role: str,
+ query: str,
+ escape_html: bool,
+ ) -> tuple[list[dict[str, str]], list[dict[str, str]], str]:
+ r"""Add the user input to chatbot.
+
+ Inputs: infer.chatbot, infer.messages, infer.role, infer.query, infer.escape_html
+ Output: infer.chatbot, infer.messages, infer.query
+ """
+ return (
+ chatbot + [{"role": "user", "content": _escape_html(query) if escape_html else query}],
+ messages + [{"role": role, "content": query}],
+ "",
+ )
+
+ def stream(
+ self,
+ chatbot: list[dict[str, str]],
+ messages: list[dict[str, str]],
+ lang: str,
+ system: str,
+ tools: str,
+ image: Optional[Any],
+ video: Optional[Any],
+ audio: Optional[Any],
+ max_new_tokens: int,
+ top_p: float,
+ temperature: float,
+ skip_special_tokens: bool,
+ escape_html: bool,
+ enable_thinking: bool,
+ ) -> Generator[tuple[list[dict[str, str]], list[dict[str, str]]], None, None]:
+ r"""Generate output text in stream.
+
+ Inputs: infer.chatbot, infer.messages, infer.system, infer.tools, infer.image, infer.video, ...
+ Output: infer.chatbot, infer.messages
+ """
+ with update_attr(self.engine.template, "enable_thinking", enable_thinking):
+ chatbot.append({"role": "assistant", "content": ""})
+ response = ""
+ for new_text in self.stream_chat(
+ messages,
+ system,
+ tools,
+ images=[image] if image else None,
+ videos=[video] if video else None,
+ audios=[audio] if audio else None,
+ max_new_tokens=max_new_tokens,
+ top_p=top_p,
+ temperature=temperature,
+ skip_special_tokens=skip_special_tokens,
+ ):
+ response += new_text
+ if tools:
+ result = self.engine.template.extract_tool(response)
+ else:
+ result = response
+
+ if isinstance(result, list):
+ tool_calls = [{"name": tool.name, "arguments": json.loads(tool.arguments)} for tool in result]
+ tool_calls = json.dumps(tool_calls, ensure_ascii=False)
+ output_messages = messages + [{"role": Role.FUNCTION.value, "content": tool_calls}]
+ bot_text = "```json\n" + tool_calls + "\n```"
+ else:
+ output_messages = messages + [{"role": Role.ASSISTANT.value, "content": result}]
+ bot_text = _format_response(result, lang, escape_html, self.engine.template.thought_words)
+
+ chatbot[-1] = {"role": "assistant", "content": bot_text}
+ yield chatbot, output_messages
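The `append` helper above keeps two parallel histories: an optionally HTML-escaped copy for the chatbot display and the raw text for the model's message list, while clearing the input box by returning an empty string. A minimal standalone sketch of that behavior, with `html.escape` standing in for the module's `_escape_html` (defined elsewhere in chatter.py):

```python
import html


def escape_html_stub(text: str) -> str:
    # minimal stand-in for chatter._escape_html, defined elsewhere in the module
    return html.escape(text)


def append(chatbot, messages, role, query, escape_html):
    # mirrors WebChatModel.append: the chatbot shows an (optionally escaped)
    # copy of the input, the raw text goes into the message history, and the
    # empty string at the end clears the query textbox
    shown = escape_html_stub(query) if escape_html else query
    return (
        chatbot + [{"role": "user", "content": shown}],
        messages + [{"role": role, "content": query}],
        "",
    )


chatbot, messages, cleared = append([], [], "user", "<b>hi</b>", True)
```

The display copy is escaped so user-supplied markup cannot inject HTML into the Gradio chatbot, while the model still sees the original text.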
diff --git a/src/train_sft/src/llamafactory/webui/common.py b/src/train_sft/src/llamafactory/webui/common.py
new file mode 100644
index 0000000000000000000000000000000000000000..b7004a91cc7fbd8b3c275f24713084210156a6a9
--- /dev/null
+++ b/src/train_sft/src/llamafactory/webui/common.py
@@ -0,0 +1,286 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import os
+import signal
+from collections import defaultdict
+from datetime import datetime
+from typing import Any, Optional, Union
+
+from psutil import Process
+from yaml import safe_dump, safe_load
+
+from ..extras import logging
+from ..extras.constants import (
+ DATA_CONFIG,
+ DEFAULT_TEMPLATE,
+ MULTIMODAL_SUPPORTED_MODELS,
+ SUPPORTED_MODELS,
+ TRAINING_ARGS,
+ DownloadSource,
+)
+from ..extras.misc import use_modelscope, use_openmind
+
+
+logger = logging.get_logger(__name__)
+
+DEFAULT_CACHE_DIR = "cache"
+DEFAULT_CONFIG_DIR = "config"
+DEFAULT_DATA_DIR = "data"
+DEFAULT_SAVE_DIR = "saves"
+USER_CONFIG = "user_config.yaml"
+
+
+def abort_process(pid: int) -> None:
+ r"""Recursively abort a process tree, killing children before the parent."""
+ try:
+ children = Process(pid).children()
+ if children:
+ for child in children:
+ abort_process(child.pid)
+
+ os.kill(pid, signal.SIGABRT)
+ except Exception:
+ pass
+
+
+def get_save_dir(*paths: str) -> os.PathLike:
+ r"""Get the path to saved model checkpoints."""
+ if os.path.sep in paths[-1]:
+ logger.warning_rank0("Found complex path, some features may not be available.")
+ return paths[-1]
+
+ paths = (path.replace(" ", "").strip() for path in paths)
+ return os.path.join(DEFAULT_SAVE_DIR, *paths)
+
+
+def _get_config_path() -> os.PathLike:
+ r"""Get the path to user config."""
+ return os.path.join(DEFAULT_CACHE_DIR, USER_CONFIG)
+
+
+def load_config() -> dict[str, Union[str, dict[str, Any]]]:
+ r"""Load the user config if it exists."""
+ try:
+ with open(_get_config_path(), encoding="utf-8") as f:
+ return safe_load(f)
+ except Exception:
+ return {"lang": None, "hub_name": None, "last_model": None, "path_dict": {}, "cache_dir": None}
+
+
+def save_config(
+ lang: str, hub_name: Optional[str] = None, model_name: Optional[str] = None, model_path: Optional[str] = None
+) -> None:
+ r"""Save user config."""
+ os.makedirs(DEFAULT_CACHE_DIR, exist_ok=True)
+ user_config = load_config()
+ user_config["lang"] = lang or user_config["lang"]
+ if hub_name:
+ user_config["hub_name"] = hub_name
+
+ if model_name:
+ user_config["last_model"] = model_name
+
+ if model_name and model_path:
+ user_config["path_dict"][model_name] = model_path
+
+ with open(_get_config_path(), "w", encoding="utf-8") as f:
+ safe_dump(user_config, f)
+
+
+def get_model_path(model_name: str) -> str:
+ r"""Get the model path according to the model name."""
+ user_config = load_config()
+ path_dict: dict[DownloadSource, str] = SUPPORTED_MODELS.get(model_name, defaultdict(str))
+ model_path = user_config["path_dict"].get(model_name, "") or path_dict.get(DownloadSource.DEFAULT, "")
+ if (
+ use_modelscope()
+ and path_dict.get(DownloadSource.MODELSCOPE)
+ and model_path == path_dict.get(DownloadSource.DEFAULT)
+ ): # replace hf path with ms path
+ model_path = path_dict.get(DownloadSource.MODELSCOPE)
+
+ if (
+ use_openmind()
+ and path_dict.get(DownloadSource.OPENMIND)
+ and model_path == path_dict.get(DownloadSource.DEFAULT)
+ ): # replace hf path with om path
+ model_path = path_dict.get(DownloadSource.OPENMIND)
+
+ return model_path
+
+
+def get_template(model_name: str) -> str:
+ r"""Get the template name if the model is a chat/distill/instruct model."""
+ return DEFAULT_TEMPLATE.get(model_name, "default")
+
+
+def get_time() -> str:
+ r"""Get current date and time."""
+ return datetime.now().strftime(r"%Y-%m-%d-%H-%M-%S")
+
+
+def is_multimodal(model_name: str) -> bool:
+ r"""Check whether the model is a vision-language model."""
+ return model_name in MULTIMODAL_SUPPORTED_MODELS
+
+
+def load_dataset_info(dataset_dir: str) -> dict[str, dict[str, Any]]:
+ r"""Load dataset_info.json."""
+ if dataset_dir == "ONLINE" or dataset_dir.startswith("REMOTE:"):
+ logger.info_rank0(f"dataset_dir is {dataset_dir}, using online dataset.")
+ return {}
+
+ try:
+ with open(os.path.join(dataset_dir, DATA_CONFIG), encoding="utf-8") as f:
+ return json.load(f)
+ except Exception as err:
+ logger.warning_rank0(f"Cannot open {os.path.join(dataset_dir, DATA_CONFIG)} due to {str(err)}.")
+ return {}
+
+
+def load_args(config_path: str) -> Optional[dict[str, Any]]:
+ r"""Load the training configuration from config path."""
+ try:
+ with open(config_path, encoding="utf-8") as f:
+ return safe_load(f)
+ except Exception:
+ return None
+
+
+def save_args(config_path: str, config_dict: dict[str, Any]) -> None:
+ r"""Save the training configuration to config path."""
+ with open(config_path, "w", encoding="utf-8") as f:
+ safe_dump(config_dict, f)
+
+
+def _clean_cmd(args: dict[str, Any]) -> dict[str, Any]:
+ r"""Remove args whose value is None, False, or an empty string."""
+ no_skip_keys = [
+ "packing",
+ "enable_thinking",
+ "use_reentrant_gc",
+ "double_quantization",
+ "freeze_vision_tower",
+ "freeze_multi_modal_projector",
+ ]
+ return {k: v for k, v in args.items() if (k in no_skip_keys) or (v is not None and v is not False and v != "")}
+
+
+def gen_cmd(args: dict[str, Any]) -> str:
+ r"""Generate CLI commands for previewing."""
+ cmd_lines = ["llamafactory-cli train "]
+ for k, v in _clean_cmd(args).items():
+ if isinstance(v, dict):
+ cmd_lines.append(f" --{k} {json.dumps(v, ensure_ascii=False)} ")
+ elif isinstance(v, list):
+ cmd_lines.append(f" --{k} {' '.join(map(str, v))} ")
+ else:
+ cmd_lines.append(f" --{k} {str(v)} ")
+
+ if os.name == "nt":
+ cmd_text = "`\n".join(cmd_lines)
+ else:
+ cmd_text = "\\\n".join(cmd_lines)
+
+ cmd_text = f"```bash\n{cmd_text}\n```"
+ return cmd_text
+
+
+def save_cmd(args: dict[str, Any]) -> str:
+ r"""Save CLI commands to launch training."""
+ output_dir = args["output_dir"]
+ os.makedirs(output_dir, exist_ok=True)
+ with open(os.path.join(output_dir, TRAINING_ARGS), "w", encoding="utf-8") as f:
+ safe_dump(_clean_cmd(args), f)
+
+ return os.path.join(output_dir, TRAINING_ARGS)
+
+
+def load_eval_results(path: os.PathLike) -> str:
+ r"""Get scores after evaluation."""
+ with open(path, encoding="utf-8") as f:
+ result = json.dumps(json.load(f), indent=4)
+
+ return f"```json\n{result}\n```\n"
+
+
+def calculate_pixels(pixels: str) -> int:
+ r"""Calculate the number of pixels from the expression."""
+ if "*" in pixels:
+ return int(pixels.split("*")[0]) * int(pixels.split("*")[1])
+ else:
+ return int(pixels)
+
+
+def create_ds_config() -> None:
+ r"""Create DeepSpeed config files in the default cache directory."""
+ os.makedirs(DEFAULT_CACHE_DIR, exist_ok=True)
+ ds_config = {
+ "train_batch_size": "auto",
+ "train_micro_batch_size_per_gpu": "auto",
+ "gradient_accumulation_steps": "auto",
+ "gradient_clipping": "auto",
+ "zero_allow_untested_optimizer": True,
+ "fp16": {
+ "enabled": "auto",
+ "loss_scale": 0,
+ "loss_scale_window": 1000,
+ "initial_scale_power": 16,
+ "hysteresis": 2,
+ "min_loss_scale": 1,
+ },
+ "bf16": {"enabled": "auto"},
+ }
+ offload_config = {
+ "device": "cpu",
+ "pin_memory": True,
+ }
+ ds_config["zero_optimization"] = {
+ "stage": 2,
+ "allgather_partitions": True,
+ "allgather_bucket_size": 5e8,
+ "overlap_comm": False,
+ "reduce_scatter": True,
+ "reduce_bucket_size": 5e8,
+ "contiguous_gradients": True,
+ "round_robin_gradients": True,
+ }
+ with open(os.path.join(DEFAULT_CACHE_DIR, "ds_z2_config.json"), "w", encoding="utf-8") as f:
+ json.dump(ds_config, f, indent=2)
+
+ ds_config["zero_optimization"]["offload_optimizer"] = offload_config
+ with open(os.path.join(DEFAULT_CACHE_DIR, "ds_z2_offload_config.json"), "w", encoding="utf-8") as f:
+ json.dump(ds_config, f, indent=2)
+
+ ds_config["zero_optimization"] = {
+ "stage": 3,
+ "overlap_comm": False,
+ "contiguous_gradients": True,
+ "sub_group_size": 1e9,
+ "reduce_bucket_size": "auto",
+ "stage3_prefetch_bucket_size": "auto",
+ "stage3_param_persistence_threshold": "auto",
+ "stage3_max_live_parameters": 1e9,
+ "stage3_max_reuse_distance": 1e9,
+ "stage3_gather_16bit_weights_on_model_save": True,
+ }
+ with open(os.path.join(DEFAULT_CACHE_DIR, "ds_z3_config.json"), "w", encoding="utf-8") as f:
+ json.dump(ds_config, f, indent=2)
+
+ ds_config["zero_optimization"]["offload_optimizer"] = offload_config
+ ds_config["zero_optimization"]["offload_param"] = offload_config
+ with open(os.path.join(DEFAULT_CACHE_DIR, "ds_z3_offload_config.json"), "w", encoding="utf-8") as f:
+ json.dump(ds_config, f, indent=2)
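The `_clean_cmd` and `gen_cmd` pair in common.py turn the Web UI's argument dict into a copy-pastable CLI preview. A condensed sketch of the same idea, omitting the `no_skip_keys` whitelist and the surrounding markdown fence:

```python
import json
import os


def clean_cmd(args):
    # drop None / False / empty-string values, mirroring common._clean_cmd
    # (the real helper also whitelists a few boolean keys such as "packing")
    return {k: v for k, v in args.items() if v is not None and v is not False and v != ""}


def gen_cmd(args):
    # join the arguments into a multi-line `llamafactory-cli train` preview,
    # using "`" line continuations on Windows and "\" elsewhere, as gen_cmd does
    lines = ["llamafactory-cli train "]
    for k, v in clean_cmd(args).items():
        if isinstance(v, dict):
            v = json.dumps(v, ensure_ascii=False)
        elif isinstance(v, list):
            v = " ".join(map(str, v))
        lines.append(f"    --{k} {v} ")
    sep = "`\n" if os.name == "nt" else "\\\n"
    return sep.join(lines)
```

Dropping falsy values keeps the preview in sync with `save_cmd`, so the YAML written to `output_dir` and the printed command describe the same run.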
diff --git a/src/train_sft/src/llamafactory/webui/control.py b/src/train_sft/src/llamafactory/webui/control.py
new file mode 100644
index 0000000000000000000000000000000000000000..f64b369944ee0b1818236044040ac40e339f2871
--- /dev/null
+++ b/src/train_sft/src/llamafactory/webui/control.py
@@ -0,0 +1,224 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import os
+from typing import Any, Optional
+
+from transformers.trainer_utils import get_last_checkpoint
+
+from ..extras.constants import (
+ CHECKPOINT_NAMES,
+ PEFT_METHODS,
+ RUNNING_LOG,
+ STAGES_USE_PAIR_DATA,
+ SWANLAB_CONFIG,
+ TRAINER_LOG,
+ TRAINING_STAGES,
+)
+from ..extras.packages import is_gradio_available, is_matplotlib_available
+from ..extras.ploting import gen_loss_plot
+from ..model import QuantizationMethod
+from .common import DEFAULT_CONFIG_DIR, DEFAULT_DATA_DIR, get_model_path, get_save_dir, get_template, load_dataset_info
+from .locales import ALERTS
+
+
+if is_gradio_available():
+ import gradio as gr
+
+
+def switch_hub(hub_name: str) -> None:
+ r"""Switch model hub.
+
+ Inputs: top.hub_name
+ """
+ os.environ["USE_MODELSCOPE_HUB"] = "1" if hub_name == "modelscope" else "0"
+ os.environ["USE_OPENMIND_HUB"] = "1" if hub_name == "openmind" else "0"
+
+
+def can_quantize(finetuning_type: str) -> "gr.Dropdown":
+ r"""Check whether quantization is available for this finetuning type.
+
+ Inputs: top.finetuning_type
+ Outputs: top.quantization_bit
+ """
+ if finetuning_type not in PEFT_METHODS:
+ return gr.Dropdown(value="none", interactive=False)
+ else:
+ return gr.Dropdown(interactive=True)
+
+
+def can_quantize_to(quantization_method: str) -> "gr.Dropdown":
+ r"""Get the available quantization bits.
+
+ Inputs: top.quantization_method
+ Outputs: top.quantization_bit
+ """
+ if quantization_method == QuantizationMethod.BNB:
+ available_bits = ["none", "8", "4"]
+ elif quantization_method == QuantizationMethod.HQQ:
+ available_bits = ["none", "8", "6", "5", "4", "3", "2", "1"]
+ elif quantization_method == QuantizationMethod.EETQ:
+ available_bits = ["none", "8"]
+ else: # unknown method: fall back to disabling quantization
+ available_bits = ["none"]
+
+ return gr.Dropdown(choices=available_bits)
+
+
+def change_stage(training_stage: str = list(TRAINING_STAGES.keys())[0]) -> tuple[list[str], bool]:
+ r"""Modify states after changing the training stage.
+
+ Inputs: train.training_stage
+ Outputs: train.dataset, train.packing
+ """
+ return [], TRAINING_STAGES[training_stage] == "pt"
+
+
+def get_model_info(model_name: str) -> tuple[str, str]:
+ r"""Get the necessary information for this model.
+
+ Inputs: top.model_name
+ Outputs: top.model_path, top.template
+ """
+ return get_model_path(model_name), get_template(model_name)
+
+
+def check_template(lang: str, template: str) -> None:
+ r"""Check if an instruct model is used.
+
+ Please use queue=True to show the warning message.
+
+ Inputs: top.lang, top.template
+ """
+ if template == "default":
+ gr.Warning(ALERTS["warn_no_instruct"][lang])
+
+
+def get_trainer_info(lang: str, output_path: os.PathLike, do_train: bool) -> tuple[str, "gr.Slider", dict[str, Any]]:
+ r"""Get training information for the monitor.
+
+ If do_train is True:
+ Inputs: top.lang, train.output_path
+ Outputs: train.output_box, train.progress_bar, train.loss_viewer, train.swanlab_link
+ If do_train is False:
+ Inputs: top.lang, eval.output_path
+ Outputs: eval.output_box, eval.progress_bar, None, None
+ """
+ running_log = ""
+ running_progress = gr.Slider(visible=False)
+ running_info = {}
+
+ running_log_path = os.path.join(output_path, RUNNING_LOG)
+ if os.path.isfile(running_log_path):
+ with open(running_log_path, encoding="utf-8") as f:
+ running_log = "```\n" + f.read()[-20000:] + "\n```\n" # avoid lengthy log
+
+ trainer_log_path = os.path.join(output_path, TRAINER_LOG)
+ if os.path.isfile(trainer_log_path):
+ trainer_log: list[dict[str, Any]] = []
+ with open(trainer_log_path, encoding="utf-8") as f:
+ for line in f:
+ trainer_log.append(json.loads(line))
+
+ if len(trainer_log) != 0:
+ latest_log = trainer_log[-1]
+ percentage = latest_log["percentage"]
+ label = "Running {:d}/{:d}: {} < {}".format(
+ latest_log["current_steps"],
+ latest_log["total_steps"],
+ latest_log["elapsed_time"],
+ latest_log["remaining_time"],
+ )
+ running_progress = gr.Slider(label=label, value=percentage, visible=True)
+
+ if do_train and is_matplotlib_available():
+ running_info["loss_viewer"] = gr.Plot(gen_loss_plot(trainer_log))
+
+ swanlab_config_path = os.path.join(output_path, SWANLAB_CONFIG)
+ if os.path.isfile(swanlab_config_path):
+ with open(swanlab_config_path, encoding="utf-8") as f:
+ swanlab_public_config = json.load(f)
+ swanlab_link = swanlab_public_config["cloud"]["experiment_url"]
+ if swanlab_link is not None:
+ running_info["swanlab_link"] = gr.Markdown(
+ ALERTS["info_swanlab_link"][lang] + swanlab_link, visible=True
+ )
+
+ return running_log, running_progress, running_info
+
+
+def list_checkpoints(model_name: str, finetuning_type: str) -> "gr.Dropdown":
+ r"""List all available checkpoints.
+
+ Inputs: top.model_name, top.finetuning_type
+ Outputs: top.checkpoint_path
+ """
+ checkpoints = []
+ if model_name:
+ save_dir = get_save_dir(model_name, finetuning_type)
+ if save_dir and os.path.isdir(save_dir):
+ for checkpoint in os.listdir(save_dir):
+ if os.path.isdir(os.path.join(save_dir, checkpoint)) and any(
+ os.path.isfile(os.path.join(save_dir, checkpoint, name)) for name in CHECKPOINT_NAMES
+ ):
+ checkpoints.append(checkpoint)
+
+ if finetuning_type in PEFT_METHODS:
+ return gr.Dropdown(value=[], choices=checkpoints, multiselect=True)
+ else:
+ return gr.Dropdown(value=None, choices=checkpoints, multiselect=False)
+
+
+def list_config_paths(current_time: str) -> "gr.Dropdown":
+ r"""List all the saved configuration files.
+
+ Inputs: train.current_time
+ Outputs: train.config_path
+ """
+ config_files = [f"{current_time}.yaml"]
+ if os.path.isdir(DEFAULT_CONFIG_DIR):
+ for file_name in os.listdir(DEFAULT_CONFIG_DIR):
+ if file_name.endswith(".yaml") and file_name not in config_files:
+ config_files.append(file_name)
+
+ return gr.Dropdown(choices=config_files)
+
+
+def list_datasets(dataset_dir: str = None, training_stage: str = list(TRAINING_STAGES.keys())[0]) -> "gr.Dropdown":
+ r"""List all available datasets in the dataset dir for the training stage.
+
+ Inputs: *.dataset_dir, *.training_stage
+ Outputs: *.dataset
+ """
+ dataset_info = load_dataset_info(dataset_dir if dataset_dir is not None else DEFAULT_DATA_DIR)
+ ranking = TRAINING_STAGES[training_stage] in STAGES_USE_PAIR_DATA
+ datasets = [k for k, v in dataset_info.items() if v.get("ranking", False) == ranking]
+ return gr.Dropdown(choices=datasets)
+
+
+def list_output_dirs(model_name: Optional[str], finetuning_type: str, current_time: str) -> "gr.Dropdown":
+ r"""List all the directories that training can resume from.
+
+ Inputs: top.model_name, top.finetuning_type, train.current_time
+ Outputs: train.output_dir
+ """
+ output_dirs = [f"train_{current_time}"]
+ if model_name:
+ save_dir = get_save_dir(model_name, finetuning_type)
+ if save_dir and os.path.isdir(save_dir):
+ for folder in os.listdir(save_dir):
+ output_dir = os.path.join(save_dir, folder)
+ if os.path.isdir(output_dir) and get_last_checkpoint(output_dir) is not None:
+ output_dirs.append(folder)
+
+ return gr.Dropdown(choices=output_dirs)
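The dataset filter in `list_datasets` reduces to matching each entry's `ranking` flag against the current training stage: pairwise stages such as DPO need ranked data, other stages need unranked data. A self-contained illustration with a hypothetical two-entry `dataset_info` (real entries come from `load_dataset_info`):

```python
# toy dataset_info.json contents; names and files are illustrative only
DATASET_INFO = {
    "alpaca_en_demo": {"file_name": "alpaca_en_demo.json"},
    "dpo_en_demo": {"file_name": "dpo_en_demo.json", "ranking": True},
}


def list_datasets(dataset_info, stage_uses_pair_data: bool):
    # keep only datasets whose "ranking" flag matches the training stage,
    # defaulting to False when the flag is absent, as control.list_datasets does
    return [k for k, v in dataset_info.items() if v.get("ranking", False) == stage_uses_pair_data]
```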
diff --git a/src/train_sft/src/llamafactory/webui/engine.py b/src/train_sft/src/llamafactory/webui/engine.py
new file mode 100644
index 0000000000000000000000000000000000000000..eb1aa443d6130726a0791088f200ce12fdf80655
--- /dev/null
+++ b/src/train_sft/src/llamafactory/webui/engine.py
@@ -0,0 +1,83 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from typing import TYPE_CHECKING, Any
+
+from .chatter import WebChatModel
+from .common import create_ds_config, get_time, load_config
+from .locales import LOCALES
+from .manager import Manager
+from .runner import Runner
+
+
+if TYPE_CHECKING:
+ from gradio.components import Component
+
+
+class Engine:
+ r"""A general engine to control the behaviors of Web UI."""
+
+ def __init__(self, demo_mode: bool = False, pure_chat: bool = False) -> None:
+ self.demo_mode = demo_mode
+ self.pure_chat = pure_chat
+ self.manager = Manager()
+ self.runner = Runner(self.manager, demo_mode)
+ self.chatter = WebChatModel(self.manager, demo_mode, lazy_init=(not pure_chat))
+ if not demo_mode:
+ create_ds_config()
+
+ def _update_component(self, input_dict: dict[str, dict[str, Any]]) -> dict["Component", "Component"]:
+ r"""Update gradio components according to the (elem_id, properties) mapping."""
+ output_dict: dict[Component, Component] = {}
+ for elem_id, elem_attr in input_dict.items():
+ elem = self.manager.get_elem_by_id(elem_id)
+ output_dict[elem] = elem.__class__(**elem_attr)
+
+ return output_dict
+
+ def resume(self):
+ r"""Get the initial values of gradio components and restore the training status if necessary."""
+ user_config = load_config() if not self.demo_mode else {} # do not use config in demo mode
+ lang = user_config.get("lang") or "en"
+ init_dict = {"top.lang": {"value": lang}, "infer.chat_box": {"visible": self.chatter.loaded}}
+
+ if not self.pure_chat:
+ current_time = get_time()
+ hub_name = user_config.get("hub_name") or "huggingface"
+ init_dict["top.hub_name"] = {"value": hub_name}
+ init_dict["train.current_time"] = {"value": current_time}
+ init_dict["train.output_dir"] = {"value": f"train_{current_time}"}
+ init_dict["train.config_path"] = {"value": f"{current_time}.yaml"}
+ init_dict["eval.output_dir"] = {"value": f"eval_{current_time}"}
+ init_dict["infer.mm_box"] = {"visible": False}
+
+ if user_config.get("last_model", None):
+ init_dict["top.model_name"] = {"value": user_config["last_model"]}
+
+ yield self._update_component(init_dict)
+
+ if self.runner.running and not self.demo_mode and not self.pure_chat:
+ yield {elem: elem.__class__(value=value) for elem, value in self.runner.running_data.items()}
+ if self.runner.do_train:
+ yield self._update_component({"train.resume_btn": {"value": True}})
+ else:
+ yield self._update_component({"eval.resume_btn": {"value": True}})
+
+ def change_lang(self, lang: str):
+ r"""Update the displayed language of gradio components."""
+ return {
+ elem: elem.__class__(**LOCALES[elem_name][lang])
+ for elem_name, elem in self.manager.get_elem_iter()
+ if elem_name in LOCALES
+ }
diff --git a/src/train_sft/src/llamafactory/webui/interface.py b/src/train_sft/src/llamafactory/webui/interface.py
new file mode 100644
index 0000000000000000000000000000000000000000..1cb989b296ccca0c7fbace473f48ed5f3176a529
--- /dev/null
+++ b/src/train_sft/src/llamafactory/webui/interface.py
@@ -0,0 +1,106 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import platform
+
+from ..extras.misc import fix_proxy, is_env_enabled
+from ..extras.packages import is_gradio_available
+from .common import save_config
+from .components import (
+ create_chat_box,
+ create_eval_tab,
+ create_export_tab,
+ create_footer,
+ create_infer_tab,
+ create_top,
+ create_train_tab,
+)
+from .css import CSS
+from .engine import Engine
+
+
+if is_gradio_available():
+ import gradio as gr
+
+
+def create_ui(demo_mode: bool = False) -> "gr.Blocks":
+ engine = Engine(demo_mode=demo_mode, pure_chat=False)
+ hostname = os.getenv("HOSTNAME", os.getenv("COMPUTERNAME", platform.node())).split(".")[0]
+
+ with gr.Blocks(title=f"LLaMA Factory ({hostname})", css=CSS) as demo:
+ title = gr.HTML()
+ subtitle = gr.HTML()
+ if demo_mode:
+ gr.DuplicateButton(value="Duplicate Space for private use", elem_classes="duplicate-button")
+
+ engine.manager.add_elems("head", {"title": title, "subtitle": subtitle})
+ engine.manager.add_elems("top", create_top())
+ lang: gr.Dropdown = engine.manager.get_elem_by_id("top.lang")
+
+ with gr.Tab("Train"):
+ engine.manager.add_elems("train", create_train_tab(engine))
+
+ with gr.Tab("Evaluate & Predict"):
+ engine.manager.add_elems("eval", create_eval_tab(engine))
+
+ with gr.Tab("Chat"):
+ engine.manager.add_elems("infer", create_infer_tab(engine))
+
+ if not demo_mode:
+ with gr.Tab("Export"):
+ engine.manager.add_elems("export", create_export_tab(engine))
+
+ engine.manager.add_elems("footer", create_footer())
+ demo.load(engine.resume, outputs=engine.manager.get_elem_list(), concurrency_limit=None)
+ lang.change(engine.change_lang, [lang], engine.manager.get_elem_list(), queue=False)
+ lang.input(save_config, inputs=[lang], queue=False)
+
+ return demo
+
+
+def create_web_demo() -> "gr.Blocks":
+ engine = Engine(pure_chat=True)
+ hostname = os.getenv("HOSTNAME", os.getenv("COMPUTERNAME", platform.node())).split(".")[0]
+
+ with gr.Blocks(title=f"LLaMA Factory Web Demo ({hostname})", css=CSS) as demo:
+ lang = gr.Dropdown(choices=["en", "ru", "zh", "ko", "ja"], scale=1)
+ engine.manager.add_elems("top", dict(lang=lang))
+
+ _, _, chat_elems = create_chat_box(engine, visible=True)
+ engine.manager.add_elems("infer", chat_elems)
+
+ demo.load(engine.resume, outputs=engine.manager.get_elem_list(), concurrency_limit=None)
+ lang.change(engine.change_lang, [lang], engine.manager.get_elem_list(), queue=False)
+ lang.input(save_config, inputs=[lang], queue=False)
+
+ return demo
+
+
+def run_web_ui() -> None:
+ gradio_ipv6 = is_env_enabled("GRADIO_IPV6")
+ gradio_share = is_env_enabled("GRADIO_SHARE")
+ server_name = os.getenv("GRADIO_SERVER_NAME", "[::]" if gradio_ipv6 else "0.0.0.0")
+ print("Visit http://ip:port for Web UI, e.g., http://127.0.0.1:7860")
+ fix_proxy(ipv6_enabled=gradio_ipv6)
+ create_ui().queue().launch(share=gradio_share, server_name=server_name, inbrowser=True)
+
+
+def run_web_demo() -> None:
+ gradio_ipv6 = is_env_enabled("GRADIO_IPV6")
+ gradio_share = is_env_enabled("GRADIO_SHARE")
+ server_name = os.getenv("GRADIO_SERVER_NAME", "[::]" if gradio_ipv6 else "0.0.0.0")
+ print("Visit http://ip:port for Web UI, e.g., http://127.0.0.1:7860")
+ fix_proxy(ipv6_enabled=gradio_ipv6)
+ create_web_demo().queue().launch(share=gradio_share, server_name=server_name, inbrowser=True)
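Both launchers above resolve the bind address the same way: an explicit `GRADIO_SERVER_NAME` wins, otherwise `GRADIO_IPV6` selects between all IPv6 and all IPv4 interfaces. A sketch of that resolution, assuming `is_env_enabled` treats `"1"`, `"true"`, and `"y"` as truthy:

```python
def resolve_server_name(env: dict) -> str:
    # mirrors run_web_ui / run_web_demo: GRADIO_SERVER_NAME takes priority,
    # otherwise bind to "[::]" when GRADIO_IPV6 is enabled, else "0.0.0.0"
    ipv6 = env.get("GRADIO_IPV6", "0").lower() in ("true", "y", "1")
    return env.get("GRADIO_SERVER_NAME", "[::]" if ipv6 else "0.0.0.0")
```

Passing the environment as a dict keeps the sketch testable without mutating `os.environ`.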
diff --git a/src/train_sft/src/llamafactory/webui/locales.py b/src/train_sft/src/llamafactory/webui/locales.py
new file mode 100644
index 0000000000000000000000000000000000000000..a23a4ceafe17d9a5dffc3f7fe4dd0d0f6a863e6d
--- /dev/null
+++ b/src/train_sft/src/llamafactory/webui/locales.py
@@ -0,0 +1,3173 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+LOCALES = {
+ "title": {
+ "en": {
+ "value": "🦙🏭LLaMA Factory: Unified Efficient Fine-Tuning of 100+ LLMs ",
+ },
+ "ru": {
+ "value": "🦙🏭LLaMA Factory: Унифицированная эффективная тонкая настройка 100+ LLMs ",
+ },
+ "zh": {
+ "value": "🦙🏭LLaMA Factory: 一站式大模型高效微调平台 ",
+ },
+ "ko": {
+ "value": "🦙🏭LLaMA Factory: 100+ LLMs를 위한 통합 효율적인 튜닝 ",
+ },
+ "ja": {
+ "value": "🦙🏭LLaMA Factory: 100+ LLMs の統合効率的なチューニング ",
+ },
+ },
+ "subtitle": {
+ "en": {
+ "value": (
+ ""
+ ),
+ },
+ "ru": {
+ "value": (
+ ""
+ ),
+ },
+ "zh": {
+ "value": (
+ ""
+ ),
+ },
+ "ko": {
+ "value": (
+ ""
+ ),
+ },
+ "ja": {
+ "value": (
+ ""
+ ),
+ },
+ },
+ "lang": {
+ "en": {
+ "label": "Language",
+ },
+ "ru": {
+ "label": "Язык",
+ },
+ "zh": {
+ "label": "语言",
+ },
+ "ko": {
+ "label": "언어",
+ },
+ "ja": {
+ "label": "言語",
+ },
+ },
+ "model_name": {
+ "en": {
+ "label": "Model name",
+ "info": "Input the initial name to search for the model.",
+ },
+ "ru": {
+ "label": "Название модели",
+ "info": "Введите начальное имя для поиска модели.",
+ },
+ "zh": {
+ "label": "模型名称",
+ "info": "输入首单词以检索模型。",
+ },
+ "ko": {
+ "label": "모델 이름",
+ "info": "모델을 검색할 초기 이름을 입력하세요.",
+ },
+ "ja": {
+ "label": "モデル名",
+ "info": "モデルを検索するための初期名を入力してください。",
+ },
+ },
+ "model_path": {
+ "en": {
+ "label": "Model path",
+ "info": "Path to pretrained model or model identifier from Hugging Face.",
+ },
+ "ru": {
+ "label": "Путь к модели",
+ "info": "Путь к предварительно обученной модели или идентификатор модели от Hugging Face.",
+ },
+ "zh": {
+ "label": "模型路径",
+ "info": "本地模型的文件路径或 Hugging Face 的模型标识符。",
+ },
+ "ko": {
+ "label": "모델 경로",
+ "info": "사전 훈련된 모델의 경로 또는 Hugging Face의 모델 식별자.",
+ },
+ "ja": {
+ "label": "モデルパス",
+ "info": "事前学習済みモデルへのパス、または Hugging Face のモデル識別子。",
+ },
+ },
+ "hub_name": {
+ "en": {
+ "label": "Hub name",
+ "info": "Choose the model download source.",
+ },
+ "ru": {
+ "label": "Имя хаба",
+ "info": "Выберите источник загрузки модели.",
+ },
+ "zh": {
+ "label": "模型下载源",
+ "info": "选择模型下载源。(网络受限环境推荐使用 ModelScope)",
+ },
+ "ko": {
+ "label": "모델 다운로드 소스",
+ "info": "모델 다운로드 소스를 선택하세요.",
+ },
+ "ja": {
+ "label": "モデルダウンロードソース",
+ "info": "モデルをダウンロードするためのソースを選択してください。",
+ },
+ },
+ "finetuning_type": {
+ "en": {
+ "label": "Finetuning method",
+ },
+ "ru": {
+ "label": "Метод дообучения",
+ },
+ "zh": {
+ "label": "微调方法",
+ },
+ "ko": {
+ "label": "파인튜닝 방법",
+ },
+ "ja": {
+ "label": "ファインチューニング方法",
+ },
+ },
+ "checkpoint_path": {
+ "en": {
+ "label": "Checkpoint path",
+ },
+ "ru": {
+ "label": "Путь контрольной точки",
+ },
+ "zh": {
+ "label": "检查点路径",
+ },
+ "ko": {
+ "label": "체크포인트 경로",
+ },
+ "ja": {
+ "label": "チェックポイントパス",
+ },
+ },
+ "quantization_bit": {
+ "en": {
+ "label": "Quantization bit",
+ "info": "Enable quantization (QLoRA).",
+ },
+ "ru": {
+ "label": "Уровень квантования",
+ "info": "Включить квантование (QLoRA).",
+ },
+ "zh": {
+ "label": "量化等级",
+ "info": "启用量化(QLoRA)。",
+ },
+ "ko": {
+ "label": "양자화 비트",
+ "info": "양자화 활성화 (QLoRA).",
+ },
+ "ja": {
+ "label": "量子化ビット",
+ "info": "量子化を有効にする (QLoRA)。",
+ },
+ },
+ "quantization_method": {
+ "en": {
+ "label": "Quantization method",
+ "info": "Quantization algorithm to use.",
+ },
+ "ru": {
+ "label": "Метод квантования",
+ "info": "Алгоритм квантования, который следует использовать.",
+ },
+ "zh": {
+ "label": "量化方法",
+ "info": "使用的量化算法。",
+ },
+ "ko": {
+ "label": "양자화 방법",
+ "info": "사용할 양자화 알고리즘.",
+ },
+ "ja": {
+ "label": "量子化方法",
+ "info": "使用する量子化アルゴリズム。",
+ },
+ },
+ "template": {
+ "en": {
+ "label": "Chat template",
+ "info": "The chat template used in constructing prompts.",
+ },
+ "ru": {
+ "label": "Шаблон чата",
+ "info": "Шаблон чата используемый для составления подсказок.",
+ },
+ "zh": {
+ "label": "对话模板",
+ "info": "构建提示词时使用的模板。",
+ },
+ "ko": {
+ "label": "채팅 템플릿",
+ "info": "프롬프트 작성에 사용되는 채팅 템플릿.",
+ },
+ "ja": {
+ "label": "チャットテンプレート",
+ "info": "プロンプトの構築に使用されるチャットテンプレート。",
+ },
+ },
+ "rope_scaling": {
+ "en": {
+ "label": "RoPE scaling",
+ "info": "RoPE scaling method to use.",
+ },
+ "ru": {
+ "label": "Масштабирование RoPE",
+ "info": "Метод масштабирования RoPE для использования.",
+ },
+ "zh": {"label": "RoPE 插值方法", "info": "RoPE 插值时使用的方法。"},
+ "ko": {
+ "label": "RoPE 스케일링",
+ "info": "사용할 RoPE 스케일링 방법.",
+ },
+ "ja": {
+ "label": "RoPE スケーリング",
+ "info": "使用する RoPE スケーリング方法。",
+ },
+ },
+ "booster": {
+ "en": {
+ "label": "Booster",
+ "info": "Approach used to boost training speed.",
+ },
+ "ru": {
+ "label": "Ускоритель",
+ "info": "Подход, используемый для ускорения обучения.",
+ },
+ "zh": {"label": "加速方式", "info": "使用的加速方法。"},
+ "ko": {
+ "label": "부스터",
+ "info": "훈련 속도를 향상시키기 위해 사용된 접근 방식.",
+ },
+ "ja": {
+ "label": "ブースター",
+ "info": "トレーニング速度を向上させるためのアプローチ。",
+ },
+ },
+ "training_stage": {
+ "en": {
+ "label": "Stage",
+ "info": "The stage to perform in training.",
+ },
+ "ru": {
+ "label": "Этап",
+ "info": "Этап выполнения обучения.",
+ },
+ "zh": {
+ "label": "训练阶段",
+ "info": "目前采用的训练方式。",
+ },
+ "ko": {
+ "label": "학습 단계",
+ "info": "수행할 학습 방법.",
+ },
+ "ja": {
+ "label": "ステージ",
+ "info": "トレーニングで実行するステージ。",
+ },
+ },
+ "dataset_dir": {
+ "en": {
+ "label": "Data dir",
+ "info": "Path to the data directory.",
+ },
+ "ru": {
+ "label": "Директория данных",
+ "info": "Путь к директории данных.",
+ },
+ "zh": {
+ "label": "数据路径",
+ "info": "数据文件夹的路径。",
+ },
+ "ko": {
+ "label": "데이터 디렉토리",
+ "info": "데이터 디렉토리의 경로.",
+ },
+ "ja": {
+ "label": "データディレクトリ",
+ "info": "データディレクトリへのパス。",
+ },
+ },
+ "dataset": {
+ "en": {
+ "label": "Dataset",
+ },
+ "ru": {
+ "label": "Набор данных",
+ },
+ "zh": {
+ "label": "数据集",
+ },
+ "ko": {
+ "label": "데이터셋",
+ },
+ "ja": {
+ "label": "データセット",
+ },
+ },
+ "data_preview_btn": {
+ "en": {
+ "value": "Preview dataset",
+ },
+ "ru": {
+ "value": "Просмотреть набор данных",
+ },
+ "zh": {
+ "value": "预览数据集",
+ },
+ "ko": {
+ "value": "데이터셋 미리보기",
+ },
+ "ja": {
+ "value": "データセットをプレビュー",
+ },
+ },
+ "preview_count": {
+ "en": {
+ "label": "Count",
+ },
+ "ru": {
+ "label": "Количество",
+ },
+ "zh": {
+ "label": "数量",
+ },
+ "ko": {
+ "label": "개수",
+ },
+ "ja": {
+ "label": "カウント",
+ },
+ },
+ "page_index": {
+ "en": {
+ "label": "Page",
+ },
+ "ru": {
+ "label": "Страница",
+ },
+ "zh": {
+ "label": "页数",
+ },
+ "ko": {
+ "label": "페이지",
+ },
+ "ja": {
+ "label": "ページ",
+ },
+ },
+ "prev_btn": {
+ "en": {
+ "value": "Prev",
+ },
+ "ru": {
+ "value": "Предыдущая",
+ },
+ "zh": {
+ "value": "上一页",
+ },
+ "ko": {
+ "value": "이전",
+ },
+ "ja": {
+ "value": "前へ",
+ },
+ },
+ "next_btn": {
+ "en": {
+ "value": "Next",
+ },
+ "ru": {
+ "value": "Следующая",
+ },
+ "zh": {
+ "value": "下一页",
+ },
+ "ko": {
+ "value": "다음",
+ },
+ "ja": {
+ "value": "次へ",
+ },
+ },
+ "close_btn": {
+ "en": {
+ "value": "Close",
+ },
+ "ru": {
+ "value": "Закрыть",
+ },
+ "zh": {
+ "value": "关闭",
+ },
+ "ko": {
+ "value": "닫기",
+ },
+ "ja": {
+ "value": "閉じる",
+ },
+ },
+ "preview_samples": {
+ "en": {
+ "label": "Samples",
+ },
+ "ru": {
+ "label": "Примеры",
+ },
+ "zh": {
+ "label": "样例",
+ },
+ "ko": {
+ "label": "샘플",
+ },
+ "ja": {
+ "label": "サンプル",
+ },
+ },
+ "learning_rate": {
+ "en": {
+ "label": "Learning rate",
+ "info": "Initial learning rate for AdamW.",
+ },
+ "ru": {
+ "label": "Скорость обучения",
+ "info": "Начальная скорость обучения для AdamW.",
+ },
+ "zh": {
+ "label": "学习率",
+ "info": "AdamW 优化器的初始学习率。",
+ },
+ "ko": {
+ "label": "학습률",
+ "info": "AdamW의 초기 학습률.",
+ },
+ "ja": {
+ "label": "学習率",
+ "info": "AdamW の初期学習率。",
+ },
+ },
+ "num_train_epochs": {
+ "en": {
+ "label": "Epochs",
+ "info": "Total number of training epochs to perform.",
+ },
+ "ru": {
+ "label": "Эпохи",
+ "info": "Общее количество эпох обучения.",
+ },
+ "zh": {
+ "label": "训练轮数",
+ "info": "需要执行的训练总轮数。",
+ },
+ "ko": {
+ "label": "에포크",
+ "info": "수행할 총 학습 에포크 수.",
+ },
+ "ja": {
+ "label": "エポック数",
+ "info": "実行するトレーニングの総エポック数。",
+ },
+ },
+ "max_grad_norm": {
+ "en": {
+ "label": "Maximum gradient norm",
+ "info": "Norm for gradient clipping.",
+ },
+ "ru": {
+ "label": "Максимальная норма градиента",
+ "info": "Норма для обрезки градиента.",
+ },
+ "zh": {
+ "label": "最大梯度范数",
+ "info": "用于梯度裁剪的范数。",
+ },
+ "ko": {
+ "label": "최대 그레디언트 노름(norm)",
+ "info": "그레디언트 클리핑을 위한 노름(norm).",
+ },
+ "ja": {
+ "label": "最大勾配ノルム",
+ "info": "勾配クリッピングのためのノルム。",
+ },
+ },
+ "max_samples": {
+ "en": {
+ "label": "Max samples",
+ "info": "Maximum samples per dataset.",
+ },
+ "ru": {
+ "label": "Максимальное количество образцов",
+ "info": "Максимальное количество образцов на набор данных.",
+ },
+ "zh": {
+ "label": "最大样本数",
+ "info": "每个数据集的最大样本数。",
+ },
+ "ko": {
+ "label": "최대 샘플 수",
+ "info": "데이터셋 당 최대 샘플 수.",
+ },
+ "ja": {
+ "label": "最大サンプル数",
+ "info": "データセットごとの最大サンプル数。",
+ },
+ },
+ "compute_type": {
+ "en": {
+ "label": "Compute type",
+ "info": "Whether to use mixed precision training.",
+ },
+ "ru": {
+ "label": "Тип вычислений",
+ "info": "Использовать ли обучение смешанной точности.",
+ },
+ "zh": {
+ "label": "计算类型",
+ "info": "是否使用混合精度训练。",
+ },
+ "ko": {
+ "label": "연산 유형",
+ "info": "혼합 정밀도 훈련을 사용할지 여부.",
+ },
+ "ja": {
+ "label": "計算タイプ",
+ "info": "混合精度トレーニングを使用するかどうか。",
+ },
+ },
+ "cutoff_len": {
+ "en": {
+ "label": "Cutoff length",
+ "info": "Max tokens in input sequence.",
+ },
+ "ru": {
+ "label": "Длина обрезки",
+ "info": "Максимальное количество токенов во входной последовательности.",
+ },
+ "zh": {
+ "label": "截断长度",
+ "info": "输入序列分词后的最大长度。",
+ },
+ "ko": {
+ "label": "컷오프 길이",
+ "info": "입력 시퀀스의 최대 토큰 수.",
+ },
+ "ja": {
+ "label": "カットオフ長",
+ "info": "入力シーケンスの最大トークン数。",
+ },
+ },
+ "batch_size": {
+ "en": {
+ "label": "Batch size",
+ "info": "Number of samples processed on each GPU.",
+ },
+ "ru": {
+ "label": "Размер пакета",
+ "info": "Количество образцов для обработки на каждом GPU.",
+ },
+ "zh": {
+ "label": "批处理大小",
+ "info": "每个 GPU 处理的样本数量。",
+ },
+ "ko": {
+ "label": "배치 크기",
+ "info": "각 GPU에서 처리되는 샘플 수.",
+ },
+ "ja": {
+ "label": "バッチサイズ",
+ "info": "各 GPU で処理されるサンプル数。",
+ },
+ },
+ "gradient_accumulation_steps": {
+ "en": {
+ "label": "Gradient accumulation",
+ "info": "Number of steps for gradient accumulation.",
+ },
+ "ru": {
+ "label": "Накопление градиента",
+ "info": "Количество шагов накопления градиента.",
+ },
+ "zh": {
+ "label": "梯度累积",
+ "info": "梯度累积的步数。",
+ },
+ "ko": {
+ "label": "그레디언트 누적",
+ "info": "그레디언트 누적 단계 수.",
+ },
+ "ja": {
+ "label": "勾配累積",
+ "info": "勾配累積のステップ数。",
+ },
+ },
+ "val_size": {
+ "en": {
+ "label": "Val size",
+ "info": "Percentage of validation set from the entire dataset.",
+ },
+ "ru": {
+ "label": "Размер валидации",
+ "info": "Пропорция данных в наборе для разработки.",
+ },
+ "zh": {
+ "label": "验证集比例",
+ "info": "验证集占全部样本的百分比。",
+ },
+ "ko": {
+ "label": "검증 데이터셋 크기",
+ "info": "개발 데이터셋에서 검증 데이터의 비율.",
+ },
+ "ja": {
+ "label": "検証セットサイズ",
+ "info": "データセット全体に対する検証セットの割合。",
+ },
+ },
+ "lr_scheduler_type": {
+ "en": {
+ "label": "LR scheduler",
+ "info": "Name of the learning rate scheduler.",
+ },
+ "ru": {
+ "label": "Планировщик скорости обучения",
+ "info": "Название планировщика скорости обучения.",
+ },
+ "zh": {
+ "label": "学习率调节器",
+ "info": "学习率调度器的名称。",
+ },
+ "ko": {
+ "label": "LR 스케줄러",
+ "info": "학습률 스케줄러의 이름.",
+ },
+ "ja": {
+ "label": "学習率スケジューラ",
+ "info": "学習率スケジューラの名前。",
+ },
+ },
+ "extra_tab": {
+ "en": {
+ "label": "Extra configurations",
+ },
+ "ru": {
+ "label": "Дополнительные конфигурации",
+ },
+ "zh": {
+ "label": "其它参数设置",
+ },
+ "ko": {
+ "label": "추가 구성(configuration)",
+ },
+ "ja": {
+ "label": "追加設定",
+ },
+ },
+ "logging_steps": {
+ "en": {
+ "label": "Logging steps",
+ "info": "Number of steps between two logs.",
+ },
+ "ru": {
+ "label": "Шаги логирования",
+ "info": "Количество шагов между двумя записями в журнале.",
+ },
+ "zh": {
+ "label": "日志间隔",
+ "info": "每两次日志输出间的更新步数。",
+ },
+ "ko": {
+ "label": "로깅 스텝",
+ "info": "이전 로깅과 다음 로깅 간 스텝 수.",
+ },
+ "ja": {
+ "label": "ロギングステップ",
+ "info": "2 つのログ間のステップ数。",
+ },
+ },
+ "save_steps": {
+ "en": {
+ "label": "Save steps",
+ "info": "Number of steps between two checkpoints.",
+ },
+ "ru": {
+ "label": "Шаги сохранения",
+ "info": "Количество шагов между двумя контрольными точками.",
+ },
+ "zh": {
+ "label": "保存间隔",
+ "info": "每两次断点保存间的更新步数。",
+ },
+ "ko": {
+ "label": "저장 스텝",
+ "info": "이전 체크포인트와 다음 체크포인트 사이의 스텝 수.",
+ },
+ "ja": {
+ "label": "保存ステップ",
+ "info": "2 つのチェックポイント間のステップ数。",
+ },
+ },
+ "warmup_steps": {
+ "en": {
+ "label": "Warmup steps",
+ "info": "Number of steps used for warmup.",
+ },
+ "ru": {
+ "label": "Шаги прогрева",
+ "info": "Количество шагов, используемых для прогрева.",
+ },
+ "zh": {
+ "label": "预热步数",
+ "info": "学习率预热采用的步数。",
+ },
+ "ko": {
+ "label": "Warmup 스텝",
+ "info": "Warmup에 사용되는 스텝 수.",
+ },
+ "ja": {
+ "label": "ウォームアップステップ",
+ "info": "ウォームアップに使用されるステップ数。",
+ },
+ },
+ "neftune_alpha": {
+ "en": {
+ "label": "NEFTune alpha",
+ "info": "Magnitude of noise adding to embedding vectors.",
+ },
+ "ru": {
+ "label": "NEFTune alpha",
+ "info": "Величина шума, добавляемого к векторам вложений.",
+ },
+ "zh": {
+ "label": "NEFTune 噪声参数",
+ "info": "嵌入向量所添加的噪声大小。",
+ },
+ "ko": {
+ "label": "NEFTune 알파",
+ "info": "임베딩 벡터에 추가되는 노이즈의 크기.",
+ },
+ "ja": {
+ "label": "NEFTune alpha",
+ "info": "埋め込みベクトルに追加されるノイズの大きさ。",
+ },
+ },
+ "extra_args": {
+ "en": {
+ "label": "Extra arguments",
+ "info": "Extra arguments passed to the trainer in JSON format.",
+ },
+ "ru": {
+ "label": "Дополнительные аргументы",
+ "info": "Дополнительные аргументы, которые передаются тренеру в формате JSON.",
+ },
+ "zh": {
+ "label": "额外参数",
+ "info": "以 JSON 格式传递给训练器的额外参数。",
+ },
+ "ko": {
+ "label": "추가 인수",
+ "info": "JSON 형식으로 트레이너에게 전달할 추가 인수입니다.",
+ },
+ "ja": {
+ "label": "追加引数",
+ "info": "JSON 形式でトレーナーに渡される追加引数。",
+ },
+ },
+ "packing": {
+ "en": {
+ "label": "Pack sequences",
+ "info": "Pack sequences into samples of fixed length.",
+ },
+ "ru": {
+ "label": "Упаковка последовательностей",
+ "info": "Упаковка последовательностей в образцы фиксированной длины.",
+ },
+ "zh": {
+ "label": "序列打包",
+ "info": "将序列打包为等长样本。",
+ },
+ "ko": {
+ "label": "시퀀스 패킹",
+ "info": "고정된 길이의 샘플로 시퀀스를 패킹합니다.",
+ },
+ "ja": {
+ "label": "シーケンスパッキング",
+ "info": "シーケンスを固定長のサンプルにパッキングします。",
+ },
+ },
+ "neat_packing": {
+ "en": {
+ "label": "Use neat packing",
+ "info": "Avoid cross-attention between packed sequences.",
+ },
+ "ru": {
+ "label": "Используйте аккуратную упаковку",
+ "info": "избегайте перекрестного внимания между упакованными последовательностями.",
+ },
+ "zh": {
+ "label": "使用无污染打包",
+ "info": "避免打包后的序列产生交叉注意力。",
+ },
+ "ko": {
+ "label": "니트 패킹 사용",
+ "info": "패킹된 시퀀스 간의 크로스 어텐션을 피합니다.",
+ },
+ "ja": {
+ "label": "無汚染パッキングを使用",
+ "info": "パッキング後のシーケンス間のクロスアテンションを避けます。",
+ },
+ },
+ "train_on_prompt": {
+ "en": {
+ "label": "Train on prompt",
+ "info": "Disable the label mask on the prompt (only for SFT).",
+ },
+ "ru": {
+ "label": "Тренировка на подсказке",
+ "info": "Отключить маску меток на подсказке (только для SFT).",
+ },
+ "zh": {
+ "label": "学习提示词",
+ "info": "不在提示词的部分添加掩码(仅适用于 SFT)。",
+ },
+ "ko": {
+ "label": "프롬프트도 학습",
+ "info": "프롬프트에서 라벨 마스킹을 비활성화합니다 (SFT에만 해당).",
+ },
+ "ja": {
+ "label": "プロンプトで学習",
+ "info": "プロンプト部分にマスクを追加しない(SFT のみ)。",
+ },
+ },
+ "mask_history": {
+ "en": {
+ "label": "Mask history",
+ "info": "Train on the last turn only (only for SFT).",
+ },
+ "ru": {
+ "label": "История масок",
+ "info": "Тренироваться только на последнем шаге (только для SFT).",
+ },
+ "zh": {
+ "label": "不学习历史对话",
+ "info": "仅学习最后一轮对话(仅适用于 SFT)。",
+ },
+ "ko": {
+ "label": "히스토리 마스킹",
+ "info": "대화 데이터의 마지막 턴만 학습합니다 (SFT에만 해당).",
+ },
+ "ja": {
+ "label": "履歴をマスク",
+ "info": "最後のターンのみを学習する(SFT のみ)。",
+ },
+ },
+ "resize_vocab": {
+ "en": {
+ "label": "Resize token embeddings",
+ "info": "Resize the tokenizer vocab and the embedding layers.",
+ },
+ "ru": {
+ "label": "Изменение размера токенных эмбеддингов",
+ "info": "Изменить размер словаря токенизатора и слоев эмбеддинга.",
+ },
+ "zh": {
+ "label": "更改词表大小",
+ "info": "更改分词器词表和嵌入层的大小。",
+ },
+ "ko": {
+ "label": "토큰 임베딩의 사이즈 조정",
+ "info": "토크나이저 어휘와 임베딩 레이어의 크기를 조정합니다.",
+ },
+ "ja": {
+ "label": "トークン埋め込みのサイズ変更",
+ "info": "トークナイザーの語彙と埋め込み層のサイズを変更します。",
+ },
+ },
+ "use_llama_pro": {
+ "en": {
+ "label": "Enable LLaMA Pro",
+ "info": "Make the parameters in the expanded blocks trainable.",
+ },
+ "ru": {
+ "label": "Включить LLaMA Pro",
+ "info": "Сделать параметры в расширенных блоках обучаемыми.",
+ },
+ "zh": {
+ "label": "使用 LLaMA Pro",
+ "info": "仅训练块扩展后的参数。",
+ },
+ "ko": {
+ "label": "LLaMA Pro 사용",
+ "info": "확장된 블록의 매개변수를 학습 가능하게 만듭니다.",
+ },
+ "ja": {
+ "label": "LLaMA Pro を有効化",
+ "info": "拡張ブロックのパラメータのみをトレーニングします。",
+ },
+ },
+ "enable_thinking": {
+ "en": {
+ "label": "Enable thinking",
+ "info": "Whether or not to enable thinking mode for reasoning models.",
+ },
+ "ru": {
+ "label": "Включить мысли",
+ "info": "Включить режим мысли для моделей решающего характера.",
+ },
+ "zh": {
+ "label": "启用思考模式",
+ "info": "是否启用推理模型的思考模式。",
+ },
+ "ko": {
+ "label": "생각 모드 활성화",
+ "info": "추론 모델의 생각 모드를 활성화할지 여부.",
+ },
+ "ja": {
+ "label": "思考モードを有効化",
+ "info": "推論モデルの思考モードを有効にするかどうか。",
+ },
+ },
+ "report_to": {
+ "en": {
+ "label": "Enable external logger",
+ "info": "Use TensorBoard or wandb to log experiment.",
+ },
+ "ru": {
+ "label": "Включить внешний регистратор",
+ "info": "Использовать TensorBoard или wandb для ведения журнала экспериментов.",
+ },
+ "zh": {
+ "label": "启用外部记录面板",
+ "info": "使用 TensorBoard 或 wandb 记录实验。",
+ },
+ "ko": {
+ "label": "외부 logger 활성화",
+ "info": "TensorBoard 또는 wandb를 사용하여 실험을 기록합니다.",
+ },
+ "ja": {
+ "label": "外部ロガーを有効化",
+ "info": "TensorBoard または wandb を使用して実験を記録します。",
+ },
+ },
+ "freeze_tab": {
+ "en": {
+ "label": "Freeze tuning configurations",
+ },
+ "ru": {
+ "label": "конфигурации для настройки заморозки",
+ },
+ "zh": {
+ "label": "部分参数微调设置",
+ },
+ "ko": {
+ "label": "Freeze tuning 설정",
+ },
+ "ja": {
+ "label": "フリーズチューニング設定",
+ },
+ },
+ "freeze_trainable_layers": {
+ "en": {
+ "label": "Trainable layers",
+ "info": "Number of the last(+)/first(-) hidden layers to be set as trainable.",
+ },
+ "ru": {
+ "label": "Обучаемые слои",
+ "info": "Количество последних (+)/первых (-) скрытых слоев, которые будут установлены как обучаемые.",
+ },
+ "zh": {
+ "label": "可训练层数",
+ "info": "最末尾(+)/最前端(-)可训练隐藏层的数量。",
+ },
+ "ko": {
+ "label": "학습 가능한 레이어",
+ "info": "학습 가능하게 설정할 마지막(+)/처음(-) 히든 레이어의 수.",
+ },
+ "ja": {
+ "label": "学習可能なレイヤー",
+ "info": "最後(+)/最初(-)の学習可能な隠れ層の数。",
+ },
+ },
+ "freeze_trainable_modules": {
+ "en": {
+ "label": "Trainable modules",
+ "info": "Name(s) of trainable modules. Use commas to separate multiple modules.",
+ },
+ "ru": {
+ "label": "Обучаемые модули",
+ "info": "Название обучаемых модулей. Используйте запятые для разделения нескольких модулей.",
+ },
+ "zh": {
+ "label": "可训练模块",
+ "info": "可训练模块的名称。使用英文逗号分隔多个名称。",
+ },
+ "ko": {
+ "label": "학습 가능한 모듈",
+ "info": "학습 가능한 모듈의 이름. 여러 모듈을 구분하려면 쉼표(,)를 사용하세요.",
+ },
+ "ja": {
+ "label": "学習可能なモジュール",
+ "info": "学習可能なモジュールの名前。複数のモジュールを区切るにはカンマを使用します。",
+ },
+ },
+ "freeze_extra_modules": {
+ "en": {
+ "label": "Extra modules (optional)",
+ "info": (
+ "Name(s) of modules apart from hidden layers to be set as trainable. "
+ "Use commas to separate multiple modules."
+ ),
+ },
+ "ru": {
+ "label": "Дополнительные модули (опционально)",
+ "info": (
+ "Имена модулей, кроме скрытых слоев, которые следует установить в качестве обучаемых. "
+ "Используйте запятые для разделения нескольких модулей."
+ ),
+ },
+ "zh": {
+ "label": "额外模块(非必填)",
+ "info": "除隐藏层以外的可训练模块名称。使用英文逗号分隔多个名称。",
+ },
+ "ko": {
+ "label": "추가 모듈 (선택 사항)",
+ "info": "히든 레이어 외에 학습 가능하게 설정할 모듈의 이름. 모듈 간에는 쉼표(,)로 구분하십시오.",
+ },
+ "ja": {
+ "label": "追加モジュール(オプション)",
+ "info": "隠れ層以外の学習可能なモジュールの名前。複数のモジュールを区切るにはカンマを使用します。",
+ },
+ },
+ "lora_tab": {
+ "en": {
+ "label": "LoRA configurations",
+ },
+ "ru": {
+ "label": "Конфигурации LoRA",
+ },
+ "zh": {
+ "label": "LoRA 参数设置",
+ },
+ "ko": {
+ "label": "LoRA 구성",
+ },
+ "ja": {
+ "label": "LoRA 設定",
+ },
+ },
+ "lora_rank": {
+ "en": {
+ "label": "LoRA rank",
+ "info": "The rank of LoRA matrices.",
+ },
+ "ru": {
+ "label": "Ранг матриц LoRA",
+ "info": "Ранг матриц LoRA.",
+ },
+ "zh": {
+ "label": "LoRA 秩",
+ "info": "LoRA 矩阵的秩大小。",
+ },
+ "ko": {
+ "label": "LoRA 랭크",
+ "info": "LoRA 행렬의 랭크.",
+ },
+ "ja": {
+ "label": "LoRA ランク",
+ "info": "LoRA 行列のランク。",
+ },
+ },
+ "lora_alpha": {
+ "en": {
+ "label": "LoRA alpha",
+ "info": "Lora scaling coefficient.",
+ },
+ "ru": {
+ "label": "LoRA alpha",
+ "info": "Коэффициент масштабирования LoRA.",
+ },
+ "zh": {
+ "label": "LoRA 缩放系数",
+ "info": "LoRA 缩放系数大小。",
+ },
+ "ko": {
+ "label": "LoRA 알파",
+ "info": "LoRA 스케일링 계수.",
+ },
+ "ja": {
+ "label": "LoRA alpha",
+ "info": "LoRA スケーリング係数。",
+ },
+ },
+ "lora_dropout": {
+ "en": {
+ "label": "LoRA dropout",
+ "info": "Dropout ratio of LoRA weights.",
+ },
+ "ru": {
+ "label": "Вероятность отсева LoRA",
+ "info": "Вероятность отсева весов LoRA.",
+ },
+ "zh": {
+ "label": "LoRA 随机丢弃",
+ "info": "LoRA 权重随机丢弃的概率。",
+ },
+ "ko": {
+ "label": "LoRA 드롭아웃",
+ "info": "LoRA 가중치의 드롭아웃 비율.",
+ },
+ "ja": {
+ "label": "LoRA ドロップアウト",
+ "info": "LoRA 重みのドロップアウト確率。",
+ },
+ },
+ "loraplus_lr_ratio": {
+ "en": {
+ "label": "LoRA+ LR ratio",
+ "info": "The LR ratio of the B matrices in LoRA.",
+ },
+ "ru": {
+ "label": "LoRA+ LR коэффициент",
+ "info": "Коэффициент LR матриц B в LoRA.",
+ },
+ "zh": {
+ "label": "LoRA+ 学习率比例",
+ "info": "LoRA+ 中 B 矩阵的学习率倍数。",
+ },
+ "ko": {
+ "label": "LoRA+ LR 비율",
+ "info": "LoRA에서 B 행렬의 LR 비율.",
+ },
+ "ja": {
+ "label": "LoRA+ LR 比率",
+ "info": "LoRA+ の B 行列の学習率倍率。",
+ },
+ },
+ "create_new_adapter": {
+ "en": {
+ "label": "Create new adapter",
+ "info": "Create a new adapter with randomly initialized weight upon the existing one.",
+ },
+ "ru": {
+ "label": "Создать новый адаптер",
+ "info": "Создать новый адаптер с случайной инициализацией веса на основе существующего.",
+ },
+ "zh": {
+ "label": "新建适配器",
+ "info": "在现有的适配器上创建一个随机初始化后的新适配器。",
+ },
+ "ko": {
+ "label": "새 어댑터 생성",
+ "info": "기존 어댑터 위에 무작위로 초기화된 가중치를 가진 새 어댑터를 생성합니다.",
+ },
+ "ja": {
+ "label": "新しいアダプターを作成",
+ "info": "既存のアダプター上にランダムに初期化された新しいアダプターを作成します。",
+ },
+ },
+ "use_rslora": {
+ "en": {
+ "label": "Use rslora",
+ "info": "Use the rank stabilization scaling factor for LoRA layer.",
+ },
+ "ru": {
+ "label": "Использовать rslora",
+ "info": "Использовать коэффициент масштабирования стабилизации ранга для слоя LoRA.",
+ },
+ "zh": {
+ "label": "使用 rslora",
+ "info": "对 LoRA 层使用秩稳定缩放方法。",
+ },
+ "ko": {
+ "label": "rslora 사용",
+ "info": "LoRA 레이어에 랭크 안정화 스케일링 계수를 사용합니다.",
+ },
+ "ja": {
+ "label": "rslora を使用",
+ "info": "LoRA 層にランク安定化スケーリング方法を使用します。",
+ },
+ },
+ "use_dora": {
+ "en": {
+ "label": "Use DoRA",
+ "info": "Use weight-decomposed LoRA.",
+ },
+ "ru": {
+ "label": "Используйте DoRA",
+ "info": "Используйте LoRA с декомпозицией весов.",
+ },
+ "zh": {
+ "label": "使用 DoRA",
+ "info": "使用权重分解的 LoRA。",
+ },
+ "ko": {
+ "label": "DoRA 사용",
+ "info": "가중치-분해 LoRA를 사용합니다.",
+ },
+ "ja": {
+ "label": "DoRA を使用",
+ "info": "重み分解された LoRA を使用します。",
+ },
+ },
+ "use_pissa": {
+ "en": {
+ "label": "Use PiSSA",
+ "info": "Use PiSSA method.",
+ },
+ "ru": {
+ "label": "используйте PiSSA",
+ "info": "Используйте метод PiSSA.",
+ },
+ "zh": {
+ "label": "使用 PiSSA",
+ "info": "使用 PiSSA 方法。",
+ },
+ "ko": {
+ "label": "PiSSA 사용",
+ "info": "PiSSA 방법을 사용합니다.",
+ },
+ "ja": {
+ "label": "PiSSA を使用",
+ "info": "PiSSA メソッドを使用します。",
+ },
+ },
+ "lora_target": {
+ "en": {
+ "label": "LoRA modules (optional)",
+ "info": "Name(s) of modules to apply LoRA. Use commas to separate multiple modules.",
+ },
+ "ru": {
+ "label": "Модули LoRA (опционально)",
+ "info": "Имена модулей для применения LoRA. Используйте запятые для разделения нескольких модулей.",
+ },
+ "zh": {
+ "label": "LoRA 作用模块(非必填)",
+ "info": "应用 LoRA 的模块名称。使用英文逗号分隔多个名称。",
+ },
+ "ko": {
+ "label": "LoRA 모듈 (선택 사항)",
+ "info": "LoRA를 적용할 모듈의 이름. 모듈 간에는 쉼표(,)로 구분하십시오.",
+ },
+ "ja": {
+ "label": "LoRA モジュール(オプション)",
+ "info": "LoRA を適用するモジュールの名前。複数のモジュールを区切るにはカンマを使用します。",
+ },
+ },
+ "additional_target": {
+ "en": {
+ "label": "Additional modules (optional)",
+ "info": (
+ "Name(s) of modules apart from LoRA layers to be set as trainable. "
+ "Use commas to separate multiple modules."
+ ),
+ },
+ "ru": {
+ "label": "Дополнительные модули (опционально)",
+ "info": (
+ "Имена модулей, кроме слоев LoRA, которые следует установить в качестве обучаемых. "
+ "Используйте запятые для разделения нескольких модулей."
+ ),
+ },
+ "zh": {
+ "label": "附加模块(非必填)",
+ "info": "除 LoRA 层以外的可训练模块名称。使用英文逗号分隔多个名称。",
+ },
+ "ko": {
+ "label": "추가 모듈 (선택 사항)",
+ "info": "LoRA 레이어 외에 학습 가능하게 설정할 모듈의 이름. 모듈 간에는 쉼표(,)로 구분하십시오.",
+ },
+ "ja": {
+ "label": "追加モジュール(オプション)",
+ "info": "LoRA 層以外の学習可能なモジュールの名前。複数のモジュールを区切るにはカンマを使用します。",
+ },
+ },
+ "rlhf_tab": {
+ "en": {
+ "label": "RLHF configurations",
+ },
+ "ru": {
+ "label": "Конфигурации RLHF",
+ },
+ "zh": {
+ "label": "RLHF 参数设置",
+ },
+ "ko": {
+ "label": "RLHF 구성",
+ },
+ "ja": {
+ "label": "RLHF 設定",
+ },
+ },
+ "pref_beta": {
+ "en": {
+ "label": "Beta value",
+ "info": "Value of the beta parameter in the loss.",
+ },
+ "ru": {
+ "label": "Бета значение",
+ "info": "Значение параметра бета в функции потерь.",
+ },
+ "zh": {
+ "label": "Beta 参数",
+ "info": "损失函数中 beta 超参数大小。",
+ },
+ "ko": {
+ "label": "베타 값",
+ "info": "손실 함수에서 베타 매개 변수의 값.",
+ },
+ "ja": {
+ "label": "Beta 値",
+ "info": "損失関数における beta ハイパーパラメータの値。",
+ },
+ },
+ "pref_ftx": {
+ "en": {
+ "label": "Ftx gamma",
+ "info": "The weight of SFT loss in the final loss.",
+ },
+ "ru": {
+ "label": "Ftx гамма",
+ "info": "Вес потери SFT в итоговой потере.",
+ },
+ "zh": {
+ "label": "Ftx gamma",
+ "info": "损失函数中 SFT 损失的权重大小。",
+ },
+ "ko": {
+ "label": "Ftx 감마",
+ "info": "최종 로스 함수에서 SFT 로스의 가중치.",
+ },
+ "ja": {
+ "label": "Ftx gamma",
+ "info": "損失関数における SFT 損失の重み。",
+ },
+ },
+ "pref_loss": {
+ "en": {
+ "label": "Loss type",
+ "info": "The type of the loss function.",
+ },
+ "ru": {
+ "label": "Тип потерь",
+ "info": "Тип функции потерь.",
+ },
+ "zh": {
+ "label": "损失类型",
+ "info": "损失函数的类型。",
+ },
+ "ko": {
+ "label": "로스 유형",
+ "info": "로스 함수의 유형.",
+ },
+ "ja": {
+ "label": "損失タイプ",
+ "info": "損失関数のタイプ。",
+ },
+ },
+ "reward_model": {
+ "en": {
+ "label": "Reward model",
+ "info": "Adapter of the reward model in PPO training.",
+ },
+ "ru": {
+ "label": "Модель вознаграждения",
+ "info": "Адаптер модели вознаграждения для обучения PPO.",
+ },
+ "zh": {
+ "label": "奖励模型",
+ "info": "PPO 训练中奖励模型的适配器路径。",
+ },
+ "ko": {
+ "label": "리워드 모델",
+ "info": "PPO 학습에서 사용할 리워드 모델의 어댑터.",
+ },
+ "ja": {
+ "label": "報酬モデル",
+ "info": "PPO トレーニングにおける報酬モデルのアダプター。",
+ },
+ },
+ "ppo_score_norm": {
+ "en": {
+ "label": "Score norm",
+ "info": "Normalizing scores in PPO training.",
+ },
+ "ru": {
+ "label": "Норма оценок",
+ "info": "Нормализация оценок в тренировке PPO.",
+ },
+ "zh": {
+ "label": "归一化分数",
+ "info": "PPO 训练中归一化奖励分数。",
+ },
+ "ko": {
+ "label": "스코어 정규화",
+ "info": "PPO 학습에서 스코어를 정규화합니다.",
+ },
+ "ja": {
+ "label": "スコア正規化",
+ "info": "PPO トレーニングにおける報酬スコアの正規化。",
+ },
+ },
+ "ppo_whiten_rewards": {
+ "en": {
+ "label": "Whiten rewards",
+ "info": "Whiten the rewards in PPO training.",
+ },
+ "ru": {
+ "label": "Белые вознаграждения",
+ "info": "Осветлите вознаграждения в обучении PPO.",
+ },
+ "zh": {
+ "label": "白化奖励",
+ "info": "PPO 训练中将奖励分数做白化处理。",
+ },
+ "ko": {
+ "label": "보상 백화",
+ "info": "PPO 훈련에서 보상을 백화(Whiten)합니다.",
+ },
+ "ja": {
+ "label": "報酬のホワイトニング",
+ "info": "PPO トレーニングにおいて報酬スコアをホワイトニング処理します。",
+ },
+ },
+ "mm_tab": {
+ "en": {
+ "label": "Multimodal configurations",
+ },
+ "ru": {
+ "label": "Конфигурации мультимедиа",
+ },
+ "zh": {
+ "label": "多模态参数设置",
+ },
+ "ko": {
+ "label": "멀티모달 구성",
+ },
+ "ja": {
+ "label": "多モーダル設定",
+ },
+ },
+ "freeze_vision_tower": {
+ "en": {
+ "label": "Freeze vision tower",
+ "info": "Freeze the vision tower in the model.",
+ },
+ "ru": {
+ "label": "Заморозить башню визиона",
+ "info": "Заморозить башню визиона в модели.",
+ },
+ "zh": {
+ "label": "冻结视觉编码器",
+ "info": "冻结模型中的视觉编码器。",
+ },
+ "ko": {
+ "label": "비전 타워 고정",
+ "info": "모델의 비전 타워를 고정합니다.",
+ },
+ "ja": {
+ "label": "ビジョンタワーの固定",
+ "info": "モデルのビジョンタワーを固定します。",
+ },
+ },
+ "freeze_multi_modal_projector": {
+ "en": {
+ "label": "Freeze multi-modal projector",
+ "info": "Freeze the multi-modal projector in the model.",
+ },
+ "ru": {
+ "label": "Заморозить мультимодальный проектор",
+ "info": "Заморозить мультимодальный проектор в модели.",
+ },
+ "zh": {
+ "label": "冻结多模态投影器",
+ "info": "冻结模型中的多模态投影器。",
+ },
+ "ko": {
+ "label": "멀티모달 프로젝터 고정",
+ "info": "모델의 멀티모달 프로젝터를 고정합니다.",
+ },
+ "ja": {
+ "label": "多モーダルプロジェクターの固定",
+ "info": "モデルの多モーダルプロジェクターを固定します。",
+ },
+ },
+ "freeze_language_model": {
+ "en": {
+ "label": "Freeze language model",
+ "info": "Freeze the language model in the model.",
+ },
+ "ru": {
+ "label": "Заморозить язык модели",
+ "info": "Заморозить язык модели в модели.",
+ },
+ "zh": {
+ "label": "冻结语言模型",
+ "info": "冻结模型中的语言模型。",
+ },
+ "ko": {
+ "label": "언어 모델 고정",
+ "info": "모델의 언어 모델을 고정합니다.",
+ },
+ "ja": {
+ "label": "言語モデルの固定",
+ "info": "モデルの言語モデルを固定します。",
+ },
+ },
+ "image_max_pixels": {
+ "en": {
+ "label": "Image max pixels",
+ "info": "The maximum number of pixels of image inputs.",
+ },
+ "ru": {
+ "label": "Максимальное количество пикселей изображения",
+ "info": "Максимальное количество пикселей изображения.",
+ },
+ "zh": {
+ "label": "图像最大像素",
+ "info": "输入图像的最大像素数。",
+ },
+ "ko": {
+ "label": "이미지 최대 픽셀",
+ "info": "이미지 입력의 최대 픽셀 수입니다.",
+ },
+ "ja": {
+ "label": "画像最大ピクセル",
+ "info": "画像入力の最大ピクセル数です。",
+ },
+ },
+ "image_min_pixels": {
+ "en": {
+ "label": "Image min pixels",
+ "info": "The minimum number of pixels of image inputs.",
+ },
+ "ru": {
+ "label": "Минимальное количество пикселей изображения",
+ "info": "Минимальное количество пикселей изображения.",
+ },
+ "zh": {
+ "label": "图像最小像素",
+ "info": "输入图像的最小像素数。",
+ },
+ "ko": {
+ "label": "이미지 최소 픽셀",
+ "info": "이미지 입력의 최소 픽셀 수입니다.",
+ },
+ "ja": {
+ "label": "画像最小ピクセル",
+ "info": "画像入力の最小ピクセル数です。",
+ },
+ },
+ "video_max_pixels": {
+ "en": {
+ "label": "Video max pixels",
+ "info": "The maximum number of pixels of video inputs.",
+ },
+ "ru": {
+ "label": "Максимальное количество пикселей видео",
+ "info": "Максимальное количество пикселей видео.",
+ },
+ "zh": {
+ "label": "视频最大像素",
+ "info": "输入视频的最大像素数。",
+ },
+ "ko": {
+ "label": "비디오 최대 픽셀",
+ "info": "비디오 입력의 최대 픽셀 수입니다.",
+ },
+ "ja": {
+ "label": "ビデオ最大ピクセル",
+ "info": "ビデオ入力の最大ピクセル数です。",
+ },
+ },
+ "video_min_pixels": {
+ "en": {
+ "label": "Video min pixels",
+ "info": "The minimum number of pixels of video inputs.",
+ },
+ "ru": {
+ "label": "Минимальное количество пикселей видео",
+ "info": "Минимальное количество пикселей видео.",
+ },
+ "zh": {
+ "label": "视频最小像素",
+ "info": "输入视频的最小像素数。",
+ },
+ "ko": {
+ "label": "비디오 최소 픽셀",
+ "info": "비디오 입력의 최소 픽셀 수입니다.",
+ },
+ "ja": {
+ "label": "ビデオ最小ピクセル",
+ "info": "ビデオ入力の最小ピクセル数です。",
+ },
+ },
+ "galore_tab": {
+ "en": {
+ "label": "GaLore configurations",
+ },
+ "ru": {
+ "label": "Конфигурации GaLore",
+ },
+ "zh": {
+ "label": "GaLore 参数设置",
+ },
+ "ko": {
+ "label": "GaLore 구성",
+ },
+ "ja": {
+ "label": "GaLore 設定",
+ },
+ },
+ "use_galore": {
+ "en": {
+ "label": "Use GaLore",
+ "info": "Use [GaLore](https://github.com/jiaweizzhao/GaLore) optimizer.",
+ },
+ "ru": {
+ "label": "Использовать GaLore",
+ "info": "Используйте оптимизатор [GaLore](https://github.com/jiaweizzhao/GaLore).",
+ },
+ "zh": {
+ "label": "使用 GaLore",
+ "info": "使用 [GaLore](https://github.com/jiaweizzhao/GaLore) 优化器。",
+ },
+ "ko": {
+ "label": "GaLore 사용",
+ "info": "[GaLore](https://github.com/jiaweizzhao/GaLore) 최적화를 사용하세요.",
+ },
+ "ja": {
+ "label": "GaLore を使用",
+ "info": "[GaLore](https://github.com/jiaweizzhao/GaLore) オプティマイザーを使用します。",
+ },
+ },
+ "galore_rank": {
+ "en": {
+ "label": "GaLore rank",
+ "info": "The rank of GaLore gradients.",
+ },
+ "ru": {
+ "label": "Ранг GaLore",
+ "info": "Ранг градиентов GaLore.",
+ },
+ "zh": {
+ "label": "GaLore 秩",
+ "info": "GaLore 梯度的秩大小。",
+ },
+ "ko": {
+ "label": "GaLore 랭크",
+ "info": "GaLore 그레디언트의 랭크.",
+ },
+ "ja": {
+ "label": "GaLore ランク",
+ "info": "GaLore 勾配のランク。",
+ },
+ },
+ "galore_update_interval": {
+ "en": {
+ "label": "Update interval",
+ "info": "Number of steps to update the GaLore projection.",
+ },
+ "ru": {
+ "label": "Интервал обновления",
+ "info": "Количество шагов для обновления проекции GaLore.",
+ },
+ "zh": {
+ "label": "更新间隔",
+ "info": "相邻两次投影更新的步数。",
+ },
+ "ko": {
+ "label": "업데이트 간격",
+ "info": "GaLore 프로젝션을 업데이트할 간격의 스텝 수.",
+ },
+ "ja": {
+ "label": "更新間隔",
+ "info": "隣接する 2 回の投影更新間のステップ数。",
+ },
+ },
+ "galore_scale": {
+ "en": {
+ "label": "GaLore scale",
+ "info": "GaLore scaling coefficient.",
+ },
+ "ru": {
+ "label": "LoRA Alpha",
+ "info": "Коэффициент масштабирования GaLore.",
+ },
+ "zh": {
+ "label": "GaLore 缩放系数",
+ "info": "GaLore 缩放系数大小。",
+ },
+ "ko": {
+ "label": "GaLore 스케일",
+ "info": "GaLore 스케일링 계수.",
+ },
+ "ja": {
+ "label": "GaLore スケール",
+ "info": "GaLore スケーリング係数。",
+ },
+ },
+ "galore_target": {
+ "en": {
+ "label": "GaLore modules",
+ "info": "Name(s) of modules to apply GaLore. Use commas to separate multiple modules.",
+ },
+ "ru": {
+ "label": "Модули GaLore",
+ "info": "Имена модулей для применения GaLore. Используйте запятые для разделения нескольких модулей.",
+ },
+ "zh": {
+ "label": "GaLore 作用模块",
+ "info": "应用 GaLore 的模块名称。使用英文逗号分隔多个名称。",
+ },
+ "ko": {
+ "label": "GaLore 모듈",
+ "info": "GaLore를 적용할 모듈의 이름. 모듈 간에는 쉼표(,)로 구분하십시오.",
+ },
+ "ja": {
+ "label": "GaLore モジュール",
+ "info": "GaLore を適用するモジュールの名前。複数のモジュールを区切るにはカンマを使用します。",
+ },
+ },
+ "apollo_tab": {
+ "en": {
+ "label": "APOLLO configurations",
+ },
+ "ru": {
+ "label": "Конфигурации APOLLO",
+ },
+ "zh": {
+ "label": "APOLLO 参数设置",
+ },
+ "ko": {
+ "label": "APOLLO 구성",
+ },
+ "ja": {
+ "label": "APOLLO 設定",
+ },
+ },
+ "use_apollo": {
+ "en": {
+ "label": "Use APOLLO",
+ "info": "Use [APOLLO](https://github.com/zhuhanqing/APOLLO) optimizer.",
+ },
+ "ru": {
+ "label": "Использовать APOLLO",
+ "info": "Используйте оптимизатор [APOLLO](https://github.com/zhuhanqing/APOLLO).",
+ },
+ "zh": {
+ "label": "使用 APOLLO",
+ "info": "使用 [APOLLO](https://github.com/zhuhanqing/APOLLO) 优化器。",
+ },
+ "ko": {
+ "label": "APOLLO 사용",
+ "info": "[APOLLO](https://github.com/zhuhanqing/APOLLO) 최적화를 사용하세요.",
+ },
+ "ja": {
+ "label": "APOLLO を使用",
+ "info": "[APOLLO](https://github.com/zhuhanqing/APOLLO) オプティマイザーを使用します。",
+ },
+ },
+ "apollo_rank": {
+ "en": {
+ "label": "APOLLO rank",
+ "info": "The rank of APOLLO gradients.",
+ },
+ "ru": {
+ "label": "Ранг APOLLO",
+ "info": "Ранг градиентов APOLLO.",
+ },
+ "zh": {
+ "label": "APOLLO 秩",
+ "info": "APOLLO 梯度的秩大小。",
+ },
+ "ko": {
+ "label": "APOLLO 랭크",
+ "info": "APOLLO 그레디언트의 랭크.",
+ },
+ "ja": {
+ "label": "APOLLO ランク",
+ "info": "APOLLO 勾配のランク。",
+ },
+ },
+ "apollo_update_interval": {
+ "en": {
+ "label": "Update interval",
+ "info": "Number of steps to update the APOLLO projection.",
+ },
+ "ru": {
+ "label": "Интервал обновления",
+ "info": "Количество шагов для обновления проекции APOLLO.",
+ },
+ "zh": {
+ "label": "更新间隔",
+ "info": "相邻两次投影更新的步数。",
+ },
+ "ko": {
+ "label": "업데이트 간격",
+ "info": "APOLLO 프로젝션을 업데이트할 간격의 스텝 수.",
+ },
+ "ja": {
+ "label": "更新間隔",
+ "info": "隣接する 2 回の投影更新間のステップ数。",
+ },
+ },
+ "apollo_scale": {
+ "en": {
+ "label": "APOLLO scale",
+ "info": "APOLLO scaling coefficient.",
+ },
+ "ru": {
+            "label": "Масштаб APOLLO",
+ "info": "Коэффициент масштабирования APOLLO.",
+ },
+ "zh": {
+ "label": "APOLLO 缩放系数",
+ "info": "APOLLO 缩放系数大小。",
+ },
+ "ko": {
+ "label": "APOLLO 스케일",
+ "info": "APOLLO 스케일링 계수.",
+ },
+ "ja": {
+ "label": "APOLLO スケール",
+ "info": "APOLLO スケーリング係数。",
+ },
+ },
+ "apollo_target": {
+ "en": {
+ "label": "APOLLO modules",
+ "info": "Name(s) of modules to apply APOLLO. Use commas to separate multiple modules.",
+ },
+ "ru": {
+ "label": "Модули APOLLO",
+ "info": "Имена модулей для применения APOLLO. Используйте запятые для разделения нескольких модулей.",
+ },
+ "zh": {
+ "label": "APOLLO 作用模块",
+ "info": "应用 APOLLO 的模块名称。使用英文逗号分隔多个名称。",
+ },
+ "ko": {
+ "label": "APOLLO 모듈",
+ "info": "APOLLO를 적용할 모듈의 이름. 모듈 간에는 쉼표(,)로 구분하십시오.",
+ },
+ "ja": {
+ "label": "APOLLO モジュール",
+ "info": "APOLLO を適用するモジュールの名前。複数のモジュールを区切るにはカンマを使用します。",
+ },
+ },
+ "badam_tab": {
+ "en": {
+ "label": "BAdam configurations",
+ },
+ "ru": {
+ "label": "Конфигурации BAdam",
+ },
+ "zh": {
+ "label": "BAdam 参数设置",
+ },
+ "ko": {
+ "label": "BAdam 설정",
+ },
+ "ja": {
+ "label": "BAdam 設定",
+ },
+ },
+ "use_badam": {
+ "en": {
+ "label": "Use BAdam",
+ "info": "Enable the [BAdam](https://github.com/Ledzy/BAdam) optimizer.",
+ },
+ "ru": {
+ "label": "Использовать BAdam",
+ "info": "Включите оптимизатор [BAdam](https://github.com/Ledzy/BAdam).",
+ },
+ "zh": {
+ "label": "使用 BAdam",
+ "info": "使用 [BAdam](https://github.com/Ledzy/BAdam) 优化器。",
+ },
+ "ko": {
+ "label": "BAdam 사용",
+ "info": "[BAdam](https://github.com/Ledzy/BAdam) 옵티마이저를 사용합니다.",
+ },
+ "ja": {
+ "label": "BAdam を使用",
+ "info": "[BAdam](https://github.com/Ledzy/BAdam) オプティマイザーを使用します。",
+ },
+ },
+ "badam_mode": {
+ "en": {
+ "label": "BAdam mode",
+ "info": "Whether to use layer-wise or ratio-wise BAdam optimizer.",
+ },
+ "ru": {
+ "label": "Режим BAdam",
+            "info": "Использовать ли оптимизатор BAdam с послойной или пропорциональной настройкой.",
+ },
+ "zh": {
+ "label": "BAdam 模式",
+ "info": "使用 layer-wise 或 ratio-wise BAdam 优化器。",
+ },
+ "ko": {
+ "label": "BAdam 모드",
+ "info": "레이어-BAdam 옵티마이저인지 비율-BAdam 옵티마이저인지.",
+ },
+ "ja": {
+ "label": "BAdam モード",
+ "info": "layer-wise または ratio-wise BAdam オプティマイザーを使用します。",
+ },
+ },
+ "badam_switch_mode": {
+ "en": {
+ "label": "Switch mode",
+ "info": "The strategy of picking block to update for layer-wise BAdam.",
+ },
+ "ru": {
+ "label": "Режим переключения",
+ "info": "Стратегия выбора блока для обновления для послойного BAdam.",
+ },
+ "zh": {
+ "label": "切换策略",
+ "info": "Layer-wise BAdam 优化器的块切换策略。",
+ },
+ "ko": {
+ "label": "스위치 모드",
+ "info": "레이어-BAdam을 위한 블록 선택 전략.",
+ },
+ "ja": {
+ "label": "切り替え戦略",
+ "info": "Layer-wise BAdam オプティマイザーのブロック切り替え戦略。",
+ },
+ },
+ "badam_switch_interval": {
+ "en": {
+ "label": "Switch interval",
+ "info": "Number of steps to update the block for layer-wise BAdam.",
+ },
+ "ru": {
+ "label": "Интервал переключения",
+            "info": "Количество шагов для обновления блока для послойного BAdam.",
+ },
+ "zh": {
+ "label": "切换频率",
+ "info": "Layer-wise BAdam 优化器的块切换频率。",
+ },
+ "ko": {
+ "label": "전환 간격",
+ "info": "레이어-BAdam을 위한 블록 업데이트 간 스텝 수.",
+ },
+ "ja": {
+ "label": "切り替え頻度",
+ "info": "Layer-wise BAdam オプティマイザーのブロック切り替え頻度。",
+ },
+ },
+ "badam_update_ratio": {
+ "en": {
+ "label": "Update ratio",
+ "info": "The ratio of the update for ratio-wise BAdam.",
+ },
+ "ru": {
+ "label": "Коэффициент обновления",
+ "info": "Коэффициент обновления для BAdam с учётом соотношений.",
+ },
+ "zh": {
+ "label": "Block 更新比例",
+ "info": "Ratio-wise BAdam 优化器的更新比例。",
+ },
+ "ko": {
+ "label": "업데이트 비율",
+ "info": "비율-BAdam의 업데이트 비율.",
+ },
+ "ja": {
+ "label": "ブロック更新比率",
+ "info": "Ratio-wise BAdam オプティマイザーの更新比率。",
+ },
+ },
+ "swanlab_tab": {
+ "en": {
+ "label": "SwanLab configurations",
+ },
+ "ru": {
+ "label": "Конфигурации SwanLab",
+ },
+ "zh": {
+ "label": "SwanLab 参数设置",
+ },
+ "ko": {
+ "label": "SwanLab 설정",
+ },
+ "ja": {
+ "label": "SwanLab 設定",
+ },
+ },
+ "use_swanlab": {
+ "en": {
+ "label": "Use SwanLab",
+ "info": "Enable [SwanLab](https://swanlab.cn/) for experiment tracking and visualization.",
+ },
+ "ru": {
+ "label": "Использовать SwanLab",
+ "info": "Включить [SwanLab](https://swanlab.cn/) для отслеживания и визуализации экспериментов.",
+ },
+ "zh": {
+ "label": "使用 SwanLab",
+ "info": "启用 [SwanLab](https://swanlab.cn/) 进行实验跟踪和可视化。",
+ },
+ "ko": {
+ "label": "SwanLab 사용",
+ "info": "[SwanLab](https://swanlab.cn/) 를 사용하여 실험을 추적하고 시각화합니다.",
+ },
+ "ja": {
+ "label": "SwanLab を使用",
+ "info": "[SwanLab](https://swanlab.cn/) を有効にして実験の追跡と可視化を行います。",
+ },
+ },
+ "swanlab_project": {
+ "en": {
+ "label": "SwanLab project",
+ },
+ "ru": {
+            "label": "Проект SwanLab",
+ },
+ "zh": {
+ "label": "SwanLab 项目名",
+ },
+ "ko": {
+ "label": "SwanLab 프로젝트",
+ },
+ "ja": {
+ "label": "SwanLab プロジェクト",
+ },
+ },
+ "swanlab_run_name": {
+ "en": {
+ "label": "SwanLab experiment name (optional)",
+ },
+ "ru": {
+            "label": "Имя эксперимента SwanLab (опционально)",
+ },
+ "zh": {
+ "label": "SwanLab 实验名(非必填)",
+ },
+ "ko": {
+ "label": "SwanLab 실험 이름 (선택 사항)",
+ },
+ "ja": {
+ "label": "SwanLab 実験名(オプション)",
+ },
+ },
+ "swanlab_workspace": {
+ "en": {
+ "label": "SwanLab workspace (optional)",
+ "info": "Workspace for SwanLab. Defaults to the personal workspace.",
+ },
+ "ru": {
+            "label": "Рабочая область SwanLab (опционально)",
+            "info": "Рабочая область SwanLab. По умолчанию используется личная рабочая область.",
+ },
+ "zh": {
+ "label": "SwanLab 工作区(非必填)",
+ "info": "SwanLab 的工作区,默认在个人工作区下。",
+ },
+ "ko": {
+ "label": "SwanLab 작업 영역 (선택 사항)",
+ "info": "SwanLab 조직의 작업 영역, 비어 있으면 기본적으로 개인 작업 영역에 있습니다.",
+ },
+ "ja": {
+ "label": "SwanLab ワークスペース(オプション)",
+ "info": "SwanLab のワークスペース。デフォルトでは個人ワークスペースです。",
+ },
+ },
+ "swanlab_api_key": {
+ "en": {
+ "label": "SwanLab API key (optional)",
+ "info": "API key for SwanLab.",
+ },
+ "ru": {
+            "label": "API ключ SwanLab (опционально)",
+ "info": "API ключ для SwanLab.",
+ },
+ "zh": {
+ "label": "SwanLab API 密钥(非必填)",
+ "info": "用于在编程环境登录 SwanLab,已登录则无需填写。",
+ },
+ "ko": {
+ "label": "SwanLab API 키 (선택 사항)",
+ "info": "SwanLab의 API 키.",
+ },
+ "ja": {
+ "label": "SwanLab API キー(オプション)",
+ "info": "SwanLab の API キー。",
+ },
+ },
+ "swanlab_mode": {
+ "en": {
+ "label": "SwanLab mode",
+ "info": "Cloud or offline version.",
+ },
+ "ru": {
+            "label": "Режим SwanLab",
+ "info": "Версия в облаке или локальная версия.",
+ },
+ "zh": {
+ "label": "SwanLab 模式",
+ "info": "使用云端版或离线版 SwanLab。",
+ },
+ "ko": {
+ "label": "SwanLab 모드",
+ "info": "클라우드 버전 또는 오프라인 버전.",
+ },
+ "ja": {
+ "label": "SwanLab モード",
+ "info": "クラウド版またはオフライン版 SwanLab を使用します。",
+ },
+ },
+ "swanlab_logdir": {
+ "en": {
+ "label": "SwanLab log directory",
+ "info": "The log directory for SwanLab.",
+ },
+ "ru": {
+            "label": "Каталог логов SwanLab",
+            "info": "Каталог логов SwanLab.",
+ },
+ "zh": {
+ "label": "SwanLab 日志目录",
+ "info": "SwanLab 的日志目录。",
+ },
+ "ko": {
+ "label": "SwanLab 로그 디렉토리",
+ "info": "SwanLab의 로그 디렉토리.",
+ },
+ "ja": {
+ "label": "SwanLab ログ ディレクトリ",
+ "info": "SwanLab のログ ディレクトリ。",
+ },
+ },
+ "cmd_preview_btn": {
+ "en": {
+ "value": "Preview command",
+ },
+ "ru": {
+ "value": "Просмотр команды",
+ },
+ "zh": {
+ "value": "预览命令",
+ },
+ "ko": {
+ "value": "명령어 미리보기",
+ },
+ "ja": {
+ "value": "コマンドをプレビュー",
+ },
+ },
+ "arg_save_btn": {
+ "en": {
+ "value": "Save arguments",
+ },
+ "ru": {
+ "value": "Сохранить аргументы",
+ },
+ "zh": {
+ "value": "保存训练参数",
+ },
+ "ko": {
+ "value": "Argument 저장",
+ },
+ "ja": {
+ "value": "引数を保存",
+ },
+ },
+ "arg_load_btn": {
+ "en": {
+ "value": "Load arguments",
+ },
+ "ru": {
+ "value": "Загрузить аргументы",
+ },
+ "zh": {
+ "value": "载入训练参数",
+ },
+ "ko": {
+ "value": "Argument 불러오기",
+ },
+ "ja": {
+ "value": "引数を読み込む",
+ },
+ },
+ "start_btn": {
+ "en": {
+ "value": "Start",
+ },
+ "ru": {
+ "value": "Начать",
+ },
+ "zh": {
+ "value": "开始",
+ },
+ "ko": {
+ "value": "시작",
+ },
+ "ja": {
+ "value": "開始",
+ },
+ },
+ "stop_btn": {
+ "en": {
+ "value": "Abort",
+ },
+ "ru": {
+ "value": "Прервать",
+ },
+ "zh": {
+ "value": "中断",
+ },
+ "ko": {
+ "value": "중단",
+ },
+ "ja": {
+ "value": "中断",
+ },
+ },
+ "output_dir": {
+ "en": {
+ "label": "Output dir",
+ "info": "Directory for saving results.",
+ },
+ "ru": {
+ "label": "Выходной каталог",
+ "info": "Каталог для сохранения результатов.",
+ },
+ "zh": {
+ "label": "输出目录",
+ "info": "保存结果的路径。",
+ },
+ "ko": {
+ "label": "출력 디렉토리",
+ "info": "결과를 저장할 디렉토리.",
+ },
+ "ja": {
+ "label": "出力ディレクトリ",
+ "info": "結果を保存するパス。",
+ },
+ },
+ "config_path": {
+ "en": {
+ "label": "Config path",
+ "info": "Path to config saving arguments.",
+ },
+ "ru": {
+ "label": "Путь к конфигурации",
+ "info": "Путь для сохранения аргументов конфигурации.",
+ },
+ "zh": {
+ "label": "配置路径",
+ "info": "保存训练参数的配置文件路径。",
+ },
+ "ko": {
+ "label": "설정 경로",
+ "info": "Arguments 저장 파일 경로.",
+ },
+ "ja": {
+ "label": "設定パス",
+ "info": "トレーニングパラメータを保存する設定ファイルのパス。",
+ },
+ },
+ "device_count": {
+ "en": {
+ "label": "Device count",
+ "info": "Number of devices available.",
+ },
+ "ru": {
+ "label": "Количество устройств",
+ "info": "Количество доступных устройств.",
+ },
+ "zh": {
+ "label": "设备数量",
+ "info": "当前可用的运算设备数。",
+ },
+ "ko": {
+ "label": "디바이스 수",
+ "info": "사용 가능한 디바이스 수.",
+ },
+ "ja": {
+ "label": "デバイス数",
+ "info": "現在利用可能な演算デバイス数。",
+ },
+ },
+ "ds_stage": {
+ "en": {
+ "label": "DeepSpeed stage",
+ "info": "DeepSpeed stage for distributed training.",
+ },
+ "ru": {
+ "label": "Этап DeepSpeed",
+ "info": "Этап DeepSpeed для распределенного обучения.",
+ },
+ "zh": {
+ "label": "DeepSpeed stage",
+ "info": "多卡训练的 DeepSpeed stage。",
+ },
+ "ko": {
+ "label": "DeepSpeed 단계",
+ "info": "분산 학습을 위한 DeepSpeed 단계.",
+ },
+ "ja": {
+ "label": "DeepSpeed stage",
+ "info": "マルチ GPU トレーニングの DeepSpeed stage。",
+ },
+ },
+ "ds_offload": {
+ "en": {
+ "label": "Enable offload",
+ "info": "Enable DeepSpeed offload (slow down training).",
+ },
+ "ru": {
+ "label": "Включить выгрузку",
+            "info": "Включить выгрузку DeepSpeed (замедлит обучение).",
+ },
+ "zh": {
+ "label": "使用 offload",
+ "info": "使用 DeepSpeed offload(会减慢速度)。",
+ },
+ "ko": {
+ "label": "오프로딩 활성화",
+ "info": "DeepSpeed 오프로딩 활성화 (훈련 속도 느려짐).",
+ },
+ "ja": {
+ "label": "オフロードを使用",
+ "info": "DeepSpeed オフロードを使用します(速度が遅くなります)。",
+ },
+ },
+ "output_box": {
+ "en": {
+ "value": "Ready.",
+ },
+ "ru": {
+ "value": "Готово.",
+ },
+ "zh": {
+ "value": "准备就绪。",
+ },
+ "ko": {
+ "value": "준비 완료.",
+ },
+ "ja": {
+ "value": "準備完了。",
+ },
+ },
+ "loss_viewer": {
+ "en": {
+ "label": "Loss",
+ },
+ "ru": {
+ "label": "Потери",
+ },
+ "zh": {
+ "label": "损失",
+ },
+ "ko": {
+ "label": "손실",
+ },
+ "ja": {
+ "label": "損失",
+ },
+ },
+ "predict": {
+ "en": {
+ "label": "Save predictions",
+ },
+ "ru": {
+ "label": "Сохранить предсказания",
+ },
+ "zh": {
+ "label": "保存预测结果",
+ },
+ "ko": {
+ "label": "예측 결과 저장",
+ },
+ "ja": {
+ "label": "予測結果を保存",
+ },
+ },
+ "infer_backend": {
+ "en": {
+ "label": "Inference engine",
+ },
+ "ru": {
+ "label": "Инференс движок",
+ },
+ "zh": {
+ "label": "推理引擎",
+ },
+ "ko": {
+ "label": "추론 엔진",
+ },
+ "ja": {
+ "label": "推論エンジン",
+ },
+ },
+ "infer_dtype": {
+ "en": {
+ "label": "Inference data type",
+ },
+ "ru": {
+ "label": "Тип данных для вывода",
+ },
+ "zh": {
+ "label": "推理数据类型",
+ },
+ "ko": {
+ "label": "추론 데이터 유형",
+ },
+ "ja": {
+ "label": "推論データタイプ",
+ },
+ },
+ "load_btn": {
+ "en": {
+ "value": "Load model",
+ },
+ "ru": {
+ "value": "Загрузить модель",
+ },
+ "zh": {
+ "value": "加载模型",
+ },
+ "ko": {
+ "value": "모델 불러오기",
+ },
+ "ja": {
+ "value": "モデルを読み込む",
+ },
+ },
+ "unload_btn": {
+ "en": {
+ "value": "Unload model",
+ },
+ "ru": {
+ "value": "Выгрузить модель",
+ },
+ "zh": {
+ "value": "卸载模型",
+ },
+ "ko": {
+ "value": "모델 언로드",
+ },
+ "ja": {
+ "value": "モデルをアンロード",
+ },
+ },
+ "info_box": {
+ "en": {
+ "value": "Model unloaded, please load a model first.",
+ },
+ "ru": {
+ "value": "Модель не загружена, загрузите модель сначала.",
+ },
+ "zh": {
+ "value": "模型未加载,请先加载模型。",
+ },
+ "ko": {
+ "value": "모델이 언로드되었습니다. 모델을 먼저 불러오십시오.",
+ },
+ "ja": {
+ "value": "モデルがロードされていません。最初にモデルをロードしてください。",
+ },
+ },
+ "role": {
+ "en": {
+ "label": "Role",
+ },
+ "ru": {
+ "label": "Роль",
+ },
+ "zh": {
+ "label": "角色",
+ },
+ "ko": {
+ "label": "역할",
+ },
+ "ja": {
+ "label": "役割",
+ },
+ },
+ "system": {
+ "en": {
+ "placeholder": "System prompt (optional)",
+ },
+ "ru": {
+ "placeholder": "Системный запрос (по желанию)",
+ },
+ "zh": {
+ "placeholder": "系统提示词(非必填)",
+ },
+ "ko": {
+ "placeholder": "시스템 프롬프트 (선택 사항)",
+ },
+ "ja": {
+ "placeholder": "システムプロンプト(オプション)",
+ },
+ },
+ "tools": {
+ "en": {
+ "placeholder": "Tools (optional)",
+ },
+ "ru": {
+ "placeholder": "Инструменты (по желанию)",
+ },
+ "zh": {
+ "placeholder": "工具列表(非必填)",
+ },
+ "ko": {
+ "placeholder": "툴 (선택 사항)",
+ },
+ "ja": {
+ "placeholder": "ツールリスト(オプション)",
+ },
+ },
+ "image": {
+ "en": {
+ "label": "Image (optional)",
+ },
+ "ru": {
+ "label": "Изображение (по желанию)",
+ },
+ "zh": {
+ "label": "图像(非必填)",
+ },
+ "ko": {
+ "label": "이미지 (선택 사항)",
+ },
+ "ja": {
+ "label": "画像(オプション)",
+ },
+ },
+ "video": {
+ "en": {
+ "label": "Video (optional)",
+ },
+ "ru": {
+ "label": "Видео (по желанию)",
+ },
+ "zh": {
+ "label": "视频(非必填)",
+ },
+ "ko": {
+ "label": "비디오 (선택 사항)",
+ },
+ "ja": {
+ "label": "動画(オプション)",
+ },
+ },
+ "query": {
+ "en": {
+ "placeholder": "Input...",
+ },
+ "ru": {
+ "placeholder": "Ввод...",
+ },
+ "zh": {
+ "placeholder": "输入...",
+ },
+ "ko": {
+ "placeholder": "입력...",
+ },
+ "ja": {
+ "placeholder": "入力...",
+ },
+ },
+ "submit_btn": {
+ "en": {
+ "value": "Submit",
+ },
+ "ru": {
+ "value": "Отправить",
+ },
+ "zh": {
+ "value": "提交",
+ },
+ "ko": {
+ "value": "제출",
+ },
+ "ja": {
+ "value": "送信",
+ },
+ },
+ "max_length": {
+ "en": {
+ "label": "Maximum length",
+ },
+ "ru": {
+ "label": "Максимальная длина",
+ },
+ "zh": {
+ "label": "最大长度",
+ },
+ "ko": {
+ "label": "최대 길이",
+ },
+ "ja": {
+ "label": "最大長",
+ },
+ },
+ "max_new_tokens": {
+ "en": {
+ "label": "Maximum new tokens",
+ },
+ "ru": {
+ "label": "Максимальное количество новых токенов",
+ },
+ "zh": {
+ "label": "最大生成长度",
+ },
+ "ko": {
+ "label": "응답의 최대 길이",
+ },
+ "ja": {
+ "label": "最大生成長",
+ },
+ },
+ "top_p": {
+ "en": {
+ "label": "Top-p",
+ },
+ "ru": {
+            "label": "Top-p",
+ },
+ "zh": {
+ "label": "Top-p 采样值",
+ },
+ "ko": {
+ "label": "Top-p",
+ },
+ "ja": {
+ "label": "Top-p",
+ },
+ },
+ "temperature": {
+ "en": {
+ "label": "Temperature",
+ },
+ "ru": {
+ "label": "Температура",
+ },
+ "zh": {
+ "label": "温度系数",
+ },
+ "ko": {
+ "label": "온도",
+ },
+ "ja": {
+ "label": "温度",
+ },
+ },
+ "skip_special_tokens": {
+ "en": {
+ "label": "Skip special tokens",
+ },
+ "ru": {
+ "label": "Пропустить специальные токены",
+ },
+ "zh": {
+ "label": "跳过特殊 token",
+ },
+ "ko": {
+ "label": "스페셜 토큰을 건너뛰기",
+ },
+ "ja": {
+ "label": "スペシャルトークンをスキップ",
+ },
+ },
+ "escape_html": {
+ "en": {
+ "label": "Escape HTML tags",
+ },
+ "ru": {
+ "label": "Исключить HTML теги",
+ },
+ "zh": {
+ "label": "转义 HTML 标签",
+ },
+ "ko": {
+ "label": "HTML 태그 이스케이프",
+ },
+ "ja": {
+ "label": "HTML タグをエスケープ",
+ },
+ },
+ "clear_btn": {
+ "en": {
+ "value": "Clear history",
+ },
+ "ru": {
+ "value": "Очистить историю",
+ },
+ "zh": {
+ "value": "清空历史",
+ },
+ "ko": {
+ "value": "기록 지우기",
+ },
+ "ja": {
+ "value": "履歴をクリア",
+ },
+ },
+ "export_size": {
+ "en": {
+ "label": "Max shard size (GB)",
+ "info": "The maximum size for a model file.",
+ },
+ "ru": {
+ "label": "Максимальный размер фрагмента (ГБ)",
+ "info": "Максимальный размер файла модели.",
+ },
+ "zh": {
+ "label": "最大分块大小(GB)",
+ "info": "单个模型文件的最大大小。",
+ },
+ "ko": {
+ "label": "최대 샤드 크기 (GB)",
+ "info": "모델 파일의 최대 크기.",
+ },
+ "ja": {
+ "label": "最大シャードサイズ(GB)",
+ "info": "単一のモデルファイルの最大サイズ。",
+ },
+ },
+ "export_quantization_bit": {
+ "en": {
+            "label": "Export quantization bit",
+ "info": "Quantizing the exported model.",
+ },
+ "ru": {
+ "label": "Экспорт бита квантования",
+ "info": "Квантование экспортируемой модели.",
+ },
+ "zh": {
+ "label": "导出量化等级",
+ "info": "量化导出模型。",
+ },
+ "ko": {
+ "label": "양자화 비트 내보내기",
+ "info": "내보낸 모델의 양자화.",
+ },
+ "ja": {
+ "label": "量子化ビットをエクスポート",
+ "info": "エクスポートするモデルを量子化します。",
+ },
+ },
+ "export_quantization_dataset": {
+ "en": {
+ "label": "Export quantization dataset",
+ "info": "The calibration dataset used for quantization.",
+ },
+ "ru": {
+ "label": "Экспорт набора данных для квантования",
+ "info": "Набор данных калибровки, используемый для квантования.",
+ },
+ "zh": {
+ "label": "导出量化数据集",
+ "info": "量化过程中使用的校准数据集。",
+ },
+ "ko": {
+ "label": "양자화 데이터셋 내보내기",
+ "info": "양자화에 사용되는 교정 데이터셋.",
+ },
+ "ja": {
+ "label": "量子化データセットをエクスポート",
+ "info": "量子化プロセスで使用されるキャリブレーションデータセット。",
+ },
+ },
+ "export_device": {
+ "en": {
+ "label": "Export device",
+ "info": "Which device should be used to export model.",
+ },
+ "ru": {
+            "label": "Устройство экспорта",
+ "info": "Какое устройство следует использовать для экспорта модели.",
+ },
+ "zh": {
+ "label": "导出设备",
+ "info": "导出模型使用的设备类型。",
+ },
+ "ko": {
+ "label": "내보낼 장치",
+ "info": "모델을 내보내는 데 사용할 장치.",
+ },
+ "ja": {
+ "label": "エクスポートデバイス",
+ "info": "モデルをエクスポートするために使用するデバイスタイプ。",
+ },
+ },
+ "export_legacy_format": {
+ "en": {
+ "label": "Export legacy format",
+ "info": "Do not use safetensors to save the model.",
+ },
+ "ru": {
+ "label": "Экспорт в устаревший формат",
+ "info": "Не использовать safetensors для сохранения модели.",
+ },
+ "zh": {
+ "label": "导出旧格式",
+ "info": "不使用 safetensors 格式保存模型。",
+ },
+ "ko": {
+ "label": "레거시 형식 내보내기",
+ "info": "모델을 저장하는 데 safetensors를 사용하지 않습니다.",
+ },
+ "ja": {
+ "label": "レガシーフォーマットをエクスポート",
+ "info": "safetensors フォーマットを使用せずにモデルを保存します。",
+ },
+ },
+ "export_dir": {
+ "en": {
+ "label": "Export dir",
+ "info": "Directory to save exported model.",
+ },
+ "ru": {
+ "label": "Каталог экспорта",
+ "info": "Каталог для сохранения экспортированной модели.",
+ },
+ "zh": {
+ "label": "导出目录",
+ "info": "保存导出模型的文件夹路径。",
+ },
+ "ko": {
+ "label": "내보내기 디렉토리",
+ "info": "내보낸 모델을 저장할 디렉토리.",
+ },
+ "ja": {
+ "label": "エクスポートディレクトリ",
+ "info": "エクスポートしたモデルを保存するフォルダのパス。",
+ },
+ },
+ "export_hub_model_id": {
+ "en": {
+ "label": "HF Hub ID (optional)",
+ "info": "Repo ID for uploading model to Hugging Face hub.",
+ },
+ "ru": {
+ "label": "HF Hub ID (опционально)",
+ "info": "Идентификатор репозитория для загрузки модели на Hugging Face hub.",
+ },
+ "zh": {
+ "label": "HF Hub ID(非必填)",
+ "info": "用于将模型上传至 Hugging Face Hub 的仓库 ID。",
+ },
+ "ko": {
+ "label": "HF 허브 ID (선택 사항)",
+ "info": "모델을 Hugging Face 허브에 업로드하기 위한 레포 ID.",
+ },
+ "ja": {
+ "label": "HF Hub ID(オプション)",
+ "info": "Hugging Face Hub にモデルをアップロードするためのリポジトリ ID。",
+ },
+ },
+ "export_btn": {
+ "en": {
+ "value": "Export",
+ },
+ "ru": {
+ "value": "Экспорт",
+ },
+ "zh": {
+ "value": "开始导出",
+ },
+ "ko": {
+ "value": "내보내기",
+ },
+ "ja": {
+ "value": "エクスポート",
+ },
+ },
+ "device_memory": {
+ "en": {
+ "label": "Device memory",
+ "info": "Current memory usage of the device (GB).",
+ },
+ "ru": {
+ "label": "Память устройства",
+            "info": "Текущее использование памяти устройства (GB).",
+ },
+ "zh": {
+ "label": "设备显存",
+ "info": "当前设备的显存(GB)。",
+ },
+ "ko": {
+ "label": "디바이스 메모리",
+ "info": "지금 사용 중인 기기 메모리 (GB).",
+ },
+ "ja": {
+ "label": "デバイスメモリ",
+ "info": "現在のデバイスのメモリ(GB)。",
+ },
+ },
+}
+
+
+ALERTS = {
+ "err_conflict": {
+        "en": "A process is running, please abort it first.",
+ "ru": "Процесс уже запущен, пожалуйста, сначала прервите его.",
+ "zh": "任务已存在,请先中断训练。",
+ "ko": "프로세스가 실행 중입니다. 먼저 중단하십시오.",
+ "ja": "プロセスが実行中です。最初に中断してください。",
+ },
+ "err_exists": {
+ "en": "You have loaded a model, please unload it first.",
+        "ru": "Вы загрузили модель, сначала выгрузите её.",
+ "zh": "模型已存在,请先卸载模型。",
+ "ko": "모델이 로드되었습니다. 먼저 언로드하십시오.",
+ "ja": "モデルがロードされています。最初にアンロードしてください。",
+ },
+ "err_no_model": {
+ "en": "Please select a model.",
+ "ru": "Пожалуйста, выберите модель.",
+ "zh": "请选择模型。",
+ "ko": "모델을 선택하십시오.",
+ "ja": "モデルを選択してください。",
+ },
+ "err_no_path": {
+ "en": "Model not found.",
+ "ru": "Модель не найдена.",
+ "zh": "模型未找到。",
+ "ko": "모델을 찾을 수 없습니다.",
+ "ja": "モデルが見つかりません。",
+ },
+ "err_no_dataset": {
+ "en": "Please choose a dataset.",
+ "ru": "Пожалуйста, выберите набор данных.",
+ "zh": "请选择数据集。",
+ "ko": "데이터 세트를 선택하십시오.",
+ "ja": "データセットを選択してください。",
+ },
+ "err_no_adapter": {
+ "en": "Please select an adapter.",
+ "ru": "Пожалуйста, выберите адаптер.",
+ "zh": "请选择适配器。",
+ "ko": "어댑터를 선택하십시오.",
+ "ja": "アダプターを選択してください。",
+ },
+ "err_no_output_dir": {
+ "en": "Please provide output dir.",
+ "ru": "Пожалуйста, укажите выходную директорию.",
+ "zh": "请填写输出目录。",
+ "ko": "출력 디렉토리를 제공하십시오.",
+ "ja": "出力ディレクトリを入力してください。",
+ },
+ "err_no_reward_model": {
+ "en": "Please select a reward model.",
+ "ru": "Пожалуйста, выберите модель вознаграждения.",
+ "zh": "请选择奖励模型。",
+ "ko": "리워드 모델을 선택하십시오.",
+ "ja": "報酬モデルを選択してください。",
+ },
+ "err_no_export_dir": {
+ "en": "Please provide export dir.",
+ "ru": "Пожалуйста, укажите каталог для экспорта.",
+ "zh": "请填写导出目录。",
+ "ko": "Export 디렉토리를 제공하십시오.",
+ "ja": "エクスポートディレクトリを入力してください。",
+ },
+ "err_gptq_lora": {
+ "en": "Please merge adapters before quantizing the model.",
+ "ru": "Пожалуйста, объедините адаптеры перед квантованием модели.",
+ "zh": "量化模型前请先合并适配器。",
+ "ko": "모델을 양자화하기 전에 어댑터를 병합하십시오.",
+ "ja": "モデルを量子化する前にアダプターをマージしてください。",
+ },
+ "err_failed": {
+ "en": "Failed.",
+ "ru": "Ошибка.",
+ "zh": "训练出错。",
+ "ko": "실패했습니다.",
+ "ja": "失敗しました。",
+ },
+ "err_demo": {
+ "en": "Training is unavailable in demo mode, duplicate the space to a private one first.",
+ "ru": "Обучение недоступно в демонстрационном режиме, сначала скопируйте пространство в частное.",
+ "zh": "展示模式不支持训练,请先复制到私人空间。",
+ "ko": "데모 모드에서는 훈련을 사용할 수 없습니다. 먼저 프라이빗 레포지토리로 작업 공간을 복제하십시오.",
+ "ja": "デモモードではトレーニングは利用できません。最初にプライベートスペースに複製してください。",
+ },
+ "err_tool_name": {
+ "en": "Tool name not found.",
+ "ru": "Имя инструмента не найдено.",
+ "zh": "工具名称未找到。",
+ "ko": "툴 이름을 찾을 수 없습니다.",
+ "ja": "ツール名が見つかりません。",
+ },
+ "err_json_schema": {
+ "en": "Invalid JSON schema.",
+ "ru": "Неверная схема JSON.",
+ "zh": "Json 格式错误。",
+ "ko": "잘못된 JSON 스키마입니다.",
+ "ja": "JSON スキーマが無効です。",
+ },
+ "err_config_not_found": {
+ "en": "Config file is not found.",
+ "ru": "Файл конфигурации не найден.",
+ "zh": "未找到配置文件。",
+ "ko": "Config 파일을 찾을 수 없습니다.",
+ "ja": "設定ファイルが見つかりません。",
+ },
+ "warn_no_cuda": {
+ "en": "CUDA environment was not detected.",
+ "ru": "Среда CUDA не обнаружена.",
+ "zh": "未检测到 CUDA 环境。",
+ "ko": "CUDA 환경이 감지되지 않았습니다.",
+ "ja": "CUDA 環境が検出されませんでした。",
+ },
+ "warn_output_dir_exists": {
+ "en": "Output dir already exists, will resume training from here.",
+ "ru": "Выходной каталог уже существует, обучение будет продолжено отсюда.",
+ "zh": "输出目录已存在,将从该断点恢复训练。",
+ "ko": "출력 디렉토리가 이미 존재합니다. 위 출력 디렉토리에 저장된 학습을 재개합니다.",
+ "ja": "出力ディレクトリが既に存在します。このチェックポイントからトレーニングを再開します。",
+ },
+ "warn_no_instruct": {
+ "en": "You are using a non-instruct model, please fine-tune it first.",
+        "ru": "Вы используете модель без инструкций, пожалуйста, сначала выполните её донастройку.",
+ "zh": "您正在使用非指令模型,请先对其进行微调。",
+ "ko": "당신은 지시하지 않은 모델을 사용하고 있습니다. 먼저 이를 미세 조정해 주세요.",
+        "ja": "インストラクションモデルを使用していません。まず微調整を行ってください。",
+ },
+ "info_aborting": {
+ "en": "Aborted, wait for terminating...",
+ "ru": "Прервано, ожидание завершения...",
+ "zh": "训练中断,正在等待进程结束……",
+ "ko": "중단되었습니다. 종료를 기다리십시오...",
+ "ja": "トレーニングが中断されました。プロセスの終了を待っています...",
+ },
+ "info_aborted": {
+ "en": "Ready.",
+ "ru": "Готово.",
+ "zh": "准备就绪。",
+ "ko": "준비되었습니다.",
+ "ja": "準備完了。",
+ },
+ "info_finished": {
+ "en": "Finished.",
+ "ru": "Завершено.",
+ "zh": "训练完毕。",
+ "ko": "완료되었습니다.",
+ "ja": "トレーニングが完了しました。",
+ },
+ "info_config_saved": {
+ "en": "Arguments have been saved at: ",
+ "ru": "Аргументы были сохранены по адресу: ",
+ "zh": "训练参数已保存至:",
+ "ko": "매개변수가 저장되었습니다: ",
+ "ja": "トレーニングパラメータが保存されました: ",
+ },
+ "info_config_loaded": {
+ "en": "Arguments have been restored.",
+ "ru": "Аргументы были восстановлены.",
+ "zh": "训练参数已载入。",
+ "ko": "매개변수가 복원되었습니다.",
+ "ja": "トレーニングパラメータが読み込まれました。",
+ },
+ "info_loading": {
+ "en": "Loading model...",
+ "ru": "Загрузка модели...",
+ "zh": "加载中……",
+ "ko": "모델 로딩 중...",
+ "ja": "モデルをロード中...",
+ },
+ "info_unloading": {
+ "en": "Unloading model...",
+ "ru": "Выгрузка модели...",
+ "zh": "卸载中……",
+ "ko": "모델 언로딩 중...",
+ "ja": "モデルをアンロード中...",
+ },
+ "info_loaded": {
+ "en": "Model loaded, now you can chat with your model!",
+ "ru": "Модель загружена, теперь вы можете общаться с вашей моделью!",
+ "zh": "模型已加载,可以开始聊天了!",
+ "ko": "모델이 로드되었습니다. 이제 모델과 채팅할 수 있습니다!",
+ "ja": "モデルがロードされました。チャットを開始できます!",
+ },
+ "info_unloaded": {
+ "en": "Model unloaded.",
+ "ru": "Модель выгружена.",
+ "zh": "模型已卸载。",
+ "ko": "모델이 언로드되었습니다.",
+ "ja": "モデルがアンロードされました。",
+ },
+ "info_thinking": {
+ "en": "🌀 Thinking...",
+ "ru": "🌀 Думаю...",
+ "zh": "🌀 思考中...",
+ "ko": "🌀 생각 중...",
+ "ja": "🌀 考えています...",
+ },
+ "info_thought": {
+ "en": "✅ Thought",
+        "ru": "✅ Размышление завершено",
+ "zh": "✅ 思考完成",
+ "ko": "✅ 생각이 완료되었습니다",
+ "ja": "✅ 思考完了",
+ },
+ "info_exporting": {
+ "en": "Exporting model...",
+ "ru": "Экспорт модели...",
+ "zh": "正在导出模型……",
+ "ko": "모델 내보내기 중...",
+ "ja": "モデルをエクスポート中...",
+ },
+ "info_exported": {
+ "en": "Model exported.",
+ "ru": "Модель экспортирована.",
+ "zh": "模型导出完成。",
+ "ko": "모델이 내보내졌습니다.",
+ "ja": "モデルのエクスポートが完了しました。",
+ },
+ "info_swanlab_link": {
+ "en": "### SwanLab Link\n",
+ "ru": "### SwanLab ссылка\n",
+ "zh": "### SwanLab 链接\n",
+ "ko": "### SwanLab 링크\n",
+ "ja": "### SwanLab リンク\n",
+ },
+}
diff --git a/src/train_sft/src/llamafactory/webui/manager.py b/src/train_sft/src/llamafactory/webui/manager.py
new file mode 100644
index 0000000000000000000000000000000000000000..e762fa6b5e427a5b0a77e4faa7e28f413c243863
--- /dev/null
+++ b/src/train_sft/src/llamafactory/webui/manager.py
@@ -0,0 +1,70 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from collections.abc import Generator
+from typing import TYPE_CHECKING
+
+
+if TYPE_CHECKING:
+ from gradio.components import Component
+
+
+class Manager:
+ r"""A class to manage all the gradio components in Web UI."""
+
+ def __init__(self) -> None:
+ self._id_to_elem: dict[str, Component] = {}
+ self._elem_to_id: dict[Component, str] = {}
+
+ def add_elems(self, tab_name: str, elem_dict: dict[str, "Component"]) -> None:
+ r"""Add elements to manager."""
+ for elem_name, elem in elem_dict.items():
+ elem_id = f"{tab_name}.{elem_name}"
+ self._id_to_elem[elem_id] = elem
+ self._elem_to_id[elem] = elem_id
+
+ def get_elem_list(self) -> list["Component"]:
+ r"""Return the list of all elements."""
+ return list(self._id_to_elem.values())
+
+ def get_elem_iter(self) -> Generator[tuple[str, "Component"], None, None]:
+ r"""Return an iterator over all elements with their names."""
+ for elem_id, elem in self._id_to_elem.items():
+ yield elem_id.split(".")[-1], elem
+
+ def get_elem_by_id(self, elem_id: str) -> "Component":
+ r"""Get element by id.
+
+ Example: top.lang, train.dataset
+ """
+ return self._id_to_elem[elem_id]
+
+ def get_id_by_elem(self, elem: "Component") -> str:
+ r"""Get id by element."""
+ return self._elem_to_id[elem]
+
+ def get_base_elems(self) -> set["Component"]:
+ r"""Get the base elements that are commonly used."""
+ return {
+ self._id_to_elem["top.lang"],
+ self._id_to_elem["top.model_name"],
+ self._id_to_elem["top.model_path"],
+ self._id_to_elem["top.finetuning_type"],
+ self._id_to_elem["top.checkpoint_path"],
+ self._id_to_elem["top.quantization_bit"],
+ self._id_to_elem["top.quantization_method"],
+ self._id_to_elem["top.template"],
+ self._id_to_elem["top.rope_scaling"],
+ self._id_to_elem["top.booster"],
+ }
diff --git a/src/train_sft/src/llamafactory/webui/runner.py b/src/train_sft/src/llamafactory/webui/runner.py
new file mode 100644
index 0000000000000000000000000000000000000000..0a6fc7c9aaaed9756c3d7ccb7f638260322abbf5
--- /dev/null
+++ b/src/train_sft/src/llamafactory/webui/runner.py
@@ -0,0 +1,505 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import json
+import os
+from collections.abc import Generator
+from copy import deepcopy
+from subprocess import PIPE, Popen, TimeoutExpired
+from typing import TYPE_CHECKING, Any, Optional
+
+from transformers.utils import is_torch_npu_available
+
+from ..extras.constants import LLAMABOARD_CONFIG, MULTIMODAL_SUPPORTED_MODELS, PEFT_METHODS, TRAINING_STAGES
+from ..extras.misc import is_accelerator_available, torch_gc
+from ..extras.packages import is_gradio_available
+from .common import (
+ DEFAULT_CACHE_DIR,
+ DEFAULT_CONFIG_DIR,
+ abort_process,
+ calculate_pixels,
+ gen_cmd,
+ get_save_dir,
+ load_args,
+ load_config,
+ load_eval_results,
+ save_args,
+ save_cmd,
+)
+from .control import get_trainer_info
+from .locales import ALERTS, LOCALES
+
+
+if is_gradio_available():
+ import gradio as gr
+
+
+if TYPE_CHECKING:
+ from gradio.components import Component
+
+ from .manager import Manager
+
+
+class Runner:
+ r"""A class to manage the running status of the trainers."""
+
+ def __init__(self, manager: "Manager", demo_mode: bool = False) -> None:
+ r"""Init a runner."""
+ self.manager = manager
+ self.demo_mode = demo_mode
+ """ Resume """
+ self.trainer: Optional[Popen] = None
+ self.do_train = True
+        self.running_data: Optional[dict["Component", Any]] = None
+ """ State """
+ self.aborted = False
+ self.running = False
+
+ def set_abort(self) -> None:
+ self.aborted = True
+ if self.trainer is not None:
+ abort_process(self.trainer.pid)
+
+ def _initialize(self, data: dict["Component", Any], do_train: bool, from_preview: bool) -> str:
+ r"""Validate the configuration."""
+ get = lambda elem_id: data[self.manager.get_elem_by_id(elem_id)]
+ lang, model_name, model_path = get("top.lang"), get("top.model_name"), get("top.model_path")
+ dataset = get("train.dataset") if do_train else get("eval.dataset")
+
+ if self.running:
+ return ALERTS["err_conflict"][lang]
+
+ if not model_name:
+ return ALERTS["err_no_model"][lang]
+
+ if not model_path:
+ return ALERTS["err_no_path"][lang]
+
+ if not dataset:
+ return ALERTS["err_no_dataset"][lang]
+
+ if not from_preview and self.demo_mode:
+ return ALERTS["err_demo"][lang]
+
+ if do_train:
+ if not get("train.output_dir"):
+ return ALERTS["err_no_output_dir"][lang]
+
+ try:
+ json.loads(get("train.extra_args"))
+ except json.JSONDecodeError:
+ return ALERTS["err_json_schema"][lang]
+
+ stage = TRAINING_STAGES[get("train.training_stage")]
+ if stage == "ppo" and not get("train.reward_model"):
+ return ALERTS["err_no_reward_model"][lang]
+ else:
+ if not get("eval.output_dir"):
+ return ALERTS["err_no_output_dir"][lang]
+
+ if not from_preview and not is_accelerator_available():
+ gr.Warning(ALERTS["warn_no_cuda"][lang])
+
+ return ""
+
+ def _finalize(self, lang: str, finish_info: str) -> None:
+ r"""Clean the cached memory and resets the runner."""
+ finish_info = ALERTS["info_aborted"][lang] if self.aborted else finish_info
+ gr.Info(finish_info)
+ self.trainer = None
+ self.aborted = False
+ self.running = False
+ self.running_data = None
+ torch_gc()
+
+ def _parse_train_args(self, data: dict["Component", Any]) -> dict[str, Any]:
+ r"""Build and validate the training arguments."""
+ get = lambda elem_id: data[self.manager.get_elem_by_id(elem_id)]
+ model_name, finetuning_type = get("top.model_name"), get("top.finetuning_type")
+ user_config = load_config()
+
+ args = dict(
+ stage=TRAINING_STAGES[get("train.training_stage")],
+ do_train=True,
+ model_name_or_path=get("top.model_path"),
+ cache_dir=user_config.get("cache_dir", None),
+ preprocessing_num_workers=16,
+ finetuning_type=finetuning_type,
+ template=get("top.template"),
+ rope_scaling=get("top.rope_scaling") if get("top.rope_scaling") != "none" else None,
+ flash_attn="fa2" if get("top.booster") == "flashattn2" else "auto",
+ use_unsloth=(get("top.booster") == "unsloth"),
+ enable_liger_kernel=(get("top.booster") == "liger_kernel"),
+ dataset_dir=get("train.dataset_dir"),
+ dataset=",".join(get("train.dataset")),
+ cutoff_len=get("train.cutoff_len"),
+ learning_rate=float(get("train.learning_rate")),
+ num_train_epochs=float(get("train.num_train_epochs")),
+ max_samples=int(get("train.max_samples")),
+ per_device_train_batch_size=get("train.batch_size"),
+ gradient_accumulation_steps=get("train.gradient_accumulation_steps"),
+ lr_scheduler_type=get("train.lr_scheduler_type"),
+ max_grad_norm=float(get("train.max_grad_norm")),
+ logging_steps=get("train.logging_steps"),
+ save_steps=get("train.save_steps"),
+ warmup_steps=get("train.warmup_steps"),
+ neftune_noise_alpha=get("train.neftune_alpha") or None,
+ packing=get("train.packing") or get("train.neat_packing"),
+ neat_packing=get("train.neat_packing"),
+ train_on_prompt=get("train.train_on_prompt"),
+ mask_history=get("train.mask_history"),
+ resize_vocab=get("train.resize_vocab"),
+ use_llama_pro=get("train.use_llama_pro"),
+ enable_thinking=get("train.enable_thinking"),
+ report_to=get("train.report_to"),
+ use_galore=get("train.use_galore"),
+ use_apollo=get("train.use_apollo"),
+ use_badam=get("train.use_badam"),
+ use_swanlab=get("train.use_swanlab"),
+ output_dir=get_save_dir(model_name, finetuning_type, get("train.output_dir")),
+ fp16=(get("train.compute_type") == "fp16"),
+ bf16=(get("train.compute_type") == "bf16"),
+ pure_bf16=(get("train.compute_type") == "pure_bf16"),
+ plot_loss=True,
+ trust_remote_code=True,
+ ddp_timeout=180000000,
+ include_num_input_tokens_seen=True,
+ )
+ args.update(json.loads(get("train.extra_args")))
+
+ # checkpoints
+ if get("top.checkpoint_path"):
+ if finetuning_type in PEFT_METHODS: # list
+ args["adapter_name_or_path"] = ",".join(
+ [get_save_dir(model_name, finetuning_type, adapter) for adapter in get("top.checkpoint_path")]
+ )
+ else: # str
+ args["model_name_or_path"] = get_save_dir(model_name, finetuning_type, get("top.checkpoint_path"))
+
+ # quantization
+ if get("top.quantization_bit") != "none":
+ args["quantization_bit"] = int(get("top.quantization_bit"))
+ args["quantization_method"] = get("top.quantization_method")
+ args["double_quantization"] = not is_torch_npu_available()
+
+ # freeze config
+ if args["finetuning_type"] == "freeze":
+ args["freeze_trainable_layers"] = get("train.freeze_trainable_layers")
+ args["freeze_trainable_modules"] = get("train.freeze_trainable_modules")
+ args["freeze_extra_modules"] = get("train.freeze_extra_modules") or None
+
+ # lora config
+ if args["finetuning_type"] == "lora":
+ args["lora_rank"] = get("train.lora_rank")
+ args["lora_alpha"] = get("train.lora_alpha")
+ args["lora_dropout"] = get("train.lora_dropout")
+ args["loraplus_lr_ratio"] = get("train.loraplus_lr_ratio") or None
+ args["create_new_adapter"] = get("train.create_new_adapter")
+ args["use_rslora"] = get("train.use_rslora")
+ args["use_dora"] = get("train.use_dora")
+ args["pissa_init"] = get("train.use_pissa")
+ args["pissa_convert"] = get("train.use_pissa")
+ args["lora_target"] = get("train.lora_target") or "all"
+ args["additional_target"] = get("train.additional_target") or None
+
+ if args["use_llama_pro"]:
+ args["freeze_trainable_layers"] = get("train.freeze_trainable_layers")
+
+ # rlhf config
+ if args["stage"] == "ppo":
+ if finetuning_type in PEFT_METHODS:
+ args["reward_model"] = ",".join(
+ [get_save_dir(model_name, finetuning_type, adapter) for adapter in get("train.reward_model")]
+ )
+ else:
+ args["reward_model"] = get_save_dir(model_name, finetuning_type, get("train.reward_model"))
+
+ args["reward_model_type"] = "lora" if finetuning_type == "lora" else "full"
+ args["ppo_score_norm"] = get("train.ppo_score_norm")
+ args["ppo_whiten_rewards"] = get("train.ppo_whiten_rewards")
+ args["top_k"] = 0
+ args["top_p"] = 0.9
+ elif args["stage"] in ["dpo", "kto"]:
+ args["pref_beta"] = get("train.pref_beta")
+ args["pref_ftx"] = get("train.pref_ftx")
+ args["pref_loss"] = get("train.pref_loss")
+
+ # multimodal config
+ if model_name in MULTIMODAL_SUPPORTED_MODELS:
+ args["freeze_vision_tower"] = get("train.freeze_vision_tower")
+ args["freeze_multi_modal_projector"] = get("train.freeze_multi_modal_projector")
+ args["freeze_language_model"] = get("train.freeze_language_model")
+ args["image_max_pixels"] = calculate_pixels(get("train.image_max_pixels"))
+ args["image_min_pixels"] = calculate_pixels(get("train.image_min_pixels"))
+ args["video_max_pixels"] = calculate_pixels(get("train.video_max_pixels"))
+ args["video_min_pixels"] = calculate_pixels(get("train.video_min_pixels"))
+
+ # galore config
+ if args["use_galore"]:
+ args["galore_rank"] = get("train.galore_rank")
+ args["galore_update_interval"] = get("train.galore_update_interval")
+ args["galore_scale"] = get("train.galore_scale")
+ args["galore_target"] = get("train.galore_target")
+
+ # apollo config
+ if args["use_apollo"]:
+ args["apollo_rank"] = get("train.apollo_rank")
+ args["apollo_update_interval"] = get("train.apollo_update_interval")
+ args["apollo_scale"] = get("train.apollo_scale")
+ args["apollo_target"] = get("train.apollo_target")
+
+ # badam config
+ if args["use_badam"]:
+ args["badam_mode"] = get("train.badam_mode")
+ args["badam_switch_mode"] = get("train.badam_switch_mode")
+ args["badam_switch_interval"] = get("train.badam_switch_interval")
+ args["badam_update_ratio"] = get("train.badam_update_ratio")
+
+ # swanlab config
+ if get("train.use_swanlab"):
+ args["swanlab_project"] = get("train.swanlab_project")
+ args["swanlab_run_name"] = get("train.swanlab_run_name")
+ args["swanlab_workspace"] = get("train.swanlab_workspace")
+ args["swanlab_api_key"] = get("train.swanlab_api_key")
+ args["swanlab_mode"] = get("train.swanlab_mode")
+
+ # eval config
+ if get("train.val_size") > 1e-6 and args["stage"] != "ppo":
+ args["val_size"] = get("train.val_size")
+ args["eval_strategy"] = "steps"
+ args["eval_steps"] = args["save_steps"]
+ args["per_device_eval_batch_size"] = args["per_device_train_batch_size"]
+
+ # ds config
+ if get("train.ds_stage") != "none":
+ ds_stage = get("train.ds_stage")
+ ds_offload = "offload_" if get("train.ds_offload") else ""
+ args["deepspeed"] = os.path.join(DEFAULT_CACHE_DIR, f"ds_z{ds_stage}_{ds_offload}config.json")
+
+ return args
+
+ def _parse_eval_args(self, data: dict["Component", Any]) -> dict[str, Any]:
+ r"""Build and validate the evaluation arguments."""
+ get = lambda elem_id: data[self.manager.get_elem_by_id(elem_id)]
+ model_name, finetuning_type = get("top.model_name"), get("top.finetuning_type")
+ user_config = load_config()
+
+ args = dict(
+ stage="sft",
+ model_name_or_path=get("top.model_path"),
+ cache_dir=user_config.get("cache_dir", None),
+ preprocessing_num_workers=16,
+ finetuning_type=finetuning_type,
+ quantization_method=get("top.quantization_method"),
+ template=get("top.template"),
+ rope_scaling=get("top.rope_scaling") if get("top.rope_scaling") != "none" else None,
+ flash_attn="fa2" if get("top.booster") == "flashattn2" else "auto",
+ use_unsloth=(get("top.booster") == "unsloth"),
+ dataset_dir=get("eval.dataset_dir"),
+ eval_dataset=",".join(get("eval.dataset")),
+ cutoff_len=get("eval.cutoff_len"),
+ max_samples=int(get("eval.max_samples")),
+ per_device_eval_batch_size=get("eval.batch_size"),
+ predict_with_generate=True,
+ report_to="none",
+ max_new_tokens=get("eval.max_new_tokens"),
+ top_p=get("eval.top_p"),
+ temperature=get("eval.temperature"),
+ output_dir=get_save_dir(model_name, finetuning_type, get("eval.output_dir")),
+ trust_remote_code=True,
+ ddp_timeout=180000000,
+ )
+
+ if get("eval.predict"):
+ args["do_predict"] = True
+ else:
+ args["do_eval"] = True
+
+ # checkpoints
+ if get("top.checkpoint_path"):
+ if finetuning_type in PEFT_METHODS: # list
+ args["adapter_name_or_path"] = ",".join(
+ [get_save_dir(model_name, finetuning_type, adapter) for adapter in get("top.checkpoint_path")]
+ )
+ else: # str
+ args["model_name_or_path"] = get_save_dir(model_name, finetuning_type, get("top.checkpoint_path"))
+
+ # quantization
+ if get("top.quantization_bit") != "none":
+ args["quantization_bit"] = int(get("top.quantization_bit"))
+ args["quantization_method"] = get("top.quantization_method")
+ args["double_quantization"] = not is_torch_npu_available()
+
+ return args
+
+ def _preview(self, data: dict["Component", Any], do_train: bool) -> Generator[dict["Component", str], None, None]:
+ r"""Preview the training commands."""
+ output_box = self.manager.get_elem_by_id("{}.output_box".format("train" if do_train else "eval"))
+ error = self._initialize(data, do_train, from_preview=True)
+ if error:
+ gr.Warning(error)
+ yield {output_box: error}
+ else:
+ args = self._parse_train_args(data) if do_train else self._parse_eval_args(data)
+ yield {output_box: gen_cmd(args)}
+
+ def _launch(self, data: dict["Component", Any], do_train: bool) -> Generator[dict["Component", Any], None, None]:
+ r"""Start the training process."""
+ output_box = self.manager.get_elem_by_id("{}.output_box".format("train" if do_train else "eval"))
+ error = self._initialize(data, do_train, from_preview=False)
+ if error:
+ gr.Warning(error)
+ yield {output_box: error}
+ else:
+ self.do_train, self.running_data = do_train, data
+ args = self._parse_train_args(data) if do_train else self._parse_eval_args(data)
+
+ os.makedirs(args["output_dir"], exist_ok=True)
+ save_args(os.path.join(args["output_dir"], LLAMABOARD_CONFIG), self._build_config_dict(data))
+
+ env = deepcopy(os.environ)
+ env["LLAMABOARD_ENABLED"] = "1"
+ env["LLAMABOARD_WORKDIR"] = args["output_dir"]
+ if args.get("deepspeed", None) is not None:
+ env["FORCE_TORCHRUN"] = "1"
+
+ # NOTE: DO NOT USE shell=True to avoid security risk
+ self.trainer = Popen(["llamafactory-cli", "train", save_cmd(args)], env=env, stderr=PIPE, text=True)
+ yield from self.monitor()
+
+ def _build_config_dict(self, data: dict["Component", Any]) -> dict[str, Any]:
+ r"""Build a dictionary containing the current training configuration."""
+ config_dict = {}
+ skip_ids = ["top.lang", "top.model_path", "train.output_dir", "train.config_path"]
+ for elem, value in data.items():
+ elem_id = self.manager.get_id_by_elem(elem)
+ if elem_id not in skip_ids:
+ config_dict[elem_id] = value
+
+ return config_dict
+
+ def preview_train(self, data):
+ yield from self._preview(data, do_train=True)
+
+ def preview_eval(self, data):
+ yield from self._preview(data, do_train=False)
+
+ def run_train(self, data):
+ yield from self._launch(data, do_train=True)
+
+ def run_eval(self, data):
+ yield from self._launch(data, do_train=False)
+
+ def monitor(self):
+ r"""Monitorgit the training progress and logs."""
+ self.aborted = False
+ self.running = True
+
+ get = lambda elem_id: self.running_data[self.manager.get_elem_by_id(elem_id)]
+ lang, model_name, finetuning_type = get("top.lang"), get("top.model_name"), get("top.finetuning_type")
+ output_dir = get("{}.output_dir".format("train" if self.do_train else "eval"))
+ output_path = get_save_dir(model_name, finetuning_type, output_dir)
+
+ output_box = self.manager.get_elem_by_id("{}.output_box".format("train" if self.do_train else "eval"))
+ progress_bar = self.manager.get_elem_by_id("{}.progress_bar".format("train" if self.do_train else "eval"))
+ loss_viewer = self.manager.get_elem_by_id("train.loss_viewer") if self.do_train else None
+ swanlab_link = self.manager.get_elem_by_id("train.swanlab_link") if self.do_train else None
+
+ running_log = ""
+ return_code = -1
+ while return_code == -1:
+ if self.aborted:
+ yield {
+ output_box: ALERTS["info_aborting"][lang],
+ progress_bar: gr.Slider(visible=False),
+ }
+ else:
+ running_log, running_progress, running_info = get_trainer_info(lang, output_path, self.do_train)
+ return_dict = {
+ output_box: running_log,
+ progress_bar: running_progress,
+ }
+ if "loss_viewer" in running_info:
+ return_dict[loss_viewer] = running_info["loss_viewer"]
+
+ if "swanlab_link" in running_info:
+ return_dict[swanlab_link] = running_info["swanlab_link"]
+
+ yield return_dict
+
+ try:
+ stderr = self.trainer.communicate(timeout=2)[1]
+ return_code = self.trainer.returncode
+ except TimeoutExpired:
+ continue
+
+ if return_code == 0 or self.aborted:
+ finish_info = ALERTS["info_finished"][lang]
+ if self.do_train:
+ finish_log = ALERTS["info_finished"][lang] + "\n\n" + running_log
+ else:
+ finish_log = load_eval_results(os.path.join(output_path, "all_results.json")) + "\n\n" + running_log
+ else:
+ print(stderr)
+ finish_info = ALERTS["err_failed"][lang]
+ finish_log = ALERTS["err_failed"][lang] + f" Exit code: {return_code}\n\n```\n{stderr}\n```\n"
+
+ self._finalize(lang, finish_info)
+ return_dict = {output_box: finish_log, progress_bar: gr.Slider(visible=False)}
+ yield return_dict
+
+ def save_args(self, data):
+ r"""Save the training configuration to config path."""
+ output_box = self.manager.get_elem_by_id("train.output_box")
+ error = self._initialize(data, do_train=True, from_preview=True)
+ if error:
+ gr.Warning(error)
+ return {output_box: error}
+
+ lang = data[self.manager.get_elem_by_id("top.lang")]
+ config_path = data[self.manager.get_elem_by_id("train.config_path")]
+ os.makedirs(DEFAULT_CONFIG_DIR, exist_ok=True)
+ save_path = os.path.join(DEFAULT_CONFIG_DIR, config_path)
+
+ save_args(save_path, self._build_config_dict(data))
+ return {output_box: ALERTS["info_config_saved"][lang] + save_path}
+
+ def load_args(self, lang: str, config_path: str):
+ r"""Load the training configuration from config path."""
+ output_box = self.manager.get_elem_by_id("train.output_box")
+ config_dict = load_args(os.path.join(DEFAULT_CONFIG_DIR, config_path))
+ if config_dict is None:
+ gr.Warning(ALERTS["err_config_not_found"][lang])
+ return {output_box: ALERTS["err_config_not_found"][lang]}
+
+ output_dict: dict[Component, Any] = {output_box: ALERTS["info_config_loaded"][lang]}
+ for elem_id, value in config_dict.items():
+ output_dict[self.manager.get_elem_by_id(elem_id)] = value
+
+ return output_dict
+
+ def check_output_dir(self, lang: str, model_name: str, finetuning_type: str, output_dir: str):
+ r"""Restore the training status if output_dir exists."""
+ output_box = self.manager.get_elem_by_id("train.output_box")
+ output_dict: dict[Component, Any] = {output_box: LOCALES["output_box"][lang]["value"]}
+ if model_name and output_dir and os.path.isdir(get_save_dir(model_name, finetuning_type, output_dir)):
+ gr.Warning(ALERTS["warn_output_dir_exists"][lang])
+ output_dict[output_box] = ALERTS["warn_output_dir_exists"][lang]
+
+ output_dir = get_save_dir(model_name, finetuning_type, output_dir)
+ config_dict = load_args(os.path.join(output_dir, LLAMABOARD_CONFIG)) # load llamaboard config
+ for elem_id, value in config_dict.items():
+ output_dict[self.manager.get_elem_by_id(elem_id)] = value
+
+ return output_dict
diff --git a/src/train_sft/tests/check_license.py b/src/train_sft/tests/check_license.py
new file mode 100644
index 0000000000000000000000000000000000000000..1512347d92bc6ace6b53200899c9c8df87f5edb5
--- /dev/null
+++ b/src/train_sft/tests/check_license.py
@@ -0,0 +1,38 @@
+# Copyright 2025 the LlamaFactory team.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sys
+from pathlib import Path
+
+
+KEYWORDS = ("Copyright", "2025", "LlamaFactory")
+
+
+def main():
+ path_list: list[Path] = []
+ for check_dir in sys.argv[1:]:
+ path_list.extend(Path(check_dir).glob("**/*.py"))
+
+ for path in path_list:
+ with open(path.absolute(), encoding="utf-8") as f:
+ file_content = f.read().strip().split("\n")
+        if not file_content[0]:  # skip empty files
+            continue
+
+ print(f"Check license: {path}")
+ assert all(keyword in file_content[0] for keyword in KEYWORDS), f"File {path} does not contain license."
+
+
+if __name__ == "__main__":
+ main()
diff --git a/src/train_sft/tests/version.txt b/src/train_sft/tests/version.txt
new file mode 100644
index 0000000000000000000000000000000000000000..7436a40dadad42e9486d317c7cec9cca68efd58a
--- /dev/null
+++ b/src/train_sft/tests/version.txt
@@ -0,0 +1,2 @@
+# change if test fails or cache is outdated
+0.9.4.101
diff --git a/src/verl/.gitignore b/src/verl/.gitignore
new file mode 100644
index 0000000000000000000000000000000000000000..d77a5b43ffc6887b2569fcde32bfa4bb86f7ef20
--- /dev/null
+++ b/src/verl/.gitignore
@@ -0,0 +1,128 @@
+
+**/*.pt
+**/checkpoints
+**/wget-log
+**/_build/
+**/*.ckpt
+**/outputs
+**/*.tar.gz
+**/playground
+**/wandb
+
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+dataset/*
+tensorflow/my_graph/*
+.idea/
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+env/
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+tmp/
+*.egg-info/
+.installed.cfg
+*.egg
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*,cover
+.hypothesis/
+pytest.ini
+output.txt
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# IPython Notebook
+.ipynb_checkpoints
+
+# pyenv
+.python-version
+
+# celery beat schedule file
+celerybeat-schedule
+
+# dotenv
+.env
+
+# virtualenv
+venv/
+.venv/
+ENV/
+
+# Spyder project settings
+.spyderproject
+
+# Rope project settings
+.ropeproject
+
+# vscode
+.vscode
+
+# Mac
+.DS_Store
+
+# vim
+*.swp
+
+# ckpt
+*.lock
+
+# data
+*.parquet
+
+
+# local logs
+logs
+log
+outputs
+.history
diff --git a/src/verl/.pre-commit-config.yaml b/src/verl/.pre-commit-config.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..bd77c362015f6e97767a18aa63cc50916fae12c0
--- /dev/null
+++ b/src/verl/.pre-commit-config.yaml
@@ -0,0 +1,37 @@
+repos:
+ - repo: https://github.com/astral-sh/ruff-pre-commit
+ rev: "v0.12.2"
+ hooks:
+ - id: ruff
+ args: ["--fix", "--show-fixes", "--output-format=full"]
+ exclude: ^.*\.(ipynb)$
+ - id: ruff-format
+
+ - repo: https://github.com/pre-commit/mirrors-mypy
+ rev: 'v1.17.0'
+ hooks:
+ - id: mypy
+
+ - repo: local
+ hooks:
+ - id: autogen-trainer-cfg
+ name: Generate and verify verl/trainer/config/_generated_*.yaml
+ entry: scripts/generate_trainer_config.sh
+ language: script
+ pass_filenames: false
+
+ - repo: local
+ hooks:
+ - id: check-docstrings
+ name: Check doc string coverage
+ entry: python3 tests/special_sanity/check_docstrings.py
+ language: python
+ pass_filenames: false
+
+ - repo: local
+ hooks:
+ - id: check-license
+ name: Check license
+ entry: python3 tests/special_sanity/check_license.py --directories examples recipe scripts tests verl setup.py
+ language: python
+ pass_filenames: false
diff --git a/src/verl/.readthedocs.yaml b/src/verl/.readthedocs.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..0016868541a2a0667ef40ae6a9d861bcd26b9316
--- /dev/null
+++ b/src/verl/.readthedocs.yaml
@@ -0,0 +1,19 @@
+# Read the Docs configuration file
+# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
+
+version: 2
+
+build:
+ os: ubuntu-22.04
+ tools:
+ python: "3.11"
+ rust: "1.70"
+
+sphinx:
+ configuration: docs/conf.py
+
+python:
+ install:
+ - requirements: docs/requirements-docs.txt
+ - method: pip
+ path: .
diff --git a/src/verl/CONTRIBUTING.md b/src/verl/CONTRIBUTING.md
new file mode 100644
index 0000000000000000000000000000000000000000..e953f113ed2e5665acd03d5ee93774baeb0049c9
--- /dev/null
+++ b/src/verl/CONTRIBUTING.md
@@ -0,0 +1,89 @@
+# Contributing to verl
+
+Thank you for considering a contribution to verl! We welcome contributions of any kind - bug fixes, enhancements, documentation improvements, or even just feedback. Whether you're an experienced developer or this is your first open-source project, your help is invaluable.
+
+Your support can take many forms:
+- Report issues or unexpected behaviors.
+- Suggest or implement new features.
+- Improve or expand documentation.
+- Review pull requests and assist other contributors.
+- Spread the word: share verl in blog posts, social media, or give the repo a ⭐.
+
+## Finding Issues to Contribute
+
+Looking for ways to dive in? Check out these issues:
+- [Good first issues](https://github.com/volcengine/verl/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22good%20first%20issue%22)
+- [Call for contribution](https://github.com/volcengine/verl/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22call%20for%20contribution%22)
+
+Furthermore, you can learn about the development plan and roadmap via [RFC](https://github.com/volcengine/verl/issues?q=is%3Aissue%20state%3Aopen%20label%3ARFC) and [Roadmap](https://github.com/volcengine/verl/issues?q=state%3Aopen%20label%3A%22roadmap%22).
+
+
+## Developing
+
+- **Python-only**: install verl via `pip install -e .[test,vllm]` or `pip install -e .[test,sglang]` and iterate quickly. For full dependency setup, check out the verl [installation doc](https://verl.readthedocs.io/en/latest/start/install.html).
+
+## Code Linting and Formatting
+
+We rely on pre-commit to keep our code consistent. To set it up:
+
+```bash
+pip install pre-commit
+pre-commit install
+# for staged changes
+pre-commit run
+# for all files in the repo
+pre-commit run --all-files
+# run a specific hook with pre-commit
+# pre-commit run --all-files --show-diff-on-failure --color=always
+pre-commit run --all-files --show-diff-on-failure --color=always ruff
+pre-commit run --all-files --show-diff-on-failure --color=always autogen-trainer-cfg
+```
+
+## Testing
+
+Our test suites run on GitHub Actions. Check these workflows for details:
+- [GPU unit tests](https://github.com/volcengine/verl/blob/main/.github/workflows/gpu_unit_tests.yml)
+- [CPU unit tests](https://github.com/volcengine/verl/blob/main/.github/workflows/cpu_unit_tests.yml)
+- [vLLM tests](https://github.com/volcengine/verl/blob/main/.github/workflows/vllm.yml)
+- [SGLang tests](https://github.com/volcengine/verl/blob/main/.github/workflows/sgl.yml)
+
+### Adding CI tests
+
+If possible, please add CI test(s) for your new feature:
+
+1. Find the most relevant workflow yml file, which usually corresponds to a `hydra` default config (e.g. `ppo_trainer`, `ppo_megatron_trainer`, `sft_trainer`, etc).
+2. Add related path patterns to the `paths` section if not already included.
+3. Minimize the workload of the test script(s) (see existing scripts for examples).
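+
+As a sketch of step 2 (the exact workflow file and path patterns are hypothetical and will differ per suite), a `paths` trigger in a workflow yml looks like:
+
+```yaml
+# Hypothetical excerpt from a workflow such as .github/workflows/gpu_unit_tests.yml;
+# the job re-runs only when files matching these patterns change.
+on:
+  pull_request:
+    paths:
+      - "verl/**/*.py"
+      - "tests/**/*.py"
+      - ".github/workflows/gpu_unit_tests.yml"
+```
+
+Adding your new feature's source and test paths here ensures the suite runs on the PRs that touch them.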
+
+## Building the Docs
+```bash
+# Ensure verl is on your PYTHONPATH, e.g.:
+pip install -e .[test]
+
+# Install documentation dependencies
+pip install -r requirements-docs.txt
+
+# Generate HTML docs
+make clean
+make html
+
+# Preview locally
+python -m http.server -d _build/html/
+```
+Open your browser at http://localhost:8000 to explore the docs.
+
+## Pull Requests & Code Reviews
+
+Thanks for submitting a PR! To streamline reviews:
+- Follow our Pull Request Template for title format and checklist.
+- Adhere to our pre-commit lint rules and ensure all checks pass.
+- Update docs for any user-facing changes.
+- Add or update tests in the CI workflows, or explain why tests aren't applicable.
+
+## License
+
+See the [LICENSE](https://github.com/volcengine/verl/blob/main/LICENSE) file for full details.
+
+## Thank You
+
+We appreciate your contributions to verl. Your efforts help make the project stronger and more user-friendly. Happy coding!
+
diff --git a/src/verl/LICENSE b/src/verl/LICENSE
new file mode 100644
index 0000000000000000000000000000000000000000..d645695673349e3947e8e5ae42332d0ac3164cd7
--- /dev/null
+++ b/src/verl/LICENSE
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/src/verl/Notice.txt b/src/verl/Notice.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ade439da525ac3f82936e131a1ae386f43207fd8
--- /dev/null
+++ b/src/verl/Notice.txt
@@ -0,0 +1 @@
+Copyright 2023-2024 Bytedance Ltd. and/or its affiliates
\ No newline at end of file
diff --git a/src/verl/README.md b/src/verl/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..a70ea0e5e17d4e750331d589344e74ad2910275a
--- /dev/null
+++ b/src/verl/README.md
@@ -0,0 +1,263 @@
+
+ 👋 Hi, everyone!
+ verl is an RL training library initiated by the ByteDance Seed team and maintained by the verl community.
+
+
+
+
+
+
+
+[GitHub stars](https://github.com/volcengine/verl/stargazers)
+[Twitter](https://twitter.com/verl_project)
+
+
+[Documentation](https://verl.readthedocs.io/en/latest/)
+
+
+
+
+
+
+# verl: Volcano Engine Reinforcement Learning for LLMs
+
+verl is a flexible, efficient and production-ready RL training library for large language models (LLMs).
+
+verl is the open-source version of **[HybridFlow: A Flexible and Efficient RLHF Framework](https://arxiv.org/abs/2409.19256v2)** paper.
+
+verl is flexible and easy to use with:
+
+- **Easy extension of diverse RL algorithms**: The hybrid-controller programming model enables flexible representation and efficient execution of complex post-training dataflows. Build RL dataflows such as GRPO, PPO in a few lines of code.
+
+- **Seamless integration of existing LLM infra with modular APIs**: Decouples computation and data dependencies, enabling smooth integration with existing LLM frameworks such as FSDP, Megatron-LM, vLLM, and SGLang.
+
+- **Flexible device mapping**: Supports various placement of models onto different sets of GPUs for efficient resource utilization and scalability across different cluster sizes.
+
+- Ready integration with popular HuggingFace models
+
+verl is fast with:
+
+- **State-of-the-art throughput**: Integrates SOTA LLM training and inference engines and achieves SOTA RL throughput.
+
+- **Efficient actor model resharding with 3D-HybridEngine**: Eliminates memory redundancy and significantly reduces communication overhead during transitions between training and generation phases.
+
+
+
+## News
+- [2025/08] verl is presented in the [PyTorch Expert Exchange Webinar](https://www.youtube.com/watch?v=Vd79NmmqY3Q&t=2s). [Slides](https://github.com/eric-haibin-lin/verl-community/blob/main/slides/verl_talk_pytorch_2025_08.pdf) available.
+- [2025/07] The [ReTool](https://arxiv.org/pdf/2504.11536) recipe is fully open sourced. [Blog](https://www.notion.so/verl-reTool-recipe-Using-multi-round-conversations-and-code-sandboxing-to-improve-the-math-of-large-23a8b5b7feba80b386b2e5b5e3c1cde0)
+- [2025/07] The first verl meetup will be held at ICML Vancouver on July 16th! Please [join us](https://lu.ma/0ek2nyao) if you are at ICML! (onsite only)
+- [2025/06] verl with Megatron backend enables large MoE models such as [DeepSeek-671b and Qwen3-236b](https://verl.readthedocs.io/en/latest/perf/dpsk.html).
+- [2025/03] [DAPO](https://dapo-sia.github.io/) is the open-sourced SOTA RL algorithm that achieves 50 points on AIME 2024 based on the Qwen2.5-32B pre-trained model, surpassing the previous SOTA achieved by DeepSeek's GRPO (DeepSeek-R1-Zero-Qwen-32B). DAPO's training is fully powered by verl and the reproduction code is available in `recipe/dapo` now.
+ more...
+
+- [2025/04] [Seed-Thinking-v1.5](https://github.com/ByteDance-Seed/Seed-Thinking-v1.5/blob/main/seed-thinking-v1.5.pdf) tech report is released! Trained with verl, Seed-Thinking-v1.5 achieves 86.7 on AIME 2024, 55.0 on Codeforces and 77.3 on GPQA, demonstrating excellent reasoning abilities in STEM and coding. Beyond reasoning tasks, the method demonstrates notable generalization across diverse domains.
+- [2025/07] verl keynote at [AWS AI Hours Singapore](https://pages.awscloud.com/aws-ai-hours-sg.html#agenda) on 7/8, verl & verl-agent project updates at [Agent for SWE meetup](https://lu.ma/e498qhsi) by LF AI & Data Singapore on 7/11.
+- [2025/06] verl team will provide latest project updates at [PyTorch Day China](https://www.lfasiallc.com/pytorch-day-china/) on June 7th. Meet our dev team in Beijing!
+- [2025/04] [VAPO](https://arxiv.org/pdf/2504.05118) (value-based augmented PPO) paper covers our latest RL method for reasoning models. Trained from the Qwen-32B-base model, VAPO achieves 60.4 on AIME 2024, outperforming DAPO-32B.
+- [2025/05] [PF-PPO](https://arxiv.org/abs/2409.06957), accepted to ICML 2025, is now supported in verl! PF-PPO enhances policy learning efficiency and robustness by filtering potentially noisy reward signals and reusing high-quality experiences via a replay buffer.
+- [2025/04] We will give a tutorial about the latest post-training techniques and a programming guide for verl at [ICLR 2025 Expo](https://iclr.cc/virtual/2025/calendar?filter_events=Expo+Talk+Panel&filter_rooms=), [SCI-FM workshop](https://open-foundation-model.github.io/) and [LMSys afterparty](https://lu.ma/d23nyynm). Talk materials available [here](https://github.com/eric-haibin-lin/verl-community/tree/main/iclr25).
+- [2025/03] verl v0.3.0.post1 is released! See [release note](https://github.com/volcengine/verl/releases/) for details. It achieves [~1.4x speedup](https://tongyx361.github.io/blogs/posts/verl-intro/#/verl-flexible-and-efficient-rl-for-llms) compared to previous versions.
+- [2025/05] verl will be presented at [A2M Shanghai](https://a2m.msup.com.cn/home/?aid=4488&city=shanghai) on 5/16 - 5/17.
+- [2025/05] verl will be presented at [GOSIM x PyTorch Day 2025](https://paris2025.gosim.org/). See you in Paris!
+- [2025/03] We introduced the programming model of verl at the [vLLM Beijing Meetup](https://mp.weixin.qq.com/s/n77GibL2corAtQHtVEAzfg) and [verl intro and updates](https://github.com/eric-haibin-lin/verl-community/blob/main/slides/verl-lmsys-meetup.pdf) at the [SGLang-LMSYS Org Meetup](https://lu.ma/ntjrr7ig) in Sunnyvale in mid-March.
+- [2025/03] We will present verl (HybridFlow) at EuroSys 2025. See you in Rotterdam!
+- [2025/02] verl v0.2.0.post2 is released!
+- [2025/02] We presented verl at the Bytedance/NVIDIA/Anyscale Ray Meetup. See you in San Jose!
+- [2025/01] [Doubao-1.5-pro](https://team.doubao.com/zh/special/doubao_1_5_pro) is released with SOTA-level performance on LLM & VLM. The RL scaling preview model is trained using verl, reaching OpenAI O1-level performance on math benchmarks (70.0 pass@1 on AIME).
+- [2024/12] verl is presented at Ray Forward 2024. Slides available here.
+- [2024/12] The team presented Post-training LLMs: From Algorithms to Infrastructure at NeurIPS 2024. Slides and video available.
+- [2024/10] verl is presented at Ray Summit. YouTube video available.
+- [2024/08] HybridFlow (verl) is accepted to EuroSys 2025.
+
+
+
+## Key Features
+
+- **FSDP**, **FSDP2** and **Megatron-LM** for training.
+- **vLLM**, **SGLang** and **HF Transformers** for rollout generation.
+- Compatible with Hugging Face Transformers and Modelscope Hub: [Qwen-3](https://github.com/volcengine/verl/blob/main/examples/grpo_trainer/run_qwen3-8b.sh), Qwen-2.5, Llama3.1, Gemma2, DeepSeek-LLM, etc
+- Supervised fine-tuning.
+- Reinforcement learning with [PPO](examples/ppo_trainer/), [GRPO](examples/grpo_trainer/), [GSPO](recipe/gspo/), [ReMax](examples/remax_trainer/), [REINFORCE++](https://verl.readthedocs.io/en/latest/examples/config.html#algorithm), [RLOO](examples/rloo_trainer/), [PRIME](recipe/prime/), [DAPO](recipe/dapo/), [DrGRPO](recipe/drgrpo), [KL_Cov & Clip_Cov](recipe/entropy) etc.
+ - Support model-based reward and function-based reward (verifiable reward) for math, [coding](https://github.com/volcengine/verl/tree/main/recipe/dapo), etc
+ - Support vision-language models (VLMs) and [multi-modal RL](examples/grpo_trainer/run_qwen2_5_vl-7b.sh) with Qwen2.5-vl, Kimi-VL
+ - [Multi-turn with tool calling](https://github.com/volcengine/verl/tree/main/examples/sglang_multiturn)
+- LLM alignment recipes such as [Self-play preference optimization (SPPO)](https://github.com/volcengine/verl/tree/main/recipe/sppo)
+- Flash attention 2, [sequence packing](examples/ppo_trainer/run_qwen2-7b_seq_balance.sh), [sequence parallelism](examples/ppo_trainer/run_deepseek7b_llm_sp2.sh) support via DeepSpeed Ulysses, [LoRA](examples/sft/gsm8k/run_qwen_05_peft.sh), [Liger-kernel](examples/sft/gsm8k/run_qwen_05_sp2_liger.sh).
+- Scales up to 671B models and hundreds of GPUs with [expert parallelism](https://github.com/volcengine/verl/pull/1467)
+- Multi-gpu [LoRA RL](https://verl.readthedocs.io/en/latest/advance/ppo_lora.html) support to save memory.
+- Experiment tracking with wandb, swanlab, mlflow and tensorboard.
+
+## Upcoming Features and Changes
+
+- Q3 Roadmap https://github.com/volcengine/verl/issues/2388
+- DeepSeek 671b optimizations with Megatron https://github.com/volcengine/verl/issues/1033
+- Multi-turn rollout and tool-use optimizations https://github.com/volcengine/verl/issues/1882
+- [Agent integration](https://github.com/volcengine/verl/tree/main/verl/experimental/agent_loop)
+- Async and off-policy architecture https://github.com/volcengine/verl/pull/2231
+- List of breaking changes since v0.4 https://github.com/volcengine/verl/discussions/2270
+
+## Getting Started
+
+Documentation
+
+**Quickstart:**
+
+- [Installation](https://verl.readthedocs.io/en/latest/start/install.html)
+- [Quickstart](https://verl.readthedocs.io/en/latest/start/quickstart.html)
+- [Programming Guide](https://verl.readthedocs.io/en/latest/hybrid_flow.html) & [Tech Talk](https://hcqnc.xetlk.com/sl/3vACOK) (in Chinese)
+- [PPO in verl](https://verl.readthedocs.io/en/latest/algo/ppo.html)
+- [GRPO in verl](https://verl.readthedocs.io/en/latest/algo/grpo.html)
+
+**Running a PPO example step-by-step:**
+
+- [Prepare Data for Post-Training](https://verl.readthedocs.io/en/latest/preparation/prepare_data.html)
+- [Implement Reward Function for Dataset](https://verl.readthedocs.io/en/latest/preparation/reward_function.html)
+- [PPO Example Architecture](https://verl.readthedocs.io/en/latest/examples/ppo_code_architecture.html)
+- [Config Explanation](https://verl.readthedocs.io/en/latest/examples/config.html)
+
+**Reproducible algorithm baselines:**
+
+- [RL performance on coding, math](https://verl.readthedocs.io/en/latest/algo/baseline.html)
+
+**For code explanation and advanced usage (extension):**
+
+- PPO Trainer and Workers
+ - [PPO Ray Trainer](https://verl.readthedocs.io/en/latest/workers/ray_trainer.html)
+ - [PyTorch FSDP Backend](https://verl.readthedocs.io/en/latest/workers/fsdp_workers.html)
+ - [Megatron-LM Backend](https://verl.readthedocs.io/en/latest/index.html)
+
+- Advanced Usage and Extension
+ - [Add Models with the FSDP Backend](https://verl.readthedocs.io/en/latest/advance/fsdp_extension.html)
+ - [Add Models with the Megatron-LM Backend](https://verl.readthedocs.io/en/latest/advance/megatron_extension.html)
+ - [Multi-turn Rollout Support](https://verl.readthedocs.io/en/latest/sglang_multiturn/multiturn.html)
+ - [Search Tool Integration](https://verl.readthedocs.io/en/latest/sglang_multiturn/search_tool_example.html)
+ - [Sandbox Fusion Integration](https://verl.readthedocs.io/en/latest/examples/sandbox_fusion_example.html)
+ - [Deployment using Separate GPU Resources](https://github.com/volcengine/verl/tree/main/examples/split_placement)
+ - [Extend to Other RL(HF) algorithms](https://verl.readthedocs.io/en/latest/advance/dpo_extension.html)
+ - [Ray API design tutorial](https://verl.readthedocs.io/en/latest/advance/placement.html)
+
+**Blogs from the community**
+
+- [When Reasoning Models Break Tokenization: The Hidden Complexity of Multiturn Training](https://github.com/zhaochenyang20/Awesome-ML-SYS-Tutorial/blob/main/rlhf/verl/multi-turn/fast_tokenization/multiturn_tokenization_and_masking.md)
+- [verl deployment on AWS SageMaker](https://medium.com/@kaige.yang0110/run-verl-on-sagemaker-using-4x8-l40s-gpus-8e6d5c3c61d3)
+- [verl x SGLang Multi-turn Code Walkthrough](https://github.com/zhaochenyang20/Awesome-ML-SYS-Tutorial/blob/main/rlhf/verl/multi-turn/code-walk-through/readme_EN.md)
+- [Optimizing SGLang Memory Usage in verl](https://hebiao064.github.io/rl-memory-management)
+- [SGLang, verl, OpenBMB and Tsinghua University: Pioneering End-to-End Multi-Turn RLHF](https://github.com/zhaochenyang20/Awesome-ML-SYS-Tutorial/blob/main/rlhf/verl/multi-turn/verl-multiturn-rollout-Release.md)
+- [Reinforcement Learning from Human Feedback on AMD GPUs with verl and ROCm Integration](https://rocm.blogs.amd.com/artificial-intelligence/verl-large-scale/README.html)
+- [veMLP x verl :玩转强化学习训练](https://mp.weixin.qq.com/s/7nbqxk4knMGd-hQE9ls2tA)
+- [使用 verl 进行 GRPO 分布式强化学习训练最佳实践](https://www.volcengine.com/docs/6459/1463942)
+- [HybridFlow verl 原文浅析](https://github.com/zhaochenyang20/Awesome-ML-SYS-Tutorial/blob/main/rlhf/verl/readme.md)
+- [最高提升 20 倍吞吐量!豆包大模型团队发布全新 RLHF 框架,现已开源!](https://team.doubao.com/en/blog/%E6%9C%80%E9%AB%98%E6%8F%90%E5%8D%8720%E5%80%8D%E5%90%9E%E5%90%90%E9%87%8F-%E8%B1%86%E5%8C%85%E5%A4%A7%E6%A8%A1%E5%9E%8B%E5%9B%A2%E9%98%9F%E5%8F%91%E5%B8%83%E5%85%A8%E6%96%B0-rlhf-%E6%A1%86%E6%9E%B6-%E7%8E%B0%E5%B7%B2%E5%BC%80%E6%BA%90)
+
+## Performance Tuning Guide
+
+Performance is essential for on-policy RL algorithms. We have written a detailed [performance tuning guide](https://verl.readthedocs.io/en/latest/perf/perf_tuning.html) to help you optimize performance.
+
+## Upgrade to vLLM >= v0.8.2
+
+verl now supports vLLM>=0.8.2 when using FSDP as the training backend. Please refer to [this document](https://github.com/volcengine/verl/blob/main/docs/README_vllm0.8.md) for the installation guide and more information. Please avoid vllm 0.7.x, which contains bugs that may lead to OOMs and unexpected errors.
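The version guidance above can be encoded as a small runtime check. This is an illustrative sketch (not part of verl); the function name is hypothetical, and the boundaries simply mirror the advisory in this section (0.7.x is buggy, >=0.8.2 is supported with FSDP):

```python
from packaging.version import Version

def check_vllm_version(installed: str) -> str:
    """Classify an installed vLLM version against the guidance above.

    Hypothetical helper: 0.7.x has known OOM bugs, >=0.8.2 is supported
    with the FSDP training backend.
    """
    v = Version(installed)
    if Version("0.7.0") <= v < Version("0.8.0"):
        return "avoid: 0.7.x has known OOM bugs"
    if v >= Version("0.8.2"):
        return "supported"
    return "untested: prefer upgrading to >=0.8.2"

print(check_vllm_version("0.7.3"))  # flagged as a 0.7.x release
print(check_vllm_version("0.8.4"))  # supported
```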
+
+## Use Latest SGLang
+
+SGLang is fully supported with verl, and SGLang RL Group is working extensively on building unique features, including multi-turn agentic RL, VLM RLHF, server-based RL, and partial rollout. Please refer to [this document](https://verl.readthedocs.io/en/latest/workers/sglang_worker.html) for the installation guide and more information.
+
+## Upgrade to FSDP2
+
+verl is fully embracing FSDP2! FSDP2 is recommended by the torch distributed team, providing better throughput and memory usage, and is composable with other features (e.g. torch.compile). To enable FSDP2, simply use the verl main branch and set the following options:
+```
+actor_rollout_ref.ref.strategy=fsdp2
+actor_rollout_ref.actor.strategy=fsdp2
+critic.strategy=fsdp2
+reward_model.strategy=fsdp2
+```
+Furthermore, FSDP2 cpu offloading is compatible with gradient accumulation. You can turn it on to save memory with `actor_rollout_ref.actor.fsdp_config.offload_policy=True`. For more details, see https://github.com/volcengine/verl/pull/1026
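Putting these overrides together, a launch command might look like the following sketch. The `verl.trainer.main_ppo` entry point is verl's standard PPO launcher, but the data paths are placeholders for illustration; adapt them to your setup:

```
# Hypothetical example: enable FSDP2 for every role via Hydra-style overrides.
python3 -m verl.trainer.main_ppo \
    data.train_files=<path/to/train.parquet> \
    actor_rollout_ref.ref.strategy=fsdp2 \
    actor_rollout_ref.actor.strategy=fsdp2 \
    actor_rollout_ref.actor.fsdp_config.offload_policy=True \
    critic.strategy=fsdp2 \
    reward_model.strategy=fsdp2
```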
+
+## AMD Support (ROCm Kernel)
+
+verl now supports FSDP as the training engine (Megatron support coming soon) and integrates with both vLLM and SGLang as inference engines. Please refer to [this document](https://github.com/volcengine/verl/blob/main/docs/amd_tutorial/amd_build_dockerfile_page.rst) for the installation guide and more information, and [this document](https://github.com/volcengine/verl/blob/main/docs/amd_tutorial/amd_vllm_page.rst) for vLLM performance tuning on ROCm.
+
+
+## Citation and acknowledgement
+
+If you find the project helpful, please cite:
+
+- [HybridFlow: A Flexible and Efficient RLHF Framework](https://arxiv.org/abs/2409.19256v2)
+- [A Framework for Training Large Language Models for Code Generation via Proximal Policy Optimization](https://i.cs.hku.hk/~cwu/papers/gmsheng-NL2Code24.pdf)
+
+```bibtex
+@article{sheng2024hybridflow,
+ title = {HybridFlow: A Flexible and Efficient RLHF Framework},
+ author = {Guangming Sheng and Chi Zhang and Zilingfeng Ye and Xibin Wu and Wang Zhang and Ru Zhang and Yanghua Peng and Haibin Lin and Chuan Wu},
+ year = {2024},
+ journal = {arXiv preprint arXiv: 2409.19256}
+}
+```
+
+verl is inspired by the design of Nemo-Aligner, Deepspeed-chat and OpenRLHF. The project is adopted by and receives contributions from Bytedance, Anyscale, LMSys.org, [Alibaba Qwen team](https://github.com/QwenLM/), Shanghai AI Lab, Tsinghua University, UC Berkeley, UCLA, UIUC, University of Hong Kong, ke.com, [All Hands AI](https://www.all-hands.dev/), [ModelBest](http://modelbest.cn/), JD AI Lab, Microsoft Research, [StepFun](https://www.stepfun.com/), Amazon, LinkedIn, Meituan, [Camel-AI](https://www.camel-ai.org/), [OpenManus](https://github.com/OpenManus), Xiaomi, NVIDIA research, [Baichuan](https://www.baichuan-ai.com/home), [RedNote](https://www.xiaohongshu.com/), [SwissAI](https://www.swiss-ai.org/), [Moonshot AI (Kimi)](https://www.moonshot-ai.com/), Baidu, Snowflake, Skywork.ai, JetBrains, [IceSword Lab](https://www.iceswordlab.com), and many more.
+
+## Awesome work using verl
+
+- [TinyZero](https://github.com/Jiayi-Pan/TinyZero): a reproduction of **DeepSeek R1 Zero** recipe for reasoning tasks 
+- [SkyThought](https://github.com/NovaSky-AI/SkyThought): RL training for Sky-T1-7B by NovaSky AI team. 
+- [simpleRL-reason](https://github.com/hkust-nlp/simpleRL-reason): SimpleRL-Zoo: Investigating and Taming Zero Reinforcement Learning for Open Base Models in the Wild 
+- [Easy-R1](https://github.com/hiyouga/EasyR1): **Multi-modal** RL training framework 
+- [OpenManus-RL](https://github.com/OpenManus/OpenManus-RL): LLM agent RL tuning framework for multiple agent environments.
+- [rllm](https://github.com/agentica-project/rllm): async RL training with [verl-pipeline](https://github.com/agentica-project/verl-pipeline) 
+- [RAGEN](https://github.com/ZihanWang314/ragen): a general-purpose reasoning **agent** training framework 
+- [Search-R1](https://github.com/PeterGriffinJin/Search-R1): RL with reasoning and **searching (tool-call)** interleaved LLMs 
+- [ReSearch](https://github.com/Agent-RL/ReSearch): Learning to **Re**ason with **Search** for LLMs via Reinforcement Learning 
+- [Skywork-OR1](https://github.com/SkyworkAI/Skywork-OR1): Skywork open reasoner series
+- [ToRL](https://github.com/GAIR-NLP/ToRL): Scaling tool-integrated RL 
+- [Absolute Zero Reasoner](https://github.com/LeapLabTHU/Absolute-Zero-Reasoner): [A self-play reasoning framework with no human-curated data](https://arxiv.org/abs/2505.03335)
+- [verl-agent](https://github.com/langfengQ/verl-agent): A scalable training framework for **long-horizon LLM/VLM agents**, along with a new algorithm **GiGPO** 
+- [RL-Factory](https://github.com/Simple-Efficient/RL-Factory): An easy and efficient RL post-training framework for Agentic Learning 
+- [ReTool](https://retool-rl.github.io/): ReTool: reinforcement learning for strategic tool use in LLMs. Code release is in progress...
+- [verl-tool](https://github.com/TIGER-AI-Lab/verl-tool): A unified and easy-to-extend tool-agent training framework based on verl
+- [PRIME](https://github.com/PRIME-RL/PRIME): Process reinforcement through implicit rewards 
+- [MemAgent](https://github.com/BytedTsinghua-SIA/MemAgent): MemAgent: Reshaping Long-Context LLM with Multi-Conv RL based Memory Agent 
+- [POLARIS](https://github.com/ChenxinAn-fdu/POLARIS): A Post-training recipe for scaling RL on Advanced Reasoning models 
+- [GUI-R1](https://github.com/ritzz-ai/GUI-R1): **GUI-R1**: A Generalist R1-style Vision-Language Action Model For **GUI Agents** 
+- [DeepRetrieval](https://github.com/pat-jj/DeepRetrieval): RL Training of **Search Agent** with **Search/Retrieval Outcome** 
+- [Code-R1](https://github.com/ganler/code-r1): Reproducing R1 for **Code** with Reliable Rewards 
+- [DeepResearcher](https://github.com/GAIR-NLP/DeepResearcher): Scaling deep research via reinforcement learning in real-world environments 
+- [VAGEN](https://github.com/RAGEN-AI/VAGEN): Training VLM agents with multi-turn reinforcement learning 
+- [RM-R1](https://arxiv.org/abs/2505.02387): RL training of reasoning reward models 
+- [LUFFY](https://arxiv.org/pdf/2504.14945): Learning to Reason under Off-Policy Guidance
+- [DeepMath](https://github.com/zwhe99/DeepMath): DeepMath-103K data and series models for math reasoning
+- [PACS](https://github.com/ritzz-ai/PACS): Implicit Actor Critic Coupling via a Supervised Learning Framework for RLVR 
+- [Entropy Mechanism of RL](https://github.com/PRIME-RL/Entropy-Mechanism-of-RL): The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning
+- [LLaSA-TTS-GRPO](https://github.com/channel-io/ch-tts-llasa-rl-grpo): TTS fine-tuning with GRPO optimization based on LLASA models 
+- [PF-PPO](https://arxiv.org/abs/2409.06957): Policy Filtration for PPO based on the reliability of reward signals for more efficient and robust RLHF.
+- [RACRO](https://github.com/gyhdog99/RACRO2): Build multi-modal reasoning models by decoupling them into query-conditioned captioning and text-only reasoning
+- [Agent Lightning](https://github.com/microsoft/agent-lightning): A flexible and extensible framework that enables seamless agent optimization for any existing agent framework. 
+- [VTool-R1](https://github.com/VTOOL-R1/vtool-r1): VLMs Learn to Think with Images via Reinforcement Learning on Multimodal Tool Use. 
+- [Kimina-Prover-RL](https://github.com/project-numina/kimina-prover-rl/tree/main/recipe/kimina_prover_rl): Training pipeline for formal theorem proving, based on a paradigm inspired by DeepSeek-R1.
+- [RL-PLUS](https://github.com/YihongDong/RL-PLUS): Countering Capability Boundary Collapse of LLMs in Reinforcement Learning with Hybrid-policy Optimization.
+- [rStar2-Agent](https://github.com/microsoft/rStar): Using reinforcement learning with multi-step tool-calling for math tasks, rStar2-Agent-14B reaches frontier-level math reasoning in just 510 RL training steps 
+- [Vision-SR1](https://github.com/zli12321/Vision-SR1): Self-Rewarding Vision-Language Model via Reasoning Decomposition 
+- [SimpleVLA-RL](https://github.com/PRIME-RL/SimpleVLA-RL): SimpleVLA-RL: A Simple yet Effective Vision-Language Action Model for Reinforcement Learning 
+
+and many more awesome projects listed in [recipe](recipe/README.md).
+
+## Contribution Guide
+
+See the [contribution guide](CONTRIBUTING.md).
+
+## About [ByteDance Seed Team](https://team.doubao.com/)
+
+Founded in 2023, the ByteDance Seed Team is dedicated to crafting the industry's most advanced AI foundation models. The team aspires to become a world-class research team and make significant contributions to the advancement of science and society. You can get to know ByteDance Seed better through the following channels👇
+
+---
+
+We are HIRING! Send us an [email](mailto:the.verl.project@gmail.com) if you are interested in internship/FTE opportunities in RL for agents.
diff --git a/src/verl/__init__.py b/src/verl/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/src/verl/pyproject.toml b/src/verl/pyproject.toml
new file mode 100644
index 0000000000000000000000000000000000000000..1426b507b0e06d6025f004248848cef730b53325
--- /dev/null
+++ b/src/verl/pyproject.toml
@@ -0,0 +1,113 @@
+# -------------------------------
+# build-system
+# -------------------------------
+[build-system]
+requires = [
+ "setuptools>=61.0",
+ "wheel"
+]
+build-backend = "setuptools.build_meta"
+
+# -------------------------------
+# project (PEP 621 metadata)
+# -------------------------------
+[project]
+name = "verl"
+# We'll mark the version as "dynamic" because it's read from the file "verl/version/version"
+# (PEP 621 calls this "dynamic version").
+# The actual version is specified in the [tool.setuptools.dynamic] section below.
+dynamic = ["version", "dependencies", "optional-dependencies", "authors", "urls"]
+
+description = "verl: Volcano Engine Reinforcement Learning for LLM"
+license = {text = "Apache-2.0"} # Changed from file to text format
+readme = {file = "README.md", content-type = "text/markdown"}
+requires-python = ">=3.10"
+
+# -------------------------------
+# tool.ruff - Linting configuration
+# -------------------------------
+[tool.ruff]
+# Note: While the formatter will attempt to format lines such that they remain within the line-length,
+# it isn't a hard upper bound, and formatted lines may exceed the line-length.
+line-length = 120
+exclude = ["tests/workers/rollout/test_sglang_async_rollout_sf_tools.py", "scripts/legacy_model_merger.py"]
+
+[tool.ruff.lint]
+isort = {known-first-party = ["verl"]}
+# c.f. https://github.com/vllm-project/vllm/blob/ce8d6b75fc0586045df75ee1568a5b5f9957251b/pyproject.toml
+select = [
+ # pycodestyle
+ "E",
+ # Pyflakes
+ "F",
+ # pyupgrade
+ "UP",
+ # flake8-bugbear
+ "B",
+ # isort
+ "I",
+ "G",
+]
+ignore = [
+ # star imports
+ "F405", "F403",
+ # lambda expression assignment
+ "E731",
+ # Loop control variable not used within loop body
+ "B007",
+ # f-string format
+ "UP032",
+ # `.log()` statement uses f-string
+ "G004",
+ # X | None for type annotations
+ "UP045",
+ # deprecated import
+ "UP035",
+]
+
+# -------------------------------
+# tool.mypy - typechecking config
+# -------------------------------
+[tool.mypy]
+pretty = true
+ignore_missing_imports = true
+explicit_package_bases = true
+follow_imports = "skip"
+
+# Blanket silence
+ignore_errors = true
+
+[[tool.mypy.overrides]]
+module = [
+"verl.trainer.config.algorithm",
+"verl.trainer.ppo.core_algos",
+"verl.trainer.ppo.reward",
+"verl.workers.reward_manager",
+"verl.workers.reward_manager.*",
+]
+ignore_errors = false
+
+# -------------------------------
+# tool.setuptools - Additional config
+# -------------------------------
+[tool.setuptools]
+# True means `setuptools` will attempt to include all relevant files in package_data automatically.
+# This corresponds to `include_package_data=True` in setup.py.
+include-package-data = true
+
+# We read the version from a file in 'verl/version/version'
+[tool.setuptools.dynamic]
+version = {file = "verl/version/version"}
+
+# If you need to mimic `package_dir={'': '.'}`:
+[tool.setuptools.package-dir]
+"" = "."
+
+# If you need to include specific non-Python data (like YAML files or version file):
+# This is the rough equivalent of package_data={'': ['version/*'], 'verl': ['trainer/config/*.yaml']}
+[tool.setuptools.package-data]
+verl = [
+ "version/*",
+ "trainer/config/*.yaml",
+ "trainer/config/*/*.yaml",
+]
diff --git a/src/verl/requirements-npu.txt b/src/verl/requirements-npu.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ae9ed11615696ffe3302a0b958ccab40d02e1de2
--- /dev/null
+++ b/src/verl/requirements-npu.txt
@@ -0,0 +1,21 @@
+# requirements.txt records the full set of dependencies for development
+accelerate
+codetiming
+datasets
+dill
+hydra-core
+numpy<2.0.0
+pandas
+peft>=0.15.2
+pyarrow>=15.0.0
+pybind11
+pylatexenc
+tensordict>=0.8.0,<=0.10.0,!=0.9.0
+transformers==4.52.4
+ray==2.46.0
+wandb
+mathruler
+torchdata
+einops
+qwen_vl_utils
+torchvision==0.20.1
diff --git a/src/verl/requirements.txt b/src/verl/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..64dc7f58509f9f07f4e897aa3d939e2f0a433754
--- /dev/null
+++ b/src/verl/requirements.txt
@@ -0,0 +1,27 @@
+# requirements.txt records the full set of dependencies for development
+accelerate
+codetiming
+datasets
+dill
+flash-attn
+hydra-core
+liger-kernel
+numpy<2.0.0
+pandas
+peft
+pyarrow>=19.0.0
+pybind11
+pylatexenc
+pre-commit
+ray[default]
+tensordict>=0.8.0,<=0.10.0,!=0.9.0
+torchdata
+transformers
+# vllm==0.8.4
+wandb
+packaging>=20.0
+uvicorn
+fastapi
+latex2sympy2_extended
+math_verify
+tensorboard
\ No newline at end of file
diff --git a/src/verl/requirements_sglang.txt b/src/verl/requirements_sglang.txt
new file mode 100644
index 0000000000000000000000000000000000000000..34e23f4cd2ac4cdb0e3c43da48f518679f0fddff
--- /dev/null
+++ b/src/verl/requirements_sglang.txt
@@ -0,0 +1,21 @@
+# requirements_sglang.txt records the full set of dependencies for SGLang development
+accelerate
+codetiming
+datasets
+dill
+flash-attn
+hydra-core
+numpy<2.0.0
+pandas
+peft
+pyarrow>=19.0.0
+pybind11
+pylatexenc
+ray[default]>=2.10
+tensordict>=0.8.0,<=0.10.0,!=0.9.0
+torchdata
+torchvision
+transformers
+wandb
+sglang[all]==0.4.10.post2
+huggingface_hub
diff --git a/src/verl/setup.py b/src/verl/setup.py
new file mode 100644
index 0000000000000000000000000000000000000000..780d622e14743e54071f34483f703b39b8815173
--- /dev/null
+++ b/src/verl/setup.py
@@ -0,0 +1,96 @@
+# Copyright 2024 Bytedance Ltd. and/or its affiliates
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# setup.py is the fallback installation script when pyproject.toml does not work
+import os
+from pathlib import Path
+
+from setuptools import find_packages, setup
+
+version_folder = os.path.dirname(os.path.abspath(__file__))
+
+with open(os.path.join(version_folder, "verl/version/version")) as f:
+ __version__ = f.read().strip()
+
+install_requires = [
+ "accelerate",
+ "codetiming",
+ "datasets",
+ "dill",
+ "hydra-core",
+ "numpy<2.0.0",
+ "pandas",
+ "peft",
+ "pyarrow>=19.0.0",
+ "pybind11",
+ "pylatexenc",
+ "ray[default]>=2.41.0",
+ "torchdata",
+ "tensordict>=0.8.0,<=0.10.0,!=0.9.0",
+ "transformers",
+ "wandb",
+ "packaging>=20.0",
+ "tensorboard",
+]
+
+TEST_REQUIRES = ["pytest", "pre-commit", "py-spy", "pytest-asyncio"]
+PRIME_REQUIRES = ["pyext"]
+GEO_REQUIRES = ["mathruler", "torchvision", "qwen_vl_utils"]
+GPU_REQUIRES = ["liger-kernel", "flash-attn"]
+MATH_REQUIRES = ["math-verify"] # Add math-verify as an optional dependency
+VLLM_REQUIRES = ["tensordict>=0.8.0,<=0.10.0,!=0.9.0", "vllm>=0.7.3,<=0.9.1"]
+SGLANG_REQUIRES = [
+ "tensordict>=0.8.0,<=0.10.0,!=0.9.0",
+ "sglang[srt,openai]==0.4.10.post2",
+ "torch==2.7.1",
+]
+TRL_REQUIRES = ["trl<=0.9.6"]
+MCORE_REQUIRES = ["mbridge"]
+
+extras_require = {
+ "test": TEST_REQUIRES,
+ "prime": PRIME_REQUIRES,
+ "geo": GEO_REQUIRES,
+ "gpu": GPU_REQUIRES,
+ "math": MATH_REQUIRES,
+ "vllm": VLLM_REQUIRES,
+ "sglang": SGLANG_REQUIRES,
+ "trl": TRL_REQUIRES,
+ "mcore": MCORE_REQUIRES,
+}
+
+
+this_directory = Path(__file__).parent
+long_description = (this_directory / "README.md").read_text()
+
+setup(
+ name="verl",
+ version=__version__,
+ package_dir={"": "."},
+ packages=find_packages(where="."),
+ url="https://github.com/volcengine/verl",
+ license="Apache 2.0",
+ author="Bytedance - Seed - MLSys",
+ author_email="zhangchi.usc1992@bytedance.com, gmsheng@connect.hku.hk",
+ description="verl: Volcano Engine Reinforcement Learning for LLM",
+ install_requires=install_requires,
+ extras_require=extras_require,
+ package_data={
+ "": ["version/*"],
+ "verl": ["trainer/config/*.yaml"],
+ },
+ include_package_data=True,
+ long_description=long_description,
+ long_description_content_type="text/markdown",
+)
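For context on how the `extras_require` table above behaves at install time: `pip install verl[vllm]` resolves to the base `install_requires` plus the `vllm` extra's pins. A minimal sketch of that union, with abridged requirement lists copied from the setup above (`resolve` is an illustrative helper, not part of verl):

```python
# Abridged copies of the tables defined in setup.py above.
install_requires = ["accelerate", "ray[default]>=2.41.0", "transformers"]
extras_require = {
    "vllm": ["tensordict>=0.8.0,<=0.10.0,!=0.9.0", "vllm>=0.7.3,<=0.9.1"],
    "gpu": ["liger-kernel", "flash-attn"],
}


def resolve(extra: str) -> list:
    # pip installs the base requirements plus the requested extra's list.
    return install_requires + extras_require.get(extra, [])
```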
diff --git a/src/verl/test_zone_layout_reward.py b/src/verl/test_zone_layout_reward.py
new file mode 100644
index 0000000000000000000000000000000000000000..b02000a7e42734bf5427096e48fc39511db0d54c
--- /dev/null
+++ b/src/verl/test_zone_layout_reward.py
@@ -0,0 +1,360 @@
+#!/usr/bin/env python
+"""
+Test script to validate the complete reward computation for zone layout.
+
+This script tests the reward manager by:
+1. Loading a prediction JSON file
+2. Extracting the 'predict' field as solution_str
+3. Computing rewards using the zone_layout_reward functions
+"""
+
+import sys
+import os
+import json
+
+# Add the project root and the verl directory to PYTHONPATH
+SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
+PROJECT_ROOT = os.path.dirname(SCRIPT_DIR)
+sys.path.insert(0, PROJECT_ROOT)
+sys.path.insert(0, SCRIPT_DIR)  # verl directory
+
+from verl.trainer.ppo.zone_layout.zone_layout_reward import (
+ compute_format_reward,
+ compute_boundary_reward,
+ compute_collision_reward,
+ compute_convex_hull_reward,
+ compute_scene_richness_reward,
+ sample_assets_for_layout,
+ # V2 ratio-based reward functions
+ compute_boundary_reward_v2,
+ compute_collision_reward_v2,
+ compute_zone_reward_v2,
+ # Helper functions: compute total volumes
+ _compute_asset_meshes_and_volumes,
+ _compute_total_oob_volume,
+ _compute_total_collision_volume,
+)
+
+
+def test_reward_computation(json_path: str):
+ """Test the complete reward computation pipeline."""
+
+ print("=" * 80)
+ print("Zone Layout Reward Computation Test")
+ print("=" * 80)
+
+ # 1. Load the JSON file
+ print(f"\n[1] Loading JSON file: {json_path}")
+ try:
+ with open(json_path, 'r', encoding='utf-8') as f:
+ data = json.load(f)
+ print(" ✓ JSON loaded successfully")
+ except Exception as e:
+ print(f" ✗ Failed to load JSON: {e}")
+ return False
+
+ # 2. Extract predict field
+ print("\n[2] Extracting 'predict' field...")
+ solution_str = data.get('predict', '')
+ if not solution_str:
+ print(" ✗ 'predict' field is empty or missing")
+ return False
+ print(f" ✓ Solution string length: {len(solution_str)} chars")
+ print(f" Preview: {solution_str[:200]}...")
+
+ # 3. Parse format and extract layout
+ print("\n[3] Computing format reward and parsing layout...")
+ try:
+ format_score, processed_layout = compute_format_reward(solution_str)
+ print(f" Format score: {format_score}")
+
+ if format_score == -1.0 or not processed_layout:
+ print(" ✗ Format parsing failed!")
+ return False
+
+ print(" ✓ Layout parsed successfully")
+
+ # Show layout structure
+ if processed_layout:
+ print(f" Layout keys: {list(processed_layout.keys())}")
+ if 'functional_zones' in processed_layout:
+ zones = processed_layout['functional_zones']
+ print(f" Number of zones: {len(zones)}")
+ for i, zone in enumerate(zones):
+ assets = zone.get('assets', [])
+ print(f" Zone {i+1} ({zone.get('semantic_label', 'N/A')}): {len(assets)} assets")
+ if 'objects' in processed_layout:
+ print(f" Flattened objects: {len(processed_layout.get('objects', []))}")
+ except Exception as e:
+ print(f" ✗ Error during format parsing: {e}")
+ import traceback
+ traceback.print_exc()
+ return False
+
+ # 4. Sample assets (apply model_id to objects)
+ print("\n[4] Sampling assets for layout...")
+ try:
+ processed_layout = sample_assets_for_layout(processed_layout)
+ print(" ✓ Assets sampled successfully")
+
+ # Check if objects have model_id
+ objects = processed_layout.get('objects', [])
+ print(f" Total flattened objects: {len(objects)}")
+ if objects:
+ print("\n Object details:")
+ for i, obj in enumerate(objects):
+ cat = obj.get('category', 'N/A')
+ model_id = obj.get('model_id', 'N/A')
+ pos = obj.get('pos', 'N/A')
+ size = obj.get('size', 'N/A')
+ print(f" [{i+1}] {cat}: model_id={model_id[:30] if model_id != 'N/A' else 'N/A'}... pos={pos}, size={size}")
+ except Exception as e:
+ print(f" ✗ Error during asset sampling: {e}")
+ import traceback
+ traceback.print_exc()
+ return False
+
+ # 5. Compute V1 rewards
+ print("\n[5] Computing V1 rewards...")
+ v1_rewards = {}
+
+ try:
+ print(" Computing boundary reward (V1)...")
+ v1_rewards['boundary'] = compute_boundary_reward(processed_layout)
+ print(f" boundary: {v1_rewards['boundary']:.4f}")
+ except Exception as e:
+ print(f" ✗ Error: {e}")
+ v1_rewards['boundary'] = None
+
+ try:
+ print(" Computing collision reward (V1)...")
+ v1_rewards['collision'] = compute_collision_reward(processed_layout)
+ print(f" collision: {v1_rewards['collision']:.4f}")
+ except Exception as e:
+ print(f" ✗ Error: {e}")
+ v1_rewards['collision'] = None
+
+ try:
+ print(" Computing convex_hull reward (V1)...")
+ v1_rewards['convex_hull'] = compute_convex_hull_reward(processed_layout)
+ print(f" convex_hull: {v1_rewards['convex_hull']:.4f}")
+ except Exception as e:
+ print(f" ✗ Error: {e}")
+ v1_rewards['convex_hull'] = None
+
+ try:
+ print(" Computing scene_richness reward...")
+ v1_rewards['scene_richness'] = compute_scene_richness_reward(processed_layout)
+ print(f" scene_richness: {v1_rewards['scene_richness']:.4f}")
+ except Exception as e:
+ print(f" ✗ Error: {e}")
+ v1_rewards['scene_richness'] = None
+
+ # 5.5 Compute total volume metrics
+ print("\n[5.5] Computing volume metrics...")
+ volume_metrics = {}
+
+ # Debug: check the object count
+ from verl.trainer.ppo.zone_layout.utils import _get_all_objects
+ all_objs = _get_all_objects(processed_layout)
+ print(f" DEBUG: _get_all_objects returned {len(all_objs)} objects")
+
+ # Check top-level assets
+ top_assets = processed_layout.get('assets', [])
+ print(f" DEBUG: processed_layout['assets'] has {len(top_assets)} items")
+
+ # Check assets inside functional_zones
+ fz_assets = []
+ for zone in processed_layout.get('functional_zones', []):
+ fz_assets.extend(zone.get('assets', []))
+ print(f" DEBUG: functional_zones assets total: {len(fz_assets)} items")
+
+ try:
+ print(" Computing total mesh volume...")
+ meshes_and_volumes = _compute_asset_meshes_and_volumes(processed_layout)
+ total_mesh_volume = sum(mv['volume'] for mv in meshes_and_volumes)
+ volume_metrics['total_mesh_volume'] = total_mesh_volume
+ print(f" Total mesh volume: {total_mesh_volume:.6f} m³")
+ print(f" Number of objects with mesh: {len(meshes_and_volumes)}")
+
+ # Debug: check which objects failed to load
+ loaded_cats = {mv['obj'].get('category', '') for mv in meshes_and_volumes}
+ all_cats = {obj.get('category', '') for obj in all_objs}
+ print(f"\n DEBUG: All categories: {all_cats}")
+ print(f" DEBUG: Loaded categories: {loaded_cats}")
+
+ # Check each object's model_id
+ print("\n DEBUG: Checking all objects:")
+ for i, obj in enumerate(all_objs):
+ cat = obj.get('category', 'N/A')
+ model_id = obj.get('model_id', 'MISSING')
+ size = obj.get('size', 'N/A')
+ loaded = any(mv['obj'] is obj for mv in meshes_and_volumes)
+ status = "✓" if model_id != 'MISSING' and loaded else "✗"
+ print(f" [{i+1}] {status} {cat}: model_id={model_id[:40] if model_id != 'MISSING' else 'MISSING'}... size={size}")
+
+ # Print info for each successfully loaded mesh
+ print("\n Loaded meshes:")
+ for mv in meshes_and_volumes:
+ obj = mv['obj']
+ cat = obj.get('category', 'N/A')
+ model_id = obj.get('model_id', 'N/A')[:30] if obj.get('model_id') else 'N/A'
+ vol = mv['volume']
+ print(f" - {cat} ({model_id}...): {vol:.6f} m³")
+ except Exception as e:
+ print(f" ✗ Error computing mesh volume: {e}")
+ import traceback
+ traceback.print_exc()
+ volume_metrics['total_mesh_volume'] = None
+
+ try:
+ print(" Computing total OOB volume...")
+ total_oob_volume = _compute_total_oob_volume(processed_layout)
+ volume_metrics['total_oob_volume'] = total_oob_volume
+ print(f" Total OOB volume: {total_oob_volume:.6f} m³")
+
+ # Compute the OOB ratio
+ if volume_metrics.get('total_mesh_volume') and volume_metrics['total_mesh_volume'] > 0:
+ oob_ratio = total_oob_volume / volume_metrics['total_mesh_volume']
+ volume_metrics['oob_ratio'] = oob_ratio
+ print(f" OOB ratio: {oob_ratio:.6f} ({oob_ratio*100:.4f}%)")
+ except Exception as e:
+ print(f" ✗ Error computing OOB volume: {e}")
+ import traceback
+ traceback.print_exc()
+ volume_metrics['total_oob_volume'] = None
+
+ try:
+ print(" Computing total collision volume...")
+ total_collision_volume = _compute_total_collision_volume(processed_layout)
+ volume_metrics['total_collision_volume'] = total_collision_volume
+ print(f" Total collision volume: {total_collision_volume:.6f} m³")
+
+ # Compute the collision ratio
+ if volume_metrics.get('total_mesh_volume') and volume_metrics['total_mesh_volume'] > 0:
+ collision_ratio = total_collision_volume / volume_metrics['total_mesh_volume']
+ volume_metrics['collision_ratio'] = collision_ratio
+ print(f" Collision ratio: {collision_ratio:.6f} ({collision_ratio*100:.4f}%)")
+ except Exception as e:
+ print(f" ✗ Error computing collision volume: {e}")
+ import traceback
+ traceback.print_exc()
+ volume_metrics['total_collision_volume'] = None
+
+ # 6. Compute V2 rewards (ratio-based, anti-hacking)
+ print("\n[6] Computing V2 rewards (ratio-based, anti-hacking)...")
+ v2_rewards = {}
+
+ try:
+ print(" Computing boundary_v2 reward...")
+ v2_rewards['boundary_v2'] = compute_boundary_reward_v2(processed_layout)
+ print(f" boundary_v2: {v2_rewards['boundary_v2']:.4f}")
+ except Exception as e:
+ print(f" ✗ Error: {e}")
+ import traceback
+ traceback.print_exc()
+ v2_rewards['boundary_v2'] = None
+
+ try:
+ print(" Computing collision_v2 reward...")
+ v2_rewards['collision_v2'] = compute_collision_reward_v2(processed_layout)
+ print(f" collision_v2: {v2_rewards['collision_v2']:.4f}")
+ except Exception as e:
+ print(f" ✗ Error: {e}")
+ import traceback
+ traceback.print_exc()
+ v2_rewards['collision_v2'] = None
+
+ try:
+ print(" Computing zone_reward_v2 (merged zone OOB + convex hull)...")
+ v2_rewards['zone_v2'] = compute_zone_reward_v2(processed_layout)
+ print(f" zone_v2: {v2_rewards['zone_v2']:.4f}")
+ except Exception as e:
+ print(f" ✗ Error: {e}")
+ import traceback
+ traceback.print_exc()
+ v2_rewards['zone_v2'] = None
+
+ # 7. Compute weighted total (using default weights)
+ print("\n[7] Computing weighted total reward...")
+
+ # Default V2 weights from zone_layout.py
+ weights = {
+ 'format': 0.35,
+ 'boundary_v2': 0.25,
+ 'collision_v2': 0.25,
+ 'convex_hull_v2': 0.15, # uses zone_v2 internally
+ }
+
+ total_reward = 0.0
+ total_reward += weights['format'] * format_score
+
+ if v2_rewards.get('boundary_v2') is not None:
+ total_reward += weights['boundary_v2'] * v2_rewards['boundary_v2']
+
+ if v2_rewards.get('collision_v2') is not None:
+ total_reward += weights['collision_v2'] * v2_rewards['collision_v2']
+
+ if v2_rewards.get('zone_v2') is not None:
+ total_reward += weights['convex_hull_v2'] * v2_rewards['zone_v2']
+
+ print(f"\n Weighted Total Reward: {total_reward:.4f}")
+ print("\n Component breakdown:")
+ print(f" format ({weights['format']:.2f}): {format_score:.4f} -> {weights['format'] * format_score:.4f}")
+ if v2_rewards.get('boundary_v2') is not None:
+ print(f" boundary_v2 ({weights['boundary_v2']:.2f}): {v2_rewards['boundary_v2']:.4f} -> {weights['boundary_v2'] * v2_rewards['boundary_v2']:.4f}")
+ if v2_rewards.get('collision_v2') is not None:
+ print(f" collision_v2 ({weights['collision_v2']:.2f}): {v2_rewards['collision_v2']:.4f} -> {weights['collision_v2'] * v2_rewards['collision_v2']:.4f}")
+ if v2_rewards.get('zone_v2') is not None:
+ print(f" zone_v2 ({weights['convex_hull_v2']:.2f}): {v2_rewards['zone_v2']:.4f} -> {weights['convex_hull_v2'] * v2_rewards['zone_v2']:.4f}")
+
+ # 8. Summary
+ print("\n" + "=" * 80)
+ print("TEST SUMMARY")
+ print("=" * 80)
+
+ all_passed = True
+
+ print("\n[V1 Rewards]")
+ for name, value in v1_rewards.items():
+ status = "✓" if value is not None else "✗"
+ value_str = f"{value:.4f}" if value is not None else "FAILED"
+ print(f" {status} {name}: {value_str}")
+ if value is None:
+ all_passed = False
+
+ print("\n[V2 Rewards]")
+ for name, value in v2_rewards.items():
+ status = "✓" if value is not None else "✗"
+ value_str = f"{value:.4f}" if value is not None else "FAILED"
+ print(f" {status} {name}: {value_str}")
+ if value is None:
+ all_passed = False
+
+ print(f"\n[Format Score]: {format_score:.4f}")
+ print(f"[Total Reward]: {total_reward:.4f}")
+
+ if all_passed:
+ print("\n✓ ALL TESTS PASSED")
+ else:
+ print("\n✗ SOME TESTS FAILED")
+
+ return all_passed
+
+
+if __name__ == "__main__":
+ # Default test file path
+ default_json_path = "/home/v-meiszhang/amlt-project/group-layout/infer_results/individual_samples/full_20251215_002635/checkpoint-1200/arkitscenes_Training_47331258.json"
+
+ # Use command line argument if provided
+ if len(sys.argv) > 1:
+ json_path = sys.argv[1]
+ else:
+ json_path = default_json_path
+
+ if not os.path.exists(json_path):
+ print(f"Error: File not found: {json_path}")
+ sys.exit(1)
+
+ success = test_reward_computation(json_path)
+ sys.exit(0 if success else 1)
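The weighting logic in step [7] of the test script can be exercised in isolation. A small sketch with the default weights copied from above; component scores are illustrative, failed components are represented as `None` and contribute nothing, and `weighted_total` is a hypothetical standalone helper:

```python
WEIGHTS = {"format": 0.35, "boundary_v2": 0.25, "collision_v2": 0.25, "convex_hull_v2": 0.15}
# In the script, the zone_v2 score feeds the convex_hull_v2 weight.
KEY_MAP = {"convex_hull_v2": "zone_v2"}


def weighted_total(scores: dict) -> float:
    total = 0.0
    for name, weight in WEIGHTS.items():
        score = scores.get(KEY_MAP.get(name, name))
        if score is not None:  # a failed component is skipped, as in the script
            total += weight * score
    return total
```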