# TRL - Transformer Reinforcement Learning

TRL is a full-stack library providing a set of tools to train transformer language models with Reinforcement Learning, from the Supervised Fine-tuning (SFT) and Reward Modeling (RM) steps to the Proximal Policy Optimization (PPO) step. The library is integrated with 🤗 [transformers](https://github.com/huggingface/transformers).
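For instance, the first of those steps, supervised fine-tuning, can be as short as the following sketch (the dataset, checkpoint, and argument choices here are illustrative assumptions; see the `SFTTrainer` docs linked below for the authoritative API):

```python
from datasets import load_dataset
from trl import SFTTrainer

# Any causal LM and text dataset work here; "imdb" and the
# facebook/opt-350m checkpoint are just illustrative choices.
dataset = load_dataset("imdb", split="train")

trainer = SFTTrainer(
    "facebook/opt-350m",        # model name or a preloaded model
    train_dataset=dataset,
    dataset_text_field="text",  # dataset column containing the raw text
    max_seq_length=512,
)
trainer.train()
```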
Check the appropriate sections of the documentation depending on your needs:

## API documentation

- [Model Classes](models): *A brief overview of what each public model class does.*
- [`SFTTrainer`](sft_trainer): *Supervised fine-tune your model easily with `SFTTrainer`.*
- [`RewardTrainer`](reward_trainer): *Easily train your reward model using `RewardTrainer`.*
- [`PPOTrainer`](ppo_trainer): *Further fine-tune the supervised fine-tuned model using the PPO algorithm (a minimal loop is sketched after the lists below).*
- [Best-of-N Sampling](best-of-n): *Use best-of-n sampling as an alternative way to sample predictions from your active model.*
- [`DPOTrainer`](dpo_trainer): *Direct Preference Optimization training using `DPOTrainer`.*
- [`TextEnvironment`](text_environments): *Text environment to train your model using tools with RL.*

## Examples

- [Sentiment Tuning](sentiment_tuning): *Fine-tune your model to generate positive movie content.*
- [Training with PEFT](lora_tuning_peft): *Memory-efficient RLHF training using adapters with PEFT.*
- [Detoxifying LLMs](detoxifying_a_lm): *Detoxify your language model through RLHF.*
- [StackLlama](using_llama_models): *End-to-end RLHF training of a Llama model on the Stack Exchange dataset.*
- [Learning with Tools](learning_tools): *Walkthrough of using `TextEnvironments`.*
- [Multi-Adapter Training](multi_adapter_rl): *Use a single base model and multiple adapters for memory-efficient end-to-end training.*
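The PPO step follows a generate, score, optimize loop. Below is a minimal sketch of one such iteration with `PPOTrainer`; the model name, generation settings, and the constant stand-in reward are assumptions for illustration (a real run scores responses with a reward model), so treat the `PPOTrainer` docs as the reference for the exact signatures in your TRL version:

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

# A small model keeps the sketch cheap to run; any causal LM works.
config = PPOConfig(model_name="gpt2", batch_size=1, mini_batch_size=1)
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token

ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

# 1. Encode a query and let the current policy generate a response.
query_tensor = tokenizer.encode("This movie was really", return_tensors="pt")[0]
response_tensor = ppo_trainer.generate(
    query_tensor,
    return_prompt=False,
    max_new_tokens=20,
    pad_token_id=tokenizer.eos_token_id,
)[0]

# 2. Score the response. A constant reward stands in for a reward model here.
reward = [torch.tensor(1.0)]

# 3. Run one PPO optimization step on the (query, response, reward) triple.
stats = ppo_trainer.step([query_tensor], [response_tensor], reward)
```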
## Blog posts

- Preference Optimization for Vision Language Models with TRL (published July 10, 2024)
- Putting RL back in RLHF (published June 12, 2024)
- Finetune Stable Diffusion Models with DDPO via TRL (published September 29, 2023)
- Fine-tune Llama 2 with DPO (published August 8, 2023)
- StackLLaMA: A hands-on guide to train LLaMA with RLHF (published April 5, 2023)
- Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU (published March 9, 2023)
- Illustrating Reinforcement Learning from Human Feedback (published December 9, 2022)