
RLOO Trainer

Overview

TRL supports the RLOO Trainer for training language models, as described in the paper Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs by Arash Ahmadian, Chris Cremer, Matthias Gallé, Marzieh Fadaee, Julia Kreutzer, Ahmet Üstün and Sara Hooker.

The abstract from the paper is the following:

AI alignment in the shape of Reinforcement Learning from Human Feedback (RLHF) is increasingly treated as a crucial ingredient for high performance large language models. Proximal Policy Optimization (PPO) has been positioned by recent literature as the canonical method for the RL part of RLHF. However, it involves both high computational cost and sensitive hyperparameter tuning. We posit that most of the motivational principles that led to the development of PPO are less of a practical concern in RLHF and advocate for a less computationally expensive method that preserves and even increases performance. We revisit the formulation of alignment from human preferences in the context of RL. Keeping simplicity as a guiding principle, we show that many components of PPO are unnecessary in an RLHF context and that far simpler REINFORCE-style optimization variants outperform both PPO and newly proposed "RL-free" methods such as DPO and RAFT. Our work suggests that careful adaptation to LLMs alignment characteristics enables benefiting from online RL optimization at low cost.

This post-training method was contributed by Costa Huang and later refactored by Shirin Yamani.

Quick start

This example demonstrates how to train a model using the RLOO method. We train a Qwen 0.5B Instruct model on prompts from the UltraFeedback prompts dataset (trl-lib/ultrafeedback-prompt).

Below is the script to train the model.

# train_rloo.py
from datasets import load_dataset
from trl import RLOOConfig, RLOOTrainer

dataset = load_dataset("trl-lib/ultrafeedback-prompt", split="train")

# Dummy reward function for demonstration purposes
def reward_num_unique_letters(completions, **kwargs):
    """Reward function that rewards completions with more unique letters."""
    completion_contents = [completion[0]["content"] for completion in completions]
    return [float(len(set(content))) for content in completion_contents]

training_args = RLOOConfig(output_dir="Qwen2-0.5B-RLOO")
trainer = RLOOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_num_unique_letters,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()

Execute the script using the following command:

accelerate launch train_rloo.py

Looking deeper into the RLOO method

RLOO is an online learning algorithm, meaning it improves iteratively by using data generated by the trained model itself during training. The intuition behind the RLOO objective is to maximize the advantage of the generated completions while ensuring that the model remains close to the reference policy. To understand how RLOO works, it can be broken down into four main steps: generating completions, computing the reward, computing the advantage, and computing the loss.


Generating completions

At each training step, we sample a batch of prompts and generate a set of $G$ completions for each prompt (denoted $o_i$).

Computing the reward

In RLOO, the reward consists of two components: the reward provided by the reward model (or reward function) and a KL penalty that discourages the policy from deviating too far from a fixed reference policy.

  1. For each of the $G$ generated sequences $o_i = (o_{i,1}, \dots, o_{i,T})$ conditioned on a query $q$, we compute a scalar reward using a reward model $R(o_i, q)$.
  2. Concurrently, we estimate the KL divergence between the current policy $\pi_\theta$ and the fixed reference policy $\pi_{\text{ref}}$ over the sequence. The KL estimate for sequence $o_i$ is:
     $$\mathbb{D}_{\mathrm{KL}}\left[\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right] = \sum_{t=1}^T \log \frac{\pi_\theta(o_{i,t} \mid q, o_{i,<t})}{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}.$$

The final reward assigned to sequence $o_i$ is then:
$$r_i = R(o_i, q) - \beta \, \mathbb{D}_{\mathrm{KL}}\left[\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right],$$

where $\beta > 0$ controls the strength of the KL penalty.

In a purely online setting (num_iterations = 1, default), the data are generated by the current policy. In this case, the KL penalty is computed directly using the current policy.

In the more general setting (e.g., multiple gradient steps per batch), the data are instead generated by an earlier snapshot $\pi_{\text{old}}$. To keep the penalty consistent with the sampling distribution, the KL is defined with respect to this policy: $\mathbb{D}_{\mathrm{KL}}\left[\pi_{\text{old}} \,\|\, \pi_{\text{ref}}\right]$.

Equivalently, for a sampled sequence $o_i$, the Monte Carlo estimate is
$$\mathbb{D}_{\mathrm{KL}}\left[\pi_{\text{old}} \,\|\, \pi_{\mathrm{ref}}\right] = \sum_{t=1}^T \log \frac{\pi_{\text{old}}(o_{i,t} \mid q, o_{i,<t})}{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}.$$
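
As an illustration of how these quantities fit together, here is a minimal PyTorch sketch (not the trainer's internal implementation) that combines reward scores with the sequence-level KL estimate, assuming per-token log-probabilities of the sampled tokens have already been gathered:

import torch

# Minimal sketch, not the trainer's internals. Shapes: rewards (G,), log-probs (G, T).
def penalized_rewards(rewards, policy_logps, ref_logps, completion_mask, beta=0.05):
    # Sequence-level KL estimate: sum of per-token log-ratios over non-padding tokens
    kl = ((policy_logps - ref_logps) * completion_mask).sum(dim=-1)
    # Final reward: reward model score minus the weighted KL penalty
    return rewards - beta * kl

rewards = torch.tensor([1.2, 0.7, 0.9])       # R(o_i, q) from the reward model or function
policy_logps = torch.randn(3, 16)             # log pi(o_{i,t} | q, o_{i,<t}) of the sampled tokens
ref_logps = torch.randn(3, 16)                # log pi_ref(o_{i,t} | q, o_{i,<t})
completion_mask = torch.ones(3, 16)           # 1 for real tokens, 0 for padding
print(penalized_rewards(rewards, policy_logps, ref_logps, completion_mask))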

Computing the advantage

Once the rewards for each completion have been computed, we calculate a baseline for each completion as the average reward of the other completions generated for the same prompt, excluding the current one. This leave-one-out baseline is used to reduce the variance of the policy gradient estimate. The advantage for each completion is then the difference between its own reward and this baseline.

Formally, for a group of $G$ completions of the same prompt, the baseline for completion $i$ is:
$$b_i = \frac{1}{G-1} \sum_{j \neq i} r_j$$

and the advantage for each completion is computed as the difference between its reward and the baseline:
$$A_i = r_i - b_i$$
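
As a concrete illustration (not the trainer's exact code), the leave-one-out baseline and advantages for one prompt can be computed in a vectorized way, assuming r holds the G rewards of that prompt's completions:

import torch

def leave_one_out_advantages(r: torch.Tensor) -> torch.Tensor:
    """Compute A_i = r_i minus the mean of the other rewards, for a group of G rewards."""
    G = r.numel()
    # Leave-one-out baseline: (sum of all rewards minus own reward) / (G - 1)
    baseline = (r.sum() - r) / (G - 1)
    return r - baseline

r = torch.tensor([1.0, 2.0, 3.0, 4.0])
print(leave_one_out_advantages(r))  # tensor([-2.0000, -0.6667,  0.6667,  2.0000])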

Computing the loss

The REINFORCE loss is simply defined as:
$$\mathcal{L}_{\text{RLOO}}(\theta) = - \frac{1}{G} \sum_{i=1}^G \hat{A}_i \, \log \pi_\theta(o_i \mid q)$$

In practice, performing multiple gradient steps on the same batch makes the actions effectively off-policy relative to the current parameters. To correct for this, we introduce the importance sampling ratio. To prevent excessively large updates when the policy changes between sampling and gradient steps, we clip this ratio:
$$\mathcal{L}_{\text{RLOO}}(\theta) = - \frac{1}{G} \sum_{i=1}^G \min \left( \frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_\text{old}}(o_i \mid q)} \hat{A}_i, \; \text{clip}\left(\frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_\text{old}}(o_i \mid q)}, 1-\epsilon, 1+\epsilon\right) \hat{A}_i \right)$$

In a fully online, single-step setting (default), $\frac{\pi_\theta(o_i \mid q)}{\pi_{\theta_\text{old}}(o_i \mid q)} = 1$ and the loss reduces to standard REINFORCE.
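
For intuition, here is a hedged PyTorch sketch of this clipped, sequence-level objective, assuming sequence log-probabilities and advantages have already been computed (an illustration, not the trainer's exact code):

import torch

def rloo_loss(logp_new, logp_old, advantages, epsilon=0.2):
    # Importance sampling ratio pi_theta(o_i|q) / pi_theta_old(o_i|q), per sequence
    ratio = torch.exp(logp_new - logp_old.detach())
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - epsilon, 1 + epsilon) * advantages
    # Take the elementwise minimum and negate: minimizing the loss maximizes the objective
    return -torch.min(unclipped, clipped).mean()

logp_new = torch.tensor([-12.3, -8.1], requires_grad=True)  # sum of per-token log-probs under pi_theta
logp_old = torch.tensor([-12.3, -8.1])                      # identical in the fully online case, so ratio = 1
advantages = torch.tensor([0.5, -0.5])
loss = rloo_loss(logp_new, logp_old, advantages)
loss.backward()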

Logged metrics

While training and evaluating, we record the following metrics:

  • num_tokens: The total number of tokens processed so far, including both prompts and completions.

  • completions/mean_length: The average length of generated completions.

  • completions/min_length: The minimum length of generated completions.

  • completions/max_length: The maximum length of generated completions.

  • completions/mean_terminated_length: The average length of generated completions that terminate with EOS.

  • completions/min_terminated_length: The minimum length of generated completions that terminate with EOS.

  • completions/max_terminated_length: The maximum length of generated completions that terminate with EOS.

  • completions/clipped_ratio: The ratio of truncated (clipped) completions.

  • reward/{reward_func_name}/mean: The average reward from a specific reward function.

  • reward/{reward_func_name}/std: The standard deviation of the reward from a specific reward function.

  • reward: The overall average reward after applying reward weights.

  • reward_std: The standard deviation of rewards after applying reward weights. This is the average of the per-group standard deviations.

  • frac_reward_zero_std: The fraction of samples in the generation batch with a reward std of zero, implying there is little diversity for that prompt (all answers are correct or incorrect).

  • entropy: Average entropy of token predictions across generated completions. (If mask_truncated_completions=True, tokens of masked (truncated) sequences are excluded.)

  • kl: The average KL divergence between the model and the reference model, calculated over generated completions. Logged only if beta is nonzero.

  • clip_ratio/region_mean: The ratio of sequence probabilities where the RLOO objective is clipped to stay within the trust region:
$$\text{clip}\left( r_{i}(\theta), 1 - \epsilon_\mathrm{low}, 1 + \epsilon_\mathrm{high} \right)\,, \qquad r_{i}(\theta) = \frac{\pi_\theta(o_{i} \mid q)}{\pi_{\theta_{\text{old}}}(o_{i} \mid q)}\,.$$

    A higher value means more samples are clipped, which constrains how much the policy $\pi_\theta$ can change.

  • clip_ratio/low_mean: The average ratio of sequence probabilities that were clipped on the lower bound of the trust region: $r_i(\theta) < 1 - \epsilon_\mathrm{low}$

  • clip_ratio/low_min: The minimum ratio of sequence probabilities that were clipped on the lower bound of the trust region: $r_i(\theta) < 1 - \epsilon_\mathrm{low}$

  • clip_ratio/high_mean: The average ratio of sequence probabilities that were clipped on the upper bound of the trust region: $r_i(\theta) > 1 + \epsilon_\mathrm{high}$

  • clip_ratio/high_max: The maximum ratio of sequence probabilities that were clipped on the upper bound of the trust region: $r_i(\theta) > 1 + \epsilon_\mathrm{high}$

Customization

Speed up training with vLLM-powered generation

Generation is often the main bottleneck when training with online methods. To accelerate generation, you can use vLLM, a high-throughput, low-latency inference engine for LLMs. To enable it, first install the package with

pip install trl[vllm]

We support two ways of using vLLM during training: server mode and colocate mode.

🔌 Option 1: Server mode

In this mode, vLLM runs in a separate process (and using separate GPUs) and communicates with the trainer via HTTP. This is ideal if you have dedicated GPUs for inference.

  1. Start the vLLM server:

    trl vllm-serve --model <model_name>
  2. Enable server mode in your training script:

    from trl import RLOOConfig
    
    training_args = RLOOConfig(
        ...,
        use_vllm=True,
        vllm_mode="server",  # default value, can be omitted
    )

Make sure that the server is using different GPUs than the trainer, otherwise you may run into NCCL errors. You can specify the GPUs to use with the CUDA_VISIBLE_DEVICES environment variable.

🧩 Option 2: Colocate mode

In this mode, vLLM runs inside the trainer process and shares GPU memory with the training model. This avoids launching a separate server and can improve GPU utilization, but may lead to memory contention on the training GPUs.

from trl import RLOOConfig

training_args = RLOOConfig(
    ...,
    use_vllm=True,
    vllm_mode="colocate",
)

Depending on the model size and the overall GPU memory requirements for training, you may need to adjust the vllm_gpu_memory_utilization parameter in RLOOConfig to avoid underutilization or out-of-memory errors.

We provide an HF Space to help estimate the recommended GPU memory utilization based on your model configuration and experiment settings; it returns a recommended value for vllm_gpu_memory_utilization.

If the recommended value does not work in your environment, we suggest adding a small buffer (e.g., +0.05 or +0.1) to the recommended value to ensure stability.
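
For example, if the Space recommends 0.55 for your setup (a hypothetical value used here for illustration), you might configure colocate mode as follows:

from trl import RLOOConfig

training_args = RLOOConfig(
    output_dir="Qwen2-0.5B-RLOO",
    use_vllm=True,
    vllm_mode="colocate",
    vllm_gpu_memory_utilization=0.6,  # hypothetical: 0.55 (recommended) + 0.05 (buffer)
)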

By default, RLOO uses MASTER_ADDR=localhost and MASTER_PORT=12345 for vLLM, but you can override these values by setting the environment variables accordingly.
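
For example, you can export them in your launch environment, or set them at the top of the training script before the trainer is created (the address and port below are placeholders):

import os

# Placeholder values: set these before constructing the trainer so they take effect
os.environ["MASTER_ADDR"] = "10.0.0.1"
os.environ["MASTER_PORT"] = "29500"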

For more information, see Speeding up training with vLLM.

RLOO at scale: train a 70B+ model on multiple nodes

When training large models like Qwen2.5-72B, you need several key optimizations to make the training efficient and scalable across multiple GPUs and nodes. These include:

  • DeepSpeed ZeRO Stage 3: ZeRO leverages data parallelism to distribute model states (weights, gradients, optimizer states) across multiple GPUs and CPUs, reducing memory and compute requirements on each device. Since large models cannot fit on a single GPU, using ZeRO Stage 3 is required for training such models. For more details, see DeepSpeed Integration.
  • Accelerate: Accelerate is a library that simplifies distributed training across multiple GPUs and nodes. It provides a simple API to launch distributed training and handles the complexities of distributed training, such as data parallelism, gradient accumulation, and distributed data loading. For more details, see Distributing Training.
  • vLLM: See the previous section on how to use vLLM to speed up generation.

Below is an example SLURM script to train a 70B model with RLOO on multiple nodes. This script trains a model on 4 nodes and uses the 5th node for vLLM-powered generation.

#!/bin/bash
#SBATCH --nodes=5
#SBATCH --gres=gpu:8

# Get the list of allocated nodes
NODELIST=($(scontrol show hostnames $SLURM_JOB_NODELIST))

# Assign the first 4 nodes for training and the 5th node for vLLM
TRAIN_NODES="${NODELIST[@]:0:4}"  # Nodes 0, 1, 2, 3 for training
VLLM_NODE="${NODELIST[4]}"  # Node 4 for vLLM

# Run training on the first 4 nodes (Group 1)
srun --nodes=4 --ntasks=4 --nodelist="${NODELIST[@]:0:4}" accelerate launch \
     --config_file examples/accelerate_configs/deepspeed_zero3.yaml \
     --num_processes 32 \
     --num_machines 4 \
     --main_process_ip ${NODELIST[0]} \
     --machine_rank $SLURM_PROCID \
     --rdzv_backend c10d \
     train_rloo.py \
     --vllm_server_host $VLLM_NODE &

# Run vLLM server on the 5th node (Group 2)
srun --nodes=1 --ntasks=1 --nodelist="${NODELIST[4]}" trl vllm-serve --model Qwen/Qwen2.5-72B --tensor_parallel_size 8 &

wait
# train_rloo.py
import argparse

from datasets import load_dataset
from trl import RLOOTrainer, RLOOConfig

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--vllm_server_host", type=str, default="", help="The server IP")
    args = parser.parse_args()

    # Example dataset from TLDR
    dataset = load_dataset("trl-lib/tldr", split="train")

    # Dummy reward function: count the number of unique characters in the completions
    def reward_num_unique_chars(completions, **kwargs):
        return [float(len(set(c))) for c in completions]

    training_args = RLOOConfig(
        output_dir="Qwen2.5-72B-RLOO",
        per_device_train_batch_size=4,
        bf16=True,
        gradient_checkpointing=True,
        use_vllm=True,
        vllm_server_host=args.vllm_server_host.replace("ip-", "").replace("-", "."),  # from ip-X-X-X-X to X.X.X.X
    )

    trainer = RLOOTrainer(model="Qwen/Qwen2.5-72B", args=training_args, reward_funcs=reward_num_unique_chars, train_dataset=dataset)
    trainer.train()

if __name__=="__main__":
    main()

Using a custom reward function

The RLOOTrainer supports using custom reward functions instead of dense reward models. To ensure compatibility, your reward function must satisfy the following requirements:

  1. Input arguments:

    • The function must accept the following as keyword arguments:

      • prompts (contains the prompts),
      • completions (contains the generated completions),
      • completions_ids (contains the tokenized completions),
      • trainer_state (TrainerState): The current state of the trainer. This can be used to implement dynamic reward functions, such as curriculum learning, where the reward is adjusted based on the training progress.
      • All column names (except prompt) that the dataset may have. For example, if the dataset contains a column named ground_truth, the function will be called with ground_truth as a keyword argument.

      The easiest way to comply with this requirement is to use **kwargs in the function signature.

    • Depending on the dataset format, the input will vary: with the standard format, completions is a list of plain strings, while with the conversational format, completions is a list of message lists (dictionaries with role and content).

  2. Return value: The function must return a list of floats. Each float represents the reward corresponding to a single completion.

Example 1: Reward longer completions

Below is an example of a reward function for a standard format that rewards longer completions:

def reward_func(completions_ids, **kwargs):
    """Reward function that assigns higher scores to longer completions (in terms of token count)."""
    return [float(len(ids)) for ids in completions_ids]

You can test it as follows:

>>> prompts = ["The sky is", "The sun is"]  # not used in the reward function, but the trainer will pass it
>>> completions = [" blue.", " in the sky."]  # not used in the reward function, but the trainer will pass it
>>> completions_ids = [[6303, 13], [304, 279, 12884, 13]]
>>> reward_func(prompts=prompts, completions=completions, completions_ids=completions_ids)
[2.0, 4.0]

Example 1.1: Reward longer completions (based on the number of characters)

Same as the previous example, but this time the reward function is based on the number of characters instead of tokens.

def reward_func(completions, **kwargs):
    """Reward function that assigns higher scores to longer completions (in terms of character count)."""
    return [float(len(completion)) for completion in completions]

You can test it as follows:

>>> prompts = ["The sky is", "The sun is"]
>>> completions = [" blue.", " in the sky."]
>>> completions_ids = [[6303, 13], [304, 279, 12884, 13]]  # not used in the reward function, but the trainer will pass it
>>> reward_func(prompts=prompts, completions=completions, completions_ids=completions_ids)
[6.0, 12.0]

Example 2: Reward completions with specific format

Below is an example of a reward function that checks if the completion has a specific format. This example is inspired by the format reward function used in the paper DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. It is designed for conversational format, where prompts and completions consist of structured messages.

import re

def format_reward_func(completions, **kwargs):
    """Reward function that checks if the completion has a specific format."""
    pattern = r"^<think>.*?</think><answer>.*?</answer>$"
    completion_contents = [completion[0]["content"] for completion in completions]
    matches = [re.match(pattern, content) for content in completion_contents]
    return [1.0 if match else 0.0 for match in matches]

You can test this function as follows:

>>> prompts = [
...     [{"role": "assistant", "content": "What is the result of (1 + 2) * 4?"}],
...     [{"role": "assistant", "content": "What is the result of (3 + 1) * 2?"}],
... ]
>>> completions = [
...     [{"role": "assistant", "content": "<think>The sum of 1 and 2 is 3, which we multiply by 4 to get 12.</think><answer>(1 + 2) * 4 = 12</answer>"}],
...     [{"role": "assistant", "content": "The sum of 3 and 1 is 4, which we multiply by 2 to get 8. So (3 + 1) * 2 = 8."}],
... ]
>>> format_reward_func(prompts=prompts, completions=completions)
[1.0, 0.0]

Example 3: Reward completions based on a reference

Below is an example of a reward function that checks if the completion is correct. This example is inspired by the accuracy reward function used in the paper DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. This example is designed for standard format, where the dataset contains a column named ground_truth.

import re

def reward_func(completions, ground_truth, **kwargs):
    # Regular expression to capture content inside \boxed{}
    matches = [re.search(r"\\boxed\{(.*?)\}", completion) for completion in completions]
    contents = [match.group(1) if match else "" for match in matches]
    # Reward 1 if the content is the same as the ground truth, 0 otherwise
    return [1.0 if c == gt else 0.0 for c, gt in zip(contents, ground_truth)]

You can test this function as follows:

>>> prompts = ["Problem: Solve the equation $2x + 3 = 7$. Solution:", "Problem: Solve the equation $3x - 5 = 10$."]
>>> completions = [r" The solution is \boxed{2}.", r" The solution is \boxed{6}."]
>>> ground_truth = ["2", "5"]
>>> reward_func(prompts=prompts, completions=completions, ground_truth=ground_truth)
[1.0, 0.0]

Example 4: Multi-task reward functions

Below is an example of using multiple reward functions in the RLOOTrainer. In this example, we define two task-specific reward functions: math_reward_func and coding_reward_func. The math_reward_func rewards math problems based on their correctness, while the coding_reward_func rewards coding problems based on whether the solution works.

from datasets import Dataset
from trl import RLOOTrainer

# Define a dataset that contains both math and coding problems
dataset = Dataset.from_list(
    [
        {"prompt": "What is 2+2?", "task": "math"},
        {"prompt": "Write a function that returns the sum of two numbers.", "task": "code"},
        {"prompt": "What is 3*4?", "task": "math"},
        {"prompt": "Write a function that returns the product of two numbers.", "task": "code"},
    ]
)

# Math-specific reward function
def math_reward_func(prompts, completions, task, **kwargs):
    rewards = []
    for prompt, completion, t in zip(prompts, completions, task):
        if t == "math":
            # Calculate math-specific reward
            correct = check_math_solution(prompt, completion)
            reward = 1.0 if correct else -1.0
            rewards.append(reward)
        else:
            # Return None for non-math tasks
            rewards.append(None)
    return rewards

# Coding-specific reward function
def coding_reward_func(prompts, completions, task, **kwargs):
    rewards = []
    for prompt, completion, t in zip(prompts, completions, task):
        if t == "coding":
            # Calculate coding-specific reward
            works = test_code_solution(prompt, completion)
            reward = 1.0 if works else -1.0
            rewards.append(reward)
        else:
            # Return None for non-coding tasks
            rewards.append(None)
    return rewards

# Use both task-specific reward functions
trainer = RLOOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=[math_reward_func, coding_reward_func],
    train_dataset=dataset,
)

trainer.train()

In this example, the math_reward_func and coding_reward_func are designed to work with a mixed dataset that contains both math and coding problems. The task column in the dataset is used to determine which reward function to apply to each problem. If there is no relevant reward function for a sample in the dataset, the reward function will return None and the RLOOTrainer will continue with the valid functions and tasks. This allows the RLOOTrainer to handle multiple reward functions with different applicability.

Note that the RLOOTrainer will ignore the None rewards returned by the reward functions and only consider the rewards returned by the relevant functions. This ensures that the model is trained on the relevant tasks and ignores the tasks for which there is no relevant reward function.

Passing the reward function to the trainer

To use your custom reward function, pass it to the RLOOTrainer as follows:

from trl import RLOOTrainer

trainer = RLOOTrainer(
    reward_funcs=reward_func,
    ...,
)

If you have multiple reward functions, you can pass them as a list:

from trl import RLOOTrainer

trainer = RLOOTrainer(
    reward_funcs=[reward_func1, reward_func2],
    ...,
)

and the reward will be computed as the sum of the rewards from each function, or the weighted sum if reward_weights is provided in the config.
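
For example, to weight the first function twice as heavily as the second (hypothetical weights, with reward_func1, reward_func2, and dataset defined as above), set reward_weights in the config:

from trl import RLOOConfig, RLOOTrainer

training_args = RLOOConfig(output_dir="Qwen2-0.5B-RLOO", reward_weights=[2.0, 1.0])

trainer = RLOOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=[reward_func1, reward_func2],  # weighted 2.0 and 1.0 respectively
    args=training_args,
    train_dataset=dataset,
)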

Note that RLOOTrainer supports multiple reward functions of different types. See the parameters documentation for more details.

RLOOTrainer

class trl.RLOOTrainer


( model: typing.Union[str, transformers.modeling_utils.PreTrainedModel] = None reward_funcs: typing.Union[str, transformers.modeling_utils.PreTrainedModel, typing.Callable[[list, list], list[float]], list[typing.Union[str, transformers.modeling_utils.PreTrainedModel, typing.Callable[[list, list], list[float]]]]] = None args: typing.Optional[trl.trainer.rloo_config.RLOOConfig] = None train_dataset: typing.Union[datasets.arrow_dataset.Dataset, datasets.iterable_dataset.IterableDataset, NoneType] = None eval_dataset: typing.Union[datasets.arrow_dataset.Dataset, datasets.iterable_dataset.IterableDataset, dict[str, typing.Union[datasets.arrow_dataset.Dataset, datasets.iterable_dataset.IterableDataset]], NoneType] = None processing_class: typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, transformers.processing_utils.ProcessorMixin, NoneType] = None reward_processing_classes: typing.Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, list[transformers.tokenization_utils_base.PreTrainedTokenizerBase], NoneType] = None callbacks: typing.Optional[list[transformers.trainer_callback.TrainerCallback]] = None optimizers: tuple = (None, None) peft_config: typing.Optional[ForwardRef('PeftConfig')] = None config = None reward_model = None policy = None ref_policy = None data_collator = None )

Parameters

  • model (Union[str, PreTrainedModel]) — Model to be trained. Can be either:

    • A string, being the model id of a pretrained model hosted inside a model repo on huggingface.co, or a path to a directory containing model weights saved using save_pretrained, e.g., './my_model_directory/'. The model is loaded using from_pretrained with the keyword arguments in args.model_init_kwargs.
    • A PreTrainedModel object. Only causal language models are supported.
  • reward_funcs (Union[RewardFunc, list[RewardFunc]]) — Reward functions to be used for computing the rewards. To compute the rewards, we call all the reward functions with the prompts and completions and sum the rewards. Can be either:

    • A single reward function, such as:

      • A string: The model ID of a pretrained model hosted inside a model repo on huggingface.co, or a path to a directory containing model weights saved using save_pretrained, e.g., './my_model_directory/'. The model is loaded using from_pretrained with num_labels=1 and the keyword arguments in args.model_init_kwargs.

      • A PreTrainedModel object: Only sequence classification models are supported.

      • A custom reward function: The function is provided with the prompts and the generated completions, plus any additional columns in the dataset. It should return a list of rewards. Custom reward functions can also return None when the reward is not applicable to those samples. This is useful for multi-task training where different reward functions apply to different types of samples. When a reward function returns None for a sample, that reward function is excluded from the reward calculation for that sample. For more details, see Using a custom reward function.

        The trainer’s state is also passed to the reward function as the trainer_state argument. It is an instance of TrainerState.

    • A list of reward functions, where each item can independently be any of the above types. Mixing different types within the list (e.g., a string model ID and a custom reward function) is allowed.

  • args (RLOOConfig, optional, defaults to None) — Configuration for this trainer. If None, a default configuration is used.
  • train_dataset (Dataset or IterableDataset) — Dataset to use for training. It must include a column "prompt". Any additional columns in the dataset are ignored. The format of the samples can be either:

    • Standard: Each sample contains plain text.
    • Conversational: Each sample contains structured messages (e.g., role and content).
  • eval_dataset (Dataset, IterableDataset or dict[str, Union[Dataset, IterableDataset]]) — Dataset to use for evaluation. It must meet the same requirements as train_dataset.
  • processing_class (PreTrainedTokenizerBase, ProcessorMixin or None, optional, defaults to None) — Processing class used to process the data. The padding side must be set to "left". If None, the processing class is loaded from the model’s name with from_pretrained. A padding token, tokenizer.pad_token, must be set. If the processing class has not set a padding token, tokenizer.eos_token will be used as the default.
  • reward_processing_classes (Union[PreTrainedTokenizerBase, list[PreTrainedTokenizerBase]], optional, defaults to None) — Processing classes corresponding to the reward functions specified in reward_funcs. Can be either:

    • A single processing class: Used when reward_funcs contains only one reward function.
    • A list of processing classes: Must match the order and length of the reward functions in reward_funcs. If set to None, or if an element of the list corresponding to a PreTrainedModel is None, the tokenizer for the model is automatically loaded using from_pretrained. For elements in reward_funcs that are custom reward functions (not PreTrainedModel), the corresponding entries in reward_processing_classes are ignored.
  • callbacks (list of TrainerCallback, optional, defaults to None) — List of callbacks to customize the training loop. Will add those to the list of default callbacks detailed in here.

    If you want to remove one of the default callbacks used, use the remove_callback method.

  • optimizers (tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR], optional, defaults to (None, None)) — A tuple containing the optimizer and the scheduler to use. Will default to an instance of AdamW on your model and a scheduler given by get_linear_schedule_with_warmup controlled by args.
  • peft_config (~peft.PeftConfig, optional, defaults to None) — PEFT configuration used to wrap the model. If None, the model is not wrapped.

Trainer for the REINFORCE Leave-One-Out (RLOO) method. This algorithm was initially proposed in the paper [Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs](https://huggingface.co/papers/2402.14740).

Example:

from datasets import load_dataset
from trl import RLOOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")


def reward_func(completions, **kwargs):
    # Dummy reward function that rewards completions with more unique letters.
    return [float(len(set(completion))) for completion in completions]


trainer = RLOOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_func,
    train_dataset=dataset,
)

trainer.train()

train


( resume_from_checkpoint: typing.Union[str, bool, NoneType] = None trial: typing.Union[ForwardRef('optuna.Trial'), dict[str, typing.Any], NoneType] = None ignore_keys_for_eval: typing.Optional[list[str]] = None **kwargs )

Parameters

  • resume_from_checkpoint (str or bool, optional) — If a str, local path to a saved checkpoint as saved by a previous instance of Trainer. If a bool and equals True, load the last checkpoint in args.output_dir as saved by a previous instance of Trainer. If present, training will resume from the model/optimizer/scheduler states loaded here.
  • trial (optuna.Trial or dict[str, Any], optional) — The trial run or the hyperparameter dictionary for hyperparameter search.
  • ignore_keys_for_eval (list[str], optional) — A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions for evaluation during the training.
  • kwargs (dict[str, Any], optional) — Additional keyword arguments used to hide deprecated arguments

Main training entry point.

save_model


( output_dir: typing.Optional[str] = None _internal_call: bool = False )

Will save the model, so you can reload it using from_pretrained().

Will only save from the main process.

push_to_hub


( commit_message: typing.Optional[str] = 'End of training' blocking: bool = True token: typing.Optional[str] = None revision: typing.Optional[str] = None **kwargs )

Parameters

  • commit_message (str, optional, defaults to "End of training") — Message to commit while pushing.
  • blocking (bool, optional, defaults to True) — Whether the function should return only when the git push has finished.
  • token (str, optional, defaults to None) — Token with write permission to overwrite Trainer’s original args.
  • revision (str, optional) — The git revision to commit from. Defaults to the head of the “main” branch.
  • kwargs (dict[str, Any], optional) — Additional keyword arguments passed along to ~Trainer.create_model_card.

Upload self.model and self.processing_class to the 🤗 model hub on the repo self.args.hub_model_id.

RLOOConfig

class trl.RLOOConfig


( output_dir: typing.Optional[str] = None overwrite_output_dir: bool = False do_train: bool = False do_eval: bool = False do_predict: bool = False eval_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'no' prediction_loss_only: bool = False per_device_train_batch_size: int = 8 per_device_eval_batch_size: int = 8 per_gpu_train_batch_size: typing.Optional[int] = None per_gpu_eval_batch_size: typing.Optional[int] = None gradient_accumulation_steps: int = 1 eval_accumulation_steps: typing.Optional[int] = None eval_delay: typing.Optional[float] = 0 torch_empty_cache_steps: typing.Optional[int] = None learning_rate: float = 1e-06 weight_decay: float = 0.0 adam_beta1: float = 0.9 adam_beta2: float = 0.999 adam_epsilon: float = 1e-08 max_grad_norm: float = 1.0 num_train_epochs: float = 3.0 max_steps: int = -1 lr_scheduler_type: typing.Union[transformers.trainer_utils.SchedulerType, str] = 'linear' lr_scheduler_kwargs: typing.Union[dict[str, typing.Any], str, NoneType] = <factory> warmup_ratio: float = 0.0 warmup_steps: int = 0 log_level: str = 'passive' log_level_replica: str = 'warning' log_on_each_node: bool = True logging_dir: typing.Optional[str] = None logging_strategy: typing.Union[transformers.trainer_utils.IntervalStrategy, str] = 'steps' logging_first_step: bool = False logging_steps: float = 10 logging_nan_inf_filter: bool = True save_strategy: typing.Union[transformers.trainer_utils.SaveStrategy, str] = 'steps' save_steps: float = 500 save_total_limit: typing.Optional[int] = None save_safetensors: typing.Optional[bool] = True save_on_each_node: bool = False save_only_model: bool = False restore_callback_states_from_checkpoint: bool = False no_cuda: bool = False use_cpu: bool = False use_mps_device: bool = False seed: int = 42 data_seed: typing.Optional[int] = None jit_mode_eval: bool = False use_ipex: bool = False bf16: typing.Optional[bool] = None fp16: bool = False fp16_opt_level: str = 'O1' half_precision_backend: str = 'auto' bf16_full_eval: bool = False fp16_full_eval: bool = False tf32: typing.Optional[bool] = None local_rank: int = -1 ddp_backend: typing.Optional[str] = None tpu_num_cores: typing.Optional[int] = None tpu_metrics_debug: bool = False debug: typing.Union[str, list[transformers.debug_utils.DebugOption]] = '' dataloader_drop_last: bool = False eval_steps: typing.Optional[float] = None dataloader_num_workers: int = 0 dataloader_prefetch_factor: typing.Optional[int] = None past_index: int = -1 run_name: typing.Optional[str] = None disable_tqdm: typing.Optional[bool] = None remove_unused_columns: typing.Optional[bool] = False label_names: typing.Optional[list[str]] = None load_best_model_at_end: typing.Optional[bool] = False metric_for_best_model: typing.Optional[str] = None greater_is_better: typing.Optional[bool] = None ignore_data_skip: bool = False fsdp: typing.Union[list[transformers.trainer_utils.FSDPOption], str, NoneType] = '' fsdp_min_num_params: int = 0 fsdp_config: typing.Union[dict[str, typing.Any], str, NoneType] = None fsdp_transformer_layer_cls_to_wrap: typing.Optional[str] = None accelerator_config: typing.Union[dict, str, NoneType] = None parallelism_config: typing.Optional[ForwardRef('ParallelismConfig')] = None deepspeed: typing.Union[dict, str, NoneType] = None label_smoothing_factor: float = 0.0 optim: typing.Union[transformers.training_args.OptimizerNames, str] = 'adamw_torch_fused' optim_args: typing.Optional[str] = None adafactor: bool = False group_by_length: bool = False length_column_name: typing.Optional[str] = 
'length' report_to: typing.Union[NoneType, str, list[str]] = None ddp_find_unused_parameters: typing.Optional[bool] = None ddp_bucket_cap_mb: typing.Optional[int] = None ddp_broadcast_buffers: typing.Optional[bool] = None dataloader_pin_memory: bool = True dataloader_persistent_workers: bool = False skip_memory_metrics: bool = True use_legacy_prediction_loop: bool = False push_to_hub: bool = False resume_from_checkpoint: typing.Optional[str] = None hub_model_id: typing.Optional[str] = None hub_strategy: typing.Union[transformers.trainer_utils.HubStrategy, str] = 'every_save' hub_token: typing.Optional[str] = None hub_private_repo: typing.Optional[bool] = None hub_always_push: bool = False hub_revision: typing.Optional[str] = None gradient_checkpointing: bool = True gradient_checkpointing_kwargs: typing.Union[dict[str, typing.Any], str, NoneType] = None include_inputs_for_metrics: bool = False include_for_metrics: list = <factory> eval_do_concat_batches: bool = True fp16_backend: str = 'auto' push_to_hub_model_id: typing.Optional[str] = None push_to_hub_organization: typing.Optional[str] = None push_to_hub_token: typing.Optional[str] = None mp_parameters: str = '' auto_find_batch_size: bool = False full_determinism: bool = False torchdynamo: typing.Optional[str] = None ray_scope: typing.Optional[str] = 'last' ddp_timeout: int = 1800 torch_compile: bool = False torch_compile_backend: typing.Optional[str] = None torch_compile_mode: typing.Optional[str] = None include_tokens_per_second: typing.Optional[bool] = False include_num_input_tokens_seen: typing.Optional[bool] = False neftune_noise_alpha: typing.Optional[float] = None optim_target_modules: typing.Union[NoneType, str, list[str]] = None batch_eval_metrics: bool = False eval_on_start: bool = False use_liger_kernel: typing.Optional[bool] = False liger_kernel_config: typing.Optional[dict[str, bool]] = None eval_use_gather_object: typing.Optional[bool] = False average_tokens_across_devices: typing.Optional[bool] = True model_init_kwargs: typing.Union[dict, str, NoneType] = None disable_dropout: bool = False max_prompt_length: typing.Optional[int] = 512 num_generations: typing.Optional[int] = 2 max_completion_length: typing.Optional[int] = 256 ds3_gather_for_generation: bool = True shuffle_dataset: typing.Optional[bool] = True generation_batch_size: typing.Optional[int] = None steps_per_generation: typing.Optional[int] = None temperature: float = 1.0 top_p: float = 1.0 top_k: typing.Optional[int] = None min_p: typing.Optional[float] = None generation_kwargs: typing.Optional[dict] = None repetition_penalty: float = 1.0 use_transformers_paged: bool = False cache_implementation: typing.Optional[str] = None use_vllm: bool = False vllm_server_base_url: typing.Optional[str] = None vllm_mode: str = 'server' vllm_model_impl: str = 'vllm' vllm_guided_decoding_regex: typing.Optional[str] = None vllm_server_host: str = '0.0.0.0' vllm_server_port: int = 8000 vllm_server_timeout: float = 240.0 vllm_gpu_memory_utilization: float = 0.3 vllm_tensor_parallel_size: int = 1 beta: float = 0.05 num_iterations: int = 1 epsilon: float = 0.2 epsilon_high: typing.Optional[float] = None reward_weights: typing.Optional[list[float]] = None normalize_advantages: bool = False reward_clip_range: typing.Optional[tuple[float, float]] = None mask_truncated_completions: bool = False sync_ref_model: bool = False ref_model_mixup_alpha: float = 0.6 ref_model_sync_steps: int = 512 log_completions: bool = False num_completions_to_print: typing.Optional[int] = None 
wandb_log_unique_prompts: typing.Optional[bool] = False rloo_k: typing.Optional[int] = None cliprange: typing.Optional[float] = None kl_coef: typing.Optional[float] = None exp_name: typing.Optional[str] = None normalize_reward: typing.Optional[bool] = None num_ppo_epochs: typing.Optional[int] = None num_mini_batches: typing.Optional[int] = None total_episodes: typing.Optional[int] = None response_length: typing.Optional[int] = None token_level_kl: typing.Optional[bool] = None dataset_num_proc: typing.Optional[int] = None local_rollout_forward_batch_size: typing.Optional[int] = None num_sample_generations: typing.Optional[int] = None stop_token: typing.Optional[str] = None stop_token_id: typing.Optional[int] = None missing_eos_penalty: typing.Optional[float] = None )

Parameters that control the model and reference model

  • model_init_kwargs (str, dict[str, Any] or None, optional, defaults to None) — Keyword arguments for from_pretrained, used when the model argument of the RLOOTrainer is provided as a string.
  • disable_dropout (bool, optional, defaults to False) — Whether to disable dropout in the model. This is useful for training with a reference model, as it prevents the model from generating different logprobs for the same input.

Parameters that control the data preprocessing

  • remove_unused_columns (bool, optional, defaults to False) — Whether to only keep the column "prompt" in the dataset. If you use a custom reward function that requires any column other than "prompts" and "completions", you should keep this to False.
  • max_prompt_length (int or None, optional, defaults to 512) — Maximum length of the prompt. If the prompt is longer than this value, it will be truncated left.
  • num_generations (int or None, optional, defaults to 2) — Number of generations per prompt to sample. The effective batch size (num_processes * per_device_batch_size * gradient_accumulation_steps) must be evenly divisible by this value.
  • max_completion_length (int or None, optional, defaults to 256) — Maximum length of the generated completion.
  • ds3_gather_for_generation (bool, optional, defaults to True) — This setting applies to DeepSpeed ZeRO-3. If enabled, the policy model weights are gathered for generation, improving generation speed. However, disabling this option allows training models that exceed the VRAM capacity of a single GPU, albeit at the cost of slower generation. Disabling this option is not compatible with vLLM generation.
  • shuffle_dataset (bool, optional, defaults to True) — Whether to shuffle the training dataset.

Parameters that control generation

  • generation_batch_size (int or None, optional, defaults to None) — Batch size to use for generation. If None, it defaults to the effective training batch size: per_device_train_batch_size * num_processes * steps_per_generation. In other words, there is one generation batch processed per optimization step. Mutually exclusive with steps_per_generation.
  • steps_per_generation (int or None, optional, defaults to None) — Number of steps per generation. If None, it defaults to gradient_accumulation_steps. Mutually exclusive with generation_batch_size.
  • temperature (float, defaults to 1.0) — Temperature for sampling. The higher the temperature, the more random the completions.
  • top_p (float, optional, defaults to 1.0) — Float that controls the cumulative probability of the top tokens to consider. Must be in (0, 1]. Set to 1.0 to consider all tokens.
  • top_k (int or None, optional, defaults to None) — Number of highest probability vocabulary tokens to keep for top-k-filtering. If None, top-k-filtering is disabled and all tokens are considered.
  • min_p (float or None, optional, defaults to None) — Minimum token probability, which will be scaled by the probability of the most likely token. It must be a value between 0.0 and 1.0. Typical values are in the 0.01-0.2 range.
  • repetition_penalty (float, optional, defaults to 1.0) — Float that penalizes new tokens based on whether they appear in the prompt and the generated text so far. Values > 1.0 encourage the model to use new tokens, while values < 1.0 encourage the model to repeat tokens.
  • use_transformers_paged (bool, optional, defaults to False) — Whether to use the transformers paged implementation for generation. If set to True, the transformers paged implementation will be used for generation instead of the default padded implementation. This parameter is only effective when use_vllm is set to False.
  • cache_implementation (str or None, optional, defaults to None) — Implementation of the cache method for faster generation when use_vllm is set to False.
  • generation_kwargs (dict[str, Any] or None, optional, defaults to None) — Additional keyword arguments to pass to GenerationConfig (if using transformers) or SamplingParams (if using vLLM) when sampling completions. This can be used to further customize the generation behavior, such as setting suppress_tokens, num_beams, etc. If it contains keys that conflict with the other generation parameters (like min_p, top_p, etc.), they will override them.

Parameters that control generation acceleration powered by vLLM

  • use_vllm (bool, optional, defaults to False) — Whether to use vLLM for generating completions. If set to True, the trainer will use vLLM for generation instead of the default model.generate(). Requires vllm to be installed.
  • vllm_mode (str, optional, defaults to "server") — Mode to use for vLLM integration when use_vllm is set to True. Must be one of "server" or "colocate".

    • "server": The trainer will send generation requests to a separate vLLM server. Make sure a TRL vLLM server is running (start with trl vllm-serve).
    • "colocate": vLLM will run in the same process and share the training GPUs. This avoids the need for a separate server but may cause resource contention with training.
  • vllm_guided_decoding_regex (str or None, optional, defaults to None) — Regex for vLLM guided decoding. If None (default), guided decoding is disabled.

Parameters that control the vLLM server (only used when `vllm_mode` is `"server"`)

  • vllm_server_base_url (str or None, optional, defaults to None) — Base URL for the vLLM server (e.g., "http://localhost:8000"). If provided, vllm_server_host and vllm_server_port are ignored.
  • vllm_server_host (str, optional, defaults to "0.0.0.0") — Host of the vLLM server to connect to. Ignored if vllm_server_base_url is provided.
  • vllm_server_port (int, optional, defaults to 8000) — Port of the vLLM server to connect to. Ignored if vllm_server_base_url is provided.
  • vllm_server_timeout (float, optional, defaults to 240.0) — Total timeout duration in seconds to wait for the vLLM server to be up. If the server is not up after the timeout, a ConnectionError is raised.

Parameters that control colocated vLLM execution (only used when `vllm_mode` is `"colocate"`)

  • vllm_gpu_memory_utilization (float, optional, defaults to 0.3) — Control the GPU memory utilization for vLLM. This setting only applies when vllm_mode is set to "colocate". If you are using vllm_mode="server", this parameter must be passed separately when launching the vLLM server via the --vllm_gpu_memory_utilization flag.
  • vllm_tensor_parallel_size (int, optional, defaults to 1) — Control the tensor parallel size for vLLM. This setting only applies when vllm_mode is set to "colocate". If you are using vllm_mode="server", this parameter must be passed separately when launching the vLLM server via the --vllm_tensor_parallel_size flag.
  • vllm_model_impl (str, optional, defaults to "vllm") — Model implementation to use for vLLM. Must be one of "transformers" or "vllm". "transformers": Use the transformers backend for model implementation. "vllm": Use the vllm library for model implementation.

Parameters that control the training

  • beta (float, optional, defaults to 0.05) — KL coefficient. If 0.0, the reference model is not loaded, reducing memory usage and improving training speed.
  • num_iterations (int, optional, defaults to 1) — Number of iterations per batch (denoted as μ in the algorithm).
  • epsilon (float, optional, defaults to 0.2) — Epsilon value for clipping.
  • epsilon_high (float or None, optional, defaults to None) — Upper-bound epsilon value for clipping. If not specified, it defaults to the same value as the lower-bound specified in argument epsilon. Paper DAPO recommends 0.28.
  • reward_weights (list[float] or None, optional, defaults to None) — Weights for each reward function. Must match the number of reward functions. If None, all rewards are weighted equally with weight 1.0.
  • normalize_advantages (bool, optional, defaults to False) — Whether to normalize advantages. Normalization is done per generation batch to have mean 0.0 and standard deviation of 1.0.
  • reward_clip_range (tuple[float, float] or None, optional, defaults to None) — Clip range for rewards as (min, max). If None, no clipping is applied.
  • mask_truncated_completions (bool, optional, defaults to False) — When enabled, truncated completions are excluded from the loss calculation, preventing them from being incorrectly penalized and introducing noise during training. According to the DAPO paper, this is a good practice for training stability.
  • sync_ref_model (bool, optional, defaults to False) — Whether to synchronize the reference model with the active model every ref_model_sync_steps steps, using the ref_model_mixup_alpha parameter. This synchronization originates from the TR-DPO paper.
  • ref_model_mixup_alpha (float, optional, defaults to 0.6) — α parameter from the TR-DPO paper, which controls the mix between the current policy and the previous reference policy during updates. The reference policy is updated according to the equation: π_ref = α * π_θ + (1 - α) * π_ref_prev. To use this parameter, you must set sync_ref_model=True.
  • ref_model_sync_steps (int, optional, defaults to 512) — τ parameter from the TR-DPO paper, which determines how frequently the current policy is synchronized with the reference policy. To use this parameter, you must set sync_ref_model=True.

Parameters that control the logging

  • log_completions (bool, optional, defaults to False) — Whether to log a sample of (prompt, completion) pairs every logging_steps steps. If rich is installed, it prints the sample. If wandb logging is enabled, it logs it to wandb.
  • num_completions_to_print (int or None, optional, defaults to None) — Number of completions to print with rich. If None, all completions are logged.
  • wandb_log_unique_prompts (bool, optional, defaults to False) — Whether to log unique prompts in wandb. If True, only unique prompts are logged. If False, all prompts are logged.

Configuration class for the RLOOTrainer.

This class includes only the parameters that are specific to RLOO training. For a full list of training arguments, please refer to the TrainingArguments documentation. Note that default values in this class may differ from those in TrainingArguments.

Using HfArgumentParser we can turn this class into argparse arguments that can be specified on the command line.
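
For example, a minimal sketch that exposes all RLOOConfig fields as command-line flags:

from transformers import HfArgumentParser
from trl import RLOOConfig

# Parse RLOOConfig fields (e.g. --output_dir, --num_generations) from the command line
parser = HfArgumentParser(RLOOConfig)
(training_args,) = parser.parse_args_into_dataclasses()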

References

  1. RLOO Paper
  2. Paper Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs
  3. Paper - REINFORCE++: A Simple and Efficient Approach for Aligning Large Language Models
  4. Blog Post - Putting RL back in RLHF
  5. Blog Post - Unraveling RLHF and Its Variants: Progress and Practical Engineering Insights
  6. Youtube - RLOO: A Cost-Efficient Optimization for Learning from Human Feedback in LLMs

Migration Guide from the old implementation (0.21 and below)

With the release of version 0.22.0, we have revamped the RLOOTrainer to be more aligned with other online trainers in the library, like GRPOTrainer. This new implementation introduces several changes to the configuration parameters and overall structure of the trainer. Below is a summary of the key changes for RLOOConfig:

Changes from TRL ≤ 0.21.x to TRL ≥ 0.22.0:

  • rloo_k: renamed to num_generations
  • cliprange: renamed to epsilon
  • kl_coef: renamed to beta
  • exp_name: renamed to run_name. Use run_name = f"{exp_name}__{seed}__{int(time.time())}" to replicate the old behavior
  • normalize_reward: renamed to normalize_advantages. Note: it always normalized advantages (despite the old name)
  • num_ppo_epochs: renamed to num_iterations (default: 1)
  • token_level_kl: removed – KL is now computed only at the sequence level
  • dataset_num_proc: removed – it was unused
  • num_mini_batches: renamed to steps_per_generation
  • total_episodes: use max_steps = total_episodes / gradient_accumulation_steps instead
  • local_rollout_forward_batch_size: removed – now automatically set to per_device_train_batch_size (or per_device_eval_batch_size during evaluation)
  • num_sample_generations: removed – use logging_steps to control generation logging frequency
  • response_length: renamed to max_completion_length (default: 256)
  • stop_token: removed
  • stop_token_id: removed – use processing_class.eos_token_id instead
  • missing_eos_penalty: removed – replicate with a custom reward function checking if eos_token_id is in completion_ids
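
As an illustrative sketch (values are placeholders), a configuration written with the old parameter names translates as follows:

from trl import RLOOConfig

# TRL <= 0.21.x (old names):
#   RLOOConfig(rloo_k=4, cliprange=0.2, kl_coef=0.05, num_ppo_epochs=1)
# TRL >= 0.22.0 (new names):
training_args = RLOOConfig(
    num_generations=4,  # formerly rloo_k
    epsilon=0.2,        # formerly cliprange
    beta=0.05,          # formerly kl_coef
    num_iterations=1,   # formerly num_ppo_epochs
)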

Below is a summary of the key changes for RLOOTrainer:

Changes from TRL ≤ 0.21.x to TRL ≥ 0.22.0:

  • config: renamed to args
  • reward_model: renamed to reward_funcs, which now supports both reward models and custom reward functions
  • policy: renamed to model
  • ref_policy: removed – the reference model is now created automatically from model
  • data_collator: removed
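
Correspondingly, a minimal trainer instantiation in the new API looks like this (reward_func, training_args, and dataset as defined earlier):

from trl import RLOOTrainer

trainer = RLOOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",  # formerly passed via `policy`
    reward_funcs=reward_func,          # formerly `reward_model`
    args=training_args,                # formerly `config`
    train_dataset=dataset,
)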