Releases: huggingface/trl

v1.0.0

31 Mar 14:15
f3e9ac1

Read our blog post for an overview of TRL v1.

Features

Asynchronous GRPO

Asynchronous GRPO decouples generation from the gradient update loop by offloading rollouts to an external vLLM server. Generation runs in parallel while training continues, eliminating idle GPU time and improving hardware utilization.

from trl.experimental.async_grpo import AsyncGRPOTrainer
from trl.rewards import accuracy_reward
from datasets import load_dataset

dataset = load_dataset("trl-lib/DeepMath-103K", split="train")

# Rollouts are generated on an external vLLM server running in parallel,
# so training never waits on generation.
trainer = AsyncGRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    reward_funcs=accuracy_reward,
    train_dataset=dataset,
)
trainer.train()
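
The trainer talks to a separate vLLM server process. A minimal way to launch one, using TRL's vLLM server command (exact flags may vary across versions):

# In a separate process or on a dedicated node
trl vllm-serve --model Qwen/Qwen2.5-0.5B-Instruct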

by @qgallouedec in #5293

Variational Sequence-Level Soft Policy Optimization (VESPO)

VESPO addresses training instability in off-policy RL caused by policy staleness, asynchronous updates, and train-inference mismatches. Rather than relying on heuristic token-level clipping (GRPO) or sequence-length normalization (GSPO), VESPO derives a principled reshaping kernel from a variational framework. In practice, this yields a smooth, asymmetric Gamma weighting function that gracefully suppresses extreme sequence-level importance weights without introducing length bias. It can be enabled via the loss_type parameter of GRPOConfig:

from trl import GRPOConfig, GRPOTrainer

trainer = GRPOTrainer(
    model="Qwen/Qwen3-0.6B",
    args=GRPOConfig(loss_type="vespo"),
    # ... dataset, reward functions, etc.
)
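
For intuition, here is a toy sketch (not TRL's internal code) of how a Gamma-shaped kernel reshapes sequence-level importance ratios: weights peak near the on-policy ratio of 1 and decay smoothly for extreme ratios, rather than being hard-clipped. The shape parameters below are illustrative assumptions.

import torch

def gamma_weight(ratio, k=2.0, theta=1.0):
    # Asymmetric Gamma-shaped reweighting of the sequence-level importance
    # ratio: near-maximal weight around ratio ~= 1, smooth decay for large
    # ratios instead of a hard clip.
    return ratio.pow(k - 1) * torch.exp(-ratio / theta)

ratios = torch.tensor([0.5, 1.0, 2.0, 8.0])  # sequence-level importance ratios
print(gamma_weight(ratios))                  # extreme ratios are damped, not zeroed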

by @casinca in #5199

Divergence Proximal Policy Optimization (DPPO)

DPPO is a new experimental trainer that replaces the standard PPO clipping mechanism with divergence constraints, providing more principled trust-region updates.
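
As a rough illustration of the difference (a sketch of the idea, not the trainer's actual loss), the clipped PPO surrogate is replaced by an unclipped surrogate plus an explicit divergence penalty that acts as a soft trust region; the coefficient beta below is an assumed hyperparameter.

import torch

def dppo_style_loss(logp_new, logp_old, advantages, beta=0.05):
    # Unclipped importance-weighted surrogate, constrained by an explicit
    # divergence penalty rather than PPO's hard ratio clipping.
    ratio = (logp_new - logp_old).exp()
    surrogate = ratio * advantages
    # Unbiased per-token estimate of KL(pi_old || pi_new) (the "k3" estimator)
    kl = ratio - 1.0 - (logp_new - logp_old)
    return (-surrogate + beta * kl).mean()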

by @LeonEricsson in #5117

Self-Distillation Policy Optimization (SDPO)

SDPO is a new experimental trainer that augments on-policy RL with self-distillation from the model's own high-reward trajectories. Instead of using an external teacher, SDPO treats the current model conditioned on feedback as a self-teacher, distilling its feedback-informed predictions back into the policy.

from datasets import load_dataset
from trl.experimental import SDPOTrainer, SDPOConfig
from trl.rewards import accuracy_reward

dataset = load_dataset("trl-lib/DeepMath-103K", split="train")  # any verifiable-reward dataset

config = SDPOConfig(
    output_dir="./results",
    num_generations=8,
    success_reward_threshold=1.0,
    use_successful_as_teacher=True,
)

trainer = SDPOTrainer(
    model="Qwen/Qwen2.5-Math-1.5B-Instruct",
    reward_funcs=[accuracy_reward],
    args=config,
    train_dataset=dataset,
)
trainer.train()

by @MengAiDev in #4935

Reward functions can now log extra columns and scalar metrics

Reward functions now receive optional log_extra and log_metric callables, which they can use to log extra values (per-sample columns or scalar metrics) alongside the reward. This makes it easier to track intermediate signals without writing custom callbacks.

def my_reward_fn(completions, answer, log_extra=None, log_metric=None, **kwargs):
    # extract_answer is a user-supplied parser for model completions
    extracted = [extract_answer(c) for c in completions]
    rewards = [1.0 if e == a else 0.0 for e, a in zip(extracted, answer)]

    if log_extra:
        log_extra("golden_answer", list(answer))  # per-sample column
        log_extra("extracted_answer", extracted)  # per-sample column

    if log_metric:
        log_metric("accuracy", sum(rewards) / len(rewards))  # scalar metric

    return rewards

by @manueldeprada in #5233

Tool calling support in VLLMClient.chat()

VLLMClient.chat() now supports tool calling, enabling agentic workflows directly through the vLLM client interface.
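
For example, tools can be passed with an OpenAI-style schema. A minimal sketch (the tools parameter name and client defaults are assumptions; check the VLLMClient.chat signature for the exact interface):

from trl.extras.vllm_client import VLLMClient

# OpenAI-style tool schema
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

client = VLLMClient()  # assumes a running vLLM server
outputs = client.chat(
    [[{"role": "user", "content": "What's the weather in Paris?"}]],
    tools=tools,
)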

by @kansalaman in #4889

35% faster packing

BFD packing is 35% faster. The "bfd-requeue" packing strategy has also been renamed to "bfd_split". See MIGRATION.md for details.
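
In SFT this would look something like the following sketch (assuming the strategy is selected via packing_strategy on SFTConfig):

from trl import SFTConfig

config = SFTConfig(
    output_dir="./results",
    packing=True,
    packing_strategy="bfd_split",  # formerly "bfd-requeue"
)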

by @mariosasko in #5189

[GKD] Buffer implementation and vLLM inference for distillation trainer

The GKD/GOLD trainer now supports buffered rollout generation, decoupling generation from gradient updates for more efficient distillation. vLLM inference support has also been added to the base self-distillation trainer.
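
The buffering idea, in a toy sketch (illustrative only, not the trainer's implementation): a queue of pre-generated rollouts that the training loop consumes while generation keeps refilling it.

from collections import deque

class RolloutBuffer:
    # Generation fills the buffer ahead of the training loop; gradient
    # updates consume from it, so neither side waits on the other.
    def __init__(self, maxlen=64):
        self.rollouts = deque(maxlen=maxlen)

    def add(self, batch):
        self.rollouts.extend(batch)

    def get(self, batch_size):
        n = min(batch_size, len(self.rollouts))
        return [self.rollouts.popleft() for _ in range(n)]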

by @cmpatino in #5137 and #5388

v0 → v1 migration guide

A MIGRATION.md guide has been added covering all breaking changes when upgrading from TRL v0 to v1. If you're already on v0.29, the changes are minimal.

by @qgallouedec in #5255

Other

Fixes

  • Fix DPOTrainer collators to truncate sequences before padding by @albertvillanova in #5305
  • Prevent corruption of DPO VLM training if "keep_end" truncation_mode by @albertvillanova in #5286
  • Fix mm_token_type_ids silently dropped in DPO VLM training by @albertvillanova in #5279
  • Fix UNEXPECTED lm_head.weight warning when loading a CausalLM as a reward model by @albertvillanova in #5295
  • Fix accuracy_reward crash when called from non-main thread by @qgallouedec in #5281
  • Fix GRPOTrainer attribute access for vLLM model config by @falcondai in #5302
  • [GRPO] Fix re-tokenization bug in tool-calling loop by @qgallouedec in #5242
  • [CPO/ORPO] Fix handling of different length chosen/rejected prompts by @davmels in #4639
  • Fix RewardFunc type alias to reflect actual calling convention by @s-zx in #5246
  • fix(ppo): add gradient_checkpointing_enable/disable to PolicyAndValueWrapper by @s-zx in #5245
  • Fix prepare_multimodal_messages to support tool_calls and tool role by @alvarobartt in #5212
  • Fix support for model_init_kwargs when passed as CLI JSON string by @albertvillanova in #5230
  • Fix support for model_init_kwargs in MiniLLM when passed as CLI JSON string by @albertvillanova in #5274
  • Fix support for model_init_kwargs in GKD/GOLD when passed as CLI JSON string by @albertvillanova in #5266
  • Sync entire prompt/completion token tensors before indexing by @shawnghu in #5218
  • Clean up model update group on worker exit by @AmineDiro in #5325
  • Fix prefix EOS slicing for tool suffix (with Qwen3/3.5 chat templates) by @casinca in #5330
  • Fix: apply reward_weights to logged reward/reward_std in GRPOTrainer by @lailanelkoussy in #5353
  • Fix IDs shape mismatch in SFT for VLMs with text-only by @albertvillanova in #5354

Documentation and Examples

v1.0.0rc1

20 Mar 23:55

Pre-release

Features

Variational Sequence-Level Soft Policy Optimization (VESPO)

VESPO addresses training instability in off-policy RL caused by policy staleness, asynchronous updates, and train-inference mismatches. Rather than relying on heuristic token-level clipping (GRPO) or sequence-length normalization (GSPO), VESPO derives a principled reshaping kernel from a variational framework. In practice, this yields a smooth, asymmetric Gamma weighting function that gracefully suppresses extreme sequence-level importance weights without introducing length bias. It can be enabled via the loss_type parameter of GRPOConfig:

from trl import GRPOConfig, GRPOTrainer

trainer = GRPOTrainer(
    model="Qwen/Qwen3-0.6B",
    args=GRPOConfig(loss_type="vespo"),
    # ... dataset, reward functions, etc.
)

by @casinca in #5199

Divergence Proximal Policy Optimization (DPPO)

DPPO is a new experimental trainer that replaces the standard PPO clipping mechanism with divergence constraints, providing more principled trust-region updates.

by @LeonEricsson in #5117

Reward functions can now log extra columns and scalar metrics

Reward functions now receive optional log_extra and log_metric callables, which they can use to log extra values (per-sample columns or scalar metrics) alongside the reward. This makes it easier to track intermediate signals without writing custom callbacks.

def my_reward_fn(completions, answer, log_extra=None, log_metric=None, **kwargs):
    # extract_answer is a user-supplied parser for model completions
    extracted = [extract_answer(c) for c in completions]
    rewards = [1.0 if e == a else 0.0 for e, a in zip(extracted, answer)]

    if log_extra:
        log_extra("golden_answer", list(answer))  # per-sample column
        log_extra("extracted_answer", extracted)  # per-sample column

    if log_metric:
        log_metric("accuracy", sum(rewards) / len(rewards))  # scalar metric

    return rewards

by @manueldeprada in #5233

Tool calling support in VLLMClient.chat()

VLLMClient.chat() now supports tool calling, enabling agentic workflows directly through the vLLM client interface.

by @kansalaman in #4889

35% faster packing

BFD packing is 35% faster. The "bfd-requeue" packing strategy has also been renamed to "bfd_split". See MIGRATION.md for details.

by @mariosasko in #5189

[GKD] Buffer implementation for distillation trainer

The GKD/GOLD trainer now supports buffered rollout generation, decoupling generation from gradient updates for more efficient distillation.

by @cmpatino in #5137

v0 → v1 migration guide

A MIGRATION.md guide has been added covering all breaking changes when upgrading from TRL v0 to v1. If you're already on v0.29, the changes are minimal.

by @qgallouedec in #5255

Other

Fixes

  • Fix DPOTrainer collators to truncate sequences before padding by @albertvillanova in #5305
  • Prevent corruption of DPO VLM training if "keep_end" truncation_mode by @albertvillanova in #5286
  • Fix mm_token_type_ids silently dropped in DPO VLM training by @albertvillanova in #5279
  • Fix UNEXPECTED lm_head.weight warning when loading a CausalLM as a reward model by @albertvillanova in #5295
  • Fix accuracy_reward crash when called from non-main thread by @qgallouedec in #5281
  • Fix GRPOTrainer attribute access for vLLM model config by @falcondai in #5302
  • [GRPO] Fix re-tokenization bug in tool-calling loop by @qgallouedec in #5242
  • [CPO/ORPO] Fix handling of different length chosen/rejected prompts by @davmels in #4639
  • Fix RewardFunc type alias to reflect actual calling convention by @s-zx in #5246
  • fix(ppo): add gradient_checkpointing_enable/disable to PolicyAndValueWrapper by @s-zx in #5245
  • Fix prepare_multimodal_messages to support tool_calls and tool role by @alvarobartt in #5212
  • Fix support for model_init_kwargs when passed as CLI JSON string by @albertvillanova in #5230
  • Fix support for model_init_kwargs in MiniLLM when passed as CLI JSON string by @albertvillanova in #5274
  • Fix support for model_init_kwargs in GKD/GOLD when passed as CLI JSON string by @albertvillanova in #5266
  • Sync entire prompt/completion token tensors before indexing by @shawnghu in #5218
  • Clean up model update group on worker exit by @AmineDiro in #5325

Documentation and Examples

What's Changed

v0.29.1

20 Mar 03:57

What's Changed

  • Handle mm_token_type_ids in SFT/GRPO/RLOO to fix IndexError by @albertvillanova in #5178
  • Fix prepare_multimodal_messages to support tool_calls and tool role by @alvarobartt in #5212
  • Fix type for model_init_kwargs when passed as CLI JSON string by @albertvillanova in #5230
  • Decouple rollout dispatch from vLLM backend in GRPO _generate_single_turn by @albertvillanova in #5122
  • Simplify logic for structured outputs across vLLM versions by @albertvillanova in #5215
  • Add support for raw ids in prompts in vLLM client and server by @qgallouedec in #5225
  • Add VLM support when passing raw token IDs to vLLM client by @qgallouedec in #5227
  • Move rollout_func from _generate_single_turn to _generate by @qgallouedec in #5232
  • [GRPO/RLOO] Tokenize before vLLM generation call by @qgallouedec in #5238
  • Support JSON string parsing of teacher_model_init_kwargs in MiniLLMConfig by @albertvillanova in #5259
  • [GRPO/RLOO] Unify tokenization across all generation backends in _generate_single_turn by @qgallouedec in #5239
  • [GRPO/RLOO] Extract tokenize prompts from _generate_single_turn by @qgallouedec in #5240
  • [CPO/ORPO] Fix handling of different length chosen/rejected prompts. by @davmels in #4639
  • Fix type for teacher_model_init_kwargs when passed as CLI JSON string by @albertvillanova in #5258
  • Fix support for model_init_kwargs in GKD/GOLD when passed as CLI JSON string by @albertvillanova in #5266
  • Fix mm_token_type_ids silently dropped in DPO VLM training by @albertvillanova in #5279
  • Fix support for model_init_kwargs in MiniLLM when passed as CLI JSON string by @albertvillanova in #5274
  • Fix GRPOTrainer attribute access for vLLM model config by @falcondai in #5302
  • [GRPO] Fix re-tokenization bug in tool-calling loop by concatenating token IDs by @qgallouedec in #5242

New Contributors

Full Changelog: v0.29.0...v0.29.1

v0.29.0

25 Feb 22:38
d24e194

Features

Add environment_factory to GRPOTrainer

GRPOTrainer now accepts an environment_factory argument, allowing users to specify a custom environment class for training. This enables more flexible and diverse training scenarios by letting users define their own environments with specific dynamics and reward structures.

from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

dataset = Dataset.from_dict({
    "prompt": [[{"role": "user", "content": f"Increment the counter by {i}."}] for i in range(1, 7)]
})

def reward_func(environments, **kwargs):
    return [env.counter for env in environments]

class IncrementEnv:
    def reset(self):
        self.counter = 0

    def increment(self, step: int) -> int:
        """
        Increment the internal counter.

        Args:
            step: Value to add to the counter.

        Returns:
            The updated counter value.
        """
        self.counter += step
        return self.counter

trainer = GRPOTrainer(
    model="Qwen/Qwen3-0.6B",
    args=GRPOConfig(chat_template_kwargs={"enable_thinking": False}),
    train_dataset=dataset,
    reward_funcs=reward_func,
    environment_factory=IncrementEnv,
)
trainer.train()

by @qgallouedec in #5093

Skills

TRL introduces agent-native CLI integration: trl-training, a first-class Agent Skill that exposes TRL's training workflows (SFT, DPO, GRPO, etc.) in a structured, agent-readable format. The skill is packaged directly with the trl library and can be installed via the CLI:

# Install into the project's agent directory (default scope: project).
# <agent> is the agent name: claude, codex, or opencode.
trl skills install trl-training --target <agent>

This enables AI agents to safely and reproducibly execute TRL training workflows using a well-defined interface.

Skills can be installed at the project or global scope, and support explicit targets and overwrite controls.
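
For instance (the --scope and --overwrite flag names below are assumptions; run trl skills install --help for the actual options):

# Install for a specific agent at global scope, overwriting any existing copy
trl skills install trl-training --target claude --scope global --overwrite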

Other

Fixes

Documentation and Examples

Deprecations

CI Improvements

  • Upgrade GitHub Actions to latest versions by @salmanmkc in #4893
  • Remove duplicated tests for SFT and add gradient checkpointing tests by @qgallouedec in #5054
  • Up...

v0.28.0

10 Feb 13:28
49ef334

Features

Experimental

Fixes

Documentation and Examples

Deprecations

CI Improvements

Miscellaneous

What's Changed

v0.27.2

03 Feb 18:10

What's Changed

  • Remove access to warnings_issued by @qgallouedec in #4960
  • Fix SFTTrainer init logic: remove TrainingArguments.push_to_hub_token only for transformers < v5 by @albertvillanova in #4942
  • Fix extra EOS appended in DPO preprocessing for conversational data by @qgallouedec in #4908

Full Changelog: v0.27.1...v0.27.2

v0.27.1

24 Jan 03:42

What's Changed

  • Fix: undefined current_gradient_accumulation_steps by @qgallouedec in #4852
  • fix(DeepSeek OPSM): passing correct (vLLM) logprobs by @casinca in #4857
  • Fix SFT training for prompt-completion type and transformers v5 by @qgallouedec in #4880
  • Bugfix: Logprob drift in vLLM serving mode (compared to colocate mode) by @kdubovikov in #4873
  • Fix RewardTrainer's results not reproducible by @liyc-ai in #4887

New Contributors

Full Changelog: v0.27.0...v0.27.1

v0.27.0

16 Jan 02:34
17acd61

Features

  • Add vllm_group_port argument to GRPO, RLOO and OnlineDPO configuration by @pointerhacker in #4545
  • Preserve truncated tokens in BFD packing by @qgallouedec in #4632
  • Support async reward functions and parallelize call to reward functions. by @pramodith in #4567
  • RLOO supports async rewards. by @pramodith in #4718
  • Support vLLM 0.12.0 by @jiqing-feng in #4117
  • feat: DeepSeek V3.2 Off-policy sequence masking by @casinca in #4689
  • 🎭 Up to 50% less VRAM during forward with forward_masked_logits function by @qgallouedec in #4729
  • [GRPO] Add a config to limit the number of tool calling iterations by @pramodith in #4761
  • Switch gradient checkpointing default to use_reentrant=False (PyTorch recommended) by @qgallouedec in #4811
  • Add support for GDPO: Group reward-Decoupled Normalization Policy Optimization for Multi-reward RL Optimization by @nbasyl in #4785

Experimental

  • Move AutoModelForCausalLMWithValueHead and AutoModelForSeq2SeqLMWithValueHead to experimental by @qgallouedec in #4654
  • Move DPODataCollatorWithPadding to experimental.utils by @qgallouedec in #4667
  • Move DataCollatorForChatML to experimental.utils by @qgallouedec in #4668
  • Move add_bos_token_if_needed and add_eos_token_if_needed to experimental.utils by @qgallouedec in #4674
  • Move truncate_right and SIMPLE_CHAT_TEMPLATE to experimental.utils by @qgallouedec in #4677
  • Move prepare_model_for_kbit_training, enable_gradient_checkpointing, prepare_peft_model to experimental.utils by @qgallouedec in #4704
  • Move get_reward function to experimental.utils by @qgallouedec in #4683
  • Remove experimental imports from testing_utils by @albertvillanova in #4727
  • ORPO: Avoid catastrophic cancellation in loss function by @hartmans in #4763
  • Refactor KTO [1/N]: Modernize model initialization by @albertvillanova in #4783
  • [GOLD] add probability merging fix to implement chain rule by @kashif in #4765
  • Refactor KTO coordinated with DPO [a/N]: Remove encoder-decoder support by @albertvillanova in #4792
  • Refactor KTO coordinated with DPO [b/N]: Simplify truncation logic by @albertvillanova in #4808

Fixes

  • Accounting for case num_generations_eval=1 in the calculation of the advantage by @qgallouedec in #4662
  • Fix vLLM error for tools usage not supported when running GRPO training by @apalmas-saifh in #4663
  • Fix GRPO config validation in case num_generations_eval is specified and different than num_generations by @apalmas-saifh in #4682
  • Fix top_k default value to 0 for disabling top-k filtering by @albertvillanova in #4695
  • Include generation_config for tiny model uploads by @qgallouedec in #4643
  • Fix KeyError with transformers 5.0.0+ where push_to_hub_token is removed by @Manodeepray in #4691
  • Overwrite model default generation config used by model.generate by @albertvillanova in #4647
  • Fix: handle multiple tool calls in qwen3_schema by @mattbui in #4709
  • Fix bugs when using multi-gpu: dataset streaming for offline trainers + dtype initialization by @kaixuanliu in #3950
  • Ensure llm-blender is importable with transformers >= v5 by @albertvillanova in #4781
  • Monkey patch for HybridCache in Liger-Kernel with transformers v5 by @qgallouedec in #4798
  • [fix] GRPOTrainer: proper access args by @carlyou in #4801
  • Fix vllm compat patches to be applied only to affected versions by @albertvillanova in #4815
  • fix bug when sft calc outputs.token_accuracy by @kaixuanliu in #4814
  • fix xpu vllm client server by @jiqing-feng in #4780

Documentation and Examples

Deprecations

CI Improvements

v0.26.2

18 Dec 15:55
8c26b7d

What's Changed

Full Changelog: v0.26.1...v0.26.2

v0.26.1

12 Dec 17:50

What's Changed

  • Fix vLLM error for tools usage not supported when running GRPO training by @apalmas-saifh in #4663
  • Fix GRPO config validation in case num_generations_eval is specified and different than num_generations by @apalmas-saifh in #4682

New Contributors

Full Changelog: v0.26.0...v0.26.1