| user (string) | created_at (timestamp) | body (string) | issue_number (int64) | __index_level_0__ (int64) |
|---|---|---|---|---|
vagitablebirdcode | 2025-03-12T13:04:26 | I'm very sorry—I didn't notice the main branch and was only looking at the 0.15.2 branch and a few PR branches. The code you mentioned is not present in those branches. In fact, the code in the main branch already implements my idea.
Thank you very much for your response! I will go ahead and close this issue. | 3,057 | 312 |
HuggingFaceDocBuilderDev | 2025-03-11T23:24:21 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3056). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 3,056 | 313 |
Pclanglais | 2025-03-11T23:20:28 | Same question for a different issue. I'm using a model with special tokens signalling different text parts, and I'm unable to access them without setting `skip_special_tokens=False`. | 3,054 | 314 |
qgallouedec | 2025-03-11T23:30:39 | Thanks for the suggestion. In fact it's already been suggested in #2728, and I think this solution should actually be avoided: https://github.com/huggingface/trl/pull/2728#issuecomment-2635166424
| 3,054 | 315 |
mtoslalibu | 2025-03-12T13:43:46 | > Thanks for the suggestion. In fact it's already been suggested in [#2728](https://github.com/huggingface/trl/pull/2728), and I think this solution should actually be avoided: [#2728 (comment)](https://github.com/huggingface/trl/pull/2728#issuecomment-2635166424)
Thank you for your response. I will introduce the batch-related parameters (like max-num-seq) one by one, then. The motivation is that batch size has a strong impact on inference duration, and tuning it can reduce GRPO training duration. | 3,054 | 316 |
qgallouedec | 2025-03-11T16:58:24 | @loricxy0707 can you confirm that this fixes your issue? | 3,053 | 317 |
HuggingFaceDocBuilderDev | 2025-03-11T17:01:10 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3053). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 3,053 | 318 |
HuggingFaceDocBuilderDev | 2025-03-11T14:58:02 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3052). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 3,052 | 319 |
qgallouedec | 2025-03-11T14:36:20 | Hi, thanks for the question. Yes, we first generate, then compute the reward and the loss, then the weights are updated.
> From the looks, it feels like the parameter update is blocked until the first two steps are complete. Does that mean the GPUs (with the model weights loaded) remain idle until then?
With vLLM, yes; without it, these GPUs are used to generate.
> I believe it's the same behavior for both the approaches:
> - gathering the parameters (on a single GPU) from ds3 before generation
> - using a separate GPU with vllm for generation
Not exactly: without vLLM, the weights are gathered on all devices, so all devices generate. | 3,050 | 320 |
yash-malik | 2025-03-11T16:29:01 | Thanks for the answer! That makes sense! | 3,050 | 321 |
Rocketknight1 | 2025-03-11T12:20:54 | cc @zucchini-nlp @qgallouedec | 3,051 | 322 |
qgallouedec | 2025-03-11T13:25:59 | This is not high priority, so contributions are very welcome. This issue belongs to TRL, I'll transfer it. | 3,051 | 323 |
SabaPivot | 2025-03-13T07:04:02 | > This is not high priority, so contributions are very welcome. This issue belongs to TRL, I'll transfer it.
Sure. https://github.com/om-ai-lab/VLM-R1
Team om-ai-lab has implemented the GRPO Trainer for the QWEN-VL series of models.
Hope this helps. | 3,051 | 324 |
qgallouedec | 2025-03-11T14:37:42 | Good point, I'll be happy to receive a PR for this :) | 3,049 | 325 |
shirinyamani | 2025-03-28T17:29:31 | I've commented on your PR! | 3,049 | 326 |
jamesbraza | 2025-03-28T17:42:19 | Hi @shirinyamani thanks for the PR comment, but I think you're misunderstanding here, can you reopen this issue? This issue still stands.
https://github.com/Future-House/trl/pull/9 was about resolving https://github.com/huggingface/trl/issues/3018 on a fork, and by happenstance I fixed this issue in that PR too. However, I am not going to open that PR into actual `trl` as it was too hacky. | 3,049 | 327 |
qgallouedec | 2025-03-11T13:40:08 | Thanks for fixing it. Can you just apply the suggestion?
| 3,048 | 328 |
HuggingFaceDocBuilderDev | 2025-03-11T17:25:09 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3048). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 3,048 | 329 |
HuggingFaceDocBuilderDev | 2025-03-11T13:53:25 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3046). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 3,046 | 330 |
qgallouedec | 2025-03-10T19:27:22 | I see the issue, maybe we should not make the assumption that all prompts are always different.
The alternative is to do something like
`prompt[::self.num_generations]`
WDYT? | 3,045 | 331 |
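A minimal sketch of what that slicing gives, assuming the batch is laid out as each prompt repeated `num_generations` times consecutively (names are illustrative, not TRL's actual code):
```python
# Recover the unique prompts from a flattened batch in which each prompt
# appears `num_generations` times in a row.
num_generations = 4
prompts = ["p0"] * num_generations + ["p1"] * num_generations

unique_prompts = prompts[::num_generations]
print(unique_prompts)  # ['p0', 'p1']
```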
shing100 | 2025-03-11T08:52:14 | I have the same issue.
After updating trl, VRAM usage during training has become far too high.
When SFT training the 7.8B model with 2 nodes (H100*8), it uses a total of 454.08 GiB.
Liger-kernel + deepspeed zero3
micro batch size 1
sequence_len 8192
https://github.com/axolotl-ai-cloud/axolotl/issues/2387 | 3,044 | 332 |
maoulee | 2025-03-11T11:49:38 | > I have the same issue.
>
> After updating trl, VRAM usage during training has become far too high.
>
> When SFT training the 7.8B model with 2 nodes (H100*8), it uses a total of 454.08 GiB.
>
> Liger-kernel + deepspeed zero3, micro batch size 1, sequence_len 8192
>
> [axolotl-ai-cloud/axolotl#2387](https://github.com/axolotl-ai-cloud/axolotl/issues/2387)
Have you solved this problem in trl? I find this code works fine in unsloth, but at a very slow speed. | 3,044 | 333 |
qgallouedec | 2025-03-11T17:43:21 | Can you provide the full traceback? Here it's hard to know where is the memory peak | 3,044 | 334 |
maoulee | 2025-03-13T03:53:15 | > Can you provide the full traceback? Here it's hard to know where is the memory peak
I have solved this problem by using a function from unsloth-zoo: it lets vLLM get the LoRA weights instead of moving the full model weights to vLLM, which reduces the VRAM taken by model weights.
Here is the terminal output:
INFO 03-13 11:20:41 gptq_marlin.py:202] Using MarlinLinearKernel for GPTQMarlinLinearMethod
Loading safetensors checkpoint shards: 0% Completed | 0/1 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 7.22it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00, 7.21it/s]
INFO 03-13 11:20:42 model_runner.py:1115] Loading model weights took 0.4302 GB
INFO 03-13 11:20:42 punica_selector.py:18] Using PunicaWrapperGPU.
INFO 03-13 11:20:55 worker.py:267] Memory profiling takes 12.69 seconds
INFO 03-13 11:20:55 worker.py:267] the current vLLM instance can use total_gpu_memory (39.39GiB) x gpu_memory_utilization (0.20) = 7.88GiB
INFO 03-13 11:20:55 worker.py:267] model weights take 0.43GiB; non_torch_memory takes 0.09GiB; PyTorch activation peak memory takes 1.39GiB; the rest of the memory reserved for KV Cache is 5.97GiB.
INFO 03-13 11:20:55 executor_base.py:110] # CUDA blocks: 32588, # CPU blocks: 21845
INFO 03-13 11:20:55 executor_base.py:115] Maximum concurrency for 2500 tokens per request: 208.56x
INFO 03-13 11:21:14 model_runner.py:1434] Capturing cudagraphs for decoding. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI. If out-of-memory error occurs during cudagraph capture, consider decreasing `gpu_memory_utilization` or switching to eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
Capturing CUDA graph shapes: 100%|████████████████████████████████████████| 35/35 [00:21<00:00, 1.64it/s]
INFO 03-13 11:21:36 model_runner.py:1562] Graph capturing finished in 22 secs, took 1.74 GiB
INFO 03-13 11:21:36 llm_engine.py:431] init engine (profile, create kv cache, warmup model) took 54.14 seconds
{'loss': 0.0, 'grad_norm': 2.7884418964385986, 'learning_rate': 1.0101010101010103e-07, 'rewards/reward_len': -321.578125, 'reward': -321.578125, 'reward_std': 314.4985647201538, 'completion_length': 229.140625, 'kl': 0.0, 'epoch': 0.01}
{'loss': -0.0, 'grad_norm': 1.232833981513977, 'learning_rate': 2.0202020202020205e-07, 'rewards/reward_len': -120.75, 'reward': -120.75, 'reward_std': 145.09556579589844, 'completion_length': 79.8125, 'kl': 0.0, 'epoch': 0.02}
{'loss': -0.0, 'grad_norm': 1.4564472436904907, 'learning_rate': 3.0303030303030305e-07, 'rewards/reward_len': -170.6875, 'reward': -170.6875, 'reward_std': 182.55911830067635, 'completion_length': 104.9375, 'kl': -5.692243576049805e-06, 'epoch': 0.03}
{'loss': -0.0, 'grad_norm': 3.2063918113708496, 'learning_rate': 4.040404040404041e-07, 'rewards/reward_len': -110.671875, 'reward': -110.671875, 'reward_std': 129.3709478378296, 'completion_length': 73.71875, 'kl': -8.501112461090088e-06, 'epoch': 0.04}
{'loss': -0.0, 'grad_norm': 1.7419143915176392, 'learning_rate': 5.05050505050505e-07, 'rewards/reward_len': -234.28125, 'reward': -234.28125, 'reward_std': 278.61364382505417, 'completion_length': 128.328125, 'kl': -7.413327693939209e-06, 'epoch': 0.05}
{'loss': -0.0, 'grad_norm': 2.447553873062134, 'learning_rate': 6.060606060606061e-07, 'rewards/reward_len': -201.859375, 'reward': -201.859375, 'reward_std': 169.03560876846313, 'completion_length': 157.59375, 'kl': -6.861984729766846e-06, 'epoch': 0.06}
{'loss': -0.0, 'grad_norm': 1.1706939935684204, 'learning_rate': 7.070707070707071e-07, 'rewards/reward_len': -75.9375, 'reward': -75.9375, 'reward_std': 133.00669565796852, 'completion_length': 55.546875, 'kl': -6.794929504394531e-06, 'epoch': 0.07}
{'loss': -0.0, 'grad_norm': 2.1840455532073975, 'learning_rate': 8.080808080808082e-07, 'rewards/reward_len': -399.328125, 'reward': -399.328125, 'reward_std': 241.75924617052078, 'completion_length': 297.390625, 'kl': -4.477798938751221e-06, 'epoch': 0.08}
{'loss': -0.0, 'grad_norm': 2.187257766723633, 'learning_rate': 9.090909090909091e-07, 'rewards/reward_len': -199.421875, 'reward': -199.421875, 'reward_std': 201.53497797250748, 'completion_length': 132.828125, 'kl': -6.563961505889893e-06, 'epoch': 0.09}
{'loss': 0.0, 'grad_norm': 1.8141218423843384, 'learning_rate': 1.01010101010101e-06, 'rewards/reward_len': -334.484375, 'reward': -334.484375, 'reward_std': 281.6256628036499, 'completion_length': 225.859375, 'kl': 1.1272728443145752e-05, 'epoch': 0.1}
{'loss': 0.0, 'grad_norm': 2.5700647830963135, 'learning_rate': 1.111111111111111e-06, 'rewards/reward_len': -163.3125, 'reward': -163.3125, 'reward_std': 170.36365354061127, 'completion_length': 118.921875, 'kl': 1.0117888450622559e-05, 'epoch': 0.11}
{'loss': 0.0, 'grad_norm': 1.258663535118103, 'learning_rate': 1.2121212121212122e-06, 'rewards/reward_len': -317.734375, 'reward': -317.734375, 'reward_std': 255.7184435725212, 'completion_length': 214.5, 'kl': 1.574307680130005e-05, 'epoch': 0.12}
{'loss': 0.0, 'grad_norm': 2.4687442779541016, 'learning_rate': 1.3131313131313134e-06, 'rewards/reward_len': -397.640625, 'reward': -397.640625, 'reward_std': 343.2056703567505, 'completion_length': 255.921875, 'kl': 0.0002644285559654236, 'epoch': 0.13}
{'loss': 0.0, 'grad_norm': 2.0361921787261963, 'learning_rate': 1.4141414141414143e-06, 'rewards/reward_len': -61.234375, 'reward': -61.234375, 'reward_std': 134.06728866696358, 'completion_length': 41.28125, 'kl': 0.000720784068107605, 'epoch': 0.14}
{'loss': 0.0, 'grad_norm': 2.076171875, 'learning_rate': 1.5151515151515152e-06, 'rewards/reward_len': -68.78125, 'reward': -68.78125, 'reward_std': 85.20245426893234, 'completion_length': 50.203125, 'kl': 0.0004588514566421509, 'epoch': 0.15}
{'loss': 0.0, 'grad_norm': 2.653731107711792, 'learning_rate': 1.6161616161616164e-06, 'rewards/reward_len': -244.984375, 'reward': -244.984375, 'reward_std': 229.60207390785217, 'completion_length': 167.515625, 'kl': 0.0006752237677574158, 'epoch': 0.16}
{'loss': 0.0, 'grad_norm': 1.4232606887817383, 'learning_rate': 1.7171717171717173e-06, 'rewards/reward_len': -433.109375, 'reward': -433.109375, 'reward_std': 422.8487824201584, 'completion_length': 293.953125, 'kl': 0.0012104883790016174, 'epoch': 0.17}
{'loss': 0.0001, 'grad_norm': 1.926514983177185, 'learning_rate': 1.8181818181818183e-06, 'rewards/reward_len': -183.265625, 'reward': -183.265625, 'reward_std': 192.7100260257721, 'completion_length': 130.8125, 'kl': 0.0017363205552101135, 'epoch': 0.18}
{'loss': 0.0001, 'grad_norm': 1.6588062047958374, 'learning_rate': 1.9191919191919192e-06, 'rewards/reward_len': -81.53125, 'reward': -81.53125, 'reward_std': 122.57226317375898, 'completion_length': 56.0, 'kl': 0.0016131997108459473, 'epoch': 0.19}
{'loss': 0.0001, 'grad_norm': 1.1836130619049072, 'learning_rate': 2.02020202020202e-06, 'rewards/reward_len': -78.203125, 'reward': -78.203125, 'reward_std': 164.13510417938232, 'completion_length': 54.578125, 'kl': 0.0036144256591796875, 'epoch': 0.2}
{'loss': 0.0003, 'grad_norm': 1.376534342765808, 'learning_rate': 2.1212121212121216e-06, 'rewards/reward_len': -221.0625, 'reward': -221.0625, 'reward_std': 193.60646617412567, 'completion_length': 159.625, 'kl': 0.006711140275001526, 'epoch': 0.21}
{'loss': 0.0004, 'grad_norm': 1.8582404851913452, 'learning_rate': 2.222222222222222e-06, 'rewards/reward_len': -147.4375, 'reward': -147.4375, 'reward_std': 201.59488809108734, 'completion_length': 104.296875, 'kl': 0.01008462905883789, 'epoch': 0.22}
{'loss': 0.0004, 'grad_norm': 2.769685745239258, 'learning_rate': 2.3232323232323234e-06, 'rewards/reward_len': -73.296875, 'reward': -73.296875, 'reward_std': 126.35262995958328, 'completion_length': 58.734375, 'kl': 0.010751724243164062, 'epoch': 0.23}
{'loss': 0.0004, 'grad_norm': 1.448876976966858, 'learning_rate': 2.4242424242424244e-06, 'rewards/reward_len': -326.9375, 'reward': -326.9375, 'reward_std': 302.11334347724915, 'completion_length': 230.59375, 'kl': 0.00894937664270401, 'epoch': 0.24}
{'loss': 0.0003, 'grad_norm': 6.789086818695068, 'learning_rate': 2.5252525252525258e-06, 'rewards/reward_len': -63.015625, 'reward': -63.015625, 'reward_std': 75.98726436495781, 'completion_length': 39.59375, 'kl': 0.00805211067199707, 'epoch': 0.25}
{'loss': 0.0004, 'grad_norm': 4.663589000701904, 'learning_rate': 2.6262626262626267e-06, 'rewards/reward_len': -78.328125, 'reward': -78.328125, 'reward_std': 89.43056464195251, 'completion_length': 54.546875, 'kl': 0.01078033447265625, 'epoch': 0.26}
3%|█▋ | 26/990 [21:29<12:05:37, 45.16s/it] | 3,044 | 335 |
kashif | 2025-03-13T07:45:55 | thanks @abhigoyal1997 having a look now | 3,043 | 336 |
kashif | 2025-03-13T08:21:09 | @abhigoyal1997 is the issue that the `beta` is not the same as the beta in the paper? Also, note that `F.kl_div` takes inputs q and p to calculate KL(p||q), which can cause confusion too? | 3,043 | 337 |
kashif | 2025-03-13T08:28:51 | The paper has:

In TRL it is implemented as:
$$
D_{{JSD}(\beta)}(P \| Q) = \beta KL\Big(P \Big \| \beta Q + (1- \beta)P \Big) + (1 - \beta) KL\Big(Q \Big \| \beta Q + (1 - \beta) P \Big)
$$
You can see that when beta=0, the loss is KL(student || teacher), which is `F.kl_div(teacher, student)` in TRL, and when beta=1, the loss is KL(teacher || student), which is `F.kl_div(student, teacher)`, so there is a difference between the original and the TRL formulation.
| 3,043 | 338 |
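For concreteness, here is a small self-contained sketch of the generalized JSD exactly as written in the TRL-style formula above, with `log_p` the teacher and `log_q` the student log-probabilities (an illustration only, not the actual GKD trainer code):
```python
import torch
import torch.nn.functional as F

def generalized_jsd(log_p, log_q, beta):
    # mixture M = beta * Q + (1 - beta) * P, computed in log space for stability
    log_m = torch.logsumexp(
        torch.stack([log_q + torch.log(torch.tensor(beta)), log_p + torch.log(torch.tensor(1.0 - beta))]),
        dim=0,
    )
    # F.kl_div(input, target, log_target=True) computes KL(target || exp(input))
    kl_p_m = F.kl_div(log_m, log_p, log_target=True, reduction="sum")  # KL(P || M)
    kl_q_m = F.kl_div(log_m, log_q, log_target=True, reduction="sum")  # KL(Q || M)
    return beta * kl_p_m + (1 - beta) * kl_q_m

log_p = torch.log_softmax(torch.randn(8), dim=-1)  # teacher
log_q = torch.log_softmax(torch.randn(8), dim=-1)  # student
print(generalized_jsd(log_p, log_q, beta=0.0))  # reduces to KL(student || teacher)
print(generalized_jsd(log_p, log_q, beta=1.0))  # reduces to KL(teacher || student)
```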
HuggingFaceDocBuilderDev | 2025-03-13T09:23:28 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3043). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 3,043 | 339 |
abhigoyal1997 | 2025-03-13T10:51:28 | > The paper has: 
>
> In TRL it is implemented as:
>
> $$D_{{JSD}(\beta)}(P \| Q) = \beta KL\Big(P \Big \| \beta Q + (1- \beta)P \Big) + (1 - \beta) KL\Big(Q \Big \| \beta Q + (1 - \beta) P \Big)$$
>
> You can see when beta=0, the loss is the KL(student || teacher) which is `F.kl_div(teacher, student)` in TRL and when beta=1, the loss is KL(teacher || student) which is `F.kl_div(student, teacher)` so there is a difference in the original vs. the TRL formulation
Hi Kashif, yes this was the problem. The mixture distribution was calculated with the wrong weights.
Thanks for reviewing and approving! | 3,043 | 340 |
skoshx | 2025-03-10T20:45:07 | So it turns out this was just a skill issue: `max_length` defaults to `512`, so in `_forward` the outputs get truncated and we get zero output, causing the error.
There should probably be an assertion like:
```py
assert config.max_length > config.max_new_tokens, "`max_length` should be higher than `max_new_tokens` or your outputs will get truncated to zero length."
``` | 3,042 | 341 |
qgallouedec | 2025-03-11T17:47:44 | How much memory does your system have? | 3,039 | 342 |
qgallouedec | 2025-03-11T17:52:48 | From the log, it's not clear where this memory peak occurs. Can you try to be even more precise about the looping pattern you made? I'll give it a try myself as well. | 3,039 | 343 |
qgallouedec | 2025-03-10T05:39:50 | Thanks for reporting. TRL doesn't support Python 3.14. Currently, 3.13 should work but it is not officially supported; see #2593. The max supported version is 3.12. | 3,038 | 344 |
debdeepsanyal | 2025-03-09T07:00:54 | same issue. the code was working with the GRPOTrainer earlier but now it throws off this RuntimeError. | 3,035 | 345 |
debdeepsanyal | 2025-03-09T19:37:36 | After some further checking, I think the problem occurs when using `device_map='auto'`. Could someone kindly fix this?
| 3,035 | 346 |
stevebell117 | 2025-03-10T14:29:44 | We also have `device_map='auto'` | 3,035 | 347 |
qgallouedec | 2025-03-08T18:24:24 | I am encountering this issue as well. Any idea how to solve it? | 3,034 | 348 |
dongdongzhaoUP | 2025-03-12T13:47:13 | Also | 3,034 | 349 |
jenna-russell | 2025-03-18T19:14:51 | I also am encountering this issue | 3,034 | 350 |
lilakk | 2025-03-18T19:18:25 | I've been encountering the same issue! | 3,034 | 351 |
Bingogogogogo | 2025-03-19T11:35:24 | same issue | 3,034 | 352 |
Vanchrn | 2025-03-22T02:38:28 | same | 3,034 | 353 |
wofeishenling | 2025-03-23T11:55:03 | same issue | 3,034 | 354 |
naajeehxe | 2025-03-24T11:54:01 | same here...
| 3,034 | 355 |
AndreiCComan | 2025-03-10T16:23:51 | @JinyuanSun I had a similar issue in #2856, which has been fixed. Could you try to run the same MRE with the latest changes (i.e., learning rate etc.) I posted there? | 3,031 | 356 |
zhangwengyu999 | 2025-03-18T12:10:25 | Same issue.
#2856 is not the same issue: we want to perform GRPO on an already fine-tuned PeftModel, not perform GRPO together with PEFT. | 3,031 | 357 |
DingZhenChen-code | 2025-03-20T12:51:56 | Same issue. How can we continue training on a fine-tuned PeftModel whose LoRA module is not merged?
Maybe resuming from a checkpoint would help. | 3,031 | 358 |
qgallouedec | 2025-03-11T14:07:37 | That's a good point.
That's also what's done in open-instruct: https://github.com/allenai/open-instruct/blob/6d5320539f23a6dd55c892fd35e7e86907569af1/open_instruct/grpo_vllm_thread_ray_gtrl.py#L777C9-L777C37
Ideally, we would like to have some curves to show this gap, so if someone has any, feel free to share. | 3,029 | 359 |
HuggingFaceDocBuilderDev | 2025-03-11T15:37:01 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3029). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 3,029 | 360 |
deekshaVarshney | 2025-03-07T15:19:09 | @kashif | 3,027 | 361 |
qgallouedec | 2025-03-07T13:48:04 | If I summarize your article, the term KL doesn't seem correct to you because the sampling is done under $π_{\mathrm{old}}$ and not with $π_θ$?
Note that in practice (and this is the default setting), $μ=1$ (which implies $π_{\mathrm{old}} = π_{\theta}$), so this issue doesn’t arise.
In the general case, we would need to find a way to perform importance sampling on the KL term—is that your idea? | 3,025 | 362 |
zanghyu | 2025-03-07T16:41:07 | > If I summarize your article, the term KL doesn't seem correct to you because the sampling is done under $π_{\mathrm{old}}$ and not with $π_θ$? Note that in practice (and this is the default setting), $μ=1$ (which implies $π_{\mathrm{old}} = π_θ$), so this issue doesn't arise. In the general case, we would need to find a way to perform importance sampling on the KL term—is that your idea?
Yes, exactly. So the current implementation of GRPO is just an on-policy version; it does not match the original one in the GRPO paper. | 3,025 | 363 |
qgallouedec | 2025-03-07T17:59:36 | In the DeepSeek Math paper, they use the same KL term, no? | 3,025 | 364 |
zanghyu | 2025-03-07T18:29:18 | > In the DeepSeek Math paper, they use the same KL term, no?
I get your point. Yeah, they use the same KL term, but the equation in their paper shows that their samples are from the old policy distribution. So the default implementation in this repo is okay (as it is on-policy), but it is hard to say how to implement an off-policy version, right? | 3,025 | 365 |
qgallouedec | 2025-03-07T18:46:34 | Maybe with some kind of importance sampling? | 3,025 | 366 |
zanghyu | 2025-03-08T02:39:55 | > Maybe with some kind of importance sampling?
$$\nabla_\theta\mathbb{E}_{\pi_\theta}[\log\pi_\theta - \log\pi_\text{ref}]=\mathbb{E}_{\pi_\theta}[(\log\pi_\theta-\log\pi_\text{ref})\cdot \nabla_\theta \log\pi_\theta]$$. So we only need to add the logprob difference between $$\log\pi_\theta$$ and $$\log\pi_\text{ref}$$ to the reward function. By doing so, we don't need to re-sample again; we can just use the samples from the old policy, and since we add this term to the reward function, it is naturally multiplied by the IS coefficient, so everything is fine. It's quite simple.
---
The formula doesn't seem to render right... | 3,025 | 367 |
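Spelled out, this is the standard score-function (log-derivative) identity, with the zero-mean score term shown explicitly (a restatement of the equation above, not an additional assumption):

$$\nabla_\theta\,\mathbb{E}_{\pi_\theta}\big[\log\pi_\theta - \log\pi_\text{ref}\big] = \mathbb{E}_{\pi_\theta}\big[(\log\pi_\theta-\log\pi_\text{ref})\cdot\nabla_\theta\log\pi_\theta\big] + \underbrace{\mathbb{E}_{\pi_\theta}\big[\nabla_\theta\log\pi_\theta\big]}_{=\,0}$$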
qgallouedec | 2025-03-07T13:17:11 | Thanks for reporting. The easiest is indeed to turn it off. Another way is to call `LLM.llm_engine.reset_prefix_cache()` (suggested by @hmellor) after the new weights are loaded. If someone wants to try this and if it works, a PR would be welcome | 3,024 | 368 |
HuggingFaceDocBuilderDev | 2025-03-07T11:25:59 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3023). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 3,023 | 369 |
qgallouedec | 2025-03-07T16:25:40 | Can you share some code and results? | 3,021 | 370 |
AMindToThink | 2025-03-06T20:44:01 | Here's the problem part of the documentation: [here](https://huggingface.co/docs/trl/en/sft_trainer#:~:text=dataset%20%3D%20load_dataset(%22lucasmccabe%2Dlmi/CodeAlpaca%2D20k%22%2C%20split%3D%22train%22) | 3,019 | 371 |
tchang1997 | 2025-03-10T13:54:09 | +1 — As a hack, I've been getting around this by defining new reward functions and setting `reward_weight` to zero (so it still gets logged, but doesn't affect the "actual" reward). | 3,018 | 372 |
qgallouedec | 2025-03-06T15:53:58 | Let's say you have 8 GPUs; in the limit you can have `per_device_batch_size=1` and `num_generations=8`, and set the number of gradient accumulation steps to any value.
> Currently `per_device_train_batch_size` must be a multiple of `num_generations` which can severely limit how large you can make it before
That's not exactly right: it's `per_device_train_batch_size * num_devices` that must be a multiple of `num_generations`.
While I understand the motivation, I think it's not straightforward to implement. | 3,017 | 373 |
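To make the divisibility constraint concrete, a tiny self-contained check (variable names mirror the config fields discussed above; this is an illustration, not TRL's validation code):
```python
num_devices = 8
per_device_train_batch_size = 4
num_generations = 8  # completions sampled per prompt

global_batch_size = num_devices * per_device_train_batch_size
assert global_batch_size % num_generations == 0, (
    "num_devices * per_device_train_batch_size must be a multiple of num_generations"
)
print(f"{global_batch_size // num_generations} distinct prompts per generation batch")
```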
JamesBowerXanda | 2025-03-06T16:19:59 | Ah yes, sorry, I forgot about the number of devices. Though this doesn't change much, right? We just amend my statement to:
`num_devices * per_device_train_batch_size * gradient_accumulation_steps ` must be a multiple of `num_generations`.
Is it complicated because currently the prepare_inputs method does both the generation and score calculation then the inputs are passed straight to the compute_loss method by the Trainer superclass?
I can see how it could cause more issues than it is worth to fiddle with the core pipeline just for one trainer. I just thought I would bring it up because I noticed how much smoother training seemed when I was able to increase the number of generations with smaller models, and this seemed to be the big bottleneck to that. | 3,017 | 374 |
qgallouedec | 2025-03-06T18:06:00 |
> Is it complicated because currently the prepare_inputs method does both the generation and score calculation then the inputs are passed straight to the compute_loss method by the Trainer superclass?
Yes that's correct
> I was able to up the number of generations using smaller models and this seemed to be the big bottleneck to that.
You can actually set the number of generations quite high. For example, if you have 8 GPUs that can each handle 4 generations, you can use up to 32 generations per prompt.
| 3,017 | 375 |
JamesBowerXanda | 2025-03-07T09:05:00 | Ok, I understand, thanks for your prompt responses.
Unfortunately I am most interested in using this on my personal gpu so I am not using multiple gpu clusters.
Thanks for your time, I am happy for the issue to be closed since it is not deemed feasible. | 3,017 | 376 |
qgallouedec | 2025-03-07T09:11:09 | With 1 GPU, the best you can do is to set `num_generations=per_device_train_batch_size`, and set the `gradient_accumulation_steps` depending on the desired effective batch size. Example:
```
per_device_train_batch_size = 8
num_generations = 8
gradient_accumulation_steps = 16
```
To have an effective batch size of 128 | 3,017 | 377 |
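In other words, with those values the numbers work out as follows (a quick illustration of the arithmetic, not TRL code):
```python
per_device_train_batch_size = 8
num_generations = 8
gradient_accumulation_steps = 16
num_gpus = 1

effective_batch_size = per_device_train_batch_size * num_gpus * gradient_accumulation_steps
prompts_per_update = effective_batch_size // num_generations
print(effective_batch_size, prompts_per_update)  # 128 completions drawn from 16 distinct prompts
```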
JamesBowerXanda | 2025-03-07T09:28:08 | I understand this, but it doesn't solve the issue of the loss function being an estimate based on a sample size of 8.

Based on the GRPO loss formulation, the expectation we estimate is conditional on the input prompt, as are the advantage calculations, and just increasing the gradient accumulation to 16 gives us 16 high-variance estimates of the expectation rather than one low-variance estimate.
I hope this makes sense. As I said before I can see why this is deemed not worth it since most large scale use cases can probably afford to just up the number of gpus. I had just hoped it would be an easier adjustment that would allow us hobbyists to stick closer to the theory of the paper. | 3,017 | 378 |
qgallouedec | 2025-03-07T10:20:17 | Then you should increase `num_generations`. By default it's 8, but in the DeepSeek Math paper, they use 64. Of course, you'll probably be limited by compute here if you only have 1 GPU. | 3,017 | 379 |
qgallouedec | 2025-03-07T10:25:20 | > I had just hoped it would be an easier adjustment
In fact, this is tricky, as it would involve sampling, generating and calculating the advantage for the whole batch, then iterating somehow over the batch. It's not impossible, but it adds an implementation complexity that I don't think is justified.
In my experience, playing with a low `num_generations` gives good results. | 3,017 | 380 |
JamesBowerXanda | 2025-03-07T11:00:04 | Forgive my naivety, but would it not be as simple as overriding the `training_step` method for `GRPOTrainer` from the base `Trainer` one, which is:
```
def training_step(
    self, model: nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]], num_items_in_batch=None
) -> torch.Tensor:
    """
    Perform a training step on a batch of inputs.

    Subclass and override to inject custom behavior.

    Args:
        model (`nn.Module`):
            The model to train.
        inputs (`Dict[str, Union[torch.Tensor, Any]]`):
            The inputs and targets of the model.

            The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
            argument `labels`. Check your model's documentation for all accepted arguments.

    Return:
        `torch.Tensor`: The tensor with training loss on this batch.
    """
    model.train()
    if hasattr(self.optimizer, "train") and callable(self.optimizer.train):
        self.optimizer.train()

    inputs = self._prepare_inputs(inputs)
    if is_sagemaker_mp_enabled():
        loss_mb = smp_forward_backward(model, inputs, self.args.gradient_accumulation_steps)
        return loss_mb.reduce_mean().detach().to(self.args.device)

    with self.compute_loss_context_manager():
        loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)

    del inputs
    if (
        self.args.torch_empty_cache_steps is not None
        and self.state.global_step % self.args.torch_empty_cache_steps == 0
    ):
        if is_torch_xpu_available():
            torch.xpu.empty_cache()
        elif is_torch_mlu_available():
            torch.mlu.empty_cache()
        elif is_torch_musa_available():
            torch.musa.empty_cache()
        elif is_torch_npu_available():
            torch.npu.empty_cache()
        elif is_torch_mps_available(min_version="2.0"):
            torch.mps.empty_cache()
        else:
            torch.cuda.empty_cache()

    kwargs = {}

    # For LOMO optimizers you need to explicitly use the learning rate
    if self.args.optim in [OptimizerNames.LOMO, OptimizerNames.ADALOMO]:
        kwargs["learning_rate"] = self._get_learning_rate()

    if self.args.n_gpu > 1:
        loss = loss.mean()  # mean() to average on multi-gpu parallel training

    if self.use_apex:
        with amp.scale_loss(loss, self.optimizer) as scaled_loss:
            scaled_loss.backward()
    else:
        # Finally we need to normalize the loss for reporting
        if not self.model_accepts_loss_kwargs and self.compute_loss_func is None:
            loss = loss / self.args.gradient_accumulation_steps

        # Turning off loss scaling w.r.t. gradient accumulation when DeepSpeed is enabled
        # https://github.com/huggingface/transformers/pull/35808
        if self.accelerator.distributed_type == DistributedType.DEEPSPEED:
            kwargs["scale_wrt_gas"] = False

        self.accelerator.backward(loss, **kwargs)

    return loss.detach()
```
to something like
```
def training_step(
    self, model: nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]], num_items_in_batch=None
) -> torch.Tensor:
    """
    Perform a training step on a batch of inputs.

    Subclass and override to inject custom behavior.

    Args:
        model (`nn.Module`):
            The model to train.
        inputs (`Dict[str, Union[torch.Tensor, Any]]`):
            The inputs and targets of the model.

            The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
            argument `labels`. Check your model's documentation for all accepted arguments.

    Return:
        `torch.Tensor`: The tensor with training loss on this batch.
    """
    model.train()
    if hasattr(self.optimizer, "train") and callable(self.optimizer.train):
        self.optimizer.train()

    inputs = self._prepare_inputs(inputs)
    if is_sagemaker_mp_enabled():
        loss_mb = smp_forward_backward(model, inputs, self.args.gradient_accumulation_steps)
        return loss_mb.reduce_mean().detach().to(self.args.device)

    # CHANGED: Split the inputs into mini-batches
    mini_batch_size = self.args.per_device_train_batch_size * self.args.n_gpu
    mini_batch_inputs = []
    for i in range(inputs["prompt_ids"].shape[0] // mini_batch_size):
        mini_batch_inputs.append(
            {key: value[i * mini_batch_size : (i + 1) * mini_batch_size] for key, value in inputs.items()}
        )
    losses = []
    del inputs

    # CHANGED: Iterate over the mini-batches for loss calculation and gradient backward pass
    for inputs in mini_batch_inputs:
        with self.compute_loss_context_manager():
            loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)

        del inputs
        if (
            self.args.torch_empty_cache_steps is not None
            and self.state.global_step % self.args.torch_empty_cache_steps == 0
        ):
            if is_torch_xpu_available():
                torch.xpu.empty_cache()
            elif is_torch_mlu_available():
                torch.mlu.empty_cache()
            elif is_torch_musa_available():
                torch.musa.empty_cache()
            elif is_torch_npu_available():
                torch.npu.empty_cache()
            elif is_torch_mps_available(min_version="2.0"):
                torch.mps.empty_cache()
            else:
                torch.cuda.empty_cache()

        kwargs = {}

        # For LOMO optimizers you need to explicitly use the learning rate
        if self.args.optim in [OptimizerNames.LOMO, OptimizerNames.ADALOMO]:
            kwargs["learning_rate"] = self._get_learning_rate()

        if self.args.n_gpu > 1:
            loss = loss.mean()  # mean() to average on multi-gpu parallel training

        if self.use_apex:
            with amp.scale_loss(loss, self.optimizer) as scaled_loss:
                scaled_loss.backward()
        else:
            # Finally we need to normalize the loss for reporting
            if not self.model_accepts_loss_kwargs and self.compute_loss_func is None:
                loss = loss / self.args.gradient_accumulation_steps

            # Turning off loss scaling w.r.t. gradient accumulation when DeepSpeed is enabled
            # https://github.com/huggingface/transformers/pull/35808
            if self.accelerator.distributed_type == DistributedType.DEEPSPEED:
                kwargs["scale_wrt_gas"] = False

            self.accelerator.backward(loss, **kwargs)

        # CHANGED: Append the loss to the list so that we can average it later and return the same value as before
        losses.append(loss.detach())

    # CHANGED: Average the losses and return the same value as before
    loss = torch.mean(torch.tensor(losses))
    return loss.detach()
```
I have added comments starting with `# CHANGED:` to all the parts I have edited relative to the Trainer's method. | 3,017 | 381 |
JamesBowerXanda | 2025-03-07T11:04:13 | Sorry, I am not trying to be a pain. As I said previously I am happy for you to close this if it is just a no go. Just thought I would offer the suggestion in case it helped. | 3,017 | 382 |
qgallouedec | 2025-03-07T11:11:11 | It might work, but that's the complexity I want to avoid. Forking the repo might be the best option here. Or subclass `GRPOTrainer` to override the `training_step` method. | 3,017 | 383 |
JamesBowerXanda | 2025-03-07T11:17:52 | Ok, I am happy to do that. I won't bog you down anymore on this. | 3,017 | 384 |
ingambe | 2025-03-16T21:13:30 | Actually, being restricted on the minibatch size by the number of trajectories is very limiting.
Depending on the problem, if the variance is large or the reward is very sparse, 8 iterations will not cut it. | 3,017 | 385 |
skoshx | 2025-03-06T14:47:19 | This is the offending code in `online_dpo_trainer.py`:
```py
def _generate(self, model, prompts):
    eos_token_id = self.processing_class.eos_token_id
    pad_token_id = self.processing_class.pad_token_id

    # Apply chat template and tokenize the input. We do this on-the-fly to enable the use of reward models and
    # policies with different tokenizers / chat templates.
    inputs = [{"prompt": prompt} for prompt in prompts]
    inputs = [maybe_apply_chat_template(x, self.processing_class) for x in inputs]
    inputs = [self.tokenize_row(x, model.config.is_encoder_decoder, self.processing_class) for x in inputs]
    inputs = self.data_collator(inputs)

    # Sample 2 completions per prompt of size `max_new_tokens` from the model
    inputs = self._prepare_inputs(inputs)
    prompt_ids = inputs["prompt_input_ids"].repeat(2, 1)
    prompt_mask = inputs["prompt_attention_mask"].repeat(2, 1)
    with unwrap_model_for_generation(
        model, self.accelerator, gather_deepspeed3_params=self.args.ds3_gather_for_generation
    ) as unwrapped_model:
        output = unwrapped_model.generate(
            input_ids=prompt_ids,
            attention_mask=prompt_mask,
            generation_config=self.generation_config,
        )

    completion_ids = output[:, prompt_ids.size(1) :]
    completion_ids, completion_mask = truncate_right(completion_ids, eos_token_id, pad_token_id)

    return prompt_ids, prompt_mask, completion_ids, completion_mask
```
I fixed the error by moving the input tokenization and collation logic inside the `unwrap_model_for_generation` block.
```py
with unwrap_model_for_generation(
    model, self.accelerator, gather_deepspeed3_params=self.args.ds3_gather_for_generation
) as unwrapped_model:
    # Apply chat template and tokenize the input. We do this on-the-fly to enable the use of reward models and
    # policies with different tokenizers / chat templates.
    inputs = [{"prompt": prompt} for prompt in prompts]
    inputs = [maybe_apply_chat_template(x, self.processing_class) for x in inputs]
    inputs = [self.tokenize_row(x, model.config.is_encoder_decoder, self.processing_class) for x in inputs]
    inputs = self.data_collator(inputs)

    # Sample 2 completions per prompt of size `max_new_tokens` from the model
    inputs = self._prepare_inputs(inputs)
    prompt_ids = inputs["prompt_input_ids"].repeat(2, 1)
    prompt_mask = inputs["prompt_attention_mask"].repeat(2, 1)

    output = unwrapped_model.generate(
        input_ids=prompt_ids,
        attention_mask=prompt_mask,
        generation_config=self.generation_config,
    )
```
That seemed to work, but then I got a rather bad-looking error:
```
AttributeError: 'DeepSpeedZeRoOffload' object has no attribute '_register_hooks_recursively'
[rank0]: Traceback (most recent call last):
[rank0]: File "/mnt/ml-data/crafty/simple/docs_dpo_online_repro.py", line 28, in <module>
[rank0]: trainer.train()
[rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/transformers/trainer.py", line 2241, in train
[rank0]: return inner_training_loop(
[rank0]: ^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/transformers/trainer.py", line 2548, in _inner_training_loop
[rank0]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/trl/trainer/online_dpo_trainer.py", line 538, in training_step
[rank0]: prompt_ids, prompt_mask, completion_ids, completion_mask = self._generate(model, prompts)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/trl/trainer/online_dpo_trainer.py", line 482, in _generate
[rank0]: with unwrap_model_for_generation(
[rank0]: File "/usr/lib/python3.11/contextlib.py", line 144, in __exit__
[rank0]: next(self.gen)
[rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/trl/models/utils.py", line 213, in unwrap_model_for_generation
[rank0]: add_hooks(model)
[rank0]: File "/mnt/ml-data/crafty/simple/repro-venv/lib/python3.11/site-packages/trl/models/utils.py", line 174, in add_hooks
[rank0]: optimizer_offload._register_hooks_recursively(optimizer_offload.module)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: AttributeError: 'DeepSpeedZeRoOffload' object has no attribute '_register_hooks_recursively'
```
So I tried using a lower DeepSpeed stage (1) and a smaller model, so they would fit on one GPU:
```py
# train_online_dpo.py
from datasets import load_dataset
from trl import OnlineDPOConfig, OnlineDPOTrainer, PairRMJudge
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import BasePairwiseJudge


class DummyPairwiseJudge(BasePairwiseJudge):
    def judge(self, prompts: list[str], completions: list[list[str]], shuffle_order: bool = True) -> list[int]:
        return [0 for prompt in prompts]


model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
# model = AutoModelForCausalLM.from_pretrained("unsloth/Meta-Llama-3.1-8B-Instruct")
# tokenizer = AutoTokenizer.from_pretrained("unsloth/Meta-Llama-3.1-8B-Instruct")

# Explicitly defining `ref_model` because of error "ValueError: DeepSpeed ZeRO-3 is enabled and is not compatible with `create_reference_model()`. Please instantiate your reference model directly with `AutoModelForCausalLM.from_pretrained()`."
# ref_model = AutoModelForCausalLM.from_pretrained("unsloth/Meta-Llama-3.1-8B-Instruct")
ref_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")

train_dataset = load_dataset("trl-lib/ultrafeedback-prompt", split="train")

training_args = OnlineDPOConfig(output_dir="Qwen2-0.5B-OnlineDPO", logging_steps=10, bf16=True)
trainer = OnlineDPOTrainer(
    model=model,
    judge=DummyPairwiseJudge(),
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
    ref_model=ref_model,
)
trainer.train()
```
This trains successfully:
```bash
{'loss': 0.6932, 'grad_norm': 46.673831939697266, 'learning_rate': 4.823521106875618e-07, 'objective/kl': 0.9423828125, 'objective/entropy': 255.7, 'objective/non_score_reward': -0.09420166015625, 'rewards/chosen': -0.001299285888671875, 'rewards/rejected': -0.00139923095703125, 'rewards/accuracies': 0.45625, 'rewards/margins': 0.000103759765625, 'logps/chosen': -53.7, 'logps/rejected': -56.8, 'val/contain_eos_token': 0.29375, 'beta': 0.09999999999999999, 'epoch': 0.11}
4%|████▋ | 255/7083 [10:28<4:36:33, 2.43s/it]
```
So basically, it seems like using DeepSpeed Stage 3 just doesn't work. And it's a shame, because even 7B models can't be fine-tuned without quantization, even with A100 80GB GPUs...
| 3,016 | 386 |
skoshx | 2025-03-06T17:08:01 | 🎉 Update:
Quickly reading through the DeepSpeed codebase gave me the understanding that the `DeepSpeedZeRoOffload` class automatically registers hooks upon instance creation, so I removed the `optimizer_offload._register_hooks_recursively(optimizer_offload.module)` line (`add_hooks` can be disregarded entirely in `trl/models/utils.py`), and now Online DPO works with DeepSpeed ZeRO Stage 3.
The above training with the `unsloth/Meta-Llama-3.1-8B-Instruct` model on a 2xA100 (80GB) node would take about 53 hours to complete:
```
0%| | 6/7083 [03:00<53:37:49, 27.28s/it]
```
I'm happy to open a PR to make these fixes, but would love the input of a maintainer to maybe shed some light on potential problems from these patches, since I haven't worked that long on the TRL repo. | 3,016 | 387 |
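One hedged way to shape that patch so it tolerates both DeepSpeed versions instead of deleting the call outright (a sketch only; both attribute names come from this thread, and TRL's actual fix may differ):
```python
def reregister_offload_hooks(optimizer_offload):
    # `optimizer_offload` is the DeepSpeedZeRoOffload object that TRL's add_hooks()
    # already retrieves; only the re-registration call differs across versions.
    if hasattr(optimizer_offload, "_register_deepspeed_module"):
        # newer DeepSpeed: the private helper was renamed
        optimizer_offload._register_deepspeed_module(optimizer_offload.module)
    elif hasattr(optimizer_offload, "_register_hooks_recursively"):
        # older DeepSpeed versions
        optimizer_offload._register_hooks_recursively(optimizer_offload.module)
    # otherwise, hooks were already registered in __init__ and nothing more is needed
```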
qgallouedec | 2025-03-06T17:59:42 | Is it related to #2963? | 3,016 | 388 |
skoshx | 2025-03-06T19:55:55 | The second part is related, but that won't fix the original "AttributeError: 'dict' object has no attribute 'is_encoder_decoder'" error.
Also, I see that PR was merged, but I'm still not convinced it's even needed to call `self._register_deepspeed_module(self.module)` like they do in that PR, since it gets called automatically in `__init__`. Am I missing something?
[Code line where hooks are automatically set up](https://github.com/deepspeedai/DeepSpeed/blob/c2c81993948fc28385542196c8544fb442017987/deepspeed/runtime/zero/parameter_offload.py#L177) | 3,016 | 389 |
qgallouedec | 2025-03-06T08:00:54 | Thanks for reporting, how would you fix that? | 3,015 | 390 |
Boltzmachine | 2025-03-06T19:48:19 | I clamp it for now | 3,015 | 391 |
vagitablebirdcode | 2025-03-14T09:55:29 | I recommend implementing a `SoftClip` method in PyTorch, similar to the one in TensorFlow Probability, for the truncation; its formula is similar to the following:

This activation function ensures that the output is smooth over the entire domain, which prevents gradient explosion during backpropagation here. | 3,015 | 392 |
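A minimal PyTorch sketch of one such smooth clamp (a simple softplus-based construction in the spirit of TFP's `SoftClip`, not a faithful port of its exact formula):
```python
import torch
import torch.nn.functional as F

def soft_clip(x, low, high, softness=1.0):
    # A softplus acts as a smooth max at the lower edge, then a mirrored
    # softplus as a smooth min at the upper edge; as softness -> 0 this
    # approaches the hard clamp(x, low, high) while staying differentiable.
    x = low + softness * F.softplus((x - low) / softness)
    x = high - softness * F.softplus((high - x) / softness)
    return x

x = torch.linspace(-5, 5, steps=5, requires_grad=True)
y = soft_clip(x, low=-1.0, high=1.0)
y.sum().backward()
print(y)       # values squashed smoothly toward the [-1, 1] range
print(x.grad)  # gradients stay finite and smooth everywhere
```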
August-murr | 2025-03-06T07:59:50 | @qgallouedec I'm gonna have to ask you to reproduce, or at least rerun, the code you used to train the https://github.com/huggingface/trl/pull/2873#issuecomment-2663793035 so I can clarify whether the problem is on my side (my script) or in TRL. | 3,013 | 393 |
AndreiCComan | 2025-03-06T17:47:02 | @August-murr I had a similar issue in #2856 which has been fixed. Could you try to run the same MRE I posted in #2856 and confirm you are facing the same issue? | 3,013 | 394 |
cuiyuhao1996 | 2025-03-18T02:52:42 | I ran into the same problem, even with the latest update. | 3,013 | 395 |
cuiyuhao1996 | 2025-03-18T02:54:40 | Have you solved the problem? :) | 3,013 | 396 |
August-murr | 2025-03-18T12:05:25 | > Have you solved the problem? :)
@qgallouedec said he was working on it
@qgallouedec any updates? | 3,013 | 397 |
qgallouedec | 2025-03-07T16:36:09 | This can be considered; have you tried implementing it? | 3,010 | 398 |
radna0 | 2025-03-07T16:39:35 | @qgallouedec I’m still experimenting with LMDeploy for inference, so not yet. | 3,010 | 399 |
HuggingFaceDocBuilderDev | 2025-03-11T14:34:17 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3009). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 3,009 | 400 |
qgallouedec | 2025-03-22T18:19:41 | ## Benchmark packing
```python
import timeit
import numpy as np
from datasets import Dataset
from trl.data_utils import pack_examples, pack_dataset
# Create a larger dataset with sequence lengths following a gamma distribution
num_samples = 10_000
# Generate sequence lengths following a gamma distribution
seq_lengths = np.random.gamma(shape=5, scale=20, size=num_samples) # mean will be 100
seq_lengths = np.clip(seq_lengths, 10, None).astype(int) # Clip to [10, inf)
# Generate input sequences with random lengths based on gamma distribution
examples = {
    "input_ids": [list(range(length)) for length in seq_lengths],
    "attention_mask": [[1] * length for length in seq_lengths],
}
dataset = Dataset.from_dict(examples)
max_length = 128  # Set a fixed packing length
# Benchmark pack_dataset
time_pack_dataset = timeit.timeit(lambda: pack_dataset(dataset, max_length), number=10)
# Benchmark dataset.map with pack_examples
time_pack_examples = timeit.timeit(
    lambda: dataset.map(pack_examples, batched=True, fn_kwargs={"seq_length": max_length}), number=10
)
print(f"pack_dataset time: {time_pack_dataset:.4f} seconds")
print(f"dataset.map(pack_examples) time: {time_pack_examples:.4f} seconds")
print(f"Speedup: {time_pack_examples / time_pack_dataset:.2f}x")
```
```
pack_dataset time: 0.0667 seconds
dataset.map(pack_examples) time: 19.3734 seconds
Speedup: 290.46x
``` | 3,009 | 401 |
qgallouedec | 2025-03-22T18:22:40 | ## Benchmark truncate
```python
import timeit
import numpy as np
from datasets import Dataset
from trl.data_utils import truncate_dataset
def truncate_examples(example, max_length):
    return {key: example[key][:max_length] for key in ["input_ids", "attention_mask"]}
# Create a larger dataset with sequence lengths following a gamma distribution
num_samples = 10_000
# Generate sequence lengths following a gamma distribution
seq_lengths = np.random.gamma(shape=5, scale=20, size=num_samples) # mean will be 100
seq_lengths = np.clip(seq_lengths, 10, None).astype(int) # Clip to [10, inf)
# Generate input sequences with random lengths based on gamma distribution
examples = {
    "input_ids": [list(range(length)) for length in seq_lengths],
    "attention_mask": [[1] * length for length in seq_lengths],
}
dataset = Dataset.from_dict(examples)
max_length = 128 # Set a fixed truncation length
# Benchmark truncate_dataset
time_truncate_dataset = timeit.timeit(lambda: truncate_dataset(dataset, max_length), number=10)
# Benchmark dataset.map with truncate_examples
time_truncate_examples = timeit.timeit(
    lambda: dataset.map(truncate_examples, batched=True, fn_kwargs={"max_length": max_length}), number=10
)
print(f"truncate_dataset time: {time_truncate_dataset:.4f} seconds")
print(f"dataset.map(truncate_examples) time: {time_truncate_examples:.4f} seconds")
print(f"Speedup: {time_truncate_examples / time_truncate_dataset:.2f}x")
```
```
truncate_dataset time: 0.0611 seconds
dataset.map(truncate_examples) time: 6.3807 seconds
Speedup: 104.47x
``` | 3,009 | 402 |
qgallouedec | 2025-03-05T17:22:33 | Thanks for reporting. I can't reproduce right now. Can you try to provide the full code with a dataset and a model that allow reproducing it? Also, try downgrading to vLLM 0.7.2 and pulling the latest commit from trl. Looking forward to knowing if it solves the issue. | 3,008 | 403 |
iamansinha | 2025-03-12T08:24:20 | @qgallouedec Thanks for your reply!
[Line 705 of grpo_trainer.py](https://github.com/huggingface/trl/blob/3f0695a4ca6f27bd1b7d0280c71960e7aff0d298/trl/trainer/grpo_trainer.py#L705):
`device = self.accelerator.device` was giving just `"cuda"`.
So, I was able to patch the error by manually setting `device = 'cuda:0'` before Line 751.
I found out that I was facing this problem only with the 2xA100 setup, and not with another machine with 4xA100, so it might be a machine-specific issue if you are unable to reproduce this error. Closing this issue for now. | 3,008 | 404 |
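For anyone hitting the same thing, a hedged sketch of the kind of guard that workaround amounts to (illustrative only; as noted above, the underlying cause may be machine-specific):
```python
import torch

device = torch.device("cuda")  # what `self.accelerator.device` reportedly returned
if device.type == "cuda" and device.index is None and torch.cuda.is_available():
    # make the device index explicit so tensors land on a specific GPU
    device = torch.device("cuda", torch.cuda.current_device())
print(device)
```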
luckyyangrun | 2025-03-18T06:27:34 | I face the same issue with 2*4090. | 3,008 | 405 |
Vanchrn | 2025-03-22T03:26:38 | same | 3,008 | 406 |
HuggingFaceDocBuilderDev | 2025-03-03T18:28:14 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3003). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 3,003 | 407 |
OctoSabercat | 2025-03-03T17:38:34 | @bot /style | 3,002 | 408 |
HelloWorldLTY | 2025-03-03T20:04:50 | Hi, did you try the model and have any ideas? Thanks. | 2,999 | 409 |
tastelikefeet | 2025-03-14T02:43:39 | Maybe you can try our framework based on trl: https://github.com/modelscope/ms-swift/blob/main/examples/train/grpo/full_vllm_qwenvl.sh
We support training a 72B model with 4 A100 GPUs:
https://github.com/modelscope/ms-swift/blob/main/examples/train/grpo/train_72b_4gpu.sh | 2,999 | 410 |
Wangbiao2 | 2025-03-15T10:27:54 | > May be you can try our framework based on trl: https://github.com/modelscope/ms-swift/blob/main/examples/train/grpo/full_vllm_qwenvl.sh We support train a 72B model with 4 A100 GPUs: https://github.com/modelscope/ms-swift/blob/main/examples/train/grpo/train_72b_4gpu.sh
Thank you! | 2,999 | 411 |