user: string (lengths 3 to 28)
created_at: timestamp[us] (2020-04-01 09:48:12 to 2025-03-30 02:12:16)
body: string (lengths 1 to 173k)
issue_number: int64 (1 to 3.18k)
__index_level_0__: int64 (0 to 8.59k)
pxyWaterMoon
2025-03-02T15:20:08
I met the same problem while training Alpaca-7B with GRPO on A100. The TRL environment is as follows:

```
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.31
- Python version: 3.11.11
- TRL version: 0.16.0.dev0
- PyTorch version: 2.6.0
- CUDA device(s): NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB
- Transformers version: 4.49.0
- Accelerate version: 1.3.0
- Accelerate config: not found
- Datasets version: 3.3.0
- HF Hub version: 0.28.1
- bitsandbytes version: not installed
- DeepSpeed version: 0.16.3
- Diffusers version: not installed
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: not installed
- PEFT version: not installed
- vLLM version: not installed
```
2,996
412
zsychina
2025-03-02T18:11:46
Another report:

```bash
  0%|          | 2/87543 [00:18<223:50:10,  9.20s/it]
../aten/src/ATen/native/cuda/TensorCompare.cu:110: _assert_async_cuda_kernel: block: [0,0,0], thread: [0,0,0] Assertion `probability tensor contains either `inf`, `nan` or element < 0` failed.
Traceback (most recent call last):
  File "/home/zhusiyuan/test_trl/example.py", line 27, in <module>
    trainer.train()
  File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/trainer.py", line 2241, in train
    return inner_training_loop(
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/trainer.py", line 2548, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/trainer.py", line 3692, in training_step
    inputs = self._prepare_inputs(inputs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/trl/trainer/grpo_trainer.py", line 564, in _prepare_inputs
    prompt_completion_ids = unwrapped_model.generate(
                            ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/generation/utils.py", line 2223, in generate
    result = self._sample(
             ^^^^^^^^^^^^^
  File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/generation/utils.py", line 3200, in _sample
    while self._has_unfinished_sequences(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/zhusiyuan/miniconda3/envs/trl/lib/python3.12/site-packages/transformers/generation/utils.py", line 2401, in _has_unfinished_sequences
    elif this_peer_finished:
         ^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
2,996
413
zsychina
2025-03-03T06:56:47
Guys, I think I found the problem. For distributed training, it has to be

```
accelerate launch grpo_example.py
```

while

```
python -u grpo_example.py
```

is fine for single-GPU training, but may cause the above errors in distributed training.
2,996
414
qgallouedec
2025-03-01T08:41:02
Indeed. This comes from https://github.com/huggingface/trl/pull/2881. Our view is that, unless we find that it gives worse results, we should align with the classical (global) loss normalization. Have you compared the two options? If so, the results would be very useful.
2,995
415
tchang1997
2025-03-03T21:30:49
On a related note, is there a reason why the per-token loss is globally normalized (L950 of [`grpo_trainer.py`](https://github.com/huggingface/trl/blob/7442d42c21697fd6c0998a75e7478ed4b40490be/trl/trainer/grpo_trainer.py)), while the KL term still uses per-sequence normalization (L956 of [`grpo_trainer.py`](https://github.com/huggingface/trl/blob/7442d42c21697fd6c0998a75e7478ed4b40490be/trl/trainer/grpo_trainer.py))? The [GRPO paper (Eq. 3)](https://arxiv.org/pdf/2402.03300) sequence-normalizes both (expanding the KL divergence term), so I wonder whether these should be consistent (i.e., both globally normalized or both sequence-normalized, not a mix).
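An illustrative sketch (toy numbers, not TRL's actual code) of the two normalization schemes being discussed, for readers comparing them:

```python
import torch

# Per-token losses for two padded completions; the mask marks real (non-padding) tokens.
per_token_loss = torch.tensor([[0.1, 0.2, 0.3, 0.0],
                               [0.4, 0.5, 0.0, 0.0]])
mask = torch.tensor([[1., 1., 1., 0.],
                     [1., 1., 0., 0.]])

# Per-sequence normalization: average over each completion's tokens, then over completions.
per_sequence = ((per_token_loss * mask).sum(dim=1) / mask.sum(dim=1)).mean()  # 0.325

# Global normalization: average over all non-padding tokens at once.
global_norm = (per_token_loss * mask).sum() / mask.sum()  # 0.30
```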
2,995
416
qgallouedec
2025-03-03T22:17:16
Actually L956 is just logging. But you're right that it should use a consistent normalization. Would you like to open a PR to fix this line?
2,995
417
tchang1997
2025-03-03T22:51:04
Ah, I see that now. Anyway — opened a [PR as discussed](https://github.com/huggingface/trl/pull/3004)!
2,995
418
HuggingFaceDocBuilderDev
2025-02-28T18:45:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2993). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,993
419
HuggingFaceDocBuilderDev
2025-02-28T14:00:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2991). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,991
420
qgallouedec
2025-02-28T14:46:29
This could help to understand: https://github.com/huggingface/trl/blob/3d94e4e25c5e63ac56a62cdb9dd5f6ec4153e3c0/trl/trainer/grpo_trainer.py#L580-L598

Feel free to get back to me if it's still not clear.

> Additionally, I noticed that the current implementation determines divisibility by computing the factors, which seems a bit redundant. It might be more concise to directly check:

Indeed, but I wanted to provide the user with values that could solve the issue: https://github.com/huggingface/trl/blob/3d94e4e25c5e63ac56a62cdb9dd5f6ec4153e3c0/trl/trainer/grpo_trainer.py#L424
2,990
421
Facico
2025-02-28T14:54:55
Thanks!
2,990
422
qgallouedec
2025-02-28T12:48:54
Thanks for the suggestion. In fact it's already been suggested in #2728, and I think this solution should actually be avoided: https://github.com/huggingface/trl/pull/2728#issuecomment-2635166424
2,989
423
nopepper
2025-02-28T12:52:54
@qgallouedec Interesting, thanks for the links, I hadn't seen the discussion. I was working on a project that really benefited from custom sampling kwargs (a seq2seq task where the target significantly overlaps the input) and thought it would be useful for others. Would it be possible to add the generation arguments by name to `GRPOConfig` then? Specifically, `top_p`, `top_k`, `min_p`, `repetition_penalty`, and `length_penalty` are shared by both `SamplingParams` and `GenerationConfig`.
2,989
424
qgallouedec
2025-02-28T14:40:33
Yes, I think it makes sense. I'm not sure they all share the same defaults in vLLM and transformers, though. You'll need to check.
2,989
425
nopepper
2025-02-28T15:32:56
I added them to a new section called `Parameters that control generation`, together with `temperature`. We get around the diverging-defaults problem by just making them optional and not passing any argument that's `None` :D
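A minimal sketch (not the actual PR code) of the "only pass what's set" idea described above; the argument names follow the comment, everything else is illustrative:

```python
# Generation arguments exposed on the config; None means "use the backend's own default".
gen_args = {
    "top_p": None,
    "top_k": 20,
    "min_p": None,
    "repetition_penalty": 1.1,
}

# Drop the unset ones, so neither vLLM's SamplingParams nor transformers' GenerationConfig
# receives an explicit value where their defaults would otherwise diverge.
sampling_kwargs = {k: v for k, v in gen_args.items() if v is not None}
print(sampling_kwargs)  # {'top_k': 20, 'repetition_penalty': 1.1}
```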
2,989
426
qgallouedec
2025-03-04T22:19:14
Default values:

|                    | transformers       | vLLM           |
| ------------------ | ------------------ | -------------- |
| top_p              | 1.0                | 1.0            |
| top_k              | 50 (supports None) | -1 (means all) |
| min_p              | None               | 0.0            |
| repetition_penalty | 1.0                | 1.0            |
2,989
427
HuggingFaceDocBuilderDev
2025-03-04T23:13:21
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2989). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,989
428
HuggingFaceDocBuilderDev
2025-02-28T11:19:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2987). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,987
429
HuggingFaceDocBuilderDev
2025-02-28T10:46:11
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2986). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,986
430
qgallouedec
2025-02-28T08:54:52
Which trainer?
2,985
431
cyr0930
2025-02-28T08:59:55
ah sorry it's DPOTrainer
2,985
432
kevinlu1248
2025-03-08T02:36:10
It also seems not to work with DeepSpeed stage 1/2; I'm getting:

```
[rank0]: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
```
2,985
433
kevinlu1248
2025-03-08T03:11:04
Found a fix for stages 1 & 2 by explicitly initializing the reference model. It also looks like it gets garbage-collected off of VRAM after the initial log-prob computation is complete.
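A hedged sketch of what "explicitly initializing the reference model" can look like for `DPOTrainer`; the model, dataset, and config values here are placeholders, not the reporter's actual setup:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "Qwen/Qwen2-0.5B-Instruct"  # placeholder model
model = AutoModelForCausalLM.from_pretrained(model_id)
ref_model = AutoModelForCausalLM.from_pretrained(model_id)  # explicit reference model instead of ref_model=None
tokenizer = AutoTokenizer.from_pretrained(model_id)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=DPOConfig(output_dir="dpo-out"),
    train_dataset=load_dataset("trl-lib/ultrafeedback_binarized", split="train"),
    processing_class=tokenizer,
)
trainer.train()
```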
2,985
434
jamesbraza
2025-02-28T07:05:09
Is there a standard solution for DeepSpeed tests in CI? I think this is the first integration test for DeepSpeed added to the repo. In the future, we can expand it to cover https://github.com/huggingface/trl/pull/2871 and https://github.com/huggingface/trl/pull/2963.
2,984
435
qgallouedec
2025-03-24T16:36:18
I can't reproduce; this runs on my side:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer


def dummy_reward_func(completions, **kwargs):
    return [0.0] * len(completions)


dataset = load_dataset("trl-lib/tldr")
training_args = GRPOConfig(output_dir="2983", num_iterations=3)
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=dummy_reward_func,
    args=training_args,
    train_dataset=dataset["train"],
)
trainer.train()
```

Maybe try to upgrade TRL? If you still get the issue, please provide an MRE.
2,983
436
Andcircle
2025-03-24T20:53:19
Thanks @qgallouedec, this was a while back. I saw the release notes for 0.16.0, where `num_iterations` was added as a feature. I will try it out again.
2,983
437
qgallouedec
2025-03-24T21:30:56
Thanks, yes, sorry for the delay, the number of open issues is overwhelming, I'm trying to catch up.
2,983
438
HuggingFaceDocBuilderDev
2025-02-28T10:49:46
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2982). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,982
439
jhinpan
2025-02-28T01:36:57
Current plan for adding SGLang as an alternative inference backend:

0. Add a global flag, `use_sglang`.
1. Init the offline Engine in `def __init__()`.
2. Implement `_update_sglang_engine_weights` to function like `_move_model_to_vllm`.
3. Receive generation requests in `_prepare_inputs()`.
4. Shut down the SGLang engine after speeding up the generation.
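A rough, hypothetical sketch of steps 0-4 above, assuming SGLang's offline `Engine` API (`sglang.Engine`, `engine.generate`, `engine.shutdown`); exact names and sampling-parameter keys may differ between SGLang versions:

```python
import sglang as sgl

use_sglang = True                      # step 0: hypothetical global flag
model_id = "Qwen/Qwen2-0.5B-Instruct"  # placeholder model

if use_sglang:
    engine = sgl.Engine(model_path=model_id)  # step 1: init the offline engine (in __init__)
    # step 2 (not shown): push updated trainer weights into the engine,
    # analogous to _move_model_to_vllm.
    prompts = ["The capital of France is"]
    sampling_params = {"temperature": 1.0, "max_new_tokens": 32}
    outputs = engine.generate(prompts, sampling_params)  # step 3: generation in _prepare_inputs
    print(outputs[0]["text"])
    engine.shutdown()  # step 4: release the engine once generation is done
```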
2,981
440
jhinpan
2025-02-28T01:46:48
Current issue summary:

- Successful standalone inference: the SGLang server works correctly as an inference engine when run in a separate terminal.
- Distributed engine initialization failure: when switching to using the SGLang engine within the distributed trainer context, the distributed initialization of the offline SGLang engine fails.
- Logging insights:
  - All processes successfully pass the explicit barriers, and the main process reaches the SGLang engine initialization.
  - However, no log messages are observed indicating that the generation call (`engine.generate()`) is executed or that a response is returned.
- Conclusion: these observations suggest that execution is stalling either inside or before the call to `engine.generate()`. The issue likely lies within SGLang's internal behavior when operating under the distributed trainer context.
2,981
441
jhinpan
2025-03-01T02:41:57
It seems the issue is now narrowing down to: `accelerate launch` has some conflicts with SGLang offline engine initialization. Need to make more checks over the next couple of days.

- Firstly, maybe we can check whether Accelerate + SGLang, i.e. without anything from TRL, behaves normally.
- If so, maybe we can check whether manually creating processes (instead of using Accelerate) + SGLang works; otherwise we may need to either make a minimal sample or check what global state TRL changes.
- If so, maybe we can check environment-variable differences between "manual + SGLang" and "Accelerate + SGLang", try removing the differing ones, and see whether it works.
- If so, we may further dig into that single (or several) env var to understand what is happening.
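As a concrete way to do the env-var comparison in the third bullet, one could run the same tiny script under both launchers and diff the dumps (a sketch; the file names are made up):

```python
# dump_env.py: run once via `accelerate launch dump_env.py accelerate`
# and once via `python dump_env.py manual`, then diff the two JSON files.
import json
import os
import sys

label = sys.argv[1] if len(sys.argv) > 1 else "unknown"
with open(f"env_{label}.json", "w") as f:
    json.dump(dict(sorted(os.environ.items())), f, indent=2)
```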
2,981
442
zhaochenyang20
2025-03-11T01:52:36
Amazing work, I will review it quickly.
2,981
443
qgallouedec
2025-03-12T02:40:12
Is this PR ready for review? There seem to be many files/lines used for dev that haven't been cleaned up. Also, can you make sure to run the pre-commit? It seems like you use a custom format config, and it results in many changed lines unrelated to the PR.
2,981
444
zhaochenyang20
2025-03-12T02:56:21
@jhinpan Rebase with the main? Then I can ask them to review.
2,981
445
jhinpan
2025-03-12T03:11:07
> Is this PR ready for review? There seem to be many files/lines used for dev that haven't been cleaned up. Also, can you make sure to run the pre-commit? It seems like you use a custom format config, and it results in many changed lines unrelated to the PR.

I just cleaned up all those dev files and ran the pre-commit. Hope that works. Feel free to let me know whether those testing scripts need to be removed. @qgallouedec @zhaochenyang20
2,981
446
zhaochenyang20
2025-03-12T04:21:06
> This branch has conflicts that must be resolved

> Changes can be cleanly merged.

@jhinpan
2,981
447
nopepper
2025-02-28T07:57:32
I was wrong: I had `beta=0.0` in all my experiments. Setting `beta=0.001` was enough to prevent the gradient explosion. Perhaps we shouldn't suggest that option so prominently in the docs?

![Image](https://github.com/user-attachments/assets/901a2028-4096-4d58-85d6-faff13573669)
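For reference, a small illustrative snippet of the setting being discussed (assuming the `beta` field on `GRPOConfig`, i.e. the KL coefficient):

```python
from trl import GRPOConfig

# beta=0.0 skips loading the reference model (less memory, faster), but the run above
# suggests it can be numerically unstable on long runs.
args_no_ref = GRPOConfig(output_dir="out", beta=0.0)

# A small nonzero beta keeps a weak KL anchor to the reference model.
args_small_kl = GRPOConfig(output_dir="out", beta=0.001)
```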
2,980
448
qgallouedec
2025-02-28T15:24:36
I suspected that this could produce surprising results on a long run. https://github.com/huggingface/trl/pull/2806#issuecomment-2645941307 Would you recommend adding some sort of warning in the documentation?
2,980
449
nopepper
2025-02-28T15:30:41
Sounds good. Perhaps something like this?

```
KL coefficient. If `0.0`, the reference model is not loaded, reducing memory usage and improving training speed, but may be numerically unstable for long training runs.
```
2,980
450
qgallouedec
2025-02-28T15:56:08
Looks good! Are you willing to open a PR?
2,980
451
kenluozhenyu
2025-03-03T14:50:08
Same problem, TRL version: 0.15.1, on Windows 11
2,979
452
Tony-yzj
2025-03-19T03:51:08
Same problem, TRL version: 0.15.2, CUDA 12.1, on Windows 11
2,979
453
qgallouedec
2025-02-27T22:16:39
Thanks for the suggestion, do you have any such tutorial in mind?
2,978
454
ParagEkbote
2025-02-28T14:23:52
Yes, we could use a model like smolm2 and use DPO or ORPO with a custom dataset to display the integration. WDYT? cc: @qgallouedec
2,978
455
ParagEkbote
2025-03-11T18:26:23
Gentle ping cc: @qgallouedec
2,978
456
qgallouedec
2025-03-11T18:31:43
Hi, sorry if this is unclear. In fact, this part of the documentation belongs to the community, to share its notebooks; that's why it's called "community tutorials". If you have a notebook to add, we can add it. But I think what you're really looking for is documentation for the various TRL integrations? If so, we have an "integration" section in the docs. It's not finished yet, and we're very open to contributions.
2,978
457
qgallouedec
2025-02-27T14:41:06
Try to downgrade vLLM to 0.7.2. See #2952
2,977
458
zaddy6
2025-02-27T14:42:40
vLLM 0.7.2 doesn't support the new phi4 mini. Is there any other workaround apart from downgrading?
2,977
459
qgallouedec
2025-02-27T14:15:14
Answer here: https://github.com/huggingface/open-r1/issues/239#issuecomment-2646297851 😊
2,976
460
L1n111ya
2025-02-27T14:25:03
> Answer here: [huggingface/open-r1#239 (comment)](https://github.com/huggingface/open-r1/issues/239#issuecomment-2646297851) 😊

Thank you for your reply. I know the advantage function is 0, but what puzzles me is that, since that's the case, the loss only has the KL divergence term. Does it not update based on the reward function? How does the reward converge?
2,976
461
iamansinha
2025-03-01T14:03:22
This might help: [huggingface/open-r1/issues/239#issuecomment-2692241946](https://github.com/huggingface/open-r1/issues/239#issuecomment-2692241946)
2,976
462
L1n111ya
2025-03-02T04:08:38
> This might help: [huggingface/open-r1/issues/239#issuecomment-2692241946](https://github.com/huggingface/open-r1/issues/239#issuecomment-2692241946)

Thank you for your reply, it has resolved my doubts.
2,976
463
HuggingFaceDocBuilderDev
2025-02-27T09:15:48
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2975). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,975
464
nbasyl
2025-02-27T09:49:52
Thanks for double-checking this!
2,974
465
HuggingFaceDocBuilderDev
2025-02-27T11:10:24
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2974). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,974
466
canghongjian
2025-02-27T08:09:29
Two H20s are enough to start the training, but the speed is quite slow.
2,972
467
Tuziking
2025-02-27T08:42:26
> Two H20s are enough to start the training, but the speed is quite slow.

I use GRPOTrainer to train with two H20s, but I get CUDA out of memory (I tried more H20s, but it also failed). What's strange is that during the first step each GPU only uses 40 GB, but at the second step it suddenly fills up and causes the OOM error. I don't know if this is a bug. My configuration is as follows:

```python
training_args = GRPOConfig(
    output_dir=output_dir,
    learning_rate=5e-6,
    adam_beta1=0.9,
    adam_beta2=0.99,
    weight_decay=0.1,
    warmup_ratio=0.1,
    lr_scheduler_type='cosine',
    logging_steps=1,
    bf16=True,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    num_generations=2,
    max_prompt_length=256,
    max_completion_length=512,
    num_train_epochs=1,
    save_steps=100,
    max_grad_norm=0.1,
    log_on_each_node=False,
    # use_vllm=False,
    report_to="wandb",
    vllm_gpu_memory_utilization=0.5,
)
trainer = GRPOTrainer(
    model=model,
    # reward_funcs=xmlcount_reward_func,
    reward_funcs=[
        xmlcount_reward_func,
        soft_format_reward_func,
        # strict_format_reward_func,
        int_reward_func,
        correctness_reward_func,
    ],
    args=training_args,
    train_dataset=dataset,
)
```
2,972
468
Fox237
2025-03-14T02:22:35
Same problem here. Has it been solved?
2,972
469
baibizhe
2025-03-01T15:58:05
No, I've been stuck on this problem for a while. The fact is that TRL only supports tensor parallel = 1 currently. You could switch to another framework such as verl or OpenRLHF; they work smoothly.
2,971
470
xz259
2025-03-15T15:17:45
I think the prompts over the gradient_accumulation_steps should be batched together. 7 generations is underutilizing the inference GPU.
2,971
471
lyh1028
2025-03-09T03:11:59
I have the same question.
2,970
472
qgallouedec
2025-02-27T13:28:40
Thanks @logicaltrojan! I took the opportunity to fix it everywhere. Once the CI is green we can merge :)
2,969
473
qgallouedec
2025-02-27T13:28:58
@bot /style
2,969
474
github-actions[bot]
2025-02-27T13:29:19
Style fixes have been applied. [View the workflow run here](https://github.com/huggingface/trl/actions/runs/13567545048).
2,969
475
HuggingFaceDocBuilderDev
2025-02-27T13:32:47
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2969). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,969
476
qgallouedec
2025-02-26T20:44:38
Yes, the solution is just to rename the argument :)
2,968
477
ErikKankaTrea
2025-02-27T03:36:14
Thanks!!!
2,968
478
HuggingFaceDocBuilderDev
2025-02-26T09:54:36
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2966). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,966
479
jojo23333
2025-02-27T04:06:43
Hi @qgallouedec, thanks for your great contribution! In this [line](https://github.com/huggingface/trl/blob/019fc6dbaa03b888f9d5c1845f0f690da8ed310c/trl/trainer/grpo_trainer.py#L752), I wonder why we don't get the logprobs when sampling instead of doing the inference again? vLLM should also support returning the logprob values.
2,966
480
qgallouedec
2025-02-27T08:41:32
Yes, but we need the gradient, and the logprobs returned by vLLM are just the values.
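A hedged sketch of why the extra forward pass is needed: the per-token log-probs have to be recomputed with the trainable policy so gradients can flow, whereas vLLM's returned logprobs are detached numbers. Function and variable names here are illustrative, not TRL's exact code:

```python
import torch
import torch.nn.functional as F


def per_token_logps(model, input_ids, attention_mask):
    # Forward pass through the trainable policy keeps the computation graph.
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits[:, :-1]
    labels = input_ids[:, 1:]
    logps = F.log_softmax(logits, dim=-1)
    # Log-prob of each generated token, differentiable w.r.t. the model parameters.
    return torch.gather(logps, dim=2, index=labels.unsqueeze(-1)).squeeze(-1)
```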
2,966
481
zetian1025
2025-03-03T11:07:39
Same issue. The problem seems to be related to DeepSpeed stage 3 (using stage 2 avoids the problem).
2,965
482
Sampson1107
2025-03-12T11:56:54
Same issue, same stage 3.
2,965
483
yeruoforever
2025-02-26T14:05:44
`TypeError: '>=' not supported between instances of 'list' and 'tuple'`
2,963
484
qgallouedec
2025-02-26T16:21:37
Thanks a lot @jamesbraza
2,963
485
qgallouedec
2025-02-26T16:24:32
by the way, is this change needed as well? https://github.com/huggingface/trl/pull/2871
2,963
486
jamesbraza
2025-02-26T18:55:39
> by the way, is this change needed as well? #2871

Yes, testing it now. Clearly I didn't test this PR previously, as @yeruoforever reported it had a `TypeError`, haha, my bad.
2,963
487
jamesbraza
2025-02-27T07:56:10
Hi @qgallouedec, I have completed my validations; this PR is ready for merge. I had to also pull in:

- https://github.com/huggingface/trl/pull/2871 (thanks for sharing it)
- I hit https://github.com/huggingface/trl/issues/2953, and to fix it, I changed [this line](https://github.com/huggingface/trl/blob/019fc6dbaa03b888f9d5c1845f0f690da8ed310c/trl/trainer/grpo_trainer.py#L752) to `with torch.no_grad()` instead of `with torch.inference_mode()`
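A small illustration (separate from the PR itself) of why that swap matters: tensors created under `torch.inference_mode()` can never participate in autograd-tracked operations later, while `torch.no_grad()` only disables gradient recording for the enclosed ops:

```python
import torch

x = torch.ones(3, requires_grad=True)

with torch.no_grad():
    z = 2 * x  # ordinary tensor with requires_grad=False

with torch.inference_mode():
    y = 2 * x  # "inference tensor"

(z + x).sum().backward()    # fine: gradient still flows through x
# (y + x).sum().backward()  # would raise: inference tensors cannot be saved for backward
```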
2,963
488
qgallouedec
2025-02-27T08:51:54
Thanks, but I can't see the above-mentioned changes. Did you push them?
2,963
489
jamesbraza
2025-02-27T19:26:59
> Thanks, but I can't see the above-mentioned changes. Did you push them?

The other ones I mentioned weren't related to this PR, so they're not here. They are all here: https://github.com/Future-House/trl/tree/working-grpo-2025-02-27
2,963
490
qgallouedec
2025-03-04T15:46:55
Thanks! Just commit the suggestions and we are good to merge. Usually [allowing maintainer to edit the PR](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork) makes things easier for us.
2,963
491
jamesbraza
2025-03-04T17:54:28
> Usually [allowing maintainer to edit the PR](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork) makes things easier for us.

Yeah, I always do, but for some reason that checkbox isn't present in this PR's right panel :/

<img width="340" alt="screenshot of right panel" src="https://github.com/user-attachments/assets/63754c9c-f88e-4117-b6f1-5bdaa4935c4d" />

Regardless, the PR is ready for review again.
2,963
492
qgallouedec
2025-03-05T13:58:44
Thanks!
2,963
493
HuggingFaceDocBuilderDev
2025-03-05T14:03:36
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2963). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,963
494
gengzijun
2025-02-26T07:11:19
I'm having the same issue. Do you know how to fix this now?
2,962
495
nopepper
2025-02-26T13:50:22
I've noticed a similar issue. DeepSpeed runs will eventually fail catastrophically and the reward plummets to 0. DDP doesn't work anymore when using `vLLM` and just hangs on the first step.
2,960
496
qgallouedec
2025-02-26T14:14:48
You should set num_processes to 2 if you have 2 GPUs. Also, to get comparable results, you must ensure that the effective batch size (num GPUs × per-device batch size × grad accum) is the same. I'm not sure it will solve your problem; let me know.
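A worked example of the effective-batch-size rule of thumb (illustrative numbers only):

```python
# Keep num_gpus * per_device_train_batch_size * gradient_accumulation_steps constant
# when changing the number of GPUs, so runs remain comparable.
num_gpus = 2
per_device_train_batch_size = 4
gradient_accumulation_steps = 8
effective_batch_size = num_gpus * per_device_train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 64 -> a single-GPU run would need grad_accum=16 to match
```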
2,960
497
Kfkcome
2025-02-26T15:29:28
> You should set num_processes to 2 if you have 2 GPUs. Also, to get comparable results, you must ensure that the effective batch size (num GPUs × per-device batch size × grad accum) is the same. I'm not sure it will solve your problem; let me know.

If I set num_processes = 1, the first GPU handles training and the second GPU handles sampling (vLLM). I used the same args and only changed `export CUDA_VISIBLE_DEVICES=0` to `export CUDA_VISIBLE_DEVICES=0,2`. I also ran another test with the same args on one GPU, and there the format reward increases very fast, just like in my earlier test.

![Image](https://github.com/user-attachments/assets/5ff946cc-b1a3-4608-9efa-6688c5078039)
2,960
498
I-l-l-I
2025-03-12T11:22:54
@Kfkcome Something similar happened to me. There may be a communication problem between the GPUs. I solved it by replacing

```python
llm_model.load_weights(state_dict.items())
```

in `GRPOTrainer._move_model_to_vllm` with

```python
llm_device = next(llm_model.parameters()).device
for name, param in state_dict.items():
    weight = param.to('cpu').to(llm_device)
    llm_model.load_weights(weights=[(name, weight)])
    del weight
```
2,960
499
HuggingFaceDocBuilderDev
2025-02-25T13:18:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2956). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,956
500
qgallouedec
2025-02-25T13:35:22
Thanks for this great work!! It seems very close to GRPO; can you summarize the key differences to make the reviewing a bit easier for me?
2,955
501
liziniu
2025-02-25T15:13:06
Hi, ReMax differs from GRPO in two key aspects: baseline estimation and advantage calculation.

## Key Conceptual Differences

**Baseline estimation:**

- GRPO uses the averaged empirical reward as the baseline value
- ReMax simply uses the reward value of a greedy-decoding completion as the baseline

**Advantage calculation:**

- GRPO calculates the grouped mean and standard deviation for normalization
- ReMax does not require this normalization step

## Implementation Details

The implementation of `remax_config.py` is basically the same as `grpo_config.py`, with modifications primarily in the trainer code's (`remax_trainer.py`) `_generate_and_score_completions` function (lines 690-880):

### Key Modifications

1. **Lines 690-760:** Modified generation to incorporate greedy decoding for baseline estimation
   - Sampling parameters for vLLM/HF generation slightly changed to accommodate this
2. **Lines 760-808:** Minimal changes to existing code
3. **Lines 810-860:** Added calculation of rewards for greedy completions
4. **Lines 860-870:** Direct advantage calculation without additional operations like gathering

### Additional Changes

- **`__init__` method:**
  - Changed vLLM sampling parameters by setting `n = 1`
  - This preserves the ability to generate multiple completions, since prompts are repeated in lines 690-760
- **`compute_loss` method (line 951):**
  - Added `dim=1` when calculating the averaged loss
  - The loss is first normalized across timesteps, then across different batches
  - This implementation follows the description in the paper

I also provide an introduction to ReMax in the [docs](https://github.com/huggingface/trl/pull/2955/commits/c0fcdd350103d0f22a17645f44de8cf41719cd06?short_path=15a860f#diff-15a860fec3744a3eecf254dff5060ee5d30d3ec245dbc8dfad7417bcc3bc513b). If you have additional questions, please feel free to let me know.
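A numeric illustration (made-up rewards, not the PR's code) of the baseline/advantage difference summarized above:

```python
import torch

rewards = torch.tensor([1.0, 0.0, 0.5, 1.0])  # rewards of sampled completions for one prompt
greedy_reward = 0.5                           # reward of the greedy-decoded completion

# GRPO: group-normalized advantage (mean/std over the sampled group).
grpo_adv = (rewards - rewards.mean()) / (rewards.std() + 1e-4)  # ~[ 0.78, -1.31, -0.26,  0.78]

# ReMax: subtract the greedy-decoding baseline, no normalization.
remax_adv = rewards - greedy_reward                             #  [ 0.5, -0.5,  0.0,  0.5]
```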
2,955
502
qgallouedec
2025-02-27T11:07:37
Thanks! Can you try to integrate the very latest changes in GRPO?
2,955
503
liziniu
2025-03-03T08:27:12
Hi, I've integrated the latest changes from GRPO. Below is a summary of the updates:

- Lines 736-802: Modified the sampling strategy for ReMax to incorporate greedy decoding, which improves baseline estimation for ReMax.
- Lines 857-906: Added reward calculations for greedy completions. These rewards will be used to compute the advantage function.
- Lines 908-915: Implemented a customized advantage calculation specifically tailored for ReMax.

Let me know if you have any questions or need further details!
2,955
504
kashif
2025-03-03T08:31:55
Currently, the ReMax trainer file is a copy of the ReMax config file... is that a mistake?
2,955
505
liziniu
2025-03-03T11:44:48
Hi @kashif, thank you for pointing that out! It was my mistake to copy the wrong content. I’ve now fixed it.
2,955
506
liziniu
2025-03-12T02:36:54
Hi @qgallouedec Could you please review the code when you have a moment? If you need any additional information to assist with the review, I’d be happy to provide it. Thanks in advance!
2,955
507
HuggingFaceDocBuilderDev
2025-02-25T10:14:24
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2954). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,954
508
qgallouedec
2025-02-25T10:41:41
Nice! When it works, can you also add a few lines in https://huggingface.co/docs/trl/en/reducing_memory_usage? 🙏
2,954
509
kashif
2025-02-25T13:03:41
sure!
2,954
510
casper-hansen
2025-03-03T16:47:10
Looks like this PR was parked for now. @kashif, did the implementation not work? This is super relevant to me if I am going to use TRL for training long-context reasoners.
2,954
511