Columns: user (string, 3-28 chars) · created_at (timestamp[us], 2020-04-01 09:48:12 to 2025-03-30 02:12:16) · body (string, 1-173k chars) · issue_number (int64, 1-3.18k) · __index_level_0__ (int64, 0-8.59k)
qgallouedec
2024-10-24T18:27:09
PPO expects `reward_model` to be a model (a torch module), not a function.
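For illustration, a minimal sketch of the distinction (the checkpoint name is a placeholder, not from this thread):

```python
import torch
from transformers import AutoModelForSequenceClassification

# What PPO expects: an nn.Module with a scalar head, scored via a forward
# pass. "my-org/reward-model" is a hypothetical checkpoint.
reward_model = AutoModelForSequenceClassification.from_pretrained(
    "my-org/reward-model", num_labels=1
)
assert isinstance(reward_model, torch.nn.Module)

# What PPO does NOT accept: a plain Python callable returning scores.
def reward_fn(texts):
    return [float(len(t)) for t in texts]
```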
2,273
700
HuggingFaceDocBuilderDev
2024-10-24T15:52:48
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2272). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,272
701
HuggingFaceDocBuilderDev
2024-10-24T10:06:21
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2270). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,270
702
qgallouedec
2024-10-25T16:12:25
Some tests are failing due to PairRM loading; this is fixed in #2276, so you can safely ignore them.
2,270
703
edbeeching
2024-10-28T09:30:36
Hi @cutecharmingkid, unfortunately the answer is not trivial. Does the domain of your task match the tasks used to fine-tune the base vision-instruct model? I would imagine 10k-100k examples would be enough, but I have not tested extensively.
2,269
704
qgallouedec
2024-10-25T16:02:36
Thanks for reporting, please share your system info
2,268
705
Isaaclgz
2024-10-27T05:14:50
> Thanks for reporting, please share your system info

Thanks for looking into this!

System:
- Debian 11
- Python 3.10
- 1x A100-80GB
- NVIDIA driver 550.90.07, CUDA 12.4 (running on a GCP CE instance based on the c0-deeplearning-common-cu123-v20240922-debian-11-py310 image)

Env:
- torch==2.4.0
- transformers==4.44.0
- trl==0.11.3
- flash-attn==2.6.3
- accelerate==1.0.1
2,268
706
chenyang399
2024-11-08T04:40:19
Is there any chance that we can run the KTO script with a 24 GB GPU?
2,268
707
qgallouedec
2024-10-24T18:10:55
Thanks @cameronphchen!
2,266
708
HuggingFaceDocBuilderDev
2024-10-24T18:15:16
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2266). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,266
709
qgallouedec
2024-10-23T08:12:32
Thanks for reporting, it should have been fixed with #2261. Can you confirm?
2,264
710
ArcherShirou
2024-10-24T02:28:19
Thank you for your response. After updating the code and testing it, everything is running smoothly now. For the 14B and 72B models, quantization is necessary when using the 0.5B reward model. However, if I switch to the 70B or 72B reward model, I still encounter out-of-memory (OOM) issues midway, even with quantization and LoRA applied. Do you have any good solutions for this?
2,264
711
qgallouedec
2024-10-24T18:34:55
You can try reducing the generation length. Closing the issue, as the initial question has been answered.
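For instance, a hedged sketch of that knob (`OnlineDPOConfig` is just one config that exposes it; the exact trainer used in this thread isn't specified):

```python
from trl import OnlineDPOConfig

# Shorter completions shrink the KV-cache and activation footprint during
# the generation phase, which is often where OOM hits with large models.
training_args = OnlineDPOConfig(
    output_dir="online-dpo",
    max_new_tokens=256,  # e.g. reduced from 2048
)
```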
2,264
712
HuggingFaceDocBuilderDev
2024-10-24T13:49:27
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2263). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,263
713
qgallouedec
2024-11-23T12:50:57
Looks good overall. Feel free to request a final review from me when you think it's ready to be merged
2,263
714
yiyepiaoling0715
2024-12-25T02:48:30
Same question, has it been solved?
2,262
715
HuggingFaceDocBuilderDev
2024-10-21T16:47:46
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2261). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,261
716
qgallouedec
2024-10-21T15:04:46
Thanks @cameronphchen!
2,259
717
HuggingFaceDocBuilderDev
2024-10-21T15:08:51
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2259). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,259
718
qgallouedec
2024-10-24T13:01:30
Thanks for the PR! However, I was actually considering simply removing this bot. In my opinion, it's fine to leave issues open for extended periods. I generally review all the issues and follow up when more information is needed and there hasn't been any activity for a while. From my experience, this bot tends to close issues that should remain open more often than it helps track active ones. See #1949 #1956. What's more, the bot doesn't seem to have been working for a while, and nobody here seems to miss it. What do you think @lewtun @kashif?
2,258
719
Ananya54321
2024-10-25T02:02:26
Ohh that makes sense! Thank you for responding!
2,258
720
lewtun
2024-10-28T20:07:28
Yes I agree, let's disable the bot since it's more of a nuisance than a help
2,258
721
qgallouedec
2024-11-11T23:16:04
Closed as a consequence of #2300
2,258
722
SinclairCoder
2024-10-21T18:07:30
I solved it with torchrun launch.
2,257
723
Qinghao-Hu
2024-10-22T01:37:47
same problem
2,257
724
SinclairCoder
2024-10-22T11:50:10
@Qinghao-Hu launch it with torchrun if it is also a multi-GPU training case.
2,257
725
innat
2024-10-24T07:31:44
What does this mean? ([source](https://huggingface.co/docs/accelerate/usage_guides/big_modeling))

> Multiple GPUs, or “model parallelism”, can be utilized but only one GPU will be active at any given moment. This forces the GPU to wait for the previous GPU to send it the output. You should launch your script normally with Python instead of other tools like torchrun and accelerate launch.

> You may also be interested in pipeline parallelism which utilizes all available GPUs at once, instead of only having one GPU active at a time. This approach is less flexible though. For more details, refer to the [Memory-efficient pipeline parallelism](https://huggingface.co/docs/accelerate/usage_guides/distributed_inference#memory-efficient-pipeline-parallelism-experimental) guide.
2,256
726
gaetanlop
2024-10-22T00:27:31
Hey @mertege, adding the possibility to store teacher logits in the `GKDTrainer` is only useful when setting the parameter `lmbda` to 0 (which corresponds to standard KD). The whole point of GKD is to enable on-policy KD (KD on sequences generated by the student), which means that we cannot store teacher logits offline during a pre-processing step.
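A schematic sketch of why (not the actual `GKDTrainer` code; `lmbda`, `student`, and `teacher` follow the paper's naming):

```python
import random

def training_step(batch, student, teacher, lmbda):
    # With probability lmbda the batch is re-generated by the student at
    # this very step, so the sequences don't exist before training begins
    # and the teacher's logits for them cannot be cached offline.
    if random.random() < lmbda:
        sequences = student.generate(batch["prompt_ids"])  # on-policy
    else:
        sequences = batch["input_ids"]  # offline: could be precomputed
    teacher_logits = teacher(sequences).logits  # must be computed online
    return teacher_logits
```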
2,255
727
mertege
2024-10-22T07:03:50
Thanks for reply @gaetanlop.
2,255
728
qgallouedec
2024-10-21T16:50:10
> all latest

Can you run `trl env` please?
2,254
729
qgallouedec
2024-10-21T16:50:37
Also please provide the full traceback
2,254
730
saxenarohit
2024-10-21T17:42:36
Thanks

```
- Platform: Linux-5.4.0-187-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- PyTorch version: 2.2.0a0+81ea7a4
- CUDA device(s): NVIDIA A100-SXM4-80GB, NVIDIA A100-SXM4-80GB
- Transformers version: 4.45.2
- Accelerate version: 1.0.1
- Accelerate config: not found
- Datasets version: 3.0.1
- HF Hub version: 0.26.0
- TRL version: 0.12.0.dev0
- bitsandbytes version: 0.43.1
- DeepSpeed version: not installed
- Diffusers version: not installed
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: not installed
- PEFT version: 0.13.
```

There is no traceback; it's a request to check for a possible bug. During evaluation, the collate_fn sets `labels = batch["input_ids"].clone()`; won't the labels then possibly contain the gold answer from the input_ids during evaluation?
2,254
731
edbeeching
2024-10-23T08:45:08
Hi @saxenarohit. This is normal, we are just looking at the eval loss. I think you might be thinking of a generative eval, where given a prompt, `model.generate` is used to autoregressively compute an answer, which can then be compared to the ground truth "gold answer". I will close the issue, but feel free to reopen if needed.
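A sketch of the distinction drawn above (the small instruct model is an arbitrary choice for illustration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Eval loss (what the trainer reports): a teacher-forced forward pass in
# which input_ids double as labels, so the gold answer is legitimately
# present in the inputs.
batch = tokenizer("Q: What is 2+2? A: 4", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"].clone()).loss

# Generative eval (a different procedure): the model must produce the
# answer itself before it is compared to the gold answer.
prompt = tokenizer("Q: What is 2+2? A:", return_tensors="pt")
generated = model.generate(**prompt, max_new_tokens=8)
print(tokenizer.decode(generated[0]))
```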
2,254
732
qgallouedec
2024-10-19T17:13:40
This is because you need to provide a split dataset (containing both a training split and an evaluation split) when you use TRL scripts. I realize the following limitations:

- when you're not evaluating, you still need to have a split dataset
- you may want the script to split the dataset when necessary

This could be solved by adding something like:

```python
if training_args.eval_strategy != "none" and script_args.dataset_test_split not in dataset:
    dataset = dataset[script_args.dataset_train_split].train_test_split(test_size=0.05)

...

trainer = AnyTrainer(
    ...
    train_dataset=dataset[script_args.dataset_train_split],
    eval_dataset=dataset[script_args.dataset_test_split] if training_args.eval_strategy != "none" else None,
    ...
)
```

WDYT @kashif @lewtun? Is this situation common enough to justify this addition?
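For reference, a minimal check that the `datasets` split call assumed above behaves as expected (the dataset name is just an example):

```python
from datasets import load_dataset

dataset = load_dataset("trl-lib/ultrafeedback-prompt", split="train")
dataset = dataset.train_test_split(test_size=0.05)
print(dataset)  # DatasetDict with 'train' and 'test' splits
```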
2,253
733
lewtun
2024-10-24T09:34:00
I don't think we should automatically generate a test split for the user (it's a bit too much magic), but I would be in favour of having the logic to set `eval_dataset` to `None` if no eval strategy is provided
2,253
734
qgallouedec
2024-10-24T09:36:01
> I don't think we should automatically generate a test split for the user (it's a bit too much magic), but I would be in favour of having the logic to set `eval_dataset` to `None` if no eval strategy is provided

Sounds reasonable.
2,253
735
HuggingFaceDocBuilderDev
2024-10-18T22:38:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2252). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,252
736
qgallouedec
2024-10-20T13:52:17
Thanks for the PR! Can you just run `make precommit`?
2,252
737
ngxson
2024-10-20T22:25:27
@qgallouedec Thanks! Should be good now
2,252
738
qgallouedec
2024-10-21T07:35:04
It seems like this case occurs twice in our tests:

```
FAILED tests/test_dataset_formatting.py::SetupChatFormatTestCase::test_example_with_setup_model - ValueError: Chat template is already added to the tokenizer. If you want to overwrite it, please set it to None
FAILED tests/test_dataset_formatting.py::SetupChatFormatTestCase::test_setup_chat_format - ValueError: Chat template is already added to the tokenizer. If you want to overwrite it, please set it to None
```

Can you update the examples so that they use this function correctly?
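A hedged sketch of the corrected usage implied by the error message (the model choice is arbitrary; Qwen tokenizers ship a chat template, which is exactly the situation the error describes):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import setup_chat_format

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

# setup_chat_format refuses to overwrite an existing chat template, so
# clear it explicitly first if the tokenizer already has one.
tokenizer.chat_template = None
model, tokenizer = setup_chat_format(model, tokenizer)
```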
2,252
739
qgallouedec
2024-10-22T10:39:33
Lgtm, thanks @ngxson
2,252
740
ngxson
2024-10-22T10:47:07
Thanks! I don't have merge permission, so please merge when you want 🤗
2,252
741
kashif
2024-10-21T11:04:55
@gaetanlop can we use the `pad` helpers?

```py
# Use the pad helper to handle padding
padded_query_responses = pad(query_responses, padding_value=pad_token_id, padding_side="right")
padded_logitss = pad(logitss, padding_value=0, padding_side="right")
```
2,251
742
gaetanlop
2024-10-21T15:05:37
@kashif, ~~the `pad` function expects the tensor to have no leading dimension corresponding to the batch size.~~ Here is an example `query_responses`:

```python
query_responses = [
    torch.randint(vocab_size, (bs, seq_length1)),
    torch.randint(vocab_size, (bs, seq_length2)),
    torch.randint(vocab_size, (remaining_samples, seq_length3)),
]
```

~~Using the `pad` function as it is would require the following change before passing the `query_responses` to the `pad` function:~~

```python
query_responses = [query_reps[i] for query_reps in query_responses for i in range(query_reps.size(0))]
```

~~We can also change the pad function? What do you prefer?~~

After looking more closely at the pad function, you are right: we can use it as it is, it just requires reshaping the tensor afterwards. I am going to make the update, thanks for pointing it out.
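A small self-contained sketch of the flatten-then-pad step being discussed (the import path for `pad` is assumed from TRL's trainer utils):

```python
import torch
from trl.trainer.utils import pad  # assumed import path

vocab_size, bs = 100, 4
query_responses = [
    torch.randint(vocab_size, (bs, 7)),  # (bs, seq_length1)
    torch.randint(vocab_size, (bs, 9)),  # (bs, seq_length2)
    torch.randint(vocab_size, (2, 5)),   # (remaining_samples, seq_length3)
]

# Flatten the leading batch dimension into a list of 1-D sequences, then
# pad them to a common length; the result can be re-chunked afterwards.
flat = [row for batch in query_responses for row in batch]
padded = pad(flat, padding_value=0, padding_side="right")
print(padded.shape)  # torch.Size([10, 9])
```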
2,251
743
HuggingFaceDocBuilderDev
2024-10-21T16:26:53
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2251). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,251
744
gaetanlop
2024-10-21T16:34:19
This won't work @kashif, it still requires reshaping the tensors
2,251
745
kashif
2024-10-21T16:35:13
ah damn! my bad sorry!
2,251
746
gaetanlop
2024-10-21T16:49:21
No problem, this should be fixed now
2,251
747
JiahuiSun
2024-10-27T01:37:26
I also ran into the same issue. I use the official example script, dpo_online.py, to train a 75B LLM with a 75B reward model. Even with 60x8 H100 GPUs, the problem still happens. Any help, please?
2,250
748
lewtun
2024-10-29T05:53:16
Hello @hlnchen would you mind sharing a reproducible example that uses the `unwrap_model_for_generation()` method in a simple training loop that simulates your application?
2,250
749
KAKSIS
2024-11-08T06:46:37
I encountered a similar issue while training a 72B model on an 8x H100 (80G) setup. I'm using the Hugging Face online DPO trainer scripts from [this link](https://huggingface.co/docs/trl/main/en/online_dpo_trainer). To reduce GPU memory usage, I've substituted the reward model with a random judge, so no reward model is loaded in GPU memory. However, when running the code in zero3-offload mode, I encounter a CUDA out-of-memory (OOM) error at the `unwrap_model_for_generation` step, specifically in trl.trainer.online_dpo_trainer on line 395. It seems that when executing this command, each process/GPU gathers the parameters distributed across the other processes, resulting in OOM. In debug mode, I can observe that the memory usage of each GPU jumps from 20GB to 80GB at that point. Does anyone know the actual function of `unwrap_model_for_generation` in zero3 mode?

Here are my scripts:

```python
from typing import List, Union

from datasets import load_dataset
from transformers import AutoTokenizer
from trl import OnlineDPOConfig, OnlineDPOTrainer


class TestJudge:
    def judge(self, prompts: List[str], completions: List[List[str]], return_scores=False) -> List[Union[int, float]]:
        return [0] * len(prompts)


model_path = "Qwen2.5-72B-Instruct"  # path to 72B model
judge = TestJudge()
data_path = "trl-lib/ultrafeedback-prompt"  # path to dataset
tokenizer = AutoTokenizer.from_pretrained(model_path, local_files_only=True)
train_dataset = load_dataset(data_path, split="train")
training_args = OnlineDPOConfig(
    output_dir="online-dpo",
    logging_steps=2,
    bf16=True,
    fp16=False,
    per_device_train_batch_size=1,
    max_new_tokens=2048,
    num_train_epochs=5,
    gradient_accumulation_steps=2,
    save_only_model=True,
    save_steps=2000,
    save_total_limit=2,
)
trainer = OnlineDPOTrainer(
    model=model_path,
    ref_model=model_path,
    judge=judge,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()

# In OnlineDPOTrainer.__init__:
# from transformers import AutoModelForCausalLM
# ref_model = AutoModelForCausalLM.from_pretrained(model, local_files_only=True)
# model = AutoModelForCausalLM.from_pretrained(model, local_files_only=True)
```
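For context, a hedged sketch of where the gather happens (the import path is assumed; this mirrors the generation step inside the trainer rather than reproducing it exactly, and uses a small model for illustration):

```python
from accelerate import Accelerator
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl.models.utils import unwrap_model_for_generation  # assumed path

accelerator = Accelerator()
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = accelerator.prepare(model)

input_ids = tokenizer("Hello", return_tensors="pt").input_ids.to(accelerator.device)

# Under ZeRO-3, entering this context gathers the full, unsharded weights
# on every rank so that .generate() can run; that matches the 20GB -> 80GB
# per-GPU jump described above.
with unwrap_model_for_generation(model, accelerator) as unwrapped_model:
    output_ids = unwrapped_model.generate(input_ids, max_new_tokens=16)
```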
2,250
750
Namco0816
2024-11-13T07:38:02
It seems that I encountered the same issue. I also use a dummy reward model which does not take any GPU memory. The training goes smoothly at the early stage; however, after monitoring it for a couple of iterations, the GPU memory usage keeps growing, and at a specific iteration (in my case, 15% of total training steps for 8 GPUs, 7% of total training steps for 4 GPUs), the GPU OOMs when performing the unwrap generation. I've tried to delete as many variables as possible after each iteration and also empty the caches, but it doesn't work at all.
2,250
751
yiyepiaoling0715
2024-12-30T04:57:16
Same question, waiting for a method to resolve it.
2,250
752
Mefisto04
2024-10-21T19:31:15
Hey @qgallouedec, I have made a PR for issue #2237; please review all the changes that I have made.
2,249
753
HuggingFaceDocBuilderDev
2024-10-24T13:08:26
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2249). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,249
754
qgallouedec
2024-10-24T13:22:57
Thanks for helping improve this, @Mefisto04. Can you make sure to run `make precommit`? A few suggestions, but it all looks good to me.
2,249
755
Mefisto04
2024-10-24T18:37:47
Hey @qgallouedec, I have committed all the changes that you suggested; please review.
2,249
756
HuggingFaceDocBuilderDev
2024-10-18T14:23:02
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2248). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,248
757
qgallouedec
2024-10-18T10:18:01
Thanks for reporting, it's about to be fixed: #2246
2,247
758
ArcherShirou
2024-10-18T10:54:51
Thanks, it works.
2,247
759
HuggingFaceDocBuilderDev
2024-10-18T09:31:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2246). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,246
760
kashif
2024-10-24T08:31:33
The release is out.
2,245
761
HuggingFaceDocBuilderDev
2024-10-24T08:35:33
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2245). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,245
762
HuggingFaceDocBuilderDev
2024-10-17T11:44:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2244). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,244
763
HuggingFaceDocBuilderDev
2024-10-16T15:58:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2243). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,243
764
qgallouedec
2024-10-17T07:13:24
@kashif can you also add an example in the online dpo documentation? And a test?
2,243
765
kashif
2024-10-17T07:19:39
Working on the test, thanks!
2,243
766
qgallouedec
2024-10-21T15:17:08
I'm just updating the doc and running some tests
2,243
767
qgallouedec
2024-10-22T11:23:13
```
# 8 GPUs
accelerate launch examples/scripts/dpo_online.py \
    --model_name_or_path trl-lib/pythia-1b-deduped-tldr-sft \
    --judge pairrm \
    --dataset_name trl-lib/tldr \
    --learning_rate 5.0e-7 \
    --logging_steps 25 \
    --output_dir pythia-1b-tldr-online-dpo-reward \
    --warmup_ratio 0.1
```

https://wandb.ai/huggingface/huggingface/runs/usqmcs3e
2,243
768
qgallouedec
2024-10-23T15:44:00
https://wandb.ai/huggingface/huggingface/runs/mq66mdbt

```
accelerate launch examples/scripts/dpo_online.py \
    --model_name_or_path Qwen/Qwen2.5-0.5B-Instruct \
    --judge pair_rm \
    --dataset_name trl-lib/ultrafeedback-prompt \
    --learning_rate 5.0e-7 \
    --logging_steps 25 \
    --output_dir Qwen2.5-0.5B-Online-DPO-PairRM \
    --warmup_ratio 0.1
```
2,243
769
qgallouedec
2024-10-18T13:49:17
You can use it, feel free to report if it causes any issues.
2,242
770
zwhe99
2024-10-20T05:00:09
Thanks for the response!
2,242
771
coding-famer
2024-10-17T23:41:52
I'm interested in working on this!
2,241
772
qgallouedec
2024-10-18T13:49:57
Nice! Thanks @coding-famer. Feel free to open a PR then and request any help if needed
2,241
773
August-murr
2024-10-25T10:28:42
@lewtun After reading the paper, I noticed that the DPO checkpoints were combined with a different model rather than the reference model used in DPO training. So, I added an option in my PR to set an external model for merging instead of the reference model.
2,241
774
coding-famer
2024-10-25T18:01:36
Hi @August-murr, happy to see that you have already worked it out! However, I noticed that your implementation only allows merging models on disk after training, which the user could do with mergekit directly after training. I think the point here is to merge the model during the training steps/epochs?
2,241
775
August-murr
2024-10-25T18:41:13
@coding-famer The callback has an optional parameter called `merge_at_every_checkpoint`, which merges the saved checkpoint at either every step or at the end of each epoch during training.
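A hedged sketch of how this might be wired up (the names follow TRL's eventual mergekit integration; treat them as assumptions rather than this PR's final API):

```python
from trl import MergeModelCallback
from trl.mergekit_utils import MergeConfig

# merge_at_every_checkpoint merges each saved checkpoint during training
# instead of only merging once at the end.
merge_callback = MergeModelCallback(
    MergeConfig("linear"),
    merge_at_every_checkpoint=True,
)
# Then pass it to a trainer, e.g. DPOTrainer(..., callbacks=[merge_callback]).
```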
2,241
776
coding-famer
2024-10-25T19:21:02
> @coding-famer The callback has an optional parameter called `merge_at_every_checkpoint`, which merges the saved checkpoint at either every step or at the end of each epoch during training.

Sounds great!
2,241
777
HuggingFaceDocBuilderDev
2024-10-17T08:30:51
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2239). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,239
778
qgallouedec
2024-10-17T09:03:46
Thanks @August-murr!
2,239
779
qgallouedec
2024-10-18T14:21:33
Thanks for pointing this out, #2248 will fix it
2,238
780
reihig-ut
2024-10-24T05:07:42
Thank you for your PR! I retried the reproduction process on branch `kto-conv-data-support` and got this error:

```
/home/hoge/miniconda3/envs/run_kto/lib/python3.11/site-packages/trl/trainer/kto_trainer.py:479: UserWarning: When using DPODataCollatorWithPadding, you should set `max_length` in the KTOTrainer's init it will be set to `512` by default, but you should do it yourself in the future.
  warnings.warn(
/home/hoge/miniconda3/envs/run_kto/lib/python3.11/site-packages/trl/trainer/kto_trainer.py:489: UserWarning: When using DPODataCollatorWithPadding, you should set `max_prompt_length` in the KTOTrainer's init it will be set to `128` by default, but you should do it yourself in the future.
  warnings.warn(
/home/hoge/miniconda3/envs/run_kto/lib/python3.11/site-packages/trl/trainer/kto_trainer.py:519: UserWarning: When using DPODataCollatorWithPadding, you should set `remove_unused_columns=False` in your KTOConfig we have set it for you, but you should do it yourself in the future.
  warnings.warn(
Traceback (most recent call last):
  File "/home/hoge/project/test/trl/examples/scripts/kto.py", line 97, in <module>
    trainer = KTOTrainer(
              ^^^^^^^^^^^
  File "/home/hoge/miniconda3/envs/run_kto/lib/python3.11/site-packages/trl/trainer/kto_trainer.py", line 721, in __init__
    super().__init__(
TypeError: Trainer.__init__() got an unexpected keyword argument 'processing_class'
```
2,238
781
benchay1999
2024-10-24T07:47:50
Changing `processing_class` to `tokenizer` worked for me.
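A hedged illustration of that workaround (model and dataset are placeholders; `processing_class` only exists on newer transformers releases, so older ones want `tokenizer`):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
train_dataset = load_dataset("trl-lib/kto-mix-14k", split="train")

trainer = KTOTrainer(
    model=model,
    args=KTOConfig(output_dir="kto-model"),
    train_dataset=train_dataset,
    tokenizer=tokenizer,           # works on older transformers
    # processing_class=tokenizer,  # the newer keyword that raised TypeError
)
```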
2,238
782
kashif
2024-10-24T08:44:08
Should be fixed now in main with the latest transformers release.
2,238
783
chenyang399
2024-11-08T04:35:47
How much memory does the KTO script need? Does it require more than 24 GB of GPU memory? I used a 4090 with 24 GB of memory and it failed.
2,238
784
Mefisto04
2024-10-16T19:16:43
Hey @qgallouedec, please review this and assign me this issue.
2,237
785
qgallouedec
2024-10-18T17:23:07
Hi, thanks for reporting @Mefisto04. Feel free to open a PR if you can improve it.
2,237
786
Mefisto04
2024-10-21T19:28:56
Hey @qgallouedec, I have made a PR (#2249); please review it.
2,237
787
qgallouedec
2024-10-25T16:04:41
Closed via #2249
2,237
788
HuggingFaceDocBuilderDev
2024-10-21T09:44:29
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2236). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,236
789
edbeeching
2024-10-24T06:39:45
Hi @sergiopaniego, thanks for implementing this. Could you run `make precommit` to format the code so the quality tests pass? (You may have to `pip install pre-commit`.) We are discussing internally how feasible it is to harmonize this script with the other VLM training scripts; I will let you know when we reach a conclusion.
2,236
790
sergiopaniego
2024-10-30T09:12:56
Updated! Any updates on the harmonization discussion? I’m happy to make any modifications needed! 😊
2,236
791
mshuffett
2024-11-04T01:57:33
@sergiopaniego so is this working in theory? It's also OOM'ing for me: it needs 50 GB and my A100 only has around 40 GB. Is there a lever I can pull to decrease the memory? Why does it need so much, considering it is doing a LoRA? Is it possible to set this up to train on multiple GPUs?
2,236
792
sergiopaniego
2024-11-17T20:25:35
> @sergiopaniego so is this working in theory? Also OOM'ing for me needs 50 GB and my A100 only has like 40 GB or something. Is there a level I can pull to decrease the memory? Why does it need so much considering it is doing a LORA?
>
> Is it possible to set this up to train on multiple GPUs?

Sorry for the late response @mshuffett. It still needs some polishing. While testing it, it seems like something is still missing from the artifacts shared for the model. You can see more details in the [README](https://github.com/2U1/Molmo-Finetune). For example, since `grad-checkpoint` is disabled, memory consumption increases a lot. It's also not yet merged into the official transformers repo: https://github.com/huggingface/transformers/pull/33962
2,236
793
qgallouedec
2024-10-18T17:18:01
This operation replaces tokens outside the attention mask with token 0. It has no influence on the model output within the attention mask:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pad_token_id = tokenizer.pad_token_id

input_ids = torch.tensor([[pad_token_id, pad_token_id, 1, 2, 3, 4, 5, pad_token_id]])
attention_mask = input_ids != pad_token_id  # [[False, False, True, True, True, True, True, False]]
position_ids = attention_mask.cumsum(1) - attention_mask.long()  # [[0, 0, 0, 1, 2, 3, 4, 5]]

output_wo_mask_fill = model(input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids)

input_ids = torch.masked_fill(input_ids, ~attention_mask, 0)  # [[0, 0, 1, 2, 3, 4, 5, 0]]
output_w_mask_fill = model(input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids)

print(torch.mean(torch.abs(output_wo_mask_fill.logits - output_w_mask_fill.logits), dim=-1))
# [[0.8371, 0.8371, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 3.6457]]
```

This operation is not absolutely necessary, since invalid logits are then masked: https://github.com/huggingface/trl/blob/a67f2143c38d6520be8735463ce715ad5c281db8/trl/trainer/rloo_trainer.py#L413-L415
2,235
794
Chios-C
2024-10-19T05:46:57
Thanks for your great response.
2,235
795
HuggingFaceDocBuilderDev
2024-10-15T10:04:10
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2233). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,233
796
qgallouedec
2024-10-15T08:59:33
Thanks again @DhruvKadam-git. Can you update your branch?
2,232
797
DhruvKadam-git
2024-10-17T07:36:04
I have updated my branch
2,232
798
HuggingFaceDocBuilderDev
2024-10-18T17:26:18
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2232). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,232
799