Columns: user (string, 3–28 chars) · created_at (timestamp[us], 2020-04-01 09:48:12 to 2025-07-30 20:59:07) · body (string, 1–173k chars) · issue_number (int64, 1–3.81k) · __index_level_0__ (int64, 0–11.8k)
kashif
2024-10-08T12:54:59
yes I didn't take that into account
2,172
1,000
kashif
2024-10-08T14:47:10
@qgallouedec fixed the test taking padding into account
2,172
1,001
kashif
2024-10-09T11:06:22
@qgallouedec I have:

```
input_ids[15:]  # 15 first tokens are padding
[1, 518, 25580, 29962, 1724, 338, 2253, 1135, 22769, 29973, 518, 29914, 25580, 29962, 25685, 29889, 29871, 2]
(Pdb) self.tokenizer(self.tokenizer.apply_chat_template(self.examples[0]["messages"], tokenize=False), add_special_tokens=False)
{'input_ids': [1, 518, 25580, 29962, 1724, 338, 2253, 1135, 22769, 29973, 518, 29914, 25580, 29962, 25685, 29889, 29871, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```
2,172
1,002
kashif
2024-10-09T18:33:47
@ruijunfeng can you kindly check with the current refactoring of the datacollator, I have simplified it
2,172
1,003
ruijunfeng
2024-10-10T03:00:31
@kashif Hi there, I still found a small bug in the refactored code. I used the dataset from your unit test and printed out the results like this:

```python
>>> tokenizer.decode(data["input_ids"][0])
'<s><s><s><s><s><s><s><s><s><s><s><s><s><s><s><s>user: What is better than ugly?assistant: Beautiful.</s>'
>>> tokenizer.decode(data["prompts"][0])
'<s><s><s><s><s><s><s>user: What is better than ugly?</s>'
>>> data["labels"][0, -6:]
tensor([ -100, 22137, 29901, 25685, 29889,     2])
>>> tokenizer.decode(data["labels"][0, -5:])
'istant: Beautiful.</s>'
```

It seems the labels mistakenly include part of "assistant: ", and the prompts are missing the "assistant: ". Also, from my understanding, shouldn't the prompts exclude the EOS token?
2,172
1,004
kashif
2024-10-10T06:42:05
just trying to reproduce this on my end, I have as output from the data collator:

```
self.tokenizer.decode(input_ids[0])
'</s></s></s></s></s></s></s></s></s></s></s></s></s></s></s><s> [INST] What is better than ugly? [/INST] Beautiful. </s>'
```

```
self.tokenizer.decode(prompts_input_ids[0])
'</s></s></s></s></s></s><s> [INST] What is better than ugly? [/INST]'
```

and the labels are only set for the completion:

```
labels[0]
tensor([ -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 25685, 29889, 29871, 2])
```

```
self.tokenizer.decode(labels[0][-4:])
'Beautiful. </s>'
```

at which point are you printing the data from?
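The behaviour being checked above is standard completion-only masking: every position before the completion is labelled -100 so the loss covers only the completion tokens. A minimal sketch of that idea, assuming left padding as in the decoded batch above (illustrative only, not the actual TRL collator):

```python
import torch

def mask_non_completion(input_ids: torch.Tensor, completion_start: torch.Tensor) -> torch.Tensor:
    """Label everything before the completion (left padding + prompt) with -100."""
    labels = input_ids.clone()
    positions = torch.arange(input_ids.shape[1], device=input_ids.device).unsqueeze(0)  # (1, seq_len)
    labels[positions < completion_start.unsqueeze(1)] = -100  # mask padding and prompt positions
    return labels

# For the batch printed above, completion_start would point at "Beautiful. </s>",
# so only the tokens [25685, 29889, 29871, 2] keep their labels.
```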
2,172
1,005
ruijunfeng
2024-10-10T10:30:32
@kashif Sorry I used a wrong version of the code. I have tried it again and the refactored code is all good.
2,172
1,006
HuggingFaceDocBuilderDev
2024-10-04T09:02:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2171). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,171
1,007
HuggingFaceDocBuilderDev
2024-10-04T08:12:34
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2170). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,170
1,008
qgallouedec
2024-10-04T08:54:20
Thanks a lot for this detailed report @ruijunfeng This is indeed a critical issue. Are you willing to submit a PR to solve it?
2,169
1,009
ruijunfeng
2024-10-04T11:07:45
> Thanks a lot for this detailed report @ruijunfeng This is indeed a critical issue. Are you willing to submit a PR to solve it?

Hi there, I have submitted a PR to fix this, hope this will help 😊
2,169
1,010
qgallouedec
2024-10-04T07:36:18
Thanks!
2,168
1,011
qgallouedec
2024-10-04T06:06:11
Hi thanks for reporting, can you provide the full system info? See issue template
2,167
1,012
AugustLigh
2024-10-04T06:10:30
> Hi thanks for reporting, can you provide the full system info? See issue template

Of course. Initially I tried to run everything on Windows 11, then on WSL 2, but I get the same error. My setup: CPU i5-9300H, GPU GTX 1050, 16 GB RAM, Python 3.11.
2,167
1,013
August-murr
2024-10-04T10:56:51
Looks like you're running out of VRAM. You could try quantizing the model to 4 bits and use PEFT, but given your hardware setup, that would still not work. I'd recommend using a cloud-based solution like Google Colab, Kaggle, or AWS, where you'll have access to better GPUs.
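As a concrete starting point for the 4-bit + PEFT route mentioned above, here is a minimal sketch; the checkpoint and LoRA settings are illustrative, and even with this an 8 GB-class GPU may still run out of memory on larger models.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "Qwen/Qwen2-0.5B-Instruct"  # illustrative small model

# Quantize the base model to 4-bit NF4 to cut VRAM usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Train only small LoRA adapters instead of the full model.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```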
2,167
1,014
SrikanthChellappa
2024-10-04T07:44:58
This was a misunderstanding on my part. Ignore this.
2,166
1,015
HuggingFaceDocBuilderDev
2024-10-03T15:40:50
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2165). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,165
1,016
yananchen1989
2024-10-03T20:27:45
here I test `Anthropic/hh-rlhf` and `trl-lib/ultrafeedback_binarized` as the `dataset_name`, but neither works (I do not change anything in reward_modeling.py, which is directly cloned from the trl repo).

```
CUDA_VISIBLE_DEVICES=0 python ~/trl/examples/scripts/reward_modeling.py \
    --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
    --dataset_name ${ds} \
    --output_dir Qwen2-0.5B-Reward-LoRA \
    --per_device_train_batch_size 8 \
    --num_train_epochs 1 \
    --gradient_checkpointing True \
    --learning_rate 1.0e-4 \
    --logging_steps 25 \
    --eval_strategy steps \
    --eval_steps 50 \
    --max_length 2048 \
    --use_peft \
    --lora_r 32 \
    --lora_alpha 16
```

> Traceback (most recent call last):
>   File "/workspace/trl/examples/scripts/reward_modeling.py", line 120, in <module>
>     trainer.train()
>   File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2052, in train
>     return inner_training_loop(
>   File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2345, in _inner_training_loop
>     for step, inputs in enumerate(epoch_iterator):
>   File "/usr/local/lib/python3.10/dist-packages/accelerate/data_loader.py", line 550, in __iter__
>     current_batch = next(dataloader_iter)
>   File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 630, in __next__
>     data = self._next_data()
>   File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 674, in _next_data
>     data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
>   File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
>     return self.collate_fn(data)
>   File "/usr/local/lib/python3.10/dist-packages/trl/trainer/utils.py", line 362, in __call__
>     raise ValueError(
> ValueError: The features should include `input_ids_chosen`, `attention_mask_chosen`, `input_ids_rejected` and `attention_mask_rejected`
> 0%| | 0/20100 [00:00<?, ?it/s]
2,164
1,017
yananchen1989
2024-10-03T20:47:00
On this page https://huggingface.co/docs/trl/v0.11.1/en/reward_trainer#reward-modeling I see a conflict:

![image](https://github.com/user-attachments/assets/81580059-bcf1-4e07-b279-2dec4e0f2961)
![image](https://github.com/user-attachments/assets/d73c51c0-ca7c-409f-af17-feb7d08bb05a)
2,164
1,018
qgallouedec
2024-10-04T08:28:21
> In the official document https://huggingface.co/docs/trl/main/en/reward_trainer , `The [RewardTrainer] requires a [implicit prompt preference dataset]`.
> I see the example is using `trl-lib/ultrafeedback_binarized` which is not so-called "implicit prompt preference datase" as the prompt is explicitly provided in the dataset.

[trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) **is** implicit prompt since you don't have a prompt column. You can see that there is a common start (`{'content': 'Use the pygame library to write a version of the classic game Snake, with a unique twist', 'role': 'user'}`), this is the so-called implicit prompt:

```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
>>> dataset.column_names
['chosen', 'rejected', 'score_chosen', 'score_rejected']
>>> dataset[0]
{'chosen': [{'content': 'Use the pygame library to write a version of the classic game Snake, with a unique twist', 'role': 'user'}, {'content': "Sure, I'd be happy to help you write a version of the classic game Snake using the pygame library! ...", 'role': 'assistant'}],
 'rejected': [{'content': 'Use the pygame library to write a version of the classic game Snake, with a unique twist', 'role': 'user'}, {'content': 'Sure, here\'s an example of how to write a version of Snake game with a unique twist using the Pygame library:...', 'role': 'assistant'}],
 'score_chosen': 6.0, 'score_rejected': 4.0}
```
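To make the implicit prompt concrete, here is a small sketch of how the shared leading messages of `chosen` and `rejected` can be split out into an explicit `prompt` column; the helper name is made up for illustration:

```python
from datasets import load_dataset

def split_implicit_prompt(example):
    """Split the shared leading messages of chosen/rejected into an explicit prompt."""
    chosen, rejected = example["chosen"], example["rejected"]
    # Count how many leading messages the two conversations have in common.
    common = 0
    for msg_c, msg_r in zip(chosen, rejected):
        if msg_c != msg_r:
            break
        common += 1
    return {
        "prompt": chosen[:common],
        "chosen": chosen[common:],
        "rejected": rejected[common:],
    }

dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
dataset = dataset.map(split_implicit_prompt)
```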
2,164
1,019
qgallouedec
2024-10-04T08:39:14
> here i test `Anthropic/hh-rlhf` and `trl-lib/ultrafeedback_binarized` in the `dataset_name`. but neither works.

The provided code works fine on my side:

```
python ../examples/scripts/reward_modeling.py \
    --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --output_dir Qwen2-0.5B-Reward-LoRA \
    --per_device_train_batch_size 8 \
    --num_train_epochs 1 \
    --gradient_checkpointing True \
    --learning_rate 1.0e-4 \
    --logging_steps 25 \
    --eval_strategy steps \
    --eval_steps 50 \
    --max_length 2048 \
    --use_peft \
    --lora_r 32 \
    --lora_alpha 16
```

If the error persists, please provide your full system info (see the bug issue template).
2,164
1,020
qgallouedec
2024-10-04T08:45:13
The reward trainer data support has recently been updated (#2102). See the latest version of the doc for more info: https://huggingface.co/docs/trl/main/en/reward_trainer
2,164
1,021
yananchen1989
2024-10-04T18:36:51
> > In the official document https://huggingface.co/docs/trl/main/en/reward_trainer , `The [RewardTrainer] requires a [implicit prompt preference dataset]`.
> > I see the example is using `trl-lib/ultrafeedback_binarized` which is not so-called "implicit prompt preference datase" as the prompt is explicitly provided in the dataset.
>
> [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) **is** implicit prompt since you don't have a prompt column. You can see that there is a common start (`{'content': 'Use the pygame library to write a version of the classic game Snake, with a unique twist', 'role': 'user'}`), this is the so-called implicit prompt:
>
> ```python
> >>> from datasets import load_dataset
> >>> dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
> >>> dataset.column_names
> ['chosen', 'rejected', 'score_chosen', 'score_rejected']
> >>> dataset[0]
> {'chosen': [{'content': 'Use the pygame library to write a version of the classic game Snake, with a unique twist', 'role': 'user'}, {'content': "Sure, I'd be happy to help you write a version of the classic game Snake using the pygame library! ...", 'role': 'assistant'}],
>  'rejected': [{'content': 'Use the pygame library to write a version of the classic game Snake, with a unique twist', 'role': 'user'}, {'content': 'Sure, here\'s an example of how to write a version of Snake game with a unique twist using the Pygame library:...', 'role': 'assistant'}], 'score_chosen': 6.0, 'score_rejected': 4.0}
> ```

I see, got it. `trl-lib/ultrafeedback_binarized` and `Anthropic/hh-rlhf` are in the same boat.
2,164
1,022
yananchen1989
2024-10-04T18:44:52
```
CUDA_VISIBLE_DEVICES=0 python /home/ubuntu/trl/examples/scripts/reward_modeling.py \
    --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --output_dir Qwen2-0.5B-Reward-LoRA \
    --per_device_train_batch_size 8 \
    --num_train_epochs 1 \
    --gradient_checkpointing True \
    --learning_rate 1.0e-4 \
    --logging_steps 25 \
    --eval_strategy steps \
    --eval_steps 50 \
    --max_length 2048 \
    --use_peft \
    --lora_r 16 \
    --lora_alpha 16
```

error:

> Traceback (most recent call last):
>   File "/home/ubuntu/trl/examples/scripts/reward_modeling.py", line 120, in <module>
>     trainer.train()
>   File "/opt/conda/envs/trl11/lib/python3.11/site-packages/transformers/trainer.py", line 2052, in train
>     return inner_training_loop(
>   File "/opt/conda/envs/trl11/lib/python3.11/site-packages/transformers/trainer.py", line 2345, in _inner_training_loop
>     for step, inputs in enumerate(epoch_iterator):
>   File "/opt/conda/envs/trl11/lib/python3.11/site-packages/accelerate/data_loader.py", line 550, in __iter__
>     current_batch = next(dataloader_iter)
>   File "/opt/conda/envs/trl11/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
>     data = self._next_data()
>   File "/opt/conda/envs/trl11/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 673, in _next_data
>     data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
>   File "/opt/conda/envs/trl11/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 55, in fetch
>     return self.collate_fn(data)
>   File "/opt/conda/envs/trl11/lib/python3.11/site-packages/trl/trainer/utils.py", line 362, in __call__
>     raise ValueError(
> ValueError: The features should include `input_ids_chosen`, `attention_mask_chosen`, `input_ids_rejected` and `attention_mask_rejected`
> 0%| | 0/7767 [00:00<?, ?it/s]

`trl version: 0.11.1`

By the way, `trl env` does not work:

> Traceback (most recent call last):
>   File "/opt/conda/envs/trl11/bin/trl", line 8, in <module>
>     sys.exit(main())
>   File "/opt/conda/envs/trl11/lib/python3.11/site-packages/trl/commands/cli.py", line 38, in main
>     raise ValueError(
> ValueError: Please use one of the supported commands, got env - supported commands are ['sft', 'dpo', 'chat', 'kto']
2,164
1,023
yananchen1989
2024-10-04T18:45:22
python version: 3.11.10
2,164
1,024
qgallouedec
2024-10-05T14:02:43
I've downgraded to v0.11.1 and I still can't reproduce the error.

> by the way, trl env does not work:

`trl env` requires trl>=0.12. Can you run `transformers-cli env` instead? Can you also confirm that you have not modified the codebase?
2,164
1,025
yananchen1989
2024-10-05T15:12:34
> - `transformers` version: 4.45.1
> - Platform: Linux-5.15.0-1061-aws-x86_64-with-glibc2.31
> - Python version: 3.11.10
> - Huggingface_hub version: 0.25.1
> - Safetensors version: 0.4.5
> - Accelerate version: 0.34.2
> - Accelerate config: not found
> - PyTorch version (GPU?): 2.4.0+cu121 (True)
> - Tensorflow version (GPU?): not installed (NA)
> - Flax version (CPU?/GPU?/TPU?): not installed (NA)
> - Jax version: not installed
> - JaxLib version: not installed
> - Using distributed or parallel set-up in script?: <fill in>
> - Using GPU in script?: <fill in>
> - GPU type: NVIDIA A10G

@qgallouedec
2,164
1,026
yananchen1989
2024-10-05T15:13:36
I ran `git pull` in `/home/ubuntu/trl/`, so everything is updated, including `examples/scripts/reward_modeling.py`
2,164
1,027
yananchen1989
2024-10-05T15:16:35
I installed trl via `pip install -U trl`
2,164
1,028
qgallouedec
2024-10-07T10:18:19
I still can't reproduce; I tried to reinstall everything, but it still works. Can you try the same? Also, try clearing your cache.

```sh
python3.11 -m venv env
source env/bin/activate
pip install trl[peft]==0.11.1
curl -O https://raw.githubusercontent.com/huggingface/trl/86ad7a7e85dc65c79bd9759097709a27ad1a58dd/examples/scripts/reward_modeling.py
python reward_modeling.py \
    --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --output_dir Qwen2-0.5B-Reward-LoRA \
    --per_device_train_batch_size 8 \
    --num_train_epochs 1 \
    --gradient_checkpointing True \
    --learning_rate 1.0e-4 \
    --logging_steps 25 \
    --eval_strategy steps \
    --eval_steps 50 \
    --max_length 2048 \
    --use_peft \
    --lora_r 32 \
    --lora_alpha 16
```

```
Some weights of Qwen2ForSequenceClassification were not initialized from the model checkpoint at Qwen/Qwen2-0.5B-Instruct and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/fsx/qgallouedec/trl/tmp/reward_modeling.py:108: UserWarning: You are using a `task_type` that is different than `SEQ_CLS` for PEFT. This will lead to silent bugs. Make sure to pass --lora_task_type SEQ_CLS when using this script with PEFT.
Filter: 100%| 62135/62135 [00:29<00:00, 2121.63 examples/s]
Filter: 100%| 1000/1000 [00:00<00:00, 1926.69 examples/s]
/fsx/qgallouedec/trl/tmp/env/lib/python3.11/site-packages/trl/trainer/reward_trainer.py:199: UserWarning: When using RewardDataCollatorWithPadding, you should set `remove_unused_columns=False` in your RewardConfig; we have set it for you, but you should do it yourself in the future.
You're using a Qwen2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
/fsx/qgallouedec/trl/tmp/env/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:2855: UserWarning: `max_length` is ignored when `padding`=`True` and there is no truncation strategy. To pad to max length, use `padding='max_length'`.
`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...
/fsx/qgallouedec/trl/tmp/env/lib/python3.11/site-packages/torch/utils/checkpoint.py:1399: FutureWarning: `torch.cpu.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cpu', args...)` instead.
Could not estimate the number of tokens of the input, floating-point operations will not be computed
{'loss': 0.7755, 'grad_norm': 3.030179262161255, 'learning_rate': 9.967741935483872e-05, 'epoch': 0.0}
{'loss': 0.71, 'grad_norm': 4.013882160186768, 'learning_rate': 9.935483870967742e-05, 'epoch': 0.01}
  1%|▌ | 50/7750 [00:49<2:23:07, 1.12s/it]
[console table sampling chosen_text / rejected_text / logits truncated]
```
2,164
1,029
yananchen1989
2024-10-07T20:27:51
@qgallouedec reward_modeling.py from your link https://raw.githubusercontent.com/huggingface/trl/86ad7a7e85dc65c79bd9759097709a27ad1a58dd/examples/scripts/reward_modeling.py does work fine, but the script from https://github.com/huggingface/trl/blob/main/examples/scripts/reward_modeling.py does not. I do see there are a lot of differences between them.
2,164
1,030
qgallouedec
2024-10-07T20:36:36
The latter is the script for the dev version. You can't use trl 0.11 with it
2,164
1,031
yananchen1989
2024-10-07T21:04:29
ok, then i will wait for the dev version to be released. thanks. @qgallouedec
2,164
1,032
yananchen1989
2024-10-09T16:12:29
hi, just reopening this ticket. Although `trl-lib/ultrafeedback_binarized` works fine for `reward_modeling.py` in trl version 0.11.2, I also see that there is something wrong when using the dataset `Anthropic/hh-rlhf`. This dataset is used as an example in https://huggingface.co/docs/trl/v0.11.2/reward_trainer

error message:

> Traceback (most recent call last):
>   File "/workspace/trl/examples/scripts/reward_modeling.py", line 140, in <module>
>     dataset = dataset.map(
>   File "/usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py", line 866, in map
>     {
>   File "/usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py", line 867, in <dictcomp>
>     k: dataset.map(
>   File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 560, in wrapper
>     out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
>   File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 3035, in map
>     for rank, done, content in Dataset._map_single(**dataset_kwargs):
>   File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 3408, in _map_single
>     example = apply_function_on_filtered_inputs(example, i, offset=offset)
>   File "/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py", line 3300, in apply_function_on_filtered_inputs
>     processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
>   File "/workspace/trl/examples/scripts/reward_modeling.py", line 141, in <lambda>
>     lambda x: {"chosen": chosen_fn(x), "rejected": rejected_fn(x)}, num_proc=config.dataset_num_proc
>   File "/usr/local/lib/python3.10/dist-packages/trl/extras/dataset_formatting.py", line 43, in format_dataset
>     return tokenizer.apply_chat_template(examples[messages_field], tokenize=False)
>   File "/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py", line 1875, in apply_chat_template
>     rendered_chat = compiled_template.render(
>   File "/usr/local/lib/python3.10/dist-packages/jinja2/environment.py", line 1301, in render
>     self.environment.handle_exception()
>   File "/usr/local/lib/python3.10/dist-packages/jinja2/environment.py", line 936, in handle_exception
>     raise rewrite_traceback_stack(source=source)
>   File "<template>", line 4, in top-level template code
> jinja2.exceptions.UndefinedError: 'str object' has no attribute 'role'
2,164
1,033
yananchen1989
2024-10-09T16:13:25
So only chat-format preference datasets like `trl-lib/ultrafeedback_binarized` are supported in the following versions?
2,164
1,034
qgallouedec
2024-10-09T16:47:51
No, the following works fine:

```
python reward_modeling.py \
    --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
    --dataset_name Anthropic/hh-rlhf \
    --output_dir Qwen2-0.5B-Reward-LoRA \
    --per_device_train_batch_size 8 \
    --num_train_epochs 1 \
    --gradient_checkpointing True \
    --learning_rate 1.0e-4 \
    --logging_steps 25 \
    --eval_strategy steps \
    --eval_steps 50 \
    --max_length 2048 \
    --use_peft \
    --lora_r 32 \
    --lora_alpha 16
```
2,164
1,035
yananchen1989
2024-10-09T16:57:59
@qgallouedec did you check out the `v0.11-release` branch?
2,164
1,036
yananchen1989
2024-10-09T16:58:48
You checked out the branch and ran `pip install -e .` from the source of that branch, and it works fine?
2,164
1,037
qgallouedec
2024-10-09T17:01:30
Indeed in v0.11.2, the example assumes that the dataset is in conversational format.
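For anyone hitting this on v0.11.2, one workaround is to convert the plain-text `Anthropic/hh-rlhf` entries into conversational (list-of-messages) format before training. A rough sketch, assuming the usual `\n\nHuman:` / `\n\nAssistant:` turn markers used by that dataset; the helper name is illustrative:

```python
import re
from datasets import load_dataset

def to_messages(text: str):
    """Split an hh-rlhf transcript into a list of chat messages."""
    turns = re.split(r"\n\n(Human|Assistant): ", "\n\n" + text.strip())[1:]
    roles = {"Human": "user", "Assistant": "assistant"}
    return [
        {"role": roles[speaker], "content": content.strip()}
        for speaker, content in zip(turns[::2], turns[1::2])
    ]

dataset = load_dataset("Anthropic/hh-rlhf", split="train")
dataset = dataset.map(
    lambda x: {"chosen": to_messages(x["chosen"]), "rejected": to_messages(x["rejected"])}
)
```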
2,164
1,038
yananchen1989
2024-10-10T14:33:45
ok, so plain-text format such as `Anthropic/hh-rlhf` is not supported anymore.
2,164
1,039
qgallouedec
2024-10-10T14:45:23
False. Previously it was not supported, now it is. dev is ahead of v0.11.2
2,164
1,040
yananchen1989
2024-10-10T14:50:53
ok, I will wait for the new release and test it in the near future.
2,164
1,041
world2025
2024-11-08T02:03:41
@qgallouedec can I use a data format like [openbookqa](https://huggingface.co/datasets/allenai/openbookqa), where one prompt has 4-9 responses, as in the InstructGPT paper? Thanks
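A common way to use ranked multi-response data with a pairwise reward trainer is to expand each prompt's K ranked responses into chosen/rejected pairs (InstructGPT trains on all K-choose-2 pairs). A small illustrative sketch, assuming hypothetical `prompt`/`responses` fields with responses already sorted from best to worst:

```python
from itertools import combinations

def to_preference_pairs(example):
    """Expand one prompt with ranked responses into pairwise preference examples."""
    pairs = []
    # Responses are assumed ordered best -> worst, so the earlier one is "chosen".
    for better, worse in combinations(example["responses"], 2):
        pairs.append({"prompt": example["prompt"], "chosen": better, "rejected": worse})
    return pairs
```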
2,164
1,042
HuggingFaceDocBuilderDev
2024-10-03T11:14:44
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2163). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,163
1,043
qgallouedec
2024-10-03T14:13:31
Can be reverted when https://github.com/huggingface/transformers/pull/33911 is merged
2,163
1,044
HuggingFaceDocBuilderDev
2024-10-03T09:56:17
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2162). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,162
1,045
yananchen1989
2024-10-08T17:03:14
> Traceback (most recent call last):
>   File "/workspace/trl/examples/scripts/sft.py", line 93, in <module>
>     trainer = SFTTrainer(
>   File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_deprecation.py", line 101, in inner_f
>     return f(*args, **kwargs)
>   File "/usr/local/lib/python3.10/dist-packages/transformers/utils/deprecation.py", line 165, in wrapped_func
>     return func(*args, **kwargs)
>   File "/workspace/trl/trl/trainer/sft_trainer.py", line 409, in __init__
>     super().__init__(
> TypeError: Trainer.__init__() got an unexpected keyword argument 'processing_class'
2,162
1,046
yananchen1989
2024-10-08T17:06:03
trl version: 0.12.0.dev0
2,162
1,047
BUILDERlym
2024-10-08T20:09:04
> > Traceback (most recent call last):
> >   File "/workspace/trl/examples/scripts/sft.py", line 93, in <module>
> >     trainer = SFTTrainer(
> >   File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_deprecation.py", line 101, in inner_f
> >     return f(*args, **kwargs)
> >   File "/usr/local/lib/python3.10/dist-packages/transformers/utils/deprecation.py", line 165, in wrapped_func
> >     return func(*args, **kwargs)
> >   File "/workspace/trl/trl/trainer/sft_trainer.py", line 409, in __init__
> >     super().__init__(
> > TypeError: Trainer.__init__() got an unexpected keyword argument 'processing_class'

Same issue. I checked the local source code, and indeed there is no argument 'processing_class'. Why is that? It seems the current version still uses 'tokenizer'.
2,162
1,048
kashif
2024-10-08T20:16:48
you need to use the main version of `transformers`
2,162
1,049
BUILDERlym
2024-10-08T20:19:14
> you need to use the main version of `transformers` got it, thanks
2,162
1,050
qgallouedec
2024-10-03T09:09:11
Origin of the error, this change: https://github.com/huggingface/transformers/pull/32385. `git bisect` is a wonderful tool.
2,161
1,051
qgallouedec
2024-10-03T09:29:44
This bug is linked to the fact that `tokenizer` will no longer be an argument of the trainer; instead, it will be `processing_class`.

Suggested migration plan:

- Do the same change, e.g.:

```diff
  trainer = RewardTrainer(
      model=model,
      args=training_args,
-     tokenizer=tokenizer,
+     processing_class=tokenizer,
      train_dataset=dataset,
      peft_config=peft_config,
  )
```

- Ensure backward compatibility only for `SFTTrainer` and `DPOTrainer` via:

```python
def __init__(
    ...
    tokenizer: Optional[PreTrainedTokenizerBase] = None,
    processing_class: Optional[
        Union[PreTrainedTokenizerBase, BaseImageProcessor, FeatureExtractionMixin, ProcessorMixin]
    ] = None,
    ...
):
    if tokenizer is not None:
        if processing_class is not None:
            raise ValueError(
                "You cannot specify both `tokenizer` and `processing_class` at the same time. Please use `processing_class`."
            )
        warnings.warn(
            "`tokenizer` is now deprecated and will be removed in the future, please use `processing_class` instead.",
            FutureWarning,
        )
        processing_class = tokenizer
```
2,161
1,052
kashif
2024-10-03T09:43:07
yes looks like a good solution
2,161
1,053
edbeeching
2024-10-03T09:52:20
Yes, seems good to me. It is a shame that these lines are just duplicated from the Trainer class and there is no way to simply inherit them.

```python
if tokenizer is not None:
    if processing_class is not None:
        raise ValueError(
            "You cannot specify both `tokenizer` and `processing_class` at the same time. Please use `processing_class`."
        )
    warnings.warn(
        "`tokenizer` is now deprecated and will be removed in the future, please use `processing_class` instead.",
        FutureWarning,
    )
    processing_class = tokenizer
```
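One way to avoid repeating that block in every trainer would be a small shared helper; a hypothetical sketch (not part of TRL, names are made up for illustration):

```python
import warnings
from typing import Optional

def resolve_processing_class(tokenizer: Optional[object], processing_class: Optional[object]) -> Optional[object]:
    """Map the deprecated `tokenizer` argument onto `processing_class`, in one place."""
    if tokenizer is None:
        return processing_class
    if processing_class is not None:
        raise ValueError(
            "You cannot specify both `tokenizer` and `processing_class` at the same time. "
            "Please use `processing_class`."
        )
    warnings.warn(
        "`tokenizer` is now deprecated and will be removed in the future, please use `processing_class` instead.",
        FutureWarning,
    )
    return tokenizer

# Each trainer's __init__ could then simply call:
# processing_class = resolve_processing_class(tokenizer, processing_class)
```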
2,161
1,054
HuggingFaceDocBuilderDev
2024-10-03T07:42:05
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2160). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,160
1,055
qgallouedec
2024-10-03T07:45:30
https://github.com/user-attachments/assets/3977ea1d-9c2f-4cd2-833d-e4d992fc2348
2,160
1,056
gaetanlop
2024-10-03T04:50:40
~~Still a draft PR~~ Ready
2,159
1,057
qgallouedec
2024-10-04T13:34:03
Thanks a lot @gaetanlop. Added some suggestions and open questions
2,159
1,058
qgallouedec
2024-10-04T13:34:58
Also, please make sure to run the pre-commits (`make precommit`)
2,159
1,059
gaetanlop
2024-10-07T02:37:04
Hey @qgallouedec, thanks for the review. I have added a `gold_answers` parameter to the judge function for both the `base` and `MoJs` class. Also, I have added a `safety_judge` and `factuality_judge` as described in the CGPO paper. They also have rule based judges, but in my opinion they are a little bit too tailored to specific tasks (coding/maths) to be added to the `trl` library. If you think the `factuality` and `safety` judges are also too specific to be in the lib I can remove them from the PR. For the naming of the judges, let's do `BaseBinaryJudge` for the base class and `AllTrueJudge` for the moj following your suggestions?
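To illustrate the naming being discussed, a minimal sketch of what a binary judge and a mixture-of-judges aggregator could look like; this is illustrative only, not the final TRL API:

```python
from abc import ABC, abstractmethod
from typing import List, Optional

class BaseBinaryJudge(ABC):
    """A judge that returns 1 (pass) or 0 (fail) for each prompt/completion pair."""

    @abstractmethod
    def judge(
        self,
        prompts: List[str],
        completions: List[str],
        gold_answers: Optional[List[str]] = None,
    ) -> List[int]:
        ...

class AllTrueJudge(BaseBinaryJudge):
    """Mixture of judges: a completion passes only if every sub-judge returns 1."""

    def __init__(self, judges: List[BaseBinaryJudge]):
        self.judges = judges

    def judge(self, prompts, completions, gold_answers=None) -> List[int]:
        verdicts = [j.judge(prompts, completions, gold_answers) for j in self.judges]
        return [int(all(v)) for v in zip(*verdicts)]
```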
2,159
1,060
qgallouedec
2024-10-10T09:15:19
> Hey @qgallouedec, thanks for the review. I have added a `gold_answers` parameter to the judge function for both the `base` and `MoJs` class. Also, I have added a `safety_judge` and `factuality_judge` as described in the CGPO paper. They also have rule based judges, but in my opinion they are a little bit too tailored to specific tasks (coding/maths) to be added to the `trl` library. If you think the `factuality` and `safety` judges are also too specific to be in the lib I can remove them from the PR.

Thanks a lot! Nice work!

> For the naming of the judges, let's do `BaseBinaryJudge` for the base class and `AllTrueJudge` for the moj following your suggestions?

LGTM.

---

WDYT of having generic classes in `trl.judges` (`AllTrueJudge`, `BinaryJudge`, etc.) and subclassing them in `trl.trainer.cgpo_trainer` to get `SafetyConstraintJudge` and `FacultyConstraintJudge`? If in the future we need these classes elsewhere, we can still move them to `trl.judges`.
2,159
1,061
gaetanlop
2024-10-11T01:56:42
Ok, I have removed the `FacultyConstraintJudge` and the `SafetyConstraintJudge` from the PR and made the required renamings. Thanks @qgallouedec for the feedback.
2,159
1,062
HuggingFaceDocBuilderDev
2024-11-15T18:14:11
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2159). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,159
1,063
qgallouedec
2024-11-15T18:21:02
LGTM, thanks @gaetanlop
2,159
1,064
qgallouedec
2024-11-18T15:54:19
We're having the issue

```
FAILED tests/test_nash_md_trainer.py::TestNashMDTrainer::test_nash_md_trainer_judge_training_1_conversational_prompt_only - ValueError: Cannot find pytorch_model.bin or model.safetensors in C:\Users\runneradmin\.cache\huggingface\hub\llm-blender\PairRM
```

which is mainly due to the fact that PairRM is requested for download simultaneously by different tests. This happens quite randomly and is not related to any actual bug in the code base. I'll ignore it and merge.
2,159
1,065
Abhishek-TAMU
2024-10-04T18:42:18
CC: @kashif @qgallouedec
2,158
1,066
HuggingFaceDocBuilderDev
2024-10-06T19:35:19
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2158). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,158
1,067
qgallouedec
2024-10-10T08:54:50
Hi, thanks for the PR. Can you provide the link of the PR in transformers? Is it https://github.com/huggingface/transformers/pull/33932?
2,158
1,068
qgallouedec
2024-10-10T08:58:59
Could you provide a simple test to:

1. Confirm that it is a case of non-functioning.
2. Verify that this addition resolves it.

It might also be helpful to add a few comments, as these lines are unclear without context.
2,158
1,069
Abhishek-TAMU
2024-10-30T23:40:22
Thank you @qgallouedec for the review. This is the related transformers [PR](https://github.com/huggingface/transformers/pull/33932), which is approved and merged. I added 2 test cases: one where tuning fails `with padding`, and another where it doesn't fail `without padding`.
2,158
1,070
Abhishek-TAMU
2024-11-06T21:01:18
@kashif @qgallouedec Could you possibly review this PR ? Thank you!
2,158
1,071
Abhishek-TAMU
2024-11-12T17:09:53
Hi @kashif @qgallouedec, could you please take another look at this PR when you get the chance? The changes in this PR are urgent for making `torch_compile` [flag in SFTTrainer](https://huggingface.co/docs/transformers/en/main_classes/trainer#transformers.TrainingArguments.torch_compile) work for Llama models (LlamaForCausalLM). This is important for users who need to compile the Llama model using SFTTrainer (in padding_free mode) without any graph breaks. Thank you!
2,158
1,072
qgallouedec
2024-11-19T15:13:12
Hey @Abhishek-TAMU, to keep you posted on the current status of the PR: I am struggling to reproduce the initial error. Do you have an MRE by any chance? The code from the unittest gives

```
...
  File "/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.11/site-packages/torch/_dynamo/variables/functions.py", line 134, in __init__
    assert isinstance(
AssertionError: expected FunctionType found _lru_cache_wrapper <functools._lru_cache_wrapper object at 0x7f000e67fb60>

from user code:
   File "/fsx/qgallouedec/transformers/src/transformers/models/llama/modeling_llama.py", line 1224, in torch_dynamo_resume_in_forward_at_1199
    loss = self.loss_function(logits=logits, labels=labels, vocab_size=self.config.vocab_size, **kwargs)

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True
  0%|          | 0/2 [00:07<?, ?it/s]
```

and it doesn't seem related.
2,158
1,073
Abhishek-TAMU
2024-11-19T22:31:19
Thank you @qgallouedec for looking into this. Sharing the code that produces the graph break, using the latest release of `transformers` (which doesn't have the https://github.com/huggingface/transformers/pull/33932 changes) and the latest `trl` including changes from this PR. If https://github.com/huggingface/transformers/pull/33932 is included in transformers, the graph break is avoided.

```python
import os, tempfile, torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer
from trl.trainer import DataCollatorForCompletionOnlyLM
from datasets import load_dataset

standard_prompt_completion_dataset = load_dataset(
    "trl-internal-testing/zen", "standard_prompt_completion"
)

os.environ["CUDA_VISIBLE_DEVICES"] = "0"
os.environ["CUDA_HOME"] = "/home/tuning/.local/cuda-12.1"

model_id = "trl-internal-testing/tiny-random-LlamaForCausalLM"
torch_dtype = getattr(torch, "bfloat16", None)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch_dtype, attn_implementation="flash_attention_2"
)
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

formatted_dataset = lambda example: {
    "output": f"### prompt:\n{example['prompt'].strip()}\n\n### completion:\n{example['completion'].strip()}{tokenizer.eos_token}"
}
train_dataset = standard_prompt_completion_dataset["train"].map(formatted_dataset)

response_template = "### completion:\n"
data_collator = DataCollatorForCompletionOnlyLM(response_template, tokenizer=tokenizer, padding_free=True)

with tempfile.TemporaryDirectory() as tmp_dir:
    training_args = SFTConfig(
        output_dir=tmp_dir,
        dataloader_drop_last=True,
        max_steps=2,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=1,
        save_steps=2,
        learning_rate=1e-5,
        dataset_text_field="output",
        torch_compile=True,
        torch_compile_backend="inductor",
        torch_compile_mode="default",
    )
    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=train_dataset,
        data_collator=data_collator,
        args=training_args,
    )
    # with assertRaises(Exception):
    trainer.train()

del os.environ["CUDA_VISIBLE_DEVICES"]
```
2,158
1,074
Abhishek-TAMU
2024-11-21T18:18:54
Hi @qgallouedec, were you able to reproduce the initial error with this MRE ?
2,158
1,075
ArthurZucker
2024-12-18T14:43:32
The loss error was fixed in transformers
2,158
1,076
ArthurZucker
2024-12-23T15:39:06
LGTM!
2,158
1,077
qgallouedec
2025-01-06T14:10:28
Thanks and sorry for the delay
2,158
1,078
qgallouedec
2024-10-02T13:32:25
Thanks for contributing. I would suggest a simpler approach. Just modify
https://github.com/huggingface/trl/blob/78249d9de46486a7fdb99c441ce0f52b9b0e1980/trl/commands/cli.py#L130-L142
into

```python
def main():
    command_name = sys.argv[1]

    if command_name == "chat":
        chat()
    elif command_name == "env":
        print_env()
    else:
        train(command_name)
```

It should be enough, what do you think?
2,157
1,079
grumpyp
2024-10-02T13:45:39
> #2101

Hi @qgallouedec, thanks for the feedback! I see your point about simplifying the approach. My initial thought was to keep the existing structure to maintain clarity for users regarding supported commands. It could be done in that straightforward way, but as a user I would not know what commands I could use; to find out, I'd actually have to dig into the code and see what's going to be executed, e.g. how `train` works. Maybe I am overcomplicating things here. This is my first contribution, so I don't know what kind of users (whether technical enough or not) are using `trl`. Either way, I will adjust the implementation based on what you think is best!
2,157
1,080
qgallouedec
2024-10-02T14:20:19
As a user I'd use

```
trl --help
```

We're currently tweaking `sys.argv` instead of using a proper argparse. That's why the above command won't give anything. But in the future, I'd like to use argparse instead.
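For illustration, a subparser-based CLI in the direction hinted at here could look roughly like this; the handlers are stand-ins for the real `chat`, `print_env`, and `train` functions mentioned above, not TRL's actual implementation:

```python
import argparse

# Stand-ins for the real handlers discussed above.
def chat() -> None: print("launch chat")
def print_env() -> None: print("print environment info")
def train(command: str, argv: list) -> None: print(f"run `{command}` training with {argv}")

def main() -> None:
    parser = argparse.ArgumentParser(prog="trl", description="TRL command-line interface")
    subparsers = parser.add_subparsers(dest="command", required=True)
    # One subcommand per supported entry point; `trl --help` now lists them all.
    for name in ("sft", "dpo", "kto", "chat", "env"):
        subparsers.add_parser(name, help=f"Run the `{name}` command")
    args, remaining = parser.parse_known_args()
    if args.command == "chat":
        chat()
    elif args.command == "env":
        print_env()
    else:
        train(args.command, remaining)  # forward the remaining args to the training script

if __name__ == "__main__":
    main()
```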
2,157
1,081
grumpyp
2024-10-02T14:25:17
@lewtun anything to say here maybe? You opened the issue and might have some additional suggestions. `trl --help` wouldn't work and is currently also not working. Either way, I am happy to go with your suggestion. Please let me know how you'd want it.
2,157
1,082
qgallouedec
2024-10-02T15:03:54
> `trl --help` wouldn't work and is currently also not working

In the future, we will probably move to a subparser for the trl cli, so `trl --help` will output something.

> Either way, I am happy to go with your suggestion. Please let me know how you'd want it.

Do you mind trying the above suggestion? Also, the critical point here is to add the tests, like this one:
https://github.com/huggingface/trl/blob/78249d9de46486a7fdb99c441ce0f52b9b0e1980/tests/test_cli.py#L20-L28
2,157
1,083
grumpyp
2024-10-02T22:02:32
> > `trl --help` wouldn't work and is currently also not working
>
> In the future, we will probably move to subparser for the trl cli, so `trl --help` will output something.
>
> > Either way, I am happy to go with your suggestion. Please let me know how you'd want it.
>
> Do you mind trying with the above suggestion? Also, the critical point here is to add the tests: like this one
>
> https://github.com/huggingface/trl/blob/78249d9de46486a7fdb99c441ce0f52b9b0e1980/tests/test_cli.py#L20-L28

Hi, do you want me to add a test for each model or some dynamic way? So if I understand correctly, you want to use the test you just proposed and this approach:

```python
def main():
    command_name = sys.argv[1]

    if command_name == "chat":
        chat()
    elif command_name == "env":
        print_env()
    else:
        train(command_name)
```

?
2,157
1,084
qgallouedec
2024-10-03T11:48:53
> do you want me to add a test for each model or some dynamic way?

For each model. The args may vary a lot, so I don't think it's possible to have a generic test for all scripts.

> So if I understand correctly, you want to use the test you just proposed and this approach:

That's right.
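As an illustration of the kind of per-command test being asked for, here is a sketch patterned on the linked `tests/test_cli.py`; the exact arguments, dataset, and model names are placeholders, not the real test:

```python
import subprocess

def test_dpo_cli():
    """Smoke-test that `trl dpo ...` runs end to end for one step."""
    try:
        subprocess.run(
            "trl dpo --max_steps 1 --output_dir tmp-dpo "
            "--model_name_or_path trl-internal-testing/tiny-random-LlamaForCausalLM "
            "--dataset_name trl-internal-testing/zen --report_to none",
            shell=True,
            check=True,
        )
    except BaseException as exc:
        raise AssertionError("An error occurred while running the CLI, please double check") from exc
```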
2,157
1,085
qgallouedec
2024-12-22T12:03:25
Closing as a consequence of CLI refactoring
2,157
1,086
lewtun
2024-10-02T16:19:34
Hey @gaetanlop I think this would be a really cool addition to the library and one that is likely to be useful to the community now that online methods are becoming more common! The best part is that they used open datasets in the paper, so we should be able to verify the implementation by trying to reproduce similar training curves like this: <img width="1208" alt="Screenshot 2024-10-02 at 18 18 46" src="https://github.com/user-attachments/assets/22dbed91-e5ff-4e25-aeef-b600a560e0fa"> I suggest starting with small models from Llama or Qwen2.5 and later we can scale up on the HF cluster if needed :)
2,156
1,087
gaetanlop
2024-10-03T03:47:14
Hello @lewtun, thank you for your reply. Indeed, we should be able to verify that everything is correct by trying to reproduce their results. The judges seem to be using pretty large models though (Llama 70B), so this might be complicated to run without your cluster (I don't think small models will do the job as judges). I have implemented the reward part and the mixture of judges in separate PRs; the rest will normally be done in a single PR. I can also close the 2 other PRs and put everything in a single PR if you prefer.
2,156
1,088
kashif
2024-10-02T10:32:43
the way I understood the calibrated reward was that the scores from a reward model might not be comparable across different completions given a prompt, and thus for some baseline or ground-truth completion `ā` for a prompt `s` and completion `a`, the calibrated reward should be:

`Rcalib(s, a) = σ(reward_model(s, a) − reward_model(s, ā))`

my implementation is:

```python
def _compute_calib_rewards(self, completions, prompts, ground_truth_completions):
    context_length = prompts["input_ids"].shape[1]
    with torch.no_grad():
        _, generated_scores, _ = get_reward(
            self.reward_model, completions["input_ids"], self.tokenizer.pad_token_id, context_length
        )

        # Compute scores for ground-truth completions
        ground_truth_input_ids = torch.cat([prompts["input_ids"], ground_truth_completions["input_ids"]], dim=1)
        _, ground_truth_scores, _ = get_reward(
            self.reward_model, ground_truth_input_ids, self.tokenizer.pad_token_id, context_length
        )

    if self.args.missing_eos_penalty is not None:
        completion_contain_eos = torch.any(completions["input_ids"] == self.tokenizer.eos_token_id, dim=-1)
        generated_scores[~completion_contain_eos] -= self.args.missing_eos_penalty
        ground_truth_contain_eos = torch.any(
            ground_truth_completions["input_ids"] == self.tokenizer.eos_token_id, dim=-1
        )
        ground_truth_scores[~ground_truth_contain_eos] -= self.args.missing_eos_penalty

    return F.sigmoid(generated_scores - ground_truth_scores)
```
2,155
1,089
gaetanlop
2024-10-02T12:43:44
Thanks for looking at it @kashif. Your code and mine are exactly the same except for the `missing_eos_penalty` part, which I did not put in the function to be consistent with your `get_reward` function (we can handle the `missing_eos_penalty` part in the potential `CGPOTrainer`, as done in the `OnlineDPOTrainer`). Apart from that, we have the same implementation. You are assuming that the `ground_truth_completion` does not contain the prompt, while I am assuming it contains it. Also, I am computing the reward for both `a` and `ā` in a single forward pass by concatenating them, while you are computing it separately for `a` and `ā`.

Example using your naming conventions and assuming both `query_responses` and `baseline_responses` contain the prompt:

```python
batch_size = query_responses.shape[0]

concatenated_responses = torch.cat(
    (query_responses, baseline_responses),
    dim=0,
)

reward_logits, final_rewards, sequence_lengths = get_reward(
    model, concatenated_responses, pad_token_id, context_length
)

generated_scores, ground_truth_scores = final_rewards.split(batch_size, dim=0)
final_rewards = F.sigmoid(generated_scores - ground_truth_scores)
```

For the returns, I am also returning all the `calibrated_logits` and the `sequence_lengths` alongside what you are returning (the final calibrated reward), to be consistent with the `get_reward` function of trl. My implementation lacked a sigmoid function for the reward_logits though. Am I correct?
2,155
1,090
kashif
2024-10-02T12:59:22
ah right right! you are right!
2,155
1,091
kashif
2024-10-02T13:01:13
so the reason I have the stuff split is because of padding... when I join the two different completions I have to pad them together, while it's slightly easier to pad each completion... and then I was worried that by concatenating, the memory needs might be too much for largish reward models... but yes, makes sense
2,155
1,092
gaetanlop
2024-10-03T03:58:41
Yes @kashif 100% agree, there are pros and cons for both methods. It also depends on the distributed training strategy you are using to train the model. In any case, I checked the `trl` code base and it seems you adopted this concatenation method in the `OnlineDPOTrainer` https://github.com/huggingface/trl/blob/78249d9de46486a7fdb99c441ce0f52b9b0e1980/trl/trainer/online_dpo_trainer.py#L416-L427 I think we should keep it this way. Wdyt?
2,155
1,093
gaetanlop
2024-10-06T19:31:26
Closing in favor of #2190
2,155
1,094
qgallouedec
2024-10-07T10:30:18
> I can probably put together a fix for trl when I have some more free time if y'all are interested, since I understand the behaviour now.

Thanks for reporting, help in proposing a fix would be greatly appreciated.
2,154
1,095
kashif
2024-10-02T09:54:15
in the `kto_config.py` docstrings can you kindly add:

```py
disable_dropout (`bool`, *optional*, defaults to `True`):
    Whether to disable dropout in the model.
```
2,153
1,096
HuggingFaceDocBuilderDev
2024-10-02T09:54:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2153). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,153
1,097
kawine
2024-10-06T06:35:54
> in the `kto_config.py` docstrings can you kindly add:
>
> ```python
> disable_dropout (`bool`, *optional*, defaults to `True`):
>     Whether to disable dropout in the model.
> ```

done!
2,153
1,098
HuggingFaceDocBuilderDev
2024-10-01T18:49:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2152). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,152
1,099