Dataset columns:
- user: string (length 3–28)
- created_at: timestamp[us] (2020-04-01 09:48:12 to 2025-03-30 02:12:16)
- body: string (length 1–173k)
- issue_number: int64 (1–3.18k)
- __index_level_0__: int64 (0–8.59k)
Thewillman
2024-10-07T17:05:41
The reward accuracies just float around 0.5, which means that in some steps the chosen rewards can be smaller than the rejected rewards.
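For context, a minimal sketch of how this kind of metric is typically derived in DPO-style training (variable names are illustrative, not the exact TRL internals): the implicit rewards are scaled log-probability ratios against the reference model, and the accuracy is the fraction of pairs where the chosen reward exceeds the rejected one, so a value around 0.5 means the model is not yet separating the pairs.

```python
import torch

# Illustrative sketch of DPO implicit rewards and reward accuracy
# (names and numbers are assumptions, not the TRL implementation).
beta = 0.1
policy_chosen_logps = torch.tensor([-12.3, -15.1])    # log p_policy(chosen | prompt)
policy_rejected_logps = torch.tensor([-13.0, -14.8])
ref_chosen_logps = torch.tensor([-12.5, -15.0])       # log p_ref(chosen | prompt)
ref_rejected_logps = torch.tensor([-12.9, -14.9])

chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)

# "rewards/accuracies": fraction of pairs where chosen beats rejected.
reward_accuracy = (chosen_rewards > rejected_rewards).float().mean()
print(reward_accuracy)  # ~0.5 means chosen and rejected are not being separated
```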
2,194
900
qgallouedec
2024-10-07T17:12:48
Sorry, but despite my best efforts I can't understand your question. You're talking about similar prompts in a list, about modifying the codebase without providing us with your modifications, about a new loss function, and about values that stagnate, and so on. Can you try to put things more clearly, providing all the necessary information and only the necessary information? Like the code you're using, the dataset, the package version you're using, the training arguments, etc. In other words, what's needed to easily replicate what you're describing? See other issues for some references...
2,194
901
Thewillman
2024-10-07T17:22:52
> Sorry, but despite my best efforts I can't understand your question. You're talking about similar prompts in a list, about modifying the codebase without providing us with your modifications, about a new loss function, and about values that stagnate, and so on.
>
> Can you try to put things more clearly, providing all the necessary information and only the necessary information? Like the code you're using, the dataset, the package version you're using, the training arguments, etc. In other words, what's needed to easily replicate what you're describing? See other issues for some references...

Thank you for checking. I'll try to explain in as much detail as possible. I constructed a [prompt, chosen, rejected] dataset as required, and now, in order to add a new loss, I grouped the data with similar prompts into several lists and let the dataloader read them. However, I found that the training results were very poor. After removing the new loss function, I discovered that the DPO loss itself was already performing poorly. I'm not sure if this is due to the change in the way the data is loaded, because after doing this, each batch during training contains several pieces of data with similar prompts. Here is the **_tokenize** function in **dpo_trainer.py** that I revised; maybe it helps to understand my question:

```python
def _tokenize(
    features: Dict[str, List],
    tokenizer: PreTrainedTokenizerBase,
    args: DPOConfig,
    processor: Optional[Callable] = None,
    model: Optional[PreTrainedModel] = None,
) -> Dict[str, List]:
    batch = defaultdict(list)
    prompt_list = features["prompt_list"]
    chosen_list = features["chosen_list"]
    rejected_list = features["rejected_list"]
    # batch_len = []
    if model is None:
        chosen_tokens_list = []
        rejected_tokens_list = []
        prompt_tokens_list = []
        for idx in range(len(prompt_list)):
            prompt = prompt_list[idx]
            chosen = chosen_list[idx]
            rejected = rejected_list[idx]
            list_len = len(prompt_list)
            images = [None] * len(prompt)
            prompt_tokens = _process_prompt(prompt, processor, tokenizer, images)
            chosen_tokens = _process_answer(prompt, chosen, processor, tokenizer, images)
            rejected_tokens = _process_answer(prompt, rejected, processor, tokenizer, images)
            prompt_len_input_ids = _adjust_prompt_length(prompt_tokens, chosen_tokens, rejected_tokens)
            prompt_tokens, chosen_tokens, rejected_tokens = _add_special_tokens(
                tokenizer, prompt_len_input_ids, prompt_tokens, chosen_tokens, rejected_tokens
            )
            _truncate_tokens(chosen_tokens, rejected_tokens, prompt_tokens, args)
            prompt_tokens_list.append(prompt_tokens)
            chosen_tokens_list.append(chosen_tokens)
            rejected_tokens_list.append(rejected_tokens)
        _build_sequence_tokens(batch, chosen_tokens_list, args, "chosen")
        _build_sequence_tokens(batch, rejected_tokens_list, args, "rejected")
        _append_prompt_tokens_to_batch(batch, prompt_tokens_list)
    else:
        for idx in range(len(prompt_list)):
            prompt_list_ = prompt_list[idx]
            chosen_list_ = chosen_list[idx]
            rejected_list_ = rejected_list[idx]
            _tokenize_encoder_decoder(
                batch, tokenizer, prompt_list_, chosen_list_, rejected_list_, args, model
            )
    batch = dict(batch)
    return batch
```
2,194
902
Thewillman
2024-10-07T17:49:54
I checked the code in my customized dpo_trainer.py and utils.py; the code related to padding and data loading doesn't seem to be wrong.
Here are the TensorBoard results: ![accuracy](https://github.com/user-attachments/assets/f91275f6-d575-4665-9fc2-8f291d887212) ![loss](https://github.com/user-attachments/assets/b9abd307-bcfe-4d46-9834-013f241d72ea) ![margins](https://github.com/user-attachments/assets/4c8b24c5-5108-4a32-93be-0846447485a4)
2,194
903
August-murr
2024-10-08T18:08:27
This could go up on r/programmerhorror. Sir, I too struggled to understand the problem. From what I gathered, I have to ask: why would you group similar prompts together? There's probably no benefit to grouping them. If anything, you want them to be as diverse and random as possible to improve generalization. Since I don't understand what you're doing with the tokenizer, it's likely that you're overcomplicating things and probably messing up the data structure. You may need to take a step back and approach it in a simpler way.
2,194
904
HuggingFaceDocBuilderDev
2024-10-07T13:55:46
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2193). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,193
905
qgallouedec
2024-10-07T14:19:21
Just realised I hadn't pushed the right commit. Now it's all good.
2,193
906
alvarobartt
2024-10-07T09:33:14
Hi there @smartliuhw! When running SFT, you should ideally set the tokenizer's PAD token to the EOS (end-of-sequence) token rather than to an unknown token; this means the sequences will be padded with the EOS token, unless the tokenizer already has an explicitly defined padding token, which is fine too. So the following should do the work:

```python
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
```

> [!NOTE]
> You need to set this right after instantiating the tokenizer, i.e. before tokenizing the inputs and, therefore, before training too. Also, when saving the model, make sure you also save the tokenizer that you used for that training run.
2,192
907
smartliuhw
2024-10-07T16:36:21
@alvarobartt Hi! Thanks for your response! I have tried setting the PAD token to the EOS token and ran another task on Mistral-7B, but the loss was still abnormal: it started at 1.x, increased to 4.x after the warmup, and stayed at that level, while Gemma dropped to 0.02. I'm going to try the same code on Llama 3, since Llama doesn't have a PAD token either. I will let you know if I get new results. By the way, could you please help me check whether there's anything wrong with my code?

```python
import os
import sys
import random
import json
import argparse
from dataclasses import dataclass, field

import torch
from tqdm import tqdm
import pandas as pd
from trl import SFTTrainer, DataCollatorForCompletionOnlyLM
from trl.trainer import ConstantLengthDataset
from datasets import load_from_disk, concatenate_datasets, Dataset, DatasetDict
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM, HfArgumentParser

from utils import get_train_data, formatting_prompts_func, formatting_constant_length_func, model_name_to_path


@dataclass
class ModelArguments:
    model_type: str = field(default=None, metadata={"help": "The model name."})


@dataclass
class DataArguments:
    train_data: str = field(default=None, metadata={"help": "Choose the training data, split by comma."})


@dataclass
class TrainingArguments(transformers.TrainingArguments):
    max_seq_length: int = field(default=8192, metadata={"help": "The cache directory."})


def train():
    parser = HfArgumentParser((ModelArguments, DataArguments, TrainingArguments))
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()

    # Set random seed
    random.seed(training_args.seed)

    # Load model and tokenizer
    model_path = model_name_to_path[model_args.model_type]
    tokenizer = AutoTokenizer.from_pretrained(model_path, padding_side="right")
    model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16)
    # if "mistral" in model_args.model_type.lower() or "llama" in model_args.model_type.lower():
    #     tokenizer.add_special_tokens({"pad_token": "<pad>"})
    #     model.resize_token_embeddings(len(tokenizer))
    #     tokenizer.pad_token = tokenizer.unk_token
    #     tokenizer.pad_token = tokenizer.eos_token
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token

    # Load training data
    print("Loading training data...")
    train_data = get_train_data(data_args.train_data, training_args.seed)

    # Set completion-only data collator
    response_template = "### Response:"
    collator = DataCollatorForCompletionOnlyLM(
        tokenizer=tokenizer,
        response_template=response_template,
        # max_length=training_args.max_seq_length,
    )

    # Load trainer
    trainer = SFTTrainer(
        model,
        training_args,
        tokenizer=tokenizer,
        train_dataset=train_data,
        formatting_func=formatting_prompts_func,
        # packing=True,
        data_collator=collator,
        max_seq_length=training_args.max_seq_length,
    )

    # Train the model
    trainer.train()

    # Save the model
    trainer.save_model(training_args.output_dir)


if __name__ == "__main__":
    train()
```
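One thing that sometimes bites with `DataCollatorForCompletionOnlyLM` (offered here as a hedged aside, not as the cause of the loss issue above): some tokenizers, Mistral's included, can tokenize the response template differently when it appears mid-sequence than when it is encoded on its own, in which case the collator may fail to find the template. The TRL docs suggest passing the template as token ids taken from the templated context; a sketch of that workaround, reusing the `### Response:` marker from the script above:

```python
# Hedged workaround sketch: encode the response marker with surrounding context,
# then pass token ids to the collator so mid-sequence tokenization matches.
from transformers import AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")  # placeholder model
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

response_template_with_context = "\n### Response:"  # includes the preceding newline
# Drop the leading tokens that belong to the context rather than the marker itself;
# the exact slice depends on the tokenizer, so inspect the ids before relying on it.
response_template_ids = tokenizer.encode(response_template_with_context, add_special_tokens=False)[2:]

collator = DataCollatorForCompletionOnlyLM(response_template_ids, tokenizer=tokenizer)
```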
2,192
908
smartliuhw
2024-10-08T03:22:29
Hi @alvarobartt! I have run two more experiments, on Llama3-8B and Llama2-7B, and both work fine with the same code. I think there may be some special setting needed for Mistral-7B. Do you have any idea about that?
2,192
909
zwhe99
2024-10-15T12:00:39
Hi @smartliuhw! The docs say that `pad_token_id` should be different from `eos_token_id`. Do you know what the right way is?
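If you do want `pad_token_id` to differ from `eos_token_id`, one common approach is to add a dedicated padding token and resize the embeddings; a hedged sketch (the model path is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "mistralai/Mistral-7B-v0.1"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

if tokenizer.pad_token is None:
    # Add a dedicated <pad> token so pad_token_id != eos_token_id,
    # then grow the embedding matrix to cover the new vocabulary entry.
    tokenizer.add_special_tokens({"pad_token": "<pad>"})
    model.resize_token_embeddings(len(tokenizer))
    model.config.pad_token_id = tokenizer.pad_token_id
```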
2,192
910
smartliuhw
2024-10-15T12:06:36
Hi @zwhe99! I have done some more experiments and found the following:
- `pad_token_id` should be set to `eos_token_id`.
- The Mistral model is quite sensitive to the `warmup_steps` hyperparameter; after I set it to no more than 2% of the total steps, the loss drops fine.
- With the same code and settings, the Mistral model is much slower to train than the other models.

Hope these findings help you out!
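For reference, a minimal sketch of the warmup setting mentioned above; `warmup_ratio` (or an explicit `warmup_steps`) lives on `TrainingArguments`, and the 2% figure is just the empirical value reported here:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    # Keep warmup at (or below) ~2% of total optimization steps,
    # either via a ratio or an explicit step count.
    warmup_ratio=0.02,
    # warmup_steps=100,  # alternative: set an absolute number of steps
)
```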
2,192
911
zwhe99
2024-10-16T05:09:44
Thanks! @smartliuhw
2,192
912
HuggingFaceDocBuilderDev
2024-10-07T08:23:39
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2191). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,191
913
qgallouedec
2024-10-07T08:29:24
See https://moon-ci-docs.huggingface.co/docs/trl/pr_2191/en/clis#chat-interface ![Screenshot 2024-10-07 at 10 28 40](https://github.com/user-attachments/assets/edb74258-0174-4cec-b029-2c8a0149a39b)
2,191
914
gaetanlop
2024-10-11T02:23:02
Some judges in the `CGPOTrainer` need gold answers or metadata to make decisions. The gold answer can be compared with the policy's output, or metadata can be stored for rule-based judges (examples of this metadata are in Table 4). I'm using `DataCollatorForChatML` in the `CGPOTrainer` right now. Should we create a new dataset format with a prompt, completion, and gold answer in a separate PR, or should we modify `DataCollatorForChatML` to return the non-tokenized gold answer along with the current parameters? What do you prefer? @qgallouedec @kashif
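For illustration only, a hedged sketch of what a prompt/completion/gold-answer record could look like if a new dataset format were introduced; the field names here are hypothetical, not an agreed TRL convention:

```python
# Hypothetical record layout for rule-based / gold-answer judges
# (field names are illustrative only).
example = {
    "prompt": [{"role": "user", "content": "What is 13 * 7?"}],
    "completion": [{"role": "assistant", "content": "13 * 7 = 91."}],
    "gold_answer": "91",                                  # compared against the policy output
    "judge_metadata": {"type": "math", "tolerance": 0},   # extra info for rule-based judges
}
```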
2,190
915
HuggingFaceDocBuilderDev
2024-10-06T19:09:16
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2189). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,189
916
kawine
2024-10-06T06:37:30
Some of the changes carry over from https://github.com/huggingface/trl/pull/2153 so maybe it's best to merge that one first?
2,187
917
HuggingFaceDocBuilderDev
2024-10-06T09:31:23
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2187). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,187
918
kashif
2024-10-06T17:00:54
@kawine the KTO test(s) are failing...
2,187
919
kawine
2024-10-06T17:24:47
Sorry I forgot to rewrite the tests. Will fix in an hour.
2,187
920
kawine
2024-10-06T18:56:33
@kashif The tests are passing now.
2,187
921
kashif
2024-10-06T18:58:07
very cool! checking!
2,187
922
kashif
2024-10-06T19:17:53
can i add back the `maybe_unpair_preference_dataset` logic in the example script?
2,187
923
kawine
2024-10-06T19:29:44
Sure!
2,187
924
kawine
2024-10-09T05:19:26
@kashif does this look okay? I merged the latest changes in from main
2,187
925
kashif
2024-10-09T06:10:25
Yes, I believe so. The only issue is that the DPO helpers being used somehow smell bad... we perhaps need to make them more modular or simplify them... let me ask around.
2,187
926
kawine
2024-10-10T03:45:33
@kashif seems that there are already tokenization helper functions in utils, so I just moved the remaining methods that both DPO and KTO tokenization depend on to utils. This removes the dependency between trainers and makes the code simpler as well -- hopefully that should be fine?
2,187
927
qgallouedec
2024-10-10T09:26:35
Thank you for this PR! I completely agree with your observations. I'm currently working on further refactoring the tokenization phase for DPO (#2209—feel free to contribute, by the way). I suggest putting this PR on hold for now, as the solution might become simpler once we've identified a more straightforward approach for DPO.
2,187
928
qgallouedec
2024-10-24T15:25:07
#2209 is now merged. We would like to do the same refactoring for KTO, if you're still interested in contributing, let us know :)
2,187
929
HuggingFaceDocBuilderDev
2024-10-07T11:59:30
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2186). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,186
930
Kedarnath-Rothe
2024-10-06T16:08:10
Please assign this issue to me
2,185
931
PRIYANKjakharia
2024-10-06T17:28:16
> Please assign this issue to me

I appreciate your enthusiasm, but I have already made all the necessary changes. Your understanding would be appreciated.
2,185
932
HuggingFaceDocBuilderDev
2024-10-05T13:53:55
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2184). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,184
933
qgallouedec
2024-10-05T13:41:57
Thanks a lot @August-murr!
2,183
934
DhruvKadam-git
2024-10-10T14:04:21
If everything is OK, then please merge it.
2,182
935
qgallouedec
2024-10-05T10:17:36
I let native English speakers review this one 😉
2,181
936
kushal34712
2024-10-05T10:29:13
@qgallouedec why not, sir? Just check it once and merge it if possible.
2,181
937
kushal34712
2024-10-06T08:09:10
@qgallouedec sir, if it is reviewed then please merge it.
2,181
938
qgallouedec
2024-10-06T11:11:16
Hi @kushal34712, as I've written, I've requested a review from native English speakers. It will be merged once reviewed.
2,181
939
HuggingFaceDocBuilderDev
2024-10-06T11:43:01
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2181). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,181
940
HuggingFaceDocBuilderDev
2024-10-08T09:42:14
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2180). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,180
941
kushal34712
2024-10-08T15:32:10
@qgallouedec made the changes requested
2,180
942
qgallouedec
2024-10-08T20:47:19
> @qgallouedec made the changes requested

Thanks, I'll let @lewtun and @edbeeching review this one.
2,180
943
imrankh46
2024-10-05T04:26:41
@kashif @lewtun
2,179
944
kashif
2024-10-07T10:23:04
Since the logits/vocabulary need to match between the teacher and student models, I don't think it's possible to train with closed models.
2,179
945
August-murr
2024-10-08T06:29:39
> since the logits/dictionary needs to match between the teacher and student model, I do not thinks possible to train with closed models Anthropic API doesn't output any logits or logprobs and [they have no plans to](https://github.com/anthropics/anthropic-sdk-python/issues/393#issuecomment-2000027696), and OpenAI only allows a max of 20 logprobs. It seems like they really don't want you to distill. OpenAI recently announced a [distillation service](https://openai.com/index/api-model-distillation/), but it's only for their own models and not open source.
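To make the constraint concrete: a standard distillation loss is a KL divergence between teacher and student distributions over the same vocabulary, so you need the teacher's full logits (or at least log-probs for every token), which closed APIs don't expose. A minimal sketch under those assumptions (shapes and values are illustrative):

```python
import torch
import torch.nn.functional as F

batch, seq_len, vocab = 2, 8, 32000  # both models must share this vocabulary
teacher_logits = torch.randn(batch, seq_len, vocab)  # unavailable from closed APIs
student_logits = torch.randn(batch, seq_len, vocab, requires_grad=True)

T = 2.0  # softmax temperature
teacher_probs = F.softmax(teacher_logits / T, dim=-1)
student_log_probs = F.log_softmax(student_logits / T, dim=-1)

# Forward KL(teacher || student); needs the full teacher distribution per position.
kd_loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (T ** 2)
kd_loss.backward()
```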
2,179
946
HuggingFaceDocBuilderDev
2024-10-04T16:42:27
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2178). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,178
947
johko
2024-10-07T20:37:29
It sounds like a cool first issue. I'd be up for working on it to explore the TRL project structure a bit more.
2,177
948
qgallouedec
2024-10-08T08:31:22
Thanks for the proposal! Feel free to open a pull request, and we'll assist you. 😊
2,177
949
August-murr
2024-10-10T08:40:05
@qgallouedec it's mostly `dummy-GPT2-correct-vocab`. Do you want to replace those too?
2,177
950
qgallouedec
2024-10-10T09:42:18
Yes. We could try using `Qwen2.5-0.5B-Instruct` directly. Although it's a much larger model, it might still work fine, as long as it doesn't slow down or disrupt the tests. If it does, we'll need to create our own tiny version of it.
2,177
951
johko
2024-10-10T20:43:01
Hey @qgallouedec, I just ran some tests and my biggest issue was the good old out-of-memory error when running it with `Qwen2.5-0.5B-Instruct` locally. Is there any benefit in using a larger model, or would it also be feasible to use a smaller instruction-tuned model, like `HuggingFaceTB/SmolLM-135M-Instruct`? A small model like that would probably also not create much slowdown and is easier to test on most local machines.
2,177
952
qgallouedec
2024-10-17T07:19:33
Sorry for the late reply. No, there isn't any benefit of running the tests with larger models. Ideally we want to use tiny models for testing.
2,177
953
johko
2024-10-26T20:56:20
Hey @qgallouedec, just wanted to let you know that I'm still working on it. One more question: Should the gpt2 model only be replaced in actual trainer tests, i.e. cpo, bco, dpo etc. or in all the tests (it also occurs in e.g. callback tests)?
2,177
954
qgallouedec
2024-10-27T17:30:37
Hi, as I didn't see any PR I thought you hadn't had time to deal with it and I went ahead on my own :( I opted for a deeper rebuild, regenerating all the tiny models. I'll let you have a look at #2287. I'm always interested in feedback, though.
2,177
955
johko
2024-10-27T20:55:22
> Hi, as I didn't see any PR I thought you hadn't had time to deal with it and I went ahead on my own :( I opted for a deeper rebuild, regenerating all the tiny models. I'll let you have a look at #2287. I'm always interested in feedback, though. No problem. I had less time than I initially hoped and also didn't really update you, so no worries. It is actually interesting for me to see how you are solving this ;)
2,177
956
mayank31398
2024-10-06T06:29:12
Hey, this is expected behaviour. FSDP-1 only allows accumulation in 16-bit precision. This is not the case for FSDP-2 which allows accumulation in both 16-bit and 32-bit.
2,175
957
mayank31398
2024-10-06T06:35:25
documentation for FSDP-1: <img width="661" alt="Screenshot 2024-10-06 at 2 34 34 AM" src="https://github.com/user-attachments/assets/190b1e60-ea13-4810-a536-4f9ef62e8ae7"> documentation for FSDP-2: <img width="692" alt="Screenshot 2024-10-06 at 2 35 10 AM" src="https://github.com/user-attachments/assets/4c9e7ee1-0646-4812-9132-ad7ec862a5cd">
2,175
958
benjamin-marie
2024-10-07T05:44:18
Interesting, I didn't know this. But I don't think it matters, I would be surprised that TRL uses FSDP's reduce-scatter for single GPU training.
2,175
959
qgallouedec
2024-10-07T09:39:36
Hi, thanks for reporting this. Can you share your system info and the code you use for training?
2,175
960
benjamin-marie
2024-10-07T10:04:58
Sure, it's all in the notebook I linked to in my first post. I ran this notebook on Colab with the A100.
2,175
961
teknium1
2024-10-10T15:57:56
Someone tried it in fp32 and it didn't help, so that doesn't seem to be the reason: https://x.com/bnjmn_marie/status/1842464802636980564
2,175
962
vigneshwaran
2024-10-11T04:51:52
Have you tried full/mixed precision AdamW optimiser?
2,175
963
benjamin-marie
2024-10-11T05:43:17
Yes: ![image](https://github.com/user-attachments/assets/d25e7b20-7551-47fe-b758-01f750636738) This configuration uses fp32 and adamw_torch.
2,175
964
fzyzcjy
2024-10-15T06:44:54
Hi, are there any updates? Thanks!
2,175
965
danielhanchen
2024-10-15T09:08:02
I'm writing up a report about this - I think I managed to fix it :) (Yes it is in fact a subtle bug!) - will tweet and post about it in like 8 - 10 hours!
2,175
966
shimmyshimmer
2024-10-15T16:52:58
We have fixed the issue guys! Tweet: https://twitter.com/UnslothAI/status/1846231235749990699 Blogpost: https://unsloth.ai/blog/gradient
2,175
967
geronimi73
2024-10-15T17:07:38
> We have fixed the issue guys! nice! feel like fixing it in TRL too?
2,175
968
shimmyshimmer
2024-10-15T17:50:17
> > We have fixed the issue guys! > > nice! feel like fixing it in TRL too? The Hugging Face team is already on it! :)
2,175
969
muellerzr
2024-10-15T17:55:10
(Somewhat; I'm currently trying to reverse engineer a few of the ways you did it. You guys would be *much* faster at it, I imagine, if you want to beat us to it ;) As this is more than TRL, it's ground-up transformers/Trainer work, tbh, I think.)
2,175
970
danielhanchen
2024-10-15T18:23:09
:) Wrote a detailed tweet about it: https://x.com/danielhanchen/status/1846235913443262891 Also Reddit post: https://www.reddit.com/r/LocalLLaMA/comments/1g4ego7/llm_training_bug_fixes_gradient_accumulation_was/ Blog post: https://unsloth.ai/blog/gradient Also @shimmyshimmer is my brother!! :)
2,175
971
muellerzr
2024-10-15T19:08:38
Just as a fair warning, this will not be an immediate or quick fix, since essentially this means every single model's calculation is off when doing `output.loss`, and every single model will need a custom variation of CrossEntropy (and other valid loss funcs) if you do *not* calculate the loss by hand. We are working on figuring out the best solution.
2,175
972
nahidalam
2024-10-15T23:56:18
@danielhanchen from the blog: `The 2nd theory was there is in fact a bug in the loss calculation, which we find to be the case.` Is this bug specific to the `CrossEntropy` loss calculation in HF `trl`? Will it not be an issue if someone is using, say, [torch.nn.CrossEntropyLoss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html)?
2,175
973
huseinzol05
2024-10-16T01:50:11
@muellerzr, I believe this only makes sense for padding-based batches; with packing there is no 0 / pad token in the batch, so the average cross entropy is consistent.
2,175
974
danielhanchen
2024-10-16T01:59:46
@nahidalam Unfortunately this is not an HF-specific issue. The way gradient accumulation was originally done in many packages, even those that use PyTorch directly, accidentally missed accounting for ignored tokens. Using CE loss directly does not solve the issue, since mean reduction does not work and sum will cause the loss to be scaled incorrectly. @huseinzol05 Packing is also affected, albeit less so, since some people also train on completions only, which again makes the loss incorrect. @muellerzr If you guys need any help on anything, ping me!
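A small numeric sketch of the failure mode being described (illustrative only, not the actual trainer code): averaging the per-micro-batch mean losses over-weights micro-batches with few unmasked tokens, whereas the corrected behaviour normalizes by the total number of non-ignored tokens across the whole accumulated batch.

```python
import torch

# Two micro-batches with very different numbers of valid (non -100) label tokens;
# the per-token loss values are made up for illustration.
def token_losses(n_valid: int, value: float) -> torch.Tensor:
    return torch.full((n_valid,), value)  # stand-in for per-token CE losses

micro_losses = [token_losses(3, 2.0), token_losses(97, 1.0)]

# Naive gradient accumulation: mean per micro-batch, then mean over micro-batches.
naive = torch.stack([l.mean() for l in micro_losses]).mean()  # (2.0 + 1.0) / 2 = 1.5

# Corrected: sum all token losses, divide by total valid tokens across the batch.
correct = torch.cat(micro_losses).sum() / sum(l.numel() for l in micro_losses)  # 103 / 100 = 1.03

print(naive.item(), correct.item())  # the naive estimate over-weights the short sequence
```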
2,175
975
wongjingping
2024-10-16T04:21:02
Kudos @danielhanchen on the fix! Neat write-up as well! Back to the OP, I think the issue isn't with the `trl` library, but with the `transformers` library instead, because of how [SFTTrainer extends Trainer](https://github.com/huggingface/trl/blob/2ba3005d1c0a9ecec130108ef767d496b6d720cd/trl/trainer/sft_trainer.py#L67), how the loss is calculated in [Trainer's compute_loss](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3598-L3633), and how it is naively scaled by the number of steps [here](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L3596). I don't have a ton of context, but I _imagine_ the more principled solution would be to fix it within the `Trainer.compute_loss` function, vs say having `SFTTrainer` override the `compute_loss` method. Happy to assist with the transformers fix if anyone from HF would like to take me up on it 😄
2,175
976
qingjianbuyi
2024-10-28T13:57:37
Does DDP have the same issue? @danielhanchen
2,175
977
muellerzr
2024-10-28T14:33:55
Yes, DDP does. We have already documented this, and a fix is being put in. (I also have an article talking about this in more detail; tl;dr you can choose a slower option of gathering all of the input/token counts, which causes a communication that generally isn't recommended, so it's False by default.)
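A hedged sketch of the gather being referred to (not the actual Accelerate/Trainer code): each rank all-reduces its count of non-ignored tokens so the loss can be normalized by the global count rather than the local one, at the cost of one extra communication per step.

```python
import torch
import torch.distributed as dist

def global_valid_token_count(labels: torch.Tensor) -> torch.Tensor:
    """All-reduce the number of non-ignored label tokens across DDP ranks.

    Illustrative sketch only; assumes torch.distributed is initialized and
    that labels use -100 for ignored positions.
    """
    count = (labels != -100).sum()
    if dist.is_available() and dist.is_initialized():
        # The extra communication mentioned above: every rank learns the
        # global token count so the loss can be normalized consistently.
        dist.all_reduce(count, op=dist.ReduceOp.SUM)
    return count

# Usage sketch: normalize the summed token losses by the global count
# (how this interacts with DDP's gradient averaging is left to the trainer).
# loss = per_token_loss.sum() / global_valid_token_count(labels).clamp(min=1)
```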
2,175
978
burtenshaw
2024-11-25T12:29:31
Should this be closed since it's fixed in transformers? cc @qgallouedec @lewtun
2,175
979
qgallouedec
2024-11-25T20:12:04
Right @burtenshaw. Closed by https://github.com/huggingface/transformers/pull/34198
2,175
980
pminervini
2024-11-27T06:54:37
<img width="208" alt="Screenshot 2024-11-27 at 07 53 31" src="https://github.com/user-attachments/assets/077d0284-7980-407c-8c05-6db844120ed3"> Time to hit that "Close Issue" button @qgallouedec @burtenshaw! :) I thought the issue was open because of that!
2,175
981
qgallouedec
2024-11-27T07:33:43
Oops
2,175
982
surprisedPikachu007
2024-12-06T05:41:52
> @huseinzol05 Packing is also affected, albeit less so, since some people also train on completions only, which again makes the loss incorrect.

For a language modeling task, will this be a problem even if all samples in a batch have exactly the same sequence length?
2,175
983
HuggingFaceDocBuilderDev
2024-10-04T15:08:36
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2174). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,174
984
qgallouedec
2024-10-08T13:25:00
While working on this, I noticed that most of our documentation and notebooks are based on PPO and may be outdated, relying on older datasets, models, references, etc. I believe updating all of our examples and guides falls outside the scope of this PR. I recommend the following: - For now, keep the existing outdated documentation, guides, and notebooks as they are. - Begin a separate effort to rewrite these materials in future PR(s), primarily starting from scratch.
2,174
985
sygi
2024-10-18T08:36:39
Would it be possible to keep the "old" PPO code around until the new trainer achieves feature parity (in particular peft and arbitrary reward support) with the old one?
2,174
986
Void-025
2024-10-31T20:48:34
I would also like to have the old PPO code kept around. I was previously using the old `PPOTrainer.step()` function to provide my own reward values for each query/response pair, but there doesn't seem to be an equivalent in PPOv2, unless I'm missing some other way of doing this?
2,174
987
qgallouedec
2024-11-01T16:50:20
You should use trl==0.11
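For anyone pinning `trl==0.11` for this reason, a rough sketch of the old-style custom-reward loop, adapted from the legacy quickstart; the model choice and reward value are placeholders, so check the 0.11 docs for the exact signatures:

```python
# Rough sketch of the legacy PPOTrainer API available in trl==0.11.
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

config = PPOConfig(batch_size=1, mini_batch_size=1)
ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer)

query_tensor = tokenizer.encode("This morning I went to the ", return_tensors="pt")
response_tensor = ppo_trainer.generate(
    list(query_tensor), return_prompt=False, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id
)

# The reward for each query/response pair is whatever you compute yourself.
reward = [torch.tensor(1.0)]
train_stats = ppo_trainer.step([query_tensor[0]], [response_tensor[0]], reward)
```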
2,174
988
corbt
2024-12-20T01:35:04
Are there guides/documentation on how to use the new PPO trainer? I'm particularly interested in how to combine it with a custom reward function (instead of an explicit reward model). Also, I'm curious about any new benefits from this change—what are we gaining with it? Thanks!
2,174
989
HuggingFaceDocBuilderDev
2024-10-04T14:39:02
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2173). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,173
990
kashif
2024-10-04T11:08:55
awesome @ruijunfeng can we also have a test for this?
2,172
991
ruijunfeng
2024-10-04T11:17:24
> awesome @ruijunfeng can we also have a test for this?

Sure thing. I have tested it on the instruction-tuned versions of the Llama 2 and Gemma 1 series with my own dataset, and it seems to work well. Let me know if you need me to provide anything.
2,172
992
HuggingFaceDocBuilderDev
2024-10-06T19:24:25
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2172). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,172
993
kashif
2024-10-06T19:26:14
Sorry for the misunderstanding, I meant something like:

```python
class TestDataCollatorForChatML(unittest.TestCase):
    def setUp(self):
        self.tokenizer = AutoTokenizer.from_pretrained("gpt2")
        self.tokenizer.pad_token = self.tokenizer.eos_token
        self.collator = DataCollatorForChatML(tokenizer=self.tokenizer, max_length=20)

    def test_data_collator(self):
        examples = [
            {
                "messages": [
                    {"role": "user", "content": "Hello!"},
                    {"role": "assistant", "content": "Hi there! How can I help you today?"},
                    {"role": "user", "content": "What's the weather like?"},
                    {"role": "assistant", "content": "I'm sorry, but I don't have access to real-time weather information."},
                ]
            },
            {
                "messages": [
                    {"role": "user", "content": "Tell me a joke."},
                    {"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything!"},
                ]
            },
        ]

        batch = self.collator(examples)

        self.assertIn("input_ids", batch)
        self.assertIn("attention_mask", batch)
        self.assertIn("labels", batch)
        self.assertIn("prompts", batch)
        self.assertIn("prompt_attention_mask", batch)

        self.assertEqual(batch["input_ids"].shape[0], 2)
        self.assertEqual(batch["attention_mask"].shape[0], 2)
        self.assertEqual(batch["labels"].shape[0], 2)
        self.assertEqual(batch["prompts"].shape[0], 2)
        self.assertEqual(batch["prompt_attention_mask"].shape[0], 2)

        # Check if the shapes are consistent
        self.assertEqual(batch["input_ids"].shape, batch["attention_mask"].shape)
        self.assertEqual(batch["input_ids"].shape, batch["labels"].shape)
        self.assertEqual(batch["prompts"].shape, batch["prompt_attention_mask"].shape)

        # Check if the prompts are shorter than or equal to the full input
        self.assertTrue(batch["prompts"].shape[1] <= batch["input_ids"].shape[1])
```

so we can explicitly check for the incorrect data processing and the fix you so kindly provided.
2,172
994
ruijunfeng
2024-10-07T01:39:57
Hi there, I have run your test code, and I think it has a small mistake. You are using the tokenizer for GPT-2:

```python
self.tokenizer = AutoTokenizer.from_pretrained("gpt2")
```

However, GPT-2 does not have a default chat_template, so this line of `DataCollatorForChatML` will raise an error:

```python
self.tokenizer.apply_chat_template(messages, tokenize=False)
```

I believe the correct way to test this is to manually set the chat_template for the tokenizer in your setUp function, like this:

```python
tokenizer.chat_template = "{{ bos_token }}{% for message in messages %}{{ message['role'] }}: {{ message['content'] }}{% endfor %}{{ eos_token }}"
```

Alternatively, you could use a model that has been fine-tuned on instructions, such as Llama-Instruct, whose tokenizer has a default chat_template.
2,172
995
kashif
2024-10-07T07:58:01
Sorry again for the misunderstanding; what I wanted to say is that you can use the above as a template to write the tests in your PR. Also, do remember to run `make precommit` to fix the formatting, etc.
2,172
996
kashif
2024-10-08T09:02:46
@lewtun I added a test that fails on main and passes here, and @ruijunfeng I pushed it into your PR.
2,172
997
ruijunfeng
2024-10-08T12:41:20
@kashif and @lewtun, thank you both for adding the tests and comments. I’ve double-checked the tests and made updates to the assert statements and comments to improve consistency and clarity. Additionally, I noticed that the current check for the EOS token in input_ids only verifies its presence. I have modified it to ensure that the last token of input_ids is the EOS token for a more thorough check.
2,172
998
qgallouedec
2024-10-08T12:53:01
Hi, I've modified the tests to better align with the others in the lib. The test is now failing. It seems to be related to padding, which is not taken into account in the test. Am I right?
2,172
999