repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k)
created_at: stringlengths (19 to 19)
closed_at: stringlengths (19 to 19)
comments: stringlengths (0 to 293k)
transformers
18,457
closed
HFTracer.trace can now take callables and torch.nn.Module
# What does this PR do? This PR makes it possible to use the `HFTracer` "meta-tracing" features to trace any Python callable / `torch.nn.Module`. For `transformers.PreTrainedModel`s, the method `HFTracer._generate_dummy_inputs` already takes care of creating the dummy inputs needed to handle data-dependent control flow in the forward pass. Now, the user can pass `dummy_inputs` directly to the `HFTracer.trace` method in order to trace things other than `transformers.PreTrainedModel`s. This is useful for pattern matching, for instance. This becomes possible: ```python def f(x, y, z=None): temp = x * y if z is not None: temp += z return temp traced_f = HFTracer().trace(f, dummy_inputs={"x": torch.rand(1, 2), "y": torch.rand(1, 2)}) ``` By default, if `dummy_inputs` is specified, every argument to `root` that is not in `dummy_inputs` is considered a concrete arg (and thus added to `concrete_args`). You can disable that by setting `infer_concrete_args_from_dummy_inputs` to `False`. This is useful if you want to provide custom dummy inputs for some inputs, while still letting `HFTracer._generate_dummy_inputs` do the work for the other inputs (provided that `root` is a `transformers.PreTrainedModel`, since that is the only case supported for automatic dummy input generation).
08-03-2022 15:29:19
08-03-2022 15:29:19
_The documentation is not available anymore as the PR was closed or merged._
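For illustration, here is a minimal sketch of how the API described in this PR could be applied to a small `torch.nn.Module`; the `dummy_inputs` keyword follows the PR description (argument names may differ in released versions), and the module itself is made up.

```python
import torch
from torch.fx import GraphModule
from transformers.utils.fx import HFTracer


class TinyBlock(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x, bias=None):
        out = self.linear(x)
        # Data-dependent control flow that plain symbolic tracing cannot
        # resolve without dummy/concrete inputs.
        if bias is not None:
            out = out + bias
        return out


module = TinyBlock()
tracer = HFTracer()
# Per the PR description: dummy inputs drive the meta-tracing, and any
# argument not listed here ("bias") is treated as a concrete arg.
graph = tracer.trace(module, dummy_inputs={"x": torch.rand(2, 4)})
traced = GraphModule(module, graph)
print(traced.code)
```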
transformers
18,456
closed
fix ONNX support for bloom
Merged into https://github.com/huggingface/transformers/pull/18344. This PR aims to fix the ONNX export of BLOOM. All of the following tests are passing: ``` RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k "bloom" RUN_SLOW=1 pytest tests/models/bloom/test_modeling_bloom.py ```
08-03-2022 15:23:14
08-03-2022 15:23:14
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,455
closed
understand differences in tokenization
Hi, I'm trying to understand the tokenization in `MarianTokenizer`. I run the following code: ```python from transformers import MarianMTModel, MarianTokenizer model_name = 'Helsinki-NLP/opus-mt-en-de' tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) tokenizer.tokenize("doctor") # ['▁doctor'] with tokenizer.as_target_tokenizer(): indices = tokenizer("doctor", return_tensors="pt", padding=True)['input_ids'][0] tokens = tokenizer.convert_ids_to_tokens(indices) indices # tensor([ 156, 24889, 0]) tokens # ['▁do', 'ctor', '</s>'] ``` Can someone please explain the difference between the tokenization done under `tokenizer.as_target_tokenizer()` and the tokenization of `tokenizer.tokenize()`? And which one is actually used when translating? Each one gives a different segmentation and different indices. Thank you, Bar
08-03-2022 15:10:50
08-03-2022 15:10:50
This is because the languages you're translating between (English and German in this case) have different tokenization vocabularies. This implies that text will get tokenized differently. MarianMT models have seq2seq (encoder-decoder) architectures, and the encoder and decoder each have their own embedding matrix. This means that the encoder will have an embedding vector for the token '▁doctor', whereas the decoder will learn an embedding vector for the token '▁do', an embedding vector for the token 'ctor', etc. Tokenization vocabularies are typically built per language (although models like BLOOM just have one large vocabulary for all language tokens).<|||||>Thank you very much for your answer. Is it per language? Do some languages not have a separate embedding matrix for the encoder and the decoder? Is there a way to know in advance which languages have separate matrices and which don't? Thanks, Bar <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge Can you please explain a confusion I have? Let's say I have an [en_ur translation](https://huggingface.co/Helsinki-NLP/opus-mt-en-ur) model. I can see both Urdu and English words in the vocab. This is how the data is trained: ``` model_inputs = tokenizer(inputs, max_length=max_Length, truncation=True) # Setup the tokenizer for targets with tokenizer.as_target_tokenizer(): labels = tokenizer(targets, max_length=max_Length, truncation=True) ``` So when I use as_target_tokenizer, does it read the same vocab file? If yes, then why do we need this line, given that it's the only vocab file?<|||||>When using the `as_target_tokenizer` context manager, it will use the target vocabulary to tokenize the input sentence (rather than the source vocabulary). However, in v4.22 we deprecated this context manager. Now ``` with tokenizer.as_target_tokenizer(): encoded_labels = tokenizer(labels, padding=True) ``` can be replaced by: ``` encoded_labels = tokenizer(text_target=labels, padding=True) ```
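As a self-contained sketch of the comparison discussed above, using the `text_target` argument that replaced the context manager in v4.22 (the checkpoint is the one from the question; the exact token splits shown in the comments are taken from the thread and may vary):

```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")

# Source (English) vocabulary
source_ids = tokenizer("doctor", return_tensors="pt").input_ids[0]
print(tokenizer.convert_ids_to_tokens(source_ids))   # e.g. ['▁doctor', '</s>']

# Target (German) vocabulary, as used for labels
target_ids = tokenizer(text_target="doctor", return_tensors="pt").input_ids[0]
print(tokenizer.convert_ids_to_tokens(target_ids))   # e.g. ['▁do', 'ctor', '</s>']
```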
transformers
18,454
closed
disable Onnx test for google/long-t5-tglobal-base
# What does this PR do? For `("longt5", "google/long-t5-tglobal-base")`, we get ```bash Floating point exception (core dumped) ``` in this call https://github.com/huggingface/transformers/blob/fc546332d7a9395323f656635362c9e0f3c4161a/src/transformers/onnx/convert.py#L404 Let's disable it for now, so other Onnx tests could be run. [Failed job run](https://github.com/huggingface/transformers/runs/6892306185?check_suite_focus=true)
08-03-2022 14:25:02
08-03-2022 14:25:02
Adding @regisss as a reviewer as this is suggested by GitHub automatically 😄 <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I will tag lewis when he is back.<|||||>Hi @lewtun ! Could you take a look at this ONNX test? Thank you.
transformers
18,453
closed
Add zero-shot obj detection notebook to docs
Adds OWL-ViT demo notebook links to the official notebooks docs. I'm currently working on adding TF support for this model and we'll be promoting it soon.
08-03-2022 13:54:49
08-03-2022 13:54:49
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for adding! Although I think we need to split up that long list of notebooks by modality/task. I agree! I will add another PR to organize the notebooks page.
transformers
18,452
closed
Unable to Infer on Bloom Model-2b5 using Deepspeed
### System Info I was able to load the Bloom-2b5 model in my Colab notebook for text generation (inference). When I use DeepSpeed to load the model and try to run inference, the memory is not sufficient. I don't understand this, because with the help of DeepSpeed I should be able to load a larger model, or at least the same model. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Load the 2b5 model - https://huggingface.co/bigscience/bloom-2b5 2. Infer with and without DeepSpeed - https://huggingface.co/docs/transformers/main_classes/deepspeed ### Expected behavior CUDA error: insufficient memory
08-03-2022 13:28:27
08-03-2022 13:28:27
Hey @Ravisankar13, could you please provide the script you have used, the deepspeed config, as well as the full stacktrace? It will be hard to help you with so little information. cc @stas00 <|||||>@Ravisankar13, please see the work-in-progress here: https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/308 You have a variety of different working solutions there. we will soon move those here.<|||||>Thanks for your response. Let me try them and get back to you<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
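For reference, a minimal sketch of what a DeepSpeed-Inference script for this checkpoint might look like. The `init_inference` arguments (`mp_size`, `dtype`, `replace_with_kernel_inject`) are assumptions based on the DeepSpeed inference API of that era and may need adjusting for the installed version; this is a starting point, not the configuration the reporter used.

```python
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-2b5"  # the checkpoint from the issue
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Wrap the model with DeepSpeed-Inference; kernel injection replaces the
# attention/MLP blocks with fused kernels and shards across `mp_size` GPUs.
ds_engine = deepspeed.init_inference(
    model,
    mp_size=1,
    dtype=torch.float16,
    replace_with_kernel_inject=True,
)

inputs = tokenizer("DeepSpeed is", return_tensors="pt").to("cuda")
outputs = ds_engine.module.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```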
transformers
18,451
closed
TF Examples Rewrite
This PR is a rewrite of the TF examples, including several modern methods. I'm focusing on updating everything to use modern methods like `prepare_tf_dataset` and the `evaluate` library as well as adding features and functionality I missed when I first ported them, since `transformers` TF support was much shakier when these were first written (and I didn't know the library as well). Just a draft for now, will ping reviewers when it's ready! TO DO: - [x] Draft rewrite for all scripts - [x] Test run all scripts - [x] Make sure we're handling batch sizes correctly in multi-GPU/TPU scopes - [x] Make sure we're correctly using AdamW + LR decay everywhere - [x] Make sure we're using `evaluate` instead of `load_metric` - [x] Add metadata for `push_to_hub` - [x] Add explanatory comments for things like `KerasMetricCallback`, `jit_compile` and `PushToHubCallback` where appropriate - [x] Replace all the old ad-hoc data loading code with `prepare_tf_dataset` - [x] Add explanatory links to the docs whenever we use `prepare_tf_dataset` - [x] Add example tests - [x] Make sure there's no case where we pass `optimizer=None` to `compile()` - [ ] Final manual testing - [x] ~Add HF metrics like MaskedAccuracy?~ Fixes #18334
08-03-2022 13:15:31
08-03-2022 13:15:31
_The documentation is not available anymore as the PR was closed or merged._<|||||>This is now ready for review @sgugger @gante! I'm tracking down a couple of remaining bugs in the tests and doing some final manual checks, but almost everything should be finished by now. I realize it's a very large PR, but you can see from the checklist above what the main changes are.<|||||>@sgugger Tests are now enabled in `config.yml` and everything still looks green!<|||||>That might because your new job did not run ;-) You need to add at the end [here](https://github.com/huggingface/transformers/blob/d7e2d7b40b1070cddfe878e13705725f49a2cf1f/.circleci/config.yml#L1000) for the one at each commit and [there](https://github.com/huggingface/transformers/blob/d7e2d7b40b1070cddfe878e13705725f49a2cf1f/.circleci/config.yml#L1024) for the nigthly one ;-)<|||||>![image](https://user-images.githubusercontent.com/12866554/183910275-b295a0ef-f39e-47fe-b3a0-5001e1ecddb3.png) <|||||>@sgugger tests are now actually passing! I had to skip one - it fails because of a known issue with shape inference on small datasets in `to_tf_dataset`. There is a PR to fix that at https://github.com/huggingface/datasets/pull/4763 , we just need to wait for that to be merged before we can re-enable the test!
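For context on the methods named in the checklist above, here is a hedged sketch of the pattern the rewritten scripts converge on (`prepare_tf_dataset`, `create_optimizer`, `evaluate`, `KerasMetricCallback`), using an SST-2-style setup invented for this example; the real example scripts handle many more options.

```python
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    TFAutoModelForSequenceClassification,
    KerasMetricCallback,
    create_optimizer,
)

checkpoint = "distilbert-base-uncased"  # assumption: any TF-compatible checkpoint works
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

raw = load_dataset("glue", "sst2")
tokenized = raw.map(lambda ex: tokenizer(ex["sentence"], truncation=True), batched=True)

# prepare_tf_dataset replaces the old ad-hoc tf.data code: it selects the right
# columns, pads batches and handles shuffling.
train_set = model.prepare_tf_dataset(tokenized["train"], batch_size=16, shuffle=True, tokenizer=tokenizer)
eval_set = model.prepare_tf_dataset(tokenized["validation"], batch_size=16, shuffle=False, tokenizer=tokenizer)

# AdamW with linear LR decay, as required by the checklist above.
optimizer, _ = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=len(train_set) * 3)
model.compile(optimizer=optimizer)  # loss is computed internally by the model

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_predictions):
    logits, labels = eval_predictions
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

callbacks = [KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=eval_set)]
model.fit(train_set, validation_data=eval_set, epochs=3, callbacks=callbacks)
```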
transformers
18,450
closed
[WIP] Add TF support for OWL-ViT
Adds TensorFlow support for the [OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit) model. - Creates transformers/models/owlvit/modeling_tf_owlvit.py - Creates tests/models/owlvit/test_modeling_tf_owlvit.py
08-03-2022 13:09:13
08-03-2022 13:09:13
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18450). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,449
closed
Bugfix for the BLOOM model: a tensor is not moved to the right GPU, causing an error.
# What does this PR do? In the implementation of the BLOOM model, on line 307, a tensor is created but not moved to a device. By default it lives on the CPU, so if someone wants to use a GPU this causes the code to throw an error. @patrickvonplaten, @LysandreJik ## Before submitting - [N] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [Y] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [N] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [N] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [N] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-03-2022 11:11:29
08-03-2022 11:11:29
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18449). All of your documentation changes will be reflected on that endpoint.
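The class of bug fixed here is generic, so a small sketch of the pattern (with a hypothetical module, not the actual BLOOM code) may help: tensors created inside `forward` should inherit the device, and usually the dtype, of an existing input tensor instead of defaulting to the CPU.

```python
import torch


class AlibiLike(torch.nn.Module):
    """Hypothetical module that builds an auxiliary tensor inside forward()."""

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        seq_len = hidden_states.shape[1]
        # Buggy version: torch.arange(seq_len) lives on the CPU and raises a
        # device-mismatch error as soon as hidden_states is on a GPU.
        positions = torch.arange(seq_len, device=hidden_states.device, dtype=hidden_states.dtype)
        return hidden_states + positions[None, :, None]


module, x = AlibiLike(), torch.randn(1, 8, 4)
if torch.cuda.is_available():
    module, x = module.cuda(), x.cuda()
print(module(x).shape)  # torch.Size([1, 8, 4]) on CPU or GPU alike
```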
transformers
18,448
closed
Update pinned hhub version
# What does this PR do? This PR updates the `huggingface_hub` pinned version. https://github.com/huggingface/transformers/pull/18366 uses new functionality from the library that is not available in previous versions, so we need to upgrade the pin here.
08-03-2022 11:00:33
08-03-2022 11:00:33
_The documentation is not available anymore as the PR was closed or merged._<|||||>I still need to update https://github.com/huggingface/transformers/blob/main/src/transformers/dependency_versions_table.py, will do once back in laptop and let you know<|||||>Thank you :hugs:
transformers
18,447
closed
'MarianTokenizer' object has no attribute 'target_encoder'
### System Info - `transformers` version: 4.19.0.dev0 - Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1 - Python version: 3.7.3 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu102 (False) - Tensorflow version (GPU?): 2.8.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @patil-suraj @SaulLu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python model_name = 'Helsinki-NLP/opus-mt-en-he' tokenizer = MarianTokenizer.from_pretrained(model_name) tokenizer.get_tgt_vocab() ``` ### Expected behavior I expected to get the target vocab, but instead I got the error `AttributeError: 'MarianTokenizer' object has no attribute 'target_encoder'`. I need to find a way to separate the vocab into source and target vocabs instead of the current vocab, which contains a mix of both languages.
08-03-2022 10:50:44
08-03-2022 10:50:44
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,446
closed
Add depth estimation pipeline
### Feature request We currently have 2 monocular depth estimation models in the library, namely [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) and [GLPN](https://huggingface.co/docs/transformers/model_doc/glpn). It would be great to have a pipeline for this task, with the following API: ``` from transformers import pipeline pipe = pipeline("depth-estimation") pipe("cats.png") ``` This pipeline could default to the https://huggingface.co/Intel/dpt-large checkpoint. Also check out the [Space](https://huggingface.co/spaces/nielsr/dpt-depth-estimation) that showcases the model. This can be implemented similar to [other pipelines](https://github.com/huggingface/transformers/tree/main/src/transformers/pipelines). For an example PR that added a pipeline, see https://github.com/huggingface/transformers/pull/11598. ### Motivation Pipelines are a great way to quickly perform inference with a model for a given task, abstracting away all the complexity. ### Your contribution I can assist with this, together with @Narsil.
08-03-2022 10:35:57
08-03-2022 10:35:57
What would the output look like, @NielsRogge? My understanding is that depth is just a grayscale image (black = infinitely far, white = infinitely close). If that's the case, it seems really close to `image-segmentation` in the sense that it's generating a new image from the original image, so we should try and reuse as much as possible. Also, maybe we could have something like `image-generation` to try and keep the name generic? (And have an alias for `depth-estimation`, for instance?) <|||||>Hi @NielsRogge I would like to add this pipeline. <|||||>Hi @Narsil, I'm not sure whether we should add this to the existing `image-segmentation` pipeline. Depth estimation is basically pixel regression, rather than pixel classification (the latter is image segmentation). It would be quite confusing to add it there. Depth estimation is quite a different field, see e.g. https://paperswithcode.com/task/depth-estimation And hi @nandwalritik, thanks for your interest in this. Feel free to start a draft PR.<|||||>Thanks, I will start working on it.<|||||>> I'm not sure whether we should add this to the existing image-segmentation pipeline. I said we should take inspiration from it, not reuse it, but I suggested using an `image-generation` one. (Just to be slightly more general.) The output is a grayscale image, right?
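For context, a rough sketch of the bare-model inference such a pipeline would wrap, using the DPT checkpoint suggested above; the interpolation and rescaling at the end follow common practice rather than anything prescribed by this issue, and newer versions expose `DPTImageProcessor` instead of the feature extractor class used here.

```python
import torch
import requests
from PIL import Image
from transformers import DPTFeatureExtractor, DPTForDepthEstimation

feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-large")
model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    predicted_depth = model(**inputs).predicted_depth  # (batch, height, width)

# Resize to the original image and rescale to 0-255 for visualization.
depth = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1), size=image.size[::-1], mode="bicubic", align_corners=False
).squeeze()
depth_image = Image.fromarray((depth / depth.max() * 255).cpu().numpy().astype("uint8"))
depth_image.save("depth.png")
```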
transformers
18,445
closed
Add zero-shot object detection pipeline
### Feature request We currently have [OWL-ViT](https://huggingface.co/docs/transformers/main/model_doc/owlvit) in the library, which is capable of performing zero-shot object detection. It would be great to have a pipeline for this task, with the following API: ``` from transformers import pipeline pipe = pipeline("zero-shot-object-detection") pipe("cats.png", ["cat", "remote"]) ``` This pipeline could default to the https://huggingface.co/google/owlvit-base-patch32 checkpoint. Also check out the [demo notebook](https://github.com/huggingface/notebooks/blob/main/examples/zeroshot_object_detection_with_owlvit.ipynb) that showcases the model. This can be implemented similar to [other pipelines](https://github.com/huggingface/transformers/tree/main/src/transformers/pipelines) (we already have one for zero-shot image classificaiton with CLIP, so it would be very similar to that one). For an example PR that added a pipeline, see https://github.com/huggingface/transformers/pull/11598. ### Motivation Pipelines are great for abstracting away all the complexity for quick inference with a model. ### Your contribution I can assist with this, together with @Narsil.
08-03-2022 10:30:37
08-03-2022 10:30:37
As seen with @alaradirik this morning, this could also leverage the custom pipeline feature that was implemented last week, especially if this pipeline works with a very limited number of architectures. cf https://github.com/huggingface/transformers/pull/18079<|||||>cc @alaradirik <|||||>Can I take this up and work on it?<|||||>Hi @MocktaiLEngineer! Of course, you can also ping @NielsRogge, @Narsil or me if you need any help or have any questions.<|||||>cc @sgugger as we chatted about it as well <|||||>I was going to do a custom pipeline on this today actually, as the dev advocates want more examples of it :-)<|||||>Hi @NielsRogge , if no one is working on it, can I take this up?<|||||>Hi @sahamrit, I don't think anyone is working on this right now but I'd need to double check with @NielsRogge and @sgugger <|||||>Yes, you can take a stab at it. Pinging @OlivierDehaene, who might be able to provide guidance too.
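For reference, a hedged sketch of the raw OWL-ViT inference a `zero-shot-object-detection` pipeline would wrap; the confidence threshold is arbitrary, and the processor's post-processing helper (whose name has changed across versions) is deliberately avoided in favor of reading `logits` and `pred_boxes` directly.

```python
import torch
import requests
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["a photo of a cat", "a photo of a remote"]

inputs = processor(text=[candidate_labels], images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# logits: (batch, num_boxes, num_queries); pred_boxes: (batch, num_boxes, 4), normalized cx/cy/w/h
scores = outputs.logits[0].sigmoid()
best_scores, best_queries = scores.max(dim=-1)   # best text query for each predicted box
keep = best_scores > 0.1                          # illustrative threshold
for box, score, query in zip(outputs.pred_boxes[0][keep], best_scores[keep], best_queries[keep]):
    print(candidate_labels[query.item()], round(score.item(), 3), [round(v, 3) for v in box.tolist()])
```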
transformers
18,444
closed
Add stop sequence to text generation pipeline
# What does this PR do? As per the conversation in https://github.com/huggingface/transformers/issues/17562, creating this draft PR to add a stop_sequence option to text generation pipelines. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @Narsil Models: All Library: - text generation: @patrickvonplaten - pipelines: @LysandreJik
08-03-2022 08:36:59
08-03-2022 08:36:59
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @Narsil. I've managed to get this working for greedy decoding and multimodal sampling. For beam-search, what would be the best approach to deal with a stop_sequence? I've assumed that if a stop_sequence appears in any of the beams then we stop the generation process. Should it instead be that we wait until each beam reaches the stop_sequence or any other stopping criteria before stopping the generation process?<|||||>> Should it instead be that we wait until each beam reaches the stop_sequence or any other stopping criteria before stopping the generation process? @KMFODA I think `eos_token_id` is already handled for beam search, see my comment on the `StoppingCriteria`. I will let others comment on the best way to do this in `.generate` but I think we don't need the criteria, just let `eos_token_id` regular logic apply (it's handled separately from `StoppingCriteria`).<|||||>For the tests removing the breakpoint should help then for code quality. ``` pip install -e .[quality] make fixup ``` Should do the trick.<|||||>@Narsil @KMFODA I'm in favor of moving it to a `StoppingCriteria`, so that all conditions that can terminate generation fall under the same class. However, it should be noted that it is not a requirement to complete the issue, i.e. to add a stop sequence to the text generation pipeline :P It is already implemented on the multiple generation strategies (e.g. [here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation_utils.py#L1744) for greedy search). Also, the existing implementation is different from the current PR -- the existing implementation only checks whether the `eos_token` is present in newly generated tokens. This is because models like `GPT-2` often set `pad_token_id` to `eos_token_id`, and we don't want the pad tokens to trigger this condition.<|||||>Thanks @Narsil @gante. Okay so for the sake of deploying iteratively I've removed the `eos_token_id` from the `StoppingCriteria` and will add it as a separate PR. I've added a test for the `stop_sequence` being fed in at the pipeline level. When @Narsil's comment around wether the stop sequence should be handled in the `pipeline` or in the `generation_kwargs` is addressed I can alter this test accordingly.<|||||>> We should implement `stop_sequence` only once (probably in `generate`) but we could have 2 tests if you want to test the full pipeline too. (Probably in `tests/pipelines/test_pipelines_text_generation.py` for instance.) If we were to move `stop_sequence` to be in `generate` wouldn't we have to tokenise it first. In that case what's the reasoning behind feeding it as a `stop_sequence` instead of a `eos_token_id`?<|||||>> If we were to move stop_sequence to be in generate wouldn't we have to tokenise it first. In that case what's the reasoning behind feeding it as a stop_sequence instead of a eos_token_id? You're entirely right, oversight on my part. `eos_token_id` already does the job. So we just need to implement `stop_sequence` in the pipeline to tokenize the `stop_sequence` and produce the `eos_token_id` and just feed it to generate. So no additional code in `generate` should be needed actually. Sorry, failed to see that. <|||||>No problem I've just moved the stop_sequence back to the pipeline function and added the tests you requested in the `tests/pipelines/test_pipelines_text_generation.py` folder. This should make this PR ready for review now. 
When I was playing with the stop_sequence though I found that sometime when I add a specific stop_sequence the output changes and avoids mentioning the word entirely. I don't have live examples now but I just wanted to check if this is normal behaviour? If not I can find examples on public models and share it in a different issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@KMFODA I think your PR is almost ready to be merged! Would you like to try to fix the final problems and apply the review suggestions? :-) <|||||>Hey @patrickvonplaten. My apologies I was out sick over the past month. I worked on the suggestions now. Hopefully this should be good to merge now but if not let me know!<|||||>I'm happy with the PR, except for the `EndOfStringCriteria` class -- it is not being used, and it is not a good practice to add unused classes/functions. @KMFODA can you remove it for now, and perhaps reintroduce it in a follow-up PR (with use cases)? :) <|||||>Hi @gante yes of course. I had removed it locally but somehow the changes didn't push through with one of the commits. Forced changed it now. Hopefully that looks good now :).
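To summarize the approach the thread converges on, here is a hedged sketch of how a pipeline-level `stop_sequence` can be reduced to an `eos_token_id` before calling `generate`; it assumes the stop sequence maps to a single token, and the checkpoint and prompt are only illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

stop_sequence = "\n"  # illustrative stop sequence
stop_ids = tokenizer.encode(stop_sequence, add_special_tokens=False)
if len(stop_ids) > 1:
    # The eos_token_id trick only covers single-token stop sequences.
    print("Warning: stop sequence tokenizes to several tokens; only the first is used.")

inputs = tokenizer("A list of colors: red, blue", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    eos_token_id=stop_ids[0],             # generation stops once this token is produced
    pad_token_id=tokenizer.eos_token_id,  # silences the missing-pad-token warning for GPT-2
)
print(tokenizer.decode(outputs[0]))
```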
transformers
18,443
closed
Update no trainer scripts for language modeling and image classification examples
# What does this PR do? Fixes #18437 Updated the no_trainer scripts for `examples/pytorch/image-classification/run_image_classification_no_trainer.py`, `examples/pytorch/language-modeling/run_clm_no_trainer.py` and `examples/pytorch/language-modeling/run_mlm_no_trainer.py` to include `gather_for_metrics`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @muellerzr @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-03-2022 06:20:51
08-03-2022 06:20:51
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks again for your contribution!
transformers
18,442
closed
Update perf_train_gpu_one.mdx
# What does this PR do? Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-03-2022 05:21:27
08-03-2022 05:21:27
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,441
closed
Conversion from TF BERT Checkpoint to HF Model Breaks
### System Info I'm trying to convert the BERT TF1 checkpoint provided in [MLPerf GDrive](https://drive.google.com/drive/folders/1oQF4diVHNPCclykwdvQJw8n_VIWwV0PT?usp=sharing) to a HF BERT model using the following transformer-cli command provided in [HF documentation](https://huggingface.co/docs/transformers/converting_tensorflow_models): ``` transformers-cli convert --model_type bert --tf_checkpoint model.ckpt-28252 --config bert_config.json --pytorch_dump_output pytorch_model.bin ``` but it breaks with the following error: ``` Traceback (most recent call last): File "/workdisk/vlad/composer_venv/bin/transformers-cli", line 8, in <module> sys.exit(main()) File "/workdisk/vlad/composer_venv/lib/python3.9/site-packages/transformers/commands/transformers_cli.py", line 55, in main service.run() File "/workdisk/vlad/composer_venv/lib/python3.9/site-packages/transformers/commands/convert.py", line 103, in run convert_tf_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output) File "/workdisk/vlad/composer_venv/lib/python3.9/site-packages/transformers/models/bert/convert_bert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch load_tf_weights_in_bert(model, config, tf_checkpoint_path) File "/workdisk/vlad/composer_venv/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 172, in load_tf_weights_in_bert if pointer.shape != array.shape: File "/workdisk/vlad/composer_venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1177, in __getattr__ raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'Embedding' object has no attribute 'shape' ``` ### Who can help? @LysandreJik @Rocketknight1 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. Download the following 4 files from [MLPerf GDrive](https://drive.google.com/drive/folders/1oQF4diVHNPCclykwdvQJw8n_VIWwV0PT?usp=sharing) and put them in the same directory: - tf1_ckpt/model.ckpt-28252.data-00000-of-00001 - tf1_ckpt/model.ckpt-28252.index - tf1_ckpt/model.ckpt-28252.meta - bert_config.json 2. Run the command to convert TF checkpoint to HF BERT model: ``` transformers-cli convert --model_type bert --tf_checkpoint model.ckpt-28252 --config bert_config.json --pytorch_dump_output pytorch_model.bin ``` ### Expected behavior The command should convert TF checkpoint to HF BERT model.
08-03-2022 00:23:49
08-03-2022 00:23:49
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,440
closed
Fix model list
This PR moves GroupViT and LXMert to their correct sections. As pointed out by @NielsRogge and @LysandreJik, GroupViT and LXMert are both multimodal models.
08-02-2022 23:49:54
08-02-2022 23:49:54
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,439
closed
Integrate FlashAttention into HF OPT
Integrate FlashAttention. - Requires https://github.com/pytorch/pytorch/pull/81434 to work: `torch._scaled_dot_product_attention` is only available there. - Turn the fast path on, or fall back to the slow path, with the `fast_attention=True/False` flag. - Turn the causal mask on or off for the fast attention path with `fast_attention_causal=True/False`. - Does not support an attention mask or padding mask on the fast path. - Currently requires an unnecessary conversion to NestedTensor and back, because the current FlashAttention implementation only takes NestedTensor. This will be removed once `torch._scaled_dot_product_attention` supports regular tensors.
08-02-2022 22:56:49
08-02-2022 22:56:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18439). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, is there any updates? Coming from https://github.com/HazyResearch/flash-attention/blob/main/usage.md<|||||>Looking forward to the update!<|||||>> Looking forward to the update! Hey there @puyuanOT! Not working on this actively anymore. Check out [torch SDP](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html#:~:text=Scaled%20dot%20product%20attention%20attempts,for%20enabling%20and%20disabling%20implementations.) to use FlashAttn in native torch! <|||||>Thanks @erichan1 ! I will check it out.<|||||>@erichan1 Could you explain the reason for stopping to work on this feature? I think it would be a great implementation for the transformers library. Regarding the torch SDP link, could you give instructions on how to use this torch feature when using a model in Huggingface transformers? Edit: Is it the case that flash attention is now activated by default with recent versions of torch? If so, I would recommend a HuggingFace blog article to advertise this feature and explain its workings. Currently documentation is rather lacking on flash-attention support.<|||||>Within the Hugging Face ecosystem, it's possible to use BetterTransformer and the optimum library to improve model performance: [[1](https://huggingface.co/docs/optimum/bettertransformer/tutorials/convert)], [[2](https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2)]. @younesbelkada Is flash attention available yet through this? <|||||>@amyeroberts @vincentmin I'm from the PyTorch team. We decided that the best way to provide FlashAttention was to create a new module that was just the component FlashAttention covers, [Scaled Dot Product Attention](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html#:~:text=Scaled%20dot%20product%20attention%20attempts,for%20enabling%20and%20disabling%20implementations.). This is the part which does softmax(Q@K)@V, and doesn't include the in projection and out projection. Since we built this abstraction, we also decided that we could use it to offer some other implementations of SDP, including a memory efficient one that we've built in house which uses less memory than FlashAttn, but is slower. You can just directly use SDP by replacing the necessary chunk of code in your transformer definition. But I'm unsure about a way to use it with a flag you flip in HuggingFace. I'll let @younesbelkada speak to that. I believe BetterTransformer and SDP (which is part of BetterTransformer) support is already part of Optimum. <|||||>@erichan1 @amyeroberts Thank you for the clarifications. I now understand that BetterTransformer should offer the features I am looking for. I encourage you to write a blog post on Huggingface to advertise this to the world!<|||||>Hi @erichan1 @amyeroberts @vincentmin This is correct, SDPA is now part of the optimum's `BetterTransformer` API, however this is only available for decoder-based models right now. 
We are indeed planning to write a blogpost with PyTorch to publicly announce the feature soon. We will keep you posted here!<|||||>Hi, any recent updates on this blogpost for `BetterTransformer` that you mentioned earlier?<|||||>Hi @KatarinaYuan Yes, the blogpost is out and is here: https://pytorch.org/blog/out-of-the-box-acceleration/<|||||>Thank you!<|||||>I use the Transformers Trainer + FSDP LLaMA training options; the model cannot be saved, and I am unable to use `BetterTransformer.reverse()` to convert back to the original model. I don't know how to deal with this problem.<|||||>Are there any updates on the integration of FlashAttention into HuggingFace Transformers?<|||||>@EwoutH FlashAttention should be used as a backend for torch SDPA, which is itself integrated into the `BetterTransformer` API. Make sure to install the latest transformers and optimum libraries and run: ```python model = model.to_bettertransformer() ``` Check the blogpost: https://pytorch.org/blog/out-of-the-box-acceleration/ for reference cc @fxmarty as well<|||||>Is BetterTransformer up to date with FlashAttention v2?<|||||>Hi, BetterTransformer integrates with PyTorch SDPA (for now), and PyTorch has not integrated FlashAttention v2 yet: https://github.com/pytorch/pytorch/pull/105602. Hopefully it will be there in PyTorch 2.1.
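Two of the paths mentioned in this thread, sketched side by side: calling `torch.nn.functional.scaled_dot_product_attention` directly (PyTorch 2.0+) and converting a Transformers model with `to_bettertransformer()` (which requires `optimum`); the shapes and checkpoint are illustrative only.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM

# 1) Direct use of PyTorch SDPA, which dispatches to fused kernels
#    (including FlashAttention) when the inputs allow it.
batch, heads, seq, head_dim = 2, 12, 128, 64
q = torch.randn(batch, heads, seq, head_dim)
k = torch.randn(batch, heads, seq, head_dim)
v = torch.randn(batch, heads, seq, head_dim)
attn_out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(attn_out.shape)  # torch.Size([2, 12, 128, 64])

# 2) BetterTransformer conversion of a Transformers model (pip install optimum).
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model = model.to_bettertransformer()  # swaps attention for the SDPA-backed implementation
```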
transformers
18,438
closed
Use new huggingface_hub tools for download models
# What does this PR do? This PR migrates Transformers to fully rely on `huggingface_hub` for the internal download and cache of all objects in Transformers (models, configs, tokenizers, feature extractors etc.). To achieve this, a new function `cached_file`, which relies on `hf_hub_download`, is introduced to replace the old `cached_path`. The whole refining of exceptions is left in this function, which allows for a lot of refactoring of duplicate code across files, as well as removing some ugly try/except chains when we try several files in a row. The cache is in the new format of huggingface_hub after this PR. To avoid breaking changes, a script automatically converts the cache of users from the old format to the new format at the first transformers import. A warning is raised for offline users, and if the move fails for any reason or is interrupted, the user can still try later on with `transformers.util.move_cache()` (did not make a CLI command of it, but it's doable if we want). **Note:** To avoid this PR being too heavy, not all uses of `cached_path` and other old hub utils are changed, only the main ones in the `from_pretrained` methods. A follow-up PR will hunt all remaining instances and remove those utils from the lib. In the CI we trust! As you can see from the tests, this all comes at zero breaking changes (detected by the CI). The only modifications to the tests are small adaptations needed for some mock tests simulating no connection. Anticipated small breaking changes are: - some error messages have slightly changed. - if a user relied on offline mode and does not update their cache while updating Transformers, it will break. They need to be online to convert their cache to the new format. For full backward compatibility with what `cached_path` used to do though, a few hacks were necessary. Some of those can be removed in the future if changes are made to `huggingface_hub`: 1. having methods to allow for enabling/disabling progress bars 2. throwing a `FileNotFoundError` instead of a `ValueError` when in offline mode and the file is not in the cache 3. having `hf_hub_download` look for files in the cache in case of connection errors (so if a user has the file cached and hf.co is down, they still get their last downloaded version). For 1, I had to write a context manager that patches huggingface_hub. For 2, I match the exact error message for the exception I want to catch, but if no change is made in hf hub, we'll at least need a comment in bold telling the maintainers there to never update the message. For 3, a new function `try_to_load_from_cache` is created, which can definitely live in Transformers forever if it's not deemed suitable for `huggingface_hub`.
08-02-2022 21:35:47
08-02-2022 21:35:47
_The documentation is not available anymore as the PR was closed or merged._<|||||>Yay! TYSM!!!
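As a small sketch of the `huggingface_hub` primitive this PR builds on: `hf_hub_download` resolves a file from the Hub into the shared cache and returns the local path, which is roughly what the new `cached_file` helper wraps with Transformers-specific error handling on top.

```python
from huggingface_hub import hf_hub_download

# Downloads (or reuses the cached copy of) the config file for a model repo.
config_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(config_path)  # path inside the huggingface_hub cache

# Once cached, the same file can be resolved without network access:
offline_path = hf_hub_download(
    repo_id="bert-base-uncased", filename="config.json", local_files_only=True
)
```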
transformers
18,437
closed
Update no_trainer scripts to include gather_for_metrics
### Feature request 🤗 Accelerate has a wrapper to help with distributed metric calculation (a tough problem!), and the `no_trainer` scripts should be updated to include it! An example can be seen [here](https://github.com/huggingface/accelerate/blob/main/examples/nlp_example.py#L163-L169), below is an example diff of what the integration would look like: ```diff - predictions, references = accelerator.gather((predictions, batch["labels"])) - # If we are in a multiprocess environment, the last batch has duplicates - if accelerator.num_processes > 1: - if step == len(eval_dataloader) - 1: - predictions = predictions[: len(eval_dataloader.dataset) - samples_seen] - references = references[: len(eval_dataloader.dataset) - samples_seen] - else: - samples_seen += references.shape[0] + predictions, references = accelerator.gather_for_metrics((predictions, batch["labels"])) ``` The list of available scripts to update include: - [x] examples/pytorch/image-classification/run_image_classification_no_trainer.py - [x] examples/pytorch/language-modeling/run_clm_no_trainer.py - [x] examples/pytorch/language-modeling/run_mlm_no_trainer.py - [x] examples/pytorch/multiple-choice/run_swag_no_trainer.py - [x] examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py - [x] examples/pytorch/question_answering/run_qa_no_trainer.py - [x] examples/pytorch/semantic-segmentation/run_semantic_segmentation_no_trainer.py - [x] examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py - [x] examples/pytorch/summarization/run_summarization_no_trainer.py ### Motivation This is a great first issue for someone who wants to learn how to use some of the latest bits in Accelerate and get an easy beginner contribution to the library 🤗 ### Your contribution If you decide to pick up this issue, feel free to ping myself (@muellerzr), @sgugger, or @pacman100 to review 🤗
08-02-2022 21:22:29
08-02-2022 21:22:29
Hi @muellerzr opened PR #18443 for first three examples in the list.<|||||>Hi @muellerzr I opened this PR https://github.com/huggingface/transformers/pull/18468 for the 4th example and ran it locally. Please let me know if there is any changes, you would like done on this example, And I'll update it and add the feedback while I work on examples 5 and 6<|||||>Hi @muellerzr I opened this PR https://github.com/huggingface/transformers/pull/18474 for examples 5,6 and 7.<|||||>@muellerzr In the 7th subtask (semantic segmentation), I think it is already updated if I am not wrong. I want to work on this issue<|||||>Hi @muellerzr I opened this PR #18877 for example 8. Please let me know if there is any changes<|||||>This issue needs to be closed. All the work is already done it seems. <|||||>Seems like [example 9](https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization_no_trainer.py#L692) was already fixed but not checked off.<|||||>@muellerzr Can you close this issue?<|||||>Thanks to everyone who worked on this!
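For readers updating their own scripts, a self-contained toy sketch (dummy linear model and random data, not one of the real example scripts) of where `gather_for_metrics` slots into an evaluation loop:

```python
import torch
import evaluate
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
metric = evaluate.load("accuracy")

# Toy stand-ins for the model and eval dataloader used in the example scripts.
model = torch.nn.Linear(4, 2)
dataset = TensorDataset(torch.randn(37, 4), torch.randint(0, 2, (37,)))  # 37 is not divisible by 8
eval_dataloader = DataLoader(dataset, batch_size=8)

model, eval_dataloader = accelerator.prepare(model, eval_dataloader)

model.eval()
for inputs, refs in eval_dataloader:
    with torch.no_grad():
        predictions = model(inputs).argmax(dim=-1)
    # gather_for_metrics drops the samples duplicated on the last batch by the
    # distributed sampler, replacing the manual `samples_seen` bookkeeping.
    predictions, refs = accelerator.gather_for_metrics((predictions, refs))
    metric.add_batch(predictions=predictions, references=refs)

print(metric.compute())
```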
transformers
18,436
open
Update no_trainer scripts to include gradient accumulation
### Feature request 🤗 Accelerate has a gradient accumulation wrapper, and the `no_trainer` scripts should be updated to include it! An example can be seen [here](https://github.com/huggingface/accelerate/blob/main/examples/by_feature/gradient_accumulation.py), below is an example diff of what the integration would look like: ```diff - accelerator = ( - Accelerator(log_with=args.report_to, logging_dir=args.output_dir) if args.with_tracking else Accelerator() - ) + accelerator = ( + Accelerator(log_with=args.report_to, logging_dir=args.output_dir, gradient_accumulation_steps=args.gradient_accumulation_steps) if args.with_tracking else Accelerator(gradient_accumulation_steps=args.gradient_accumulation_steps) + ) ``` As well as: ```diff - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) + num_update_steps_per_epoch = len(train_dataloader) ... for step, batch in enumerate(train_dataloader): + with accelerator.accumulate(model): ``` ```diff - loss = loss / args.gradient_accumulation_steps accelerator.backward(loss) - if step % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1: optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar.update(1) completed_steps += 1 ``` The list of available scripts to update include: - [ ] examples/pytorch/image-classification/run_image_classification_no_trainer.py - [ ] examples/pytorch/language-modeling/run_clm_no_trainer.py - [ ] examples/pytorch/language-modeling/run_mlm_no_trainer.py - [ ] examples/pytorch/multiple-choice/run_swag_no_trainer.py - [ ] examples/pytorch/question-answering/run_qa_beam_search_no_trainer.py - [ ] examples/pytorch/question_answering/run_qa_no_trainer.py - [ ] examples/pytorch/semantic-segmentation/run_semantic_segmentation_no_trainer.py - [ ] examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py - [ ] examples/pytorch/summarization/run_summarization_no_trainer.py ### Motivation This is a great first issue for someone who wants to learn how to use some of the latest bits in Accelerate and get an easy beginner contribution to the library 🤗 ### Your contribution If you decide to pick up this issue, feel free to ping myself (@muellerzr), @sgugger, or @pacman100 to review 🤗
08-02-2022 21:17:53
08-02-2022 21:17:53
Hi @muellerzr I took a go at this (accelerate seems awesome!), and implemented the changes quickly. However, I noticed some performance degredation when using the the gradient accumulation wrapper. After some debugging, I think it stems from the lr_scheduler implementation in accelerate updating learning rate at every step in training loop whereas the example script updates the learning rate every optimizer step. So I think either accelerate needs to add something like ```python # Otherwise, first make sure the optimizer was stepped. for opt in self.optimizers: if opt.step_was_skipped or not opt.gradient_state.sync_gradients: return ``` to scheduler.py implementation at line 59 Or the script should have ```python if accelerator.sync_gradients: lr_scheduler.step() ``` I think this should be changed in accelerate. Let me know what you think or if im totally off! I'll be happy to do issue + PR to fix in accelerate and I'll definetly fix the example scripts in transformers. :) <|||||>No we can't do this as then the user would have to know in advance the number of optimization steps when they create their scheduler (which they don't since Accelerate handles gradient accumulation behind the scenes). That's why the learning rate scheduler should be created with the full number of training batches prior to gradient accumulation, then stepped at each batch (which is roughly equivalent to creating it with the right number of optimization batches and step at every optimization step).<|||||>@sgugger Cool! So if I understand you comment, * learning rate scheduler should not know anything about the actual optimization steps, but assume every batch is a step - Hence, num_training_steps for the lr_scheduler is num_training_steps=math.ceil(len(train_dataloader)) * args.num_train_epochs, instead of taking gradient_accumulation_steps into account - This means that if gradient_accumulation_steps is 5, we will take 4 steps of scheduling learning rate without actually using it for gradient updates I've made a WIP pull request for the image examples/pytorch/image-classification/run_image_classification_no_trainer.py script (I'll update the rest of the scripts once i'm certain its the correct approach), * The current functionality of progress_bar / completed_steps is only increment when doing an optimization step i.e. ```python if step % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1: progress_bar.update(1) completed_steps += 1 ``` So to keep the functionality, we need to know if optimization step occurred here which I think we can use ```python if accelerator.sync_gradients progress_bar.update(1) completed_steps += 1 ``` but is this also something that should be kept away i.e. change logic a bit so that completed_steps == completed_batches instead of optimization_steps ? <|||||>It's going to be easier to just have the progress bar display the number of completed steps. Also, we should multiply `max_steps` by the number of gradient accumulation steps for the same reason (if the user provides it).<|||||>I think either option would work fine as well. The reason behind `sync_gradients` as part of the Accelerator is to provide this open interface to perform a check like this, so from an API design it's correct. My $0.02 is to either explain in a comment what `sync_gradients` checks briefly, or to do as Sylvain recommended here. 
<|||||>Hi @muellerzr opened PR https://github.com/huggingface/transformers/pull/18601 for second example in the list.<|||||>Hi @muellerzr opened a PR for 8th example on the list. Please let me know if something is wrong. (This is my first contribution ever). <|||||>Hi @muellerzr! Any script to update yet?<|||||>Hi, I believe there is an issue with this PR (Rasmusafj:issue_18436), particularly for run_mlm_no_trainer.py. I am running BERT pretraining with this script and I run with the following arguments on 8 GPUs: ` --num_warmup_steps 10000 --max_train_steps 200000 --checkpointing_steps 500 --per_device_batch_size 256 --gradient_accumulation_steps 2 ` When tracking the learning rate, the learning rate peaks at step 2500 (`completed_steps == 2500`), even though the training will stop at 200k completed_steps. My guess is the learning_rate is stepped for each of the 8 GPUs so the warmup is only actually 10k / 8 = 1.25k. Multiplied by the 2 gradient accum steps which are likely accounted for by the accumulate wrapper we end up with 2.5k warmup steps. I saw it suggested above by @Rasmusafj that we only step the learning rate when sync_gradients is true, which I believe would solve this issue for me, and bring about the right expected behavior. I saw @sgugger recommended against this, however. I am tracking the learning rate by printing `lr_scheduler.get_last_lr()[0]` every `checkpointing_steps` interval. NOTE: I am using accelerate with the deepspeed plugin.<|||||>cc @muellerzr so it's on your radar. It's True that then we use number of steps instead of number of epochs for a given training, the logic we have for the scheduler fails<|||||>I meet the same problem as @sameerreddy13<|||||>Maybe we should make it clear what does `step` mean in warmup_steps? one step fetching data from dataloader or one completed_step?<|||||>It should always be one gradient update step because that is the common assumption in literature as it is tied to the learning rate scheduler. In practice if we have batch size K and grad accum A we report the effective batch size as K * A. To fully fix this issue I did the following: ``` lr_scheduler = get_scheduler( name=args.lr_scheduler_type, optimizer=optimizer, num_warmup_steps=args.num_warmup_steps * accelerator.num_processes, num_training_steps=args.max_train_steps * accelerator.num_processes, ) ... if step % args.gradient_accumulation_steps != 0: # Gradients only accumulate with accelerator.no_sync(model): outputs = model(**batch) accelerator.backward(outputs.loss) else: # Gradients finally sync outputs = model(**batch) accelerator.backward(outputs.loss) optimizer.step() optimizer.zero_grad() if ( completed_steps < args.num_warmup_steps or lr_scheduler.get_last_lr()[0] > args.min_learning_rate ): lr_scheduler.step() ```<|||||>It's been a while since I made this change but I manually used `no_sync`. iirc there was some underlying issue with the `accelerator.accumulate(model)` . I believe when I did a validation loop inside the training loop (say every K batches you want to get validation loss) that this broke the gradient accumulation, and only one gradient accum step would happen irregardless of the configured argument. You can see this at a coarse grained level by putting a validation step inside the train loop, setting grad_accum to something like 4 and observing the training suddenly speed up after the first evaluation. <|||||>@sameerreddy13 , I agree with you. 
I also wrote a snippet about this at https://github.com/huggingface/accelerate/issues/1382#issuecomment-1534924835 with two different points: - first, I initialize my `lr_scheduler` without `*accelerate.num_processes` and do not pass it to `prepare`; do you think this is equivalent to yours? - I still use `accelerator.accumulate(model)` because I didn't notice the underlying issue; if that is really the case, what about only validating after certain `completed steps` rather than certain batches?<|||||>Is this issue still open? Can the relevant people mark which PRs have been merged or are WIP? I see there is https://github.com/huggingface/transformers/pull/18601 from @vedant-z but it's been closed?
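To summarise the pattern recommended earlier in this thread, here is a minimal, self-contained sketch: the scheduler is built with the full number of batches (no division by the accumulation steps) and stepped on every batch under `accelerator.accumulate`, while `accelerator.sync_gradients` only drives the completed-steps counter. Whether this fully resolves the multi-GPU warmup concern raised later is still debated above; the toy linear model, dataset and hyperparameters are illustrative placeholders, not the example-script code, and a recent `accelerate` version with `gradient_accumulation_steps` support is assumed.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator
from transformers import get_scheduler

accelerator = Accelerator(gradient_accumulation_steps=2)

# Toy data and model so the pattern runs end to end.
dataset = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
train_dataloader = DataLoader(dataset, batch_size=4, shuffle=True)
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

num_train_epochs = 2
# Scheduler built with the *full* number of batches, then stepped every batch.
lr_scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=num_train_epochs * len(train_dataloader),
)

model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
    model, optimizer, train_dataloader, lr_scheduler
)

completed_steps = 0
for epoch in range(num_train_epochs):
    for inputs, targets in train_dataloader:
        with accelerator.accumulate(model):
            loss = torch.nn.functional.mse_loss(model(inputs), targets)
            accelerator.backward(loss)
            optimizer.step()
            lr_scheduler.step()
            optimizer.zero_grad()
        # sync_gradients is True only on batches where the optimizer really stepped,
        # so it can drive the progress bar / completed_steps counter.
        if accelerator.sync_gradients:
            completed_steps += 1
```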
transformers
18,435
closed
fixing error when using sharded ddp
# What does this PR do? Fixes #18410 1. conditional logic fixed ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
08-02-2022 20:12:29
08-02-2022 20:12:29
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,434
closed
Update BLOOM Overview in Doc: Add programming languages
The current wording makes it sound as if the programming languages are part of the 46 natural languages. This PR adds the exact number of programming languages to avoid confusion. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. I'm not sure who to tag for this, so maybe @osanseviero @younesbelkada and @sgugger ? :smiley_cat: <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-02-2022 19:29:00
08-02-2022 19:29:00
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,433
closed
BlenderBot-Distil-400M training fails if the input or target length exceeds a certain threshold, even when truncation and padding is on
### System Info transformers version: 4.20.1, 4.21.0 Platform: Linux Python version: 3.7.6 Huggingface_hub version: 0.8.1 PyTorch version (GPU?): 1.10.2 (Yes) Tensorflow version (GPU?): not installed (NA) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: Yes (2+ Tesla V100) Using distributed or parallel set-up in script?: No ### Who can help? @patil-suraj ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run the following script with `python script_blenderbot_length.py` ```python # The contents of script_blenderbot_length.py # To make the code crash, set CRITICAL_NUMBER=64 # To make it pass, set CRITICAL_NUMBER=63 # The code fails if EITHER the input or the target is repeated 64+ times. from __future__ import annotations import functools import typing as tp import datasets import transformers from transformers import ( DataCollatorForSeq2Seq, PreTrainedTokenizer, Seq2SeqTrainingArguments, Seq2SeqTrainer, ) CRITICAL_NUMBER = 64 increment_en = [ {"input": "One", "target": "Two"}, {"input": "Three "*2, "target": "Four "*2}, {"input": "Five "*4, "target": "Six "*4}, {"input": "Seven "*8, "target": "Eight "*8}, {"input": "Nine "*CRITICAL_NUMBER, "target": "Ten "*CRITICAL_NUMBER}, ] increment_en = increment_en * 100 def lod_to_dol(list_of_dicts: tp.List[tp.Dict[str, tp.Any]]) -> tp.Dict[str, list]: dict_of_lists = { key: [dct[key] for dct in list_of_dicts] for key in list_of_dicts[0] } return dict_of_lists increment_en = lod_to_dol(increment_en) def preprocess_function_( examples, tokenizer: PreTrainedTokenizer, max_input_length: int, max_target_length: int, ): inputs = examples["input"] targets = examples["target"] model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True) # Setup the tokenizer for targets with tokenizer.as_target_tokenizer(): labels = tokenizer(targets, max_length=max_target_length, truncation=True) model_inputs["labels"] = labels["input_ids"] return model_inputs def main(): tokenizer = transformers.BlenderbotTokenizer.from_pretrained("facebook/blenderbot-400M-distill") model = transformers.BlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill") args = Seq2SeqTrainingArguments( "script_debug", per_device_train_batch_size=4, per_device_eval_batch_size=4, fp16=True, push_to_hub=False, max_steps=10000, logging_steps=5000, save_steps=5000 ) data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, padding=True) dataset = datasets.DatasetDict( { "train": datasets.Dataset.from_dict(increment_en), "test": datasets.Dataset.from_dict(increment_en), } ) preprocess_function = functools.partial( preprocess_function_, tokenizer=tokenizer, max_input_length=512, max_target_length=512 ) processed_ds = dataset.map(preprocess_function, batched=True) processed_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "labels"] ) trainer = Seq2SeqTrainer( model, args, train_dataset=processed_ds["train"], eval_dataset=processed_ds["test"], data_collator=data_collator, tokenizer=tokenizer, ) trainer.train() if __name__ == "__main__": main() ``` Running the code when `CRITICAL_NUMBER` is set to 64 or greater leads to the bizarre series of CUDA asserts: ``` <Similar messages appear above, which are omitted for brevity> 
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSi ze` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1640811797118/work/aten/src/ATen/native/cuda/Indexing.cu:699: indexSelectLargeIndex: block: [2,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
0%| | 0/10000 [00:07<?, ?it/s] root@bolt-imq45r3c3y-8dfzr73qqa:/mnt/task_runtime# python script_blenderbot_length.py 100%|██████████████████████████| 1/1 [00:00<00:00, 5.30ba/s] 100%|██████████████████████████| 1/1 [00:00<00:00, 5.72ba/s] max_steps is given, it will override any value given in num_train_epochs Using cuda_amp half precision backend The following columns in the training set don't have a corresponding argument in `BlenderbotForConditionalGeneration.forward` and have been ignored: target, input. If target, input are not expected by `BlenderbotForConditionalGeneration.forward`, you can safely ignore this message. /miniconda/lib/python3.7/site-packages/transformers/optimization.py:310: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning FutureWarning, ***** Running training ***** Num examples = 500 Num Epochs = 313 Instantaneous batch size per device = 4 Total train batch size (w. parallel, distributed & accumulation) = 16 Gradient Accumulation steps = 1 Total optimization steps = 10000 0%| | 0/10000 [00:00<?, ?it/s]Traceback (most recent call last): File "script_blenderbot_length.py", line 101, in <module> main() File "script_blenderbot_length.py", line 97, in main trainer.train() File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 1502, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 1740, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 2470, in training_step loss = self.compute_loss(model, inputs) File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 2502, in compute_loss outputs = model(**inputs) File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/miniconda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/miniconda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/miniconda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/miniconda/lib/python3.7/site-packages/torch/_utils.py", line 434, in reraise raise exception RuntimeError: Caught RuntimeError in replica 0 on device 0. 
Original Traceback (most recent call last): File "/miniconda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py", line 1340, in forward return_dict=return_dict, File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py", line 1181, in forward return_dict=return_dict, File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py", line 785, in forward output_attentions=output_attentions, File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py", line 318, in forward output_attentions=output_attentions, File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py", line 180, in forward query_states = self.q_proj(hidden_states) * self.scaling File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 103, in forward return F.linear(input, self.weight, self.bias) File "/miniconda/lib/python3.7/site-packages/torch/nn/functional.py", line 1848, in linear return torch._C._nn.linear(input, weight, bias) RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)` ``` ### Expected behavior The training code should not crash, especially when there are far fewer tokens than the tokenization limit.
08-02-2022 18:33:56
08-02-2022 18:33:56
Adding `padding=True` when tokenizing both the input and targets does not fix the issue.<|||||>Also, when running the script using the CPU only, I get this error: ``` root@pc:~ # CUDA_VISIBLE_DEVICES="" python script_blenderbot_length.py 100%|██████████████████████████| 1/1 [00:00<00:00, 4.95ba/s] 100%|██████████████████████████| 1/1 [00:00<00:00, 5.46ba/s] max_steps is given, it will override any value given in num_train_epochs The following columns in the training set don't have a corresponding argument in `BlenderbotForConditionalGeneration.forward` and have been ignored: target, input. If target, input are not expected by `BlenderbotForConditionalGeneration.forward`, you can safely ignore this message. /miniconda/lib/python3.7/site-packages/transformers/optimization.py:310: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning FutureWarning, ***** Running training ***** Num examples = 500 Num Epochs = 80 Instantaneous batch size per device = 4 Total train batch size (w. parallel, distributed & accumulation) = 4 Gradient Accumulation steps = 1 Total optimization steps = 10000 0%| | 0/10000 [00:00<?, ?it/s] Traceback (most recent call last): File "script_blenderbot_length.py", line 103, in <module> main() File "script_blenderbot_length.py", line 99, in main trainer.train() File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 1502, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 1740, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 2470, in training_step loss = self.compute_loss(model, inputs) File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 2502, in compute_loss outputs = model(**inputs) File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py", line 1340, in forward return_dict=return_dict, File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py", line 1181, in forward return_dict=return_dict, File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py", line 738, in forward embed_pos = self.embed_positions(input_shape) File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/miniconda/lib/python3.7/site-packages/transformers/models/blenderbot/modeling_blenderbot.py", line 125, in forward return super().forward(positions) File "/miniconda/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 160, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/miniconda/lib/python3.7/site-packages/torch/nn/functional.py", line 2044, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) IndexError: index out of range in self 0%| | 0/10000 
[00:00<?, ?it/s] ```<|||||>I've found out why the error seems to appear. I modified `transformers/src/transformers/models/blenderbot/modeling_blenderbot.py:BlenderbotLearnedPositionalEmbedding:forward` (approximately near line 125). ```diff positions = torch.arange( past_key_values_length, past_key_values_length + seq_len, dtype=torch.long, device=self.weight.device ) + print(positions) + print(self.weight.shape) return super().forward(positions) ``` When running the script, I get this in the output: ``` tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129]) torch.Size([128, 1280]) ``` Clearly, the positional embeddings are beyond the maximum range available. The question is why... Perhaps this can be configured in the constructor?<|||||>The length of the positions seems to be equal to `2*CRITICAL_NUMBER + 1`.<|||||>And... it goes to a maximum of the tokenizer's max_length-1, which is expected, I guess.<|||||>Ah. So the issue is that in the `BlenderbotConfig`, `max_position_embeddings` is set to 128. The publicly available weights only have position embeddings with those dimensions, so either I'd have to train from scratch or reduce the max tokenizer length to 128.<|||||>But seriously, this exception should be caught and re-raised with a more human-readable expression.<|||||>(I can contribute a fix after my internship ends, not before)<|||||>Catching and re-raising the exception during GPU training doesn't result in a more human-readable expression (It's still `RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling cublasCreate(handle)`, but at least the flood of CUDA asserts are gone). Getting a more human-readable exception seems to be only possible for CPU-only training.<|||||>cc @sgugger for usage with the `Trainer`!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
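For completeness, a short sketch of the workaround identified in this thread: capping the tokenizer's `max_length` at the model's `max_position_embeddings` (128 for this checkpoint, per the debugging above) so the position indices never exceed the learned embedding table. The snippet is illustrative rather than a patch to the original script.

```python
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

model_name = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(model_name)
model = BlenderbotForConditionalGeneration.from_pretrained(model_name)

# The published weights only ship 128 learned position embeddings, so truncate to
# that limit instead of the 512 used in the original repro script.
max_length = model.config.max_position_embeddings  # 128 for this checkpoint

inputs = tokenizer(
    ["Nine " * 64],
    max_length=max_length,
    truncation=True,
    return_tensors="pt",
)
labels = tokenizer(
    ["Ten " * 64],
    max_length=max_length,
    truncation=True,
    return_tensors="pt",
)["input_ids"]

outputs = model(**inputs, labels=labels)  # no index-out-of-range error
print(outputs.loss)
```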
transformers
18,432
closed
Improve generate docstring (for TF and FLAX)
# What does this PR do? Just a continuation PR of https://github.com/huggingface/transformers/pull/18198 for TF and FLAX code ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @sgugger @gante
08-02-2022 17:26:39
08-02-2022 17:26:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>Looks like we just need a quick `make style` to be ready to merge :-)<|||||>Mmm no the formatting has done something very wrong here, we can't have that. There is likely some syntax error in the docstring. Problem looks to be in the input_ids argument at line 422 of the TF generation files, the type should all be on one line.<|||||>> Mmm no the formatting has done something very wrong here, we can't have that. There is likely some syntax error in the docstring. Problem looks to be in the input_ids argument at line 422 of the TF generation files, the type should all be on one line. `make extra_style_checks` does that automatically. What do you propose?<|||||>As I said, there is a syntax error in the docstring that makes the styling script behave erratically. The first step is to revert the changes, fix the syntax error then re-run it.<|||||>I changed this line ``` input_ids (`tf.Tensor` of shape `(batch_size, sequence_length)`, `(batch_size, sequence_length, feature_dim)` or `(batch_size, num_channels, height, width)`, *optional*): ``` to this ``` input_ids (`tf.Tensor` of shape `(batch_size, sequence_length)`, `(batch_size, sequence_length, feature_dim)` or `(batch_size, num_channels, height, width)`, *optional*): ``` but then `make extra_style_checks` reverts the change <|||||>Ah yes, just tried locally and it's due to the empty line between `Parameters:` and `input_ids`. If you remove it, then your changes should not be overwritten.<|||||>> Ah yes, just tried locally and it's due to the empty line between `Parameters:` and `input_ids`. If you remove it, then your changes should not be overwritten. Nice catch, but it still does look strange in the docs 🤔 ![image](https://user-images.githubusercontent.com/17574157/182453226-22ff1ea2-f620-43e5-958d-c19b1303878a.png) <|||||>Uhmm, there is something wrong with the automatic styler -- e.g. the pytorch generate file should not be touched at all in this PR. As Sylvain wrote, the easiest solution is to start from a new branch 🤔 <|||||>> Uhmm, there is something wrong with the automatic styler -- e.g. the pytorch generate file should not be touched at all in this PR. As Sylvain wrote, the easiest solution is to start from a new branch 🤔 [The docs seem fine now](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18432/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin), right? It's just that this test, `doc-builder style src/transformers docs/source --max_len 119 --check_only --path_to_docs docs/source`, is not allowing to have more than 119 characters per line, but we need it here.<|||||>Nope, the docs are not fine for the PyTorch side with all the changes in this PR (and as @gante mentioned that file should not be touched at all). The doc-style is completely comfortable with lines that are more than 119 chars when it identifies they are parameter introduction lines, you just needed to remove the blank line between Parameters: and the first argument in `generate`.<|||||>Ah right, it has these strange hyphens... ![image](https://user-images.githubusercontent.com/17574157/182651706-dd36208c-d597-4d91-8233-02c385709b10.png) I will close the PR then, let's disregard these changes.
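As a rough illustration of the formatting point made by the maintainers above (not the exact `generate` docstring), the parameter block should have no blank line after `Parameters:` and keep the whole type specification on one line, so the doc styler recognises it as a parameter-introduction line and tolerates the width beyond 119 characters:

```python
def generate(self, input_ids=None, **kwargs):
    r"""
    Generates sequences of token ids for models with a language modeling head.

    Parameters:
        input_ids (`tf.Tensor` of shape `(batch_size, sequence_length)`, `(batch_size, sequence_length, feature_dim)` or `(batch_size, num_channels, height, width)`, *optional*):
            The sequence used as a prompt for the generation or as model inputs to the encoder.
    """
    ...
```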
transformers
18,431
closed
Make Tokenizers serializable to TF SavedModel format
### Feature request It would be great if we could serialize our tokenizers into the TF SavedModel format, so that we could deploy it to TF Serving without a handler to tokenize our inputs. ### Motivation It is frustrating to have to write a Python, JS or Rust handler every time I want to deploy a huggingface/transformers model to TF Serving, and it would be great if we could just bake everything into a serialized model and seamlessly serve it with TF Serving. ### Your contribution Sadly I can't submit a PR.
08-02-2022 16:38:38
08-02-2022 16:38:38
cc @Rocketknight1, I believe you've been looking into something similar.<|||||>Hi @piEsposito, this is something we've been working on! Right now it's only available for BERT, but we intend to expand it to other models, particularly now that we're seeing interest in it. You can use the [TFBertTokenizer](https://huggingface.co/docs/transformers/model_doc/bert#transformers.TFBertTokenizer) class for this - check out [this gist](https://gist.github.com/Rocketknight1/b479d57e3d2f94420b11ca8d319cc68f) for an example of how to use it. If you're using a different model class than BERT, or you have any difficulties when using this, please let us know! It's a recent feature in `transformers` so we're still looking for user feedback on it.<|||||>This is great, really, and exactly what I'm looking into. I'm specifically looking into doing that with CLIP, which uses BPE. Do you have anything on works on that? If not, how can I ramp-up on that and help? Thanks! <|||||>Hi @piEsposito - we think that should be possible, but we just haven't had time to implement any tokenizers of that class yet! If you look at the [source for TFBertTokenizer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/tokenization_bert_tf.py#L67) you can see that we just use `FastBertTokenizer` from `tensorflow_text`. However, correctly implementing a BPE tokenizer that gives identical results to the existing tokenizers will probably be more complex than a single class. If you want to attempt it, feel free! We'd be happy to accept a PR. If not, we'll still work on it ourselves, but there are several competing priorities for the TF team right now.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@Rocketknight1 Hi, it would be great to also add support for models including RoBerta etc., as they may use different methods for tokenization (e.g. BPE, Byte-level BPE, SentencePiece). TBH, I think that the tokenizers should be independent from models, because tokenizer - model is not a 1 : 1 mapping.<|||||>@jamie0725 That's a good point! For most of our tokenizers that's how we do things - we have a separate `tokenizers` library. Right now we're still experimenting with in-graph tokenizers, but we might move them to the `tokenizers` library at a later stage, and adding tokenizers for other common models like `RoBERTa` is definitely on the list too! The main issue is just that we have a lot of competing priorities - we'll get to it eventually, but if anyone wants to submit a PR before then we'd be very happy to review it!<|||||>do you plan to add CLIPTokenizer support for TF serving?
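A minimal sketch of the in-graph tokenization approach described above, wrapping `TFBertTokenizer` and a TF model into one Keras model that can be exported to the SavedModel format. It loosely follows the linked gist, requires `tensorflow_text` to be installed, and the checkpoint and class layout are illustrative rather than an official recipe:

```python
import tensorflow as tf
from transformers import TFBertTokenizer, TFBertForSequenceClassification


class EndToEndModel(tf.keras.Model):
    """Bundles in-graph tokenization and the model: raw strings in, logits out."""

    def __init__(self, checkpoint):
        super().__init__()
        self.tokenizer = TFBertTokenizer.from_pretrained(checkpoint)
        self.model = TFBertForSequenceClassification.from_pretrained(checkpoint)

    def call(self, inputs):
        tokenized = self.tokenizer(inputs)  # runs inside the TF graph
        return self.model(tokenized).logits


checkpoint = "bert-base-cased"  # illustrative checkpoint
end_to_end = EndToEndModel(checkpoint)
print(end_to_end(tf.constant(["This whole pipeline can be served by TF Serving."])))

# The tokenizer is part of the graph, so the whole thing serializes together;
# export details may vary with your TF version.
end_to_end.save("saved_model/1", include_optimizer=False)
```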
transformers
18,430
closed
Increase notebooks page visibility
Hi, I was looking at the [notebooks page](https://huggingface.co/docs/transformers/notebooks) and I noticed that the community notebooks link is broken and some of the official notebooks are outdated and throw errors when run. It would be great to reorganize the notebooks page such that: 1) The [community page ](https://huggingface.co/docs/transformers/v4.21.0/en/community) is renamed as Community Notebooks or merged with the Notebooks page 2) We add tags or organize the community notebooks page by task and topic (fine-tuning, image classification, etc.) 3) Existing official notebooks are updated 4) Notebooks page/s are promoted on the homepage @NielsRogge @sgugger @amyeroberts @LysandreJik could you comment on this?
08-02-2022 16:18:25
08-02-2022 16:18:25
I wouldn't merge the community notebooks page with the notebooks page, just make the link work again. I think it's important to separate what we officially support from what we don't. Organizing things a bit better would certainly be welcome, as always, happy to look at a PR! Same for fixing some of the notebooks if they don't run anymore. They are not tested like the examples scripts so it's possible that some API changes broke them.<|||||>Makes sense, I will reorganize the Community page and double check the official notebooks then!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,429
closed
[TENTATIVe] Attempt to reduce number of HEAD calls during model warmup.
# What does this PR do? When doing model loading with the various tools within `transformers` there's actually a lot of duplicate HEAD calls that cost network time and duplicate usage in an unecessary fashion. For instance when doing a simple `pipeline(model="gpt2")` You're getting ``` Fetching https://huggingface.co/gpt2/resolve/main/config.json Downloading: 100%|████████████████████████████████████████████████████████████████████████| 1.39k/1.39k [00:00<00:00, 1.20MB/s] ---- Fetching https://huggingface.co/gpt2/resolve/main/config.json ---- Fetching https://huggingface.co/gpt2/resolve/main/pytorch_model.bin ---- Fetching https://huggingface.co/gpt2/resolve/main/tokenizer_config.json ---- Fetching https://huggingface.co/gpt2/resolve/main/config.json ---- Fetching https://huggingface.co/gpt2/resolve/main/tokenizer_config.json ---- Fetching https://huggingface.co/gpt2/resolve/main/vocab.json ---- Fetching https://huggingface.co/gpt2/resolve/main/merges.txt ---- Fetching https://huggingface.co/gpt2/resolve/main/tokenizer.json ---- Fetching https://huggingface.co/gpt2/resolve/main/added_tokens.json ---- Fetching https://huggingface.co/gpt2/resolve/main/special_tokens_map.json ---- Fetching https://huggingface.co/gpt2/resolve/main/tokenizer_config.json ---- Fetching https://huggingface.co/gpt2/resolve/main/config.json ``` So you're doing a HEAD 4 to 5 times the `config.json`, 3 times to `tokenizer_config.json`. Each of these is doing a call on the HUB which requires loading some resources which could be avoided. @Xcid In addition it adds a lot of noise within our logs since we're getting a lot of random multiple HEAD calls for the actual same code being run. Fixing it "cleanly" is hard, since there are many pathways to load the various elements and checking every single path is hard. The proposed fix is to simply introduce a `timed_cache` wrapper on top of the `request.head` function. We can keep a very short ttl since it's only to reduce duplicates when the model is unlikely to have changed. We need to keep in mind jupyter or long lived users, so we need a TTL so that model updates can still be seen and downloaded. In addition to that, it seems each code path calls the HEAD part with a different user-agent which (afaik) makes it harder to understand our user's usage. This is a tentative PR, proposed to reduce redundant network calls. If this is thought as a correct direction I will then add unit testing for this `timed_cache` function. After the PR: ``` ---- Fetching https://huggingface.co/gpt2/resolve/main/config.json ---- Fetching https://huggingface.co/gpt2/resolve/main/pytorch_model.bin ---- Fetching https://huggingface.co/gpt2/resolve/main/tokenizer_config.json ---- Fetching https://huggingface.co/gpt2/resolve/main/vocab.json ---- Fetching https://huggingface.co/gpt2/resolve/main/merges.txt ---- Fetching https://huggingface.co/gpt2/resolve/main/tokenizer.json ---- Fetching https://huggingface.co/gpt2/resolve/main/added_tokens.json ---- Fetching https://huggingface.co/gpt2/resolve/main/special_tokens_map.json ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. 
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-02-2022 16:17:48
08-02-2022 16:17:48
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18429). All of your documentation changes will be reflected on that endpoint.<|||||>Ok, marking as draft while the other PR is being worked on.<|||||>@sgugger If you want to take a look at the tests: right now the tests are failing since the cache was written for the previous HEAD code. We can do the cache here or in `huggingface_hub`; I am not sure which is the most appropriate. This PR now holds a few (relatively hackish) attempts to preserve information from the `user-agent`. The one thing that surprised me is that the pipeline loads the model with `from_auto_class/False`, because it looks directly within the config to fetch the model with the correct head. While it's technically correct, I am not sure if that's correct "intentionally" or for telemetry purposes, since it is actually using a lot of magic. WDYT ?<|||||>> The caching part should go more in the huggingface_hub IMO, especially now that we rely on it for everything. But I also think people might have strong opinions on it (if a user just updated a model and doesn't see it downloaded, for instance, they'll be mad and won't necessarily understand there is some cache to clear). I'll wait for your work on the cache, which should remove the need for it, and the tests here should cover it; I'll update this PR to make sure the user-agent is correct afterwards. I had a TTL of 10s for the cache, which is largely enough in most cases (well, you would still have multiple hits if you were actually downloading the files, but .. ) Removing the need for the cache is the best solution, so let's go with that for now.
You may want to check where the network overhead is occurring too; it could be DNS issues on your end, or just high latency with the HF servers.<|||||>Ahh, okay, that makes sense. Let me dig around a bit further with my repro, and see if I can find anything useful. Thanks for the pointers.<|||||>Btw @sgugger is working on a better fix; we should reduce the number of network calls to as close to 1 as possible.<|||||>@sgugger if it's helpful to have a guinea pig test your PR in the wild, I'm happy to help! For background context, the reason I'm trying to optimize this cold start is that I'm trying to use transformers in a command line utility where cold start time matters quite a bit.<|||||>The PR is #18534, but nothing will beat using offline mode with the model cached, since you are then doing 0 calls to the API.<|||||>Thanks for the pointer @sgugger. I agree, however, the disparity exists even if you pin the revision:

```
$ cat transformers_test.py
from transformers import pipeline
nlp = pipeline("question-answering", model='distilbert-base-cased-distilled-squad', revision="1b9d42b637aed70c9f3cd27e13b66ee9f847ed03")

$ time python transformers_test.py

real	0m5.680s
user	0m1.694s
sys	0m2.252s

$ time TRANSFORMERS_OFFLINE=1 python transformers_test.py

real	0m1.321s
user	0m1.653s
sys	0m1.997s
```

While playing around with this, I noticed issue #18537, which might be leading to extra network calls (since the revision isn't pinned for the model) in my repro. Apologies if I'm missing something obvious here, but I'd expect that (a) if the revision is specified and (b) it's cached, then there shouldn't be any network calls.<|||||>If the revision is specified as a commit sha, then yes, the cache should be used. This is not implemented by #18534 however, but could be some follow-up work. The only exception is for files that don't exist, as we don't know if they haven't been downloaded yet or if they are not present. That's why we still have extra calls in #18534 and would still have extra calls in this case as well.<|||||>Got it, I pulled down your PR and ran the same test, and saw much better results:

```
$ time python transformers_test.py

real	0m2.384s
user	0m1.869s
sys	0m2.222s

$ time TRANSFORMERS_OFFLINE=1 python transformers_test.py

real	0m1.588s
user	0m1.722s
sys	0m2.229s
```

I'd be happy to help with the follow-up work/exploration if helpful. I think you could theoretically handle the "all files downloaded" case too, by caching a file that simply marks _that_ you've downloaded all files associated with a revision.<|||||>Yes, there is probably some way to cache that the file does not exist for a given commit sha. Pinged a few people internally to see if they like that idea and will let you know when I hear back!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale. I'll come back to this at some point.<|||||>There is only one call to HEAD now once the model is cached @Narsil <|||||>You're right, closing.
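The TTL-bounded memoization idea this PR describes can be sketched as a small decorator around `requests.head`. The 10-second TTL comes from the discussion above; the helper names and key construction below are illustrative assumptions, not the actual `timed_cache` implementation from the diff.

```python
import functools
import time

import requests


def timed_cache(ttl: float = 10.0):
    """Memoize a function's return value per argument set for `ttl` seconds."""

    def decorator(func):
        cache = {}

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # repr() keeps the key hashable even when kwargs contain dicts (e.g. headers).
            key = (args, repr(sorted(kwargs.items())))
            now = time.monotonic()
            hit = cache.get(key)
            if hit is not None and now - hit[0] < ttl:
                return hit[1]
            result = func(*args, **kwargs)
            cache[key] = (now, result)
            return result

        return wrapper

    return decorator


@timed_cache(ttl=10.0)
def cached_head(url, **kwargs):
    # Repeated HEAD calls on the same URL within the TTL reuse the first response,
    # so duplicate config.json / tokenizer_config.json lookups only hit the Hub once.
    return requests.head(url, **kwargs)
```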
transformers
18,428
closed
Accept `trust_remote_code` and ignore it in `PreTrainedModel.from_pretrained`
# What does this PR do? Hope my understanding is correct. Let me know if I should apply the same change to `ProcessorMixin`, `PreTrainedTokenizerBase` etc.
08-02-2022 15:58:19
08-02-2022 15:58:19
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger FYI: I applied the change to TF and Flax `PreTrainedModel` (which is necessary, at least for TF), as well as `PretrainedConfig`. Leave the tokenizer class and feature extractor class for now
transformers
18,427
closed
fix: data2vec-vision Onnx ready-made configuration.
# What does this PR do? This PR adds the missing config for the ONNX data2vec config for images. It is stated in the [docs](https://huggingface.co/docs/transformers/serialization) that there is a default config for the facebook/data2vec-vision-base but this is currently not working and gives ``` ... File "/transformers/src/transformers/onnx/features.py", line 486, in get_supported_features_for_model_type raise KeyError( KeyError: "data2vec-vision is not supported yet. Only ['albert', 'bart', 'beit', 'bert', 'big-bird', 'bigbird-pegasus', 'blenderbot', 'blenderbot-small', 'bloom', 'camembert', 'codegen', 'convbert', 'convnext', 'data2vec-text', 'deberta', 'deberta-v2', 'deit', 'detr', 'distilbert', 'electra', 'flaubert', 'gpt2', 'gptj', 'gpt-neo', 'ibert', 'layoutlm', 'layoutlmv3', 'levit', 'longt5', 'marian', 'mbart', 'mobilebert', 'mobilevit', 'm2m-100', 'perceiver', 'resnet', 'roberta', 'roformer', 'squeezebert', 't5', 'vit', 'xlm', 'xlm-roberta', 'yolos'] are supported. If you want to support data2vec-vision please propose a PR or open up an issue." ``` to recreate using docker: ```bash docker run -it huggingface/transformers-all-latest-gpu /bin/bash ``` ```python python3 -m transformers.onnx --model=facebook/data2vec-vision-base onnx/ ``` Should I create an issue for this and link to it? Thanks for the help! <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-02-2022 15:39:03
08-02-2022 15:39:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>@lewtun @LysandreJik maybe you could take a look or point me to the correct person. Thanks!<|||||>cc @michaelbenayoun @JingyaHuang as Lewis is off for a few weeks :)<|||||>What is the best practice in this case for the test that failed? As far as I can see it is not related to the changes. How do I rerun it? @JingyaHuang thanks!<|||||>> What is the best practice in this case for the test that failed? As far as I can see it is not related to the changes. How do I rerun it? @JingyaHuang thanks! Can you rebase your branch with the main branch of transformers and re-launch the failed test?<|||||>LGTM! Have you run this command? ``` RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py ``` I was able to export the model on your branch, with your command, but I want to make sure all the tests pass before merging.<|||||>> LGTM! Have you run this command? > > ``` > RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py > ``` > > I was able to export the model on your branch, with your command, but I want to make sure all the tests pass before merging. I missed this but will run it tomorrow and fix anything that needs fixing!
transformers
18,426
closed
Add Speech-to-Speech Translation (S2ST)
# What does this PR do? Adds the S2ST models from the paper [Enhanced Direct Speech-to-Speech Translation Using Self-supervised Pre-training and Data Augmentation (Popuri et al. 2022)](https://arxiv.org/abs/2204.02967). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
08-02-2022 14:54:44
08-02-2022 14:54:44
Pipeline: ![pipeline](https://user-images.githubusercontent.com/93869735/201135128-13ccac3b-ab96-43f3-b906-0ed5cfc56a7b.png) This is effectively an encoder-decoder-vocoder configuration: - The feature extractor normalises the audio inputs. - The encoder maps the (normalised) audio inputs to a sequence of encoder hidden-states. - The decoder auto-regressively generates a sequence of tokens (interpreted as speech ‘hidden units’). - The vocoder maps these discrete tokens to a sequence of continuous audio outputs. The encoder-decoder portion is a standard seq2seq mapping, entirely equivalent to the [speech encoder-decoder model](https://huggingface.co/docs/transformers/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderModel) we have in Transformers. The vocoder aspect is new. <|||||>Question 1 - Explicit modelling code? The pre-trained checkpoints only use a Wav2Vec2 encoder and mBART decoder (see [enhanced_direct_s2st_discrete_units.md](https://github.com/facebookresearch/fairseq/blob/main/examples/speech_to_speech/docs/enhanced_direct_s2st_discrete_units.md#finetuned-model-checkpoints)). There are no other encoder or decoder checkpoints/architectures used. Thus, we have two options: 1. Explicitly add the modelling code for Wav2Vec2 and the mBART decoder to `modeling_speech_to_speech.py` -> there are no abstractions, all the relevant modelling code for the encoder and decoder is in one file 2. Follow what’s done in the speech encoder-decoder model and add modelling code for a generic encoder and generic decoder -> there is one layer of abstraction, and neither the modelling code for the encoder nor the decoder is in `modeling_speech_to_speech.py`; they are instead called through AutoModel: https://github.com/huggingface/transformers/blob/6dda14dc47d82f0e32df05fea8ba6444ba52b90a/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L213-L217 -> this is quite nasty as it abstracts the encoder and decoder, when really they can be in the same file if we know that the encoder is always Wav2Vec2 and the decoder always mBART. Option 1 is clearer to the user -> all the relevant code sits in the relevant modelling file. Option 2 is what we have for speech encoder-decoder. It allows for different combinations of models should people wish to train different encoder-decoder combos themselves, but this is highly unlikely to ever happen! Option 2 adds one layer of abstraction that makes the code much harder to follow -> this is similar to what they do in fairseq, and it’s a struggle to jump around and find the right modelling files. We currently have option 2 in the PR, but my preference would be for 1 unless there are any objections.<|||||>Question 2 - Where should the vocoder go? The model is trained to predict target tokens (speech ‘hidden units’). These target tokens are converted to continuous speech by action of a vocoder. This vocoder is not trained with the seq2seq model; it is loaded standalone onto the seq2seq model after training. ![Untitled Diagram](https://user-images.githubusercontent.com/93869735/201135650-b8ba3002-2e3b-4527-ab72-e671366fff30.png) Should the vocoder be included in the modelling file as part of the pre-trained model? Or should it operate in a similar way to a tokenizer (an object that isn’t trained, purely used to map generated tokens to the final output)? My preference would be to include it in the modelling file as an `nn.Module`. 
If we go for option 1 from the previous question (explicitly adding the Wav2Vec2 and mBART code to modeling_speech_to_speech), we would then have the following structure: - Wav2Vec2 encoder - mBART decoder - CodeHiFiGAN vocoder - SpeechToSpeechTranslationModel (Wav2Vec2 encoder - mBART decoder) - SpeechToSpeechTranslationWithCodeHiFiGANVocoder (encoder-decoder-vocoder) -> this design treats the vocoder like a head to the base model <|||||>Question 3 - What about the configs? Speech encoder decoder partitions its configuration into an encoder config and decoder config (see [speech-encoder-decoder-config](https://huggingface.co/docs/transformers/model_doc/speech-encoder-decoder#transformers.SpeechEncoderDecoderConfig)): * Encoder config * Decoder config Do we do the same with the encoder-decoder-vocoder model: * Encoder config * Decoder config * Vocoder config -> note: not used for the SpeechToUnitTranslationModel, only the SpeechToSpeechWithCodeHiFiGANVocoder. We’ll have to handle it differently in each case. Or combine them into a single config file for all the modelling components (as is done with [T5Config](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Config) for example, the encoder and decoder parts of the config are prefixed by `encoder_` and `decoder_` respectively). My preference would be for combining them into a single config and prefixing by `encoder_`, `decoder_` and `vocoder_` -> I think this is cleaner than having three different sub-configs on the go.<|||||>Question 4 - How to make it compatible with generation? This model currently isn’t a good fit for Transformers with regard to generation: we want to auto-regressively generate using the decoder and then pass the generation outputs through **another** stage of the model (vocoder) -> this currently isn't possible with `.generate` alone. We either have to add this functionality to generate, or override the generate method for SpeechToSpeechWithCodeHiFiGANVocoder. We're currently doing something very hacky to make this work: https://github.com/huggingface/transformers/blob/43b744283b57d156845af0eda77b806b88957395/src/transformers/models/speech_to_speech/modeling_speech_to_speech.py#L1012-L1022<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
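A minimal, hypothetical sketch of the "vocoder as a head" composition discussed in Questions 2 and 4 — the class names, vocoder interface and generate override below are assumptions for illustration, not the PR's actual code:

```python
import torch
from torch import nn
from transformers import SpeechEncoderDecoderModel


class SpeechToSpeechWithVocoderSketch(nn.Module):
    """Hypothetical sketch of the 'vocoder as a head' design (not the PR's code)."""

    def __init__(self, encoder_decoder: SpeechEncoderDecoderModel, vocoder: nn.Module):
        super().__init__()
        # Wav2Vec2 encoder + mBART decoder predicting discrete speech units
        self.encoder_decoder = encoder_decoder
        # e.g. a CodeHiFiGAN-style module mapping unit tokens to a waveform (assumed interface)
        self.vocoder = vocoder

    @torch.no_grad()
    def generate(self, input_values, **generate_kwargs):
        # 1) auto-regressively generate discrete unit tokens with the seq2seq model
        unit_tokens = self.encoder_decoder.generate(input_values, **generate_kwargs)
        # 2) run the (frozen) vocoder on the generated units to obtain continuous audio
        return self.vocoder(unit_tokens)
```

Overriding `generate` like this keeps the vocoder step out of the core generation utilities, which is one way around the issue raised in Question 4.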
transformers
18,425
closed
BartLearnedPositionalEmbedding's forward method signature obstructs private (Opacus) training of BART
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.0-1086-azure-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.9.1+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes (NA) - Using distributed or parallel set-up in script?: no (NA) ### Who can help? Tagging @patil-suraj as BART model owner. Details: The signature of `BartLearnedPositionalEmbedding`'s forward method takes an input of type `torch.Size`, which breaks in Opacus. The reason is that Opacus makes a (reasonable) assumption that all layers take input of type `torch.Tensor`. In particular, opacus/grad_sample/grad_sample_module.py line 190 (the `capture_activations_hook` method) tries to detach the input from device via: `module.activations.append(forward_input[0].detach())` If we pass the tensor instead, this will allow fine-tuning BART-type summarization models with differential privacy. Only a few lines of code need to be changed in `modeling_bart.py`. In particular, the forward signature of `BartLearnedPositionalEmbedding.forward()` and references to this method. I already have a change implemented with BART-related tests passing. More than happy to create a PR which I can tag you in @patil-suraj. ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import torch from transformers.models.bart.modeling_bart import BartLearnedPositionalEmbedding from opacus.tests.grad_samples.common import GradSampleHooks_test class TestPositionalEmbedding(GradSampleHooks_test): def test_grad_sample(self): """ Verify that our custom implementation of the grad sample for huggingface's BartLearnedPositionalEmbedding layer works. Built on the test routines in opacus's library. """ register_grad_sampler() batch_size = 1 max_pos_embs = 10 embed_dim = 3 x = torch.randint(0, max_pos_embs - 1, (batch_size, embed_dim)) layer = BartLearnedPositionalEmbedding(max_pos_embs, embed_dim) self.run_test(x, layer, batch_first=True) ``` where a custom `register_grad_sampler()` method is called for `BartLearnedPositionalEmbedding` layer. ### Expected behavior Test above should pass.
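For reference, a rough sketch of the kind of signature change proposed — illustrative only, not necessarily the exact code that was merged:

```python
import torch
from torch import nn


class BartLearnedPositionalEmbeddingSketch(nn.Embedding):
    """Sketch of the proposed change: take a Tensor instead of torch.Size in forward."""

    def __init__(self, num_embeddings: int, embedding_dim: int):
        self.offset = 2  # Bart reserves two extra positions
        super().__init__(num_embeddings + self.offset, embedding_dim)

    def forward(self, input_ids: torch.Tensor, past_key_values_length: int = 0):
        # Receiving the tensor itself lets Opacus hooks call .detach() on the forward input.
        seq_len = input_ids.shape[1]
        positions = torch.arange(
            past_key_values_length, past_key_values_length + seq_len, dtype=torch.long, device=input_ids.device
        )
        return super().forward(positions + self.offset)
```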
08-02-2022 14:17:25
08-02-2022 14:17:25
Tagging @sgugger as you have previously shown support for private training of HF models via Opacus.<|||||>Happy to look at a PR that fixes the issue!<|||||>Fantastic, I'll create it now 👍<|||||>@donebydan Hi, have you fine-tuned BART with Opacus? I'm working on it and have updated my code to the merged change, but the model's generations are weird, e.g. repeating `the the..`<|||||>Hi @SeolhwaLee, we are integrating a BART with Opacus example in our [`dp-transformers`](https://github.com/microsoft/dp-transformers) library. It is [this PR](https://github.com/microsoft/dp-transformers/pull/5), but it is pending some updates to newer Opacus (1.13) and HF versions right now.
transformers
18,424
closed
Cannot replicate T5 performance on WMT14
### System Info I am trying to replicate T5 finetuning on WMT with the following hyperparameters (as close as possible to the paper https://www.jmlr.org/papers/volume21/20-074/20-074.pdf): --model_name_or_path t5-small --source_lang en --target_lang de --dataset_name stas/wmt14-en-de-pre-processed --max_source_length 512 --max_target_length 512 --val_max_target_length 512 --source_prefix="translate English to German: " --predict_with_generate --save_steps 5000 --eval_steps 5000 --learning_rate 0.001 --max_steps 262144 --optim adafactor --lr_scheduler_type constant --gradient_accumulation_steps 2 --per_device_train_batch_size 64 However, the best model performance I get is around 13 BLEU whereas in the paper reported BLEU is around 27. Any comments on how to fix this ? Script: https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation.py Environment: - `transformers` version: 4.20.1 - Platform: Linux-4.18.0-348.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.4 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - A100 - Using distributed or parallel set-up in script?: No ### Who can help? @patrickvonplaten, @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Use the script with the hyperparameters above : https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation.py ### Expected behavior BLEU score should be around 27.
08-02-2022 14:08:10
08-02-2022 14:08:10
Hey @ekurtulus, such a low BLEU score looks indeed suspicious! Do you have any training stats / logs / graphs to share? <|||||>Just a tip: It might be a good idea to save the predictions (here the translation) during evaluation, so we can look into them to see what might goes wrong. When saving the translation, it's better to save the source text and the label (target text) too. I do this in a manual way though, this is not directly available in the official training scripts.<|||||>Sorry for being late. I will take a look.<|||||>> Hey @ekurtulus, such a low BLEU score looks indeed suspicious! Do you have any training stats / logs / graphs to share? My experiments are on an HPC system, so since it's been a while, I unfortunately do not have the logs or the graphs.<|||||>@patrickvonplaten @patil-suraj Do you know if `--dataset_name stas/wmt14-en-de-pre-processed` (which is pre-processed using a script from fairseq) is the good dataset for T5 (En -> German)? `T5` is from Google, and in the paper, I can't find any mention of `fairseq`. I think T5 doesn't use this particular pre-processing, but I am not 100% sure. <|||||>@ekurtulus I also think the checkpoints `t5-small`, `t5-base` etc. have been trained on WMT / CNN Dailymail datasets, as shown in the code snippet below. So using those checkpoints to replicate the results (by finetuning on those datasets) doesn't really make sense IMO. ### Code snippet ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("t5-small") tokenizer = AutoTokenizer.from_pretrained("t5-small") inputs = tokenizer( "translate English to German: I am a good student.", return_tensors="pt", ) outputs = model.generate(inputs["input_ids"], max_length=64, num_beams=4, early_stopping=True) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) inputs = tokenizer( "translate English to French: I am a good student.", return_tensors="pt", ) outputs = model.generate(inputs["input_ids"], max_length=64, num_beams=4, early_stopping=True) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") tokenizer = AutoTokenizer.from_pretrained("t5-base") inputs = tokenizer( """WASHINGTON (CNN) -- Doctors removed five small polyps from President Bush's colon on Saturday, and "none appeared worrisome," a White House spokesman said. The polyps were removed and sent to the National Naval Medical Center in Bethesda, Maryland, for routine microscopic examination, spokesman Scott Stanzel said. Results are expected in two to three days. All were small, less than a centimeter [half an inch] in diameter, he said. Bush is in good humor, Stanzel said, and will resume his activities at Camp David. During the procedure Vice President Dick Cheney assumed presidential power. Bush reclaimed presidential power at 9:21 a.m. after about two hours. Doctors used "monitored anesthesia care," Stanzel said, so the president was asleep, but not as deeply unconscious as with a true general anesthetic. He spoke to first lady Laura Bush -- who is in Midland, Texas, celebrating her mother's birthday -- before and after the procedure, Stanzel said. Afterward, the president played with his Scottish terriers, Barney and Miss Beazley, Stanzel said. He planned to have lunch at Camp David and have briefings with National Security Adviser Stephen Hadley and White House Chief of Staff Josh Bolten, and planned to take a bicycle ride Saturday afternoon. 
Cheney, meanwhile, spent the morning at his home on Maryland's eastern shore, reading and playing with his dogs, Stanzel said. Nothing occurred that required him to take official action as president before Bush reclaimed presidential power. The procedure was supervised by Dr. Richard Tubb, Bush's physician, and conducted by a multidisciplinary team from the National Naval Medical Center in Bethesda, Maryland, the White House said. Bush's last colonoscopy was in June 2002, and no abnormalities were found, White House spokesman Tony Snow said. The president's doctor had recommended a repeat procedure in about five years. A colonoscopy is the most sensitive test for colon cancer, rectal cancer and polyps, small clumps of cells that can become cancerous, according to the Mayo Clinic. Small polyps may be removed during the procedure. Snow said on Friday that Bush had polyps removed during colonoscopies before becoming president. Snow himself is undergoing chemotherapy for cancer that began in his colon and spread to his liver. Watch Snow talk about Bush's procedure and his own colon cancer » . "The president wants to encourage everybody to use surveillance," Snow said. The American Cancer Society recommends that people without high risk factors or symptoms begin getting screened for signs of colorectal cancer at age 50. E-mail to a friend .""" return_tensors="pt", ) outputs = model.generate(inputs["input_ids"], max_length=64, num_beams=4, early_stopping=True) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ### Outputs ```bash Ich bin ein guter Student. Je suis un bon étudiant. ``` ```bash five small polyps were removed from president Bush's colon on Saturday. none of the polyps appeared worrisome, a white house spokesman said. During the procedure, vice president Dick Cheney assumed presidential power. ```<|||||>> @patrickvonplaten @patil-suraj Do you know if `--dataset_name stas/wmt14-en-de-pre-processed` (which is pre-processed using a script from fairseq) is the good dataset for T5 (En -> German)? > > `T5` is from Google, and in the paper, I can't find any mention of `fairseq`. I think T5 doesn't use this particular pre-processing, but I am not 100% sure. Fairseq preprocessed version is suggested [at the official repository](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation).<|||||>> @ekurtulus I also think the checkpoints `t5-small`, `t5-base` etc. have been trained on WMT / CNN Dailymail datasets, as shown in the code snippet below. So using those checkpoints to replicate the results (by finetuning on those datasets) doesn't really make sense IMO. 
> > ### Code snippet > ```python > from transformers import AutoModelForSeq2SeqLM, AutoTokenizer > > model = AutoModelForSeq2SeqLM.from_pretrained("t5-small") > tokenizer = AutoTokenizer.from_pretrained("t5-small") > > inputs = tokenizer( > "translate English to German: I am a good student.", > return_tensors="pt", > ) > outputs = model.generate(inputs["input_ids"], max_length=64, num_beams=4, early_stopping=True) > print(tokenizer.decode(outputs[0], skip_special_tokens=True)) > > inputs = tokenizer( > "translate English to French: I am a good student.", > return_tensors="pt", > ) > outputs = model.generate(inputs["input_ids"], max_length=64, num_beams=4, early_stopping=True) > print(tokenizer.decode(outputs[0], skip_special_tokens=True)) > > model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") > tokenizer = AutoTokenizer.from_pretrained("t5-base") > > inputs = tokenizer( > """WASHINGTON (CNN) -- Doctors removed five small polyps from President Bush's colon on Saturday, and "none appeared worrisome," a White House spokesman said. The polyps were removed and sent to the National Naval Medical Center in Bethesda, Maryland, for routine microscopic examination, spokesman Scott Stanzel said. Results are expected in two to three days. All were small, less than a centimeter [half an inch] in diameter, he said. Bush is in good humor, Stanzel said, and will resume his activities at Camp David. During the procedure Vice President Dick Cheney assumed presidential power. Bush reclaimed presidential power at 9:21 a.m. after about two hours. Doctors used "monitored anesthesia care," Stanzel said, so the president was asleep, but not as deeply unconscious as with a true general anesthetic. He spoke to first lady Laura Bush -- who is in Midland, Texas, celebrating her mother's birthday -- before and after the procedure, Stanzel said. Afterward, the president played with his Scottish terriers, Barney and Miss Beazley, Stanzel said. He planned to have lunch at Camp David and have briefings with National Security Adviser Stephen Hadley and White House Chief of Staff Josh Bolten, and planned to take a bicycle ride Saturday afternoon. Cheney, meanwhile, spent the morning at his home on Maryland's eastern shore, reading and playing with his dogs, Stanzel said. Nothing occurred that required him to take official action as president before Bush reclaimed presidential power. The procedure was supervised by Dr. Richard Tubb, Bush's physician, and conducted by a multidisciplinary team from the National Naval Medical Center in Bethesda, Maryland, the White House said. Bush's last colonoscopy was in June 2002, and no abnormalities were found, White House spokesman Tony Snow said. The president's doctor had recommended a repeat procedure in about five years. A colonoscopy is the most sensitive test for colon cancer, rectal cancer and polyps, small clumps of cells that can become cancerous, according to the Mayo Clinic. Small polyps may be removed during the procedure. Snow said on Friday that Bush had polyps removed during colonoscopies before becoming president. Snow himself is undergoing chemotherapy for cancer that began in his colon and spread to his liver. Watch Snow talk about Bush's procedure and his own colon cancer » . "The president wants to encourage everybody to use surveillance," Snow said. The American Cancer Society recommends that people without high risk factors or symptoms begin getting screened for signs of colorectal cancer at age 50. 
E-mail to a friend .""" > return_tensors="pt", > ) > outputs = model.generate(inputs["input_ids"], max_length=64, num_beams=4, early_stopping=True) > print(tokenizer.decode(outputs[0], skip_special_tokens=True)) > ``` > > ### Outputs > ```shell > Ich bin ein guter Student. > Je suis un bon étudiant. > ``` > > ```shell > five small polyps were removed from president Bush's colon on Saturday. none of the polyps appeared worrisome, a white house spokesman said. During the procedure, vice president Dick Cheney assumed presidential power. > ``` What checkpoint should use then ? <|||||>> > @patrickvonplaten @patil-suraj Do you know if `--dataset_name stas/wmt14-en-de-pre-processed` (which is pre-processed using a script from fairseq) is the good dataset for T5 (En -> German)? > > `T5` is from Google, and in the paper, I can't find any mention of `fairseq`. I think T5 doesn't use this particular pre-processing, but I am not 100% sure. > > Fairseq preprocessed version is suggested [at the official repository](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation). I think my colleagues @patil-suraj and @patrickvonplaten are the best persons for this question. The trainer script could work with several models (T5, Bart, etc.). Bart is from facebook/fairseq (so probably used the pre-processed dataset), but T5 is from Google. I am not 100% sure if the combination `stas/wmt14-en-de-pre-processed` + `T5` is the best choice to compare against the original T5 checkpoint performance (which seems to be trained already on the translation task). If you would like to, one thing you could try is to measure the T5 checkpoint performance against the original [WMT14 dataset](https://huggingface.co/datasets/wmt14) without any finetuning. And probably against the preprocessed dataset version too. From there, we might get better ideas.<|||||>Note that we cannot guarantee perfect replication of all models for every result in their respective paper. Given the extremely low results of your training though there is probably a bug. Here I'd suggest to try out different learning rates, learning rate schedulers (e.g. --lr_scheduler_type constant looks weird to me, I think a linear decrease makes more sense). Also note that the original model was trained on TPU with Tensorflow in bfloat16 where as here we're training on GPU with PyTorch. Good that you have a A100 - could you try simply using: - AdamW (not adafactor as we don't have the official implementation) - linear warmup + linear descent for learning rate scheduler instead? <|||||>Agree with @patrickvonplaten, especially for the `AdamW` optimizer. I think [Hugging Face Forums](https://discuss.huggingface.co/) would be a better place for this question - if you want to post there too. If a bug (say in the model or in the training script) is found, don't hesitate to report here :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I am trying to reproduce the performance of transformer-base (from attention is all you need) on WMT14. I am using FSMT because I cannot find an implementation of the transformer. I was wondering which dataset and tokenizer are the best choices. 1. `stas/wmt14-en-de-pre-processed` with `facebook/wmt19-en-de` 2. 
`wmt14` with `facebook/wmt19-en-de` Especially, I do not know which tokenizer should be used. Thanks in advance if you could provide some suggestions!<|||||>unstale
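A minimal sketch (assuming the `datasets` and `evaluate` libraries are installed) of the zero-shot evaluation suggested earlier in the thread — scoring the pretrained `t5-small` checkpoint on WMT14 before any finetuning:

```python
import evaluate
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
tokenizer = AutoTokenizer.from_pretrained("t5-small")
metric = evaluate.load("sacrebleu")

# small test slice for a quick sanity check
dataset = load_dataset("wmt14", "de-en", split="test[:100]")

predictions, references = [], []
for example in dataset:
    source = "translate English to German: " + example["translation"]["en"]
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_length=128, num_beams=4)
    predictions.append(tokenizer.decode(output_ids[0], skip_special_tokens=True))
    references.append([example["translation"]["de"]])

print(metric.compute(predictions=predictions, references=references)["score"])
```

Comparing this number against the score after finetuning (and against the pre-processed dataset variant) should help narrow down whether the drop comes from the data, the hyperparameters, or the training script.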
transformers
18,423
closed
update maskformer docs
- Updates the MaskFormer docs: _is_thing_map_ -> _label_ids_to_fuse_. See this [issue](https://github.com/huggingface/transformers/issues/18157).
08-02-2022 13:46:31
08-02-2022 13:46:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,422
closed
Fix `test_load_default_pipelines_tf` test error
# What does this PR do? My change in #18292 needs to add `tf` under `default` key (for `image-classification`), otherwise we have ```bash FAILED tests/pipelines/test_pipelines_common.py::PipelineUtilsTest::test_load_default_pipelines_tf ``` with error message ```bash else: # normal case - non-translation pipeline > model_id, revision = task_dict["default"]["model"][framework] E KeyError: 'tf' ```
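Illustrative shape of the fix — the values below are placeholders, only the key path matters, and it mirrors the failing lookup in the test:

```python
# Only the key path matters here -- it matches task_dict["default"]["model"][framework] in the test.
task_dict = {
    "default": {
        "model": {
            "pt": ("some-org/some-image-model", "some-revision"),
            "tf": ("some-org/some-image-model", "some-revision"),  # the entry that was missing
        }
    }
}

framework = "tf"
model_id, revision = task_dict["default"]["model"][framework]
```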
08-02-2022 13:16:31
08-02-2022 13:16:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,421
closed
Change audio kwarg to images in TROCR processor
# What does this PR do? Fix a bug in the TrOCR processor introduced in #18325. Currently, we have a [failed job run](https://github.com/huggingface/transformers/runs/7603716998?check_suite_focus=true) with the error ```bash if audio is None and text is None: > raise ValueError("You need to specify either an `audio` or `text` input to process.") E ValueError: You need to specify either an `audio` or `text` input to process. ```
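A hypothetical sketch of the intended check (not the actual diff): the processor should accept an `images` keyword forwarded to the feature extractor rather than `audio`.

```python
class TrOCRProcessorSketch:
    """Hypothetical sketch -- illustrates the corrected images/text check, not the real TrOCRProcessor."""

    def __init__(self, feature_extractor, tokenizer):
        self.feature_extractor = feature_extractor
        self.tokenizer = tokenizer

    def __call__(self, images=None, text=None, **kwargs):
        if images is None and text is None:
            raise ValueError("You need to specify either an `images` or `text` input to process.")
        outputs = {}
        if images is not None:
            # forward `images` (not `audio`) to the feature extractor
            outputs["pixel_values"] = self.feature_extractor(images, **kwargs)["pixel_values"]
        if text is not None:
            outputs["labels"] = self.tokenizer(text, **kwargs)["input_ids"]
        return outputs
```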
08-02-2022 10:03:59
08-02-2022 10:03:59
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,420
closed
add `transformers-cli pt-to-flax`
# What does this PR do? Following the addition of the `transformers-cli pt-to-tf` command, this uses the same script to convert to `FLAX`. Builds on [another PR](https://github.com/huggingface/transformers/pull/18419).
08-02-2022 09:33:53
08-02-2022 09:33:53
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18420). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,419
closed
Load sharded pt to flax
# What does this PR do? Add conversion to `flax` from sharded `pytorch` checkpoints. Follows #18026, which was closed to rename the branch (not really necessary, sorry for the inconvenience). Should fix #17537
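Once this is in, loading a sharded PyTorch checkpoint into Flax should follow the usual conversion path; a usage sketch (the model id below is a placeholder, not a real repo):

```python
from transformers import FlaxAutoModelForSeq2SeqLM

# `from_pt=True` triggers the PyTorch -> Flax weight conversion; with this PR it should also
# work when the PyTorch checkpoint is sharded across several files.
model = FlaxAutoModelForSeq2SeqLM.from_pretrained("some-org/some-sharded-pt-checkpoint", from_pt=True)
```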
08-02-2022 09:32:55
08-02-2022 09:32:55
_The documentation is not available anymore as the PR was closed or merged._<|||||>Okay 👍🏻 Thanks for the review gonna fix that asap !
transformers
18,418
closed
Fix the hub user name in a longformer doctest checkpoint
# What does this PR do? `jpelhaw` does not exist on the Hub, and the test fails at this moment. Run locally with this PR: the doctest passes for this model now.
08-02-2022 09:18:38
08-02-2022 09:18:38
cc @ArthurZucker <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Checked with the last run (May 18th -- sorry, I should have checked the doctest status much earlier): it worked with the previous checkpoint string, which suggests that the user renamed their account since then. I agree with you regarding `not the most strategic thing` (I raised the doubt before, but we decided to continue with this approach to see how things go)<|||||>@LysandreJik just reminded me that the migration to the new cache system on huggingface_hub will magically support repo renames so this won't be a problem in the future!
transformers
18,417
closed
run_clip.py RuntimeError
### System Info - `transformers` version: 4.22.0.dev0 - Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? Hi, @patil-suraj When I run `run_clip.py` following the steps in the [README](https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/README.md), I get an error like the following: ``` [INFO|trainer.py:2644] 2022-08-02 04:07:15,699 >> Saving model checkpoint to clip-roberta-finetuned/checkpoint-4500 [INFO|configuration_utils.py:446] 2022-08-02 04:07:15,701 >> Configuration saved in clip-roberta-finetuned/checkpoint-4500/config.json [INFO|modeling_utils.py:1567] 2022-08-02 04:07:17,602 >> Model weights saved in clip-roberta-finetuned/checkpoint-4500/pytorch_model.bin /root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all ' 33%|███████████████████████████████████████████████████████████████████████████████▉ | 4623/13872 [1:56:27<3:50:22, 1.49s/it]Traceback (most recent call last): File "/home/gsj/transformers/examples/pytorch/contrastive-image-text/run_clip.py", line 537, in <module> main() File "/home/gsj/transformers/examples/pytorch/contrastive-image-text/run_clip.py", line 508, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py", line 1502, in train return inner_training_loop( File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py", line 1744, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py", line 2474, in training_step loss = self.compute_loss(model, inputs) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py", line 2506, in compute_loss outputs = model(**inputs) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 169, in forward return self.gather(outputs, self.output_device) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 181, in gather return gather(outputs, output_device, dim=self.dim) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py", line 78, in gather res = gather_map(outputs) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py", line 69, in gather_map return type(out)((k, gather_map([d[k] for d in outputs])) File "<string>", line 10, in __init__ File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/utils/generic.py", line 188, in __post_init__ for element in iterator: File 
"/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py", line 69, in <genexpr> return type(out)((k, gather_map([d[k] for d in outputs])) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map return Gather.apply(target_device, dim, *outputs) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py", line 75, in forward return comm.gather(inputs, ctx.dim, ctx.target_device) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/comm.py", line 235, in gather return torch._C._gather(tensors, dim, destination) RuntimeError: Input tensor at index 1 has invalid shape [4, 4], but expected [4, 5] ``` How to solve this error. Thanks! ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction python run_clip.py ### Expected behavior run run_clip.py success
08-02-2022 08:23:08
08-02-2022 08:23:08
Hey @gongshaojie12, thanks for the issue! Could you please provide the full command you run? cc @ydshieh; I believe you have worked with this script in the past<|||||>@gongshaojie12 Could you also provide a bit more information. For example, do you download and use the COCO dataset as in the README?<|||||>Hey @ydshieh @LysandreJik thank you very much for your replies. My steps are as follows: 1,Create a `VisionTextDualEncoderModel` ``` from transformers import ( VisionTextDualEncoderModel, VisionTextDualEncoderProcessor, AutoTokenizer, AutoFeatureExtractor ) model = VisionTextDualEncoderModel.from_vision_text_pretrained( "openai/clip-vit-base-patch32", "roberta-base" ) tokenizer = AutoTokenizer.from_pretrained("roberta-base") feat_ext = AutoFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32") processor = VisionTextDualEncoderProcessor(feat_ext, tokenizer) model.save_pretrained("clip-roberta") processor.save_pretrained("clip-roberta") ``` 2,Manually download COCO dataset to /home/gsj/data directory ![image](https://user-images.githubusercontent.com/6407116/183041989-726d1ec6-798d-48f0-bc1d-bf88fb10c59d.png) 3,Full run command: ``` python run_clip.py \ --output_dir clip-roberta-finetuned \ --model_name_or_path clip-roberta/ \ --data_dir /home/gsj/data \ --dataset_name ydshieh/coco_dataset_script \ --dataset_config_name=2017 \ --image_column image_path \ --caption_column caption \ --remove_unused_columns=False \ --do_train \ --do_eval \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="64" \ --learning_rate="5e-5" \ --warmup_steps="0" \ --weight_decay 0.1 \ --overwrite_output_dir ```<|||||>In addition, I commented the line `image_transformations = torch.jit.script(image_transformations)` and added some `prints`. The complete `run_clip.py` is as follows: [run_clip.zip](https://github.com/huggingface/transformers/files/9267204/run_clip.zip) <|||||>Hi @gongshaojie12 , I have to change ```python train_dataset = dataset["train"][:2000] ``` to ``` train_dataset = dataset["train"] data_args.max_train_samples = 2000 ``` otherwise get an attribute error. (Ideally, we should specify this limits in the command line). Same for the validation dataset. I am wondering if you face the issue when running on the whole dataset. With the limits of `2000` and `500` (that are in your script), I am not able to reproduce.<|||||>@gongshaojie12 I want to double check if you use multiple GPUs ??<|||||>> Hi @gongshaojie12 , I have to change > > ```python > train_dataset = dataset["train"][:2000] > ``` > > to > > ``` > train_dataset = dataset["train"] > data_args.max_train_samples = 2000 > ``` > > otherwise get an attribute error. (Ideally, we should specify this limits in the command line). Same for the validation dataset. > > I am wondering if you face the issue when running on the whole dataset. With the limits of `2000` and `500` (that are in your script), I am not able to reproduce. Hi @ydshieh thank you for your reply. Because the GPU machine is in the company, I can't run it on the whole dataset right now, when I come back to the company in two days I will run on the whole dataset,and feedback the results. At the same time, after adding the code `data_args.max_train_samples = 2000`, I will also test whether it is running normally on my GPU machine<|||||>> @gongshaojie12 I want to double check if you use multiple GPUs ?? 
Hi @ydshieh ,yes, I used two GPUs for training<|||||>> Hi @gongshaojie12 , I have to change > > ```python > train_dataset = dataset["train"][:2000] > ``` > > to > > ``` > train_dataset = dataset["train"] > data_args.max_train_samples = 2000 > ``` > > otherwise get an attribute error. (Ideally, we should specify this limits in the command line). Same for the validation dataset. > > I am wondering if you face the issue when running on the whole dataset. With the limits of `2000` and `500` (that are in your script), I am not able to reproduce. Hi, @ydshieh When running on the whole dataset, I still get the following error: ``` [INFO|configuration_utils.py:446] 2022-08-07 22:36:34,789 >> Configuration saved in clip-roberta-finetuned/checkpoint-1500/config.json [INFO|modeling_utils.py:1567] 2022-08-07 22:36:36,673 >> Model weights saved in clip-roberta-finetuned/checkpoint-1500/pytorch_model.bin /root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all ' {'loss': 0.5521, 'learning_rate': 4.2791234140715114e-05, 'epoch': 0.43} 14%|█████▏ | 2000/13872 [50:51<4:57:11, 1.50s/it][INFO|trainer.py:2644] 2022-08-07 22:49:16,799 >> Saving model checkpoint to clip-roberta-finetuned/checkpoint-2000 [INFO|configuration_utils.py:446] 2022-08-07 22:49:16,800 >> Configuration saved in clip-roberta-finetuned/checkpoint-2000/config.json [INFO|modeling_utils.py:1567] 2022-08-07 22:49:18,741 >> Model weights saved in clip-roberta-finetuned/checkpoint-2000/pytorch_model.bin /root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all ' {'loss': 0.5047, 'learning_rate': 4.098904267589389e-05, 'epoch': 0.54} 18%|██████▏ | 2500/13872 [1:03:34<4:45:42, 1.51s/it][INFO|trainer.py:2644] 2022-08-07 23:01:59,622 >> Saving model checkpoint to clip-roberta-finetuned/checkpoint-2500 [INFO|configuration_utils.py:446] 2022-08-07 23:01:59,624 >> Configuration saved in clip-roberta-finetuned/checkpoint-2500/config.json [INFO|modeling_utils.py:1567] 2022-08-07 23:02:01,520 >> Model weights saved in clip-roberta-finetuned/checkpoint-2500/pytorch_model.bin /root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. 
warnings.warn('Was asked to gather along dimension 0, but all ' {'loss': 0.4655, 'learning_rate': 3.9186851211072664e-05, 'epoch': 0.65} ^[[B^[[B^[[B 22%|███████▎ | 3000/13872 [1:16:13<4:34:09, 1.51s/it][INFO|trainer.py:2644] 2022-08-07 23:14:38,286 >> Saving model checkpoint to clip-roberta-finetuned/checkpoint-3000 [INFO|configuration_utils.py:446] 2022-08-07 23:14:38,287 >> Configuration saved in clip-roberta-finetuned/checkpoint-3000/config.json [INFO|modeling_utils.py:1567] 2022-08-07 23:14:40,239 >> Model weights saved in clip-roberta-finetuned/checkpoint-3000/pytorch_model.bin /root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all ' {'loss': 0.4323, 'learning_rate': 3.7384659746251445e-05, 'epoch': 0.76} ^[[B^[[B^[[B 25%|████████▌ | 3500/13872 [1:28:56<4:20:36, 1.51s/it][INFO|trainer.py:2644] 2022-08-07 23:27:21,056 >> Saving model checkpoint to clip-roberta-finetuned/checkpoint-3500 [INFO|configuration_utils.py:446] 2022-08-07 23:27:21,057 >> Configuration saved in clip-roberta-finetuned/checkpoint-3500/config.json [INFO|modeling_utils.py:1567] 2022-08-07 23:27:22,967 >> Model weights saved in clip-roberta-finetuned/checkpoint-3500/pytorch_model.bin /root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all ' {'loss': 0.4047, 'learning_rate': 3.558246828143022e-05, 'epoch': 0.87} 29%|█████████▊ | 4000/13872 [1:41:37<4:07:33, 1.50s/it][INFO|trainer.py:2644] 2022-08-07 23:40:02,408 >> Saving model checkpoint to clip-roberta-finetuned/checkpoint-4000 [INFO|configuration_utils.py:446] 2022-08-07 23:40:02,409 >> Configuration saved in clip-roberta-finetuned/checkpoint-4000/config.json [INFO|modeling_utils.py:1567] 2022-08-07 23:40:04,339 >> Model weights saved in clip-roberta-finetuned/checkpoint-4000/pytorch_model.bin /root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all ' {'loss': 0.3859, 'learning_rate': 3.3780276816608994e-05, 'epoch': 0.97} 32%|███████████ | 4500/13872 [1:54:19<3:55:46, 1.51s/it][INFO|trainer.py:2644] 2022-08-07 23:52:44,544 >> Saving model checkpoint to clip-roberta-finetuned/checkpoint-4500 [INFO|configuration_utils.py:446] 2022-08-07 23:52:44,546 >> Configuration saved in clip-roberta-finetuned/checkpoint-4500/config.json [INFO|modeling_utils.py:1567] 2022-08-07 23:52:46,431 >> Model weights saved in clip-roberta-finetuned/checkpoint-4500/pytorch_model.bin /root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. 
warnings.warn('Was asked to gather along dimension 0, but all ' 33%|███████████▎ | 4623/13872 [1:57:33<3:51:51, 1.50s/it]Traceback (most recent call last): File "/home/gsj/transformers/examples/pytorch/contrastive-image-text/run_clip.py", line 539, in <module> main() File "/home/gsj/transformers/examples/pytorch/contrastive-image-text/run_clip.py", line 510, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py", line 1502, in train return inner_training_loop( File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py", line 1744, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py", line 2474, in training_step loss = self.compute_loss(model, inputs) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/trainer.py", line 2506, in compute_loss outputs = model(**inputs) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 169, in forward return self.gather(outputs, self.output_device) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/data_parallel.py", line 181, in gather return gather(outputs, output_device, dim=self.dim) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py", line 78, in gather res = gather_map(outputs) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py", line 69, in gather_map return type(out)((k, gather_map([d[k] for d in outputs])) File "<string>", line 10, in __init__ File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/transformers/utils/generic.py", line 188, in __post_init__ for element in iterator: File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py", line 69, in <genexpr> return type(out)((k, gather_map([d[k] for d in outputs])) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map return Gather.apply(target_device, dim, *outputs) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/_functions.py", line 75, in forward return comm.gather(inputs, ctx.dim, ctx.target_device) File "/root/anaconda3/envs/h-transformers/lib/python3.9/site-packages/torch/nn/parallel/comm.py", line 235, in gather return torch._C._gather(tensors, dim, destination) RuntimeError: Input tensor at index 1 has invalid shape [4, 4], but expected [4, 5] 33%|███████████▎ | 4623/13872 [1:57:34<3:55:12, 1.53s/it] You have new mail in /var/spool/mail/root ``` Also, when adding the code `data_args.max_train_samples = 2000`, it works fine <|||||>Hi, it turns out that the last batch has only 9 examples, and it is splitted to a batch of `4` and another `5` elements (as we use 2 GPUs). This causes some issue for CLIP model. 
You can actually get the same issue very quickly by specifying ```python --max_train_samples=137 --max_eval_samples=137 ``` (remember to **remove the places of `2000` and `500` in your code first**) Here `137 = 128 + 9 = 2 * 64 + 9` (so we have a complete batch and a remaining batch) A quick solution is to add ``` --dataloader_drop_last True ``` <|||||>Hi, @ydshieh I got it, thanks a lot!
transformers
18,416
closed
Add missing lang tokens in M2M100Tokenizer.get_vocab
# What does this PR do? The lang tokens were missing from `M2M100Tokenizer.get_vocab`. The `get_vocab` method is updated to match other multilingual tokenizers such as `NllbTokenizer` and `MBart50Tokenizer`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @n1t0, @LysandreJik, @SaulLu
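For reference, a hypothetical sketch of the intended behaviour — attribute names such as `lang_token_to_id` are assumptions about the tokenizer's internals, and the merged diff may differ:

```python
def get_vocab(self) -> dict:
    # start from the base SentencePiece/encoder vocabulary
    vocab = dict(self.encoder)
    # make sure the special language-code tokens (e.g. "__en__") are part of the mapping
    vocab.update(self.lang_token_to_id)  # assumed attribute mapping lang tokens to ids
    # and include any additionally added tokens
    vocab.update(self.added_tokens_encoder)
    return vocab
```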
08-02-2022 07:44:22
08-02-2022 07:44:22
_The documentation is not available anymore as the PR was closed or merged._<|||||>A friendly re-ping to @patil-suraj :hugs: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Maybe of interest to @ArthurZucker :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Re-ping of @ArthurZucker
transformers
18,415
closed
Add Spanish translation of run_scripts.mdx
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Add the Spanish translation for `run_scripts.mdx` as part of the #15947 issue. Changes include the Spanish version of the original document and the `updated _toctree.yml` file. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Task assignment [here](https://github.com/huggingface/transformers/issues/15947#issuecomment-1196245514). - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-02-2022 06:06:27
08-02-2022 06:06:27
_The documentation is not available anymore as the PR was closed or merged._<|||||>@omarespejel, can you help me review this PR, please?<|||||>Hi @donelianc! Amazing translation. I only found a few nits in my review.<|||||>@omarespejel, thanks for your great review! I submitted the suggested changes in my previous commit. I'll keep my translator streak if you assign me `converting_tensorflow_models.mdx` 😃 <|||||>Thanks, @donelianc for the translation! @sgugger LGTM :) @donelianc thanks, I will add you for `converting_tensorflow_models.mdx` 🚀
transformers
18,414
closed
Add DocumentQuestionAnswering pipeline
# What does this PR do? This PR extends VisualQuestionAnsweringPipeline to accept `words` and `boxes` as input, passes them into the tokenizer/model (along with the question), and post-processes their `QuestionAnsweringModelOutput` response. Fixes #18380 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @Narsil
08-02-2022 05:38:05
08-02-2022 05:38:05
@Narsil this is basically a skeleton implementation that I thought I'd send out sooner than later to start getting your input. I've left a few questions throughout tagged with "TODO" in the comments. The big question is how much/whether to reuse the code in QuestionAnsweringPipeline, which has a lot of overlap (notably preparing the spans and post-processing the output). For example, I could refactor out methods like `QuestionAnsweringPipeline.decode` to share the implementation, inherit from `QuestionAnsweringPipeline`, etc.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@Narsil thank you for the review! Before I go in and apply the comments, I thought it might be worth discussing the required (or not) `image` argument at a high level. The reason (I think) It's important to allow users to pass in words/boxes _instead of_ an image is that users often either want to run their own OCR (e.g. using a proprietary system like Google/Microsoft) OR are extracting data from documents that have embedded text (e.g. many PDF documents, Word, Excel, etc.). Furthermore, there are a lot of pre-processing tricks that are relevant to certain OCR implementations (e.g. some try to order words by line, others by block, etc.) that have a very significant impact on BERT-inspired models like LayoutLM (because of the attention mechanism, position ids, etc.). tl;dr, having some control over words/boxes is very important if you're trying to use the pipeline in a production scenario. Now, you could argue that if they want to do this, they could use the question answering pipeline. In fact, when I started exploring HuggingFace/transformers, I did just that! The problem is that if you join everything together (into `context`), you actually lose some valuable information about how the words are separated in the document (including the distance between them). In other words -- it's very important that you retain information about which words correspond to which coordinates. I could also see an argument that if a user wants this level of control, they shouldn't use a pipeline in the first place, but the implementation of QA preprocessing and postprocessing are really compelling -- which kind of drew us to really wanting to take advantage of them vs. try to reinvent them elsewhere. Hopefully that makes sense and adds some context for why I proposed making `image` optional. I'm very open to alternate solutions too, but just wanted to clarify the use case a bit. For example, another option could be to add a new pipeline called `DocumentQuestionAnswering` (or similar) that handles inputs of this shape. Let me know your thoughts.<|||||>> @Narsil thank you for the review! Before I go in and apply the comments, I thought it might be worth discussing the required (or not) `image` argument at a high level. Yes very much so ! I think it's great to have a clear conversation about this. I will try and convey this part of the library's perspective, but having your view is great too since I probably know less about OCRs and overall document processing than you. > > The reason (I think) It's important to allow users to pass in words/boxes _instead of_ an image is that users often either want to run their own OCR (e.g. using a proprietary system like Google/Microsoft) OR are extracting data from documents that have embedded text (e.g. many PDF documents, Word, Excel, etc.). Furthermore, there are a lot of pre-processing tricks that are relevant to certain OCR implementations (e.g. 
some try to order words by line, others by block, etc.) that have a very significant impact on BERT-inspired models like LayoutLM (because of the attention mechanism, position ids, etc.). tl;dr, having some control over words/boxes is very important if you're trying to use the pipeline in a production scenario. This is very interesting to know ! I was under the impression that using an OCR could be streamlined much more. The fact that the OCR has much impact on the quality of the results doesn't surprise me (and the `is_split_into_words` might play a non negligible role here) > > Now, you could argue that if they want to do this, they could use the question answering pipeline. In fact, when I started exploring HuggingFace/transformers, I did just that! The problem is that if you join everything together (into `context`), you actually lose some valuable information about how the words are separated in the document (including the distance between them). In other words -- it's very important that you retain information about which words correspond to which coordinates. Yes I felt the same thing when reading your code and pondering whether it should actually belonged or not in `QA` instead of `VQA`. I think you are right, the image information is too valuable to be thrown away. > > I could also see an argument that if a user wants this level of control, they shouldn't use a pipeline in the first place, but the implementation of QA preprocessing and postprocessing are really compelling -- which kind of drew us to really wanting to take advantage of them vs. try to reinvent them elsewhere. Very reasonable ! :) > > Hopefully that makes sense and adds some context for why I proposed making `image` optional. I'm very open to alternate solutions too, but just wanted to clarify the use case a bit. Let me know your thoughts. Ok, I am going to recap the main goal of the pipeline: ``` A pipeline is a tool to make ML accessible to non ML practitioners. ``` That's the first goal, and in doing that, we don't want to hide any ML details that might hurt users unknowingly (like chunking things that can hurt output quality without being opt-in). So hide as many things as possible, when the defaults are correct, but don't hide any magic that could be use-case dependent. For instance, truncating without asking for explicit user consent (via a parameter) means the user will try and send large chunks of texts, and get an output that will correspond only to a tiny chunk of it, without him realizing it. Another secondary goal is to make them as reusable/extendable as possible, but only when it doesn't contradict any of the previous goals. With that in mind, you see why having inputs/outputs that depend on the actual model type, forces non ML practitioners to know about model types, where the goal is to try and lift that burden. If we can ensure that sending the same input, will receive the same output, it means users can jump very easily between models. So when AwesomeModelA comes out, you can just swap its name and make it work. Same goes for iterations/fine-tuning of the same model or different models and so one. Here I can see I think two solutions: 1/ We create a new pipeline (`DocumentQuestionAnsweringPipeline` ?). The set of I/O is different so we should have different pipelines for these. For this pipeline it seems the input is `boxes` + `words` (which I would call `texts` personally as OCRs probably extract full string and don't necessarily reason about words). 
It's easy, but puts all the burden of the OCR on the user upfront.(If OCR choice is super tricky and we cannot realistically make that choice in a general fashion for users, it's probably the way to go). 2/ We keep using `VisualQuestionAnswering` but we enable a very easy way to use a custom `OCR`: - Most users will trigger an initial error that `pytesseract` (or something else) is not present and get suggested to install it to get an easy impression about results (mention all the caveats/link to some docs on how to choose the OCR for advanced users). - When those sane defaults are present, the pipelines will use those. - For experienced users that know about how OCR can impact deeply the results we can enable easy overriding like: ```python pipe = pipeline("mymodel-id", ocr=MyOCR()) class MyOCR: def forward(self, image): ... return texts, boxes ``` What do you think ? Which solution makes most sense from your perspective ? Also regardless of choice here, we can extract whatever makes sense as an individual function within `qa` so you can reuse it, in a pipeline or anywhere else. <|||||>For Option 1, to clarify, would you be open to allowing the user to pass in (optional) words and boxes? I think this is conceptually similar to your point about audio pipelines using ffmpeg but I may be misunderstanding something. Essentially, we'd run OCR on the image if the words/boxes are not passed in. And either way, depending on the model, pass the image into the model as well. If we made the words/boxes an optional input, then users could basically assert control where they'd like to, but the simple use cases will just work out of the box. Personally, I think option 1 is the way to go. I can at least sketch out the code as a next step and we can take a look and reevaluate.<|||||>> would you be open to allowing the user to pass in (optional) words and boxes I have the same uneasiness with **any** optional inputs. Either the pipeline needs the data or it doesn't. IMO the incoming data should be as strongly typed as possible, and definitely the computation should not depend on what the user actually sent (because then it becomes really hard to reason about what actually happened on a piece of data, which OCR was used ? Were the boxes correct ? etc...). I feel like I am missing a piece of the puzzle here, so maybe we can do the other way around, let's try to devise what we would actually like to write as a user for this document processing. IMO the simplest is something like: ```python pipe = pipeline(task="visual-question-answering", model="layoutlmv3-xxx") out = pipe(image=Image.load("id_card.jpg"), question="What is this person's address ?") # out == [{answer: "24 nerdy street", score:0.8}, {"answer": "John travolta", "score": 0.1}] ``` Or maybe be a little more strict: ```python pipe = pipeline(task="visual-question-answering", model="layoutlmv3-xxx") # ValueError : This model is a document processing model, and requires an OCR to be able to use this pipeline, # please pass an OCR. For demos, you can use `from transformers.pipelines import DefaultOCR` pipe = pipeline(task="visual-question-answering", model="layoutlmv3-xxx", ocr=DefaultOCR()) out = pipe(image=Image.load("id_card.jpg"), question="What is this person's address ?") # out == [{answer: "24 nerdy street", score:0.8}, {"answer": "John travolta", "score": 0.1}, ...] ```<|||||>Ahh, okay, yes I agree that working from these examples is really helpful. Let me first be precise about what is required vs. 
not: - In all LayoutLM models, words and bounding boxes are technically required. The model itself requires them to be formatted a certain way (e.g. box coordinates are axis aligned and normalized between 0->1000), but it _does not_ impose where they came from. The inspiration is something like "BERT + bounding boxes". - In LayoutLMv2 and v3, the models additionally accept an image (normalized to 224x224) as input. Theoretically, the model is able to use information from the image alongside the encoded words and boxes. Notably, in LayoutLMv1, you do not need to provide the image. And furthermore, you _can_ fine tune v2 and v3 for many use cases _without_ the additional image and achieve similar or in some cases better results. - The `LayoutLMv2` and `LayoutLMv3` processor classes in `transformers` optionally accept an `apply_ocr` argument. If set to `True`, while doing feature extraction from the image, they'll also use the tesseract library to run OCR and return them back out to caller, so you can pass them into the model. There is some tricky control flow throughout these classes that branches based on whether the user provides their own OCR or not. I think part of why it's structured this way, or at least one of the advantages, is that in practice, since OCR can be costly (time and $$), many document processing practitioners will run OCR as a pre-processing step, so you can reuse its results across many invocations of extractions/questions/etc. E.g. imagine you were building an app that lets you point at a folder on your computer and then ask the files questions interactively. You'd probably implement this app by first running OCR on each file, and then re-using the OCR each time a user provides a new question as input. I think with this in mind, there are probably a few different use cases that would be ideal to capture in the pipeline. I fully recognize that some of these may qualify as "more advanced" than the scope of a pipeline, so I'm open to and appreciate your push back on where that may be the case. ### Scenario 1: Single file, single question (your example above) ```python pipe = pipeline(task="visual-question-answering", model="layoutlmv3-xxx") out = pipe(image=Image.load("id_card.jpg"), question="What is this person's address ?") # out == [{answer: "24 nerdy street", score:0.8}, {"answer": "John travolta", "score": 0.1}] ``` ### Scenario 2: Interactive REPL (this is an approximation of a real-world use case) ```python pipe = pipeline(task="visual-question-answering", model="layoutlmv3-xxx") img = Image.load("id_card.jpg") words, boxes = my_favorite_ocr(img) while True: question = input("Ask a question of the image: ") print(pipe(image=img, question=question, words=words, boxes=boxes) ``` ### Scenario 3: Mixed Media Types ```python img = rasterize("my_tax_form.pdf") words, boxes = text_extract("my_tax_form.pdf") # NOTE: in certain models, e.g. LayoutLMv1, you do not even need to rasterize/pass in the image as input in this case out = pipe(image=img, question="What is the person's income?", words=words, boxes=boxes) # out == [{answer: "$10", score:0.8}, {"answer": "$1000", "score": 0.1}] ``` I can certainly imagine some alternatives: - Words/boxes could be required inputs, and we could simply enforce that the user run OCR (or alternative) before using the pipeline. I think in this case, the image _should_ be considered optional input, simply because certain document processing models take it as input, and others don't. 
- Another would be to allow the user to provide a more advanced "OCR" input that could accept things like PDFs, spreadsheets, etc. and let it call out to OCR or use something else depending on the media type. I would say from experience, handling various document types is a can of worms and it prevents you from reusing pre-processed results across calls to the pipeline. (I believe this is your second suggestion). - My original suggestion: words/boxes could be optional, and when not provided, we use a default OCR implementation. One more advantage of this approach is that it's consistent with the LayoutLMv2 processor classes. So if a user starts with this pipeline, and then wants to dig one level deeper to the processor, they'll have a familiar pattern. Let me know your thoughts. In certain options (namely the first), I think it'd be a requirement for it to be a `DocumentQuestionAnsweringPipeline` since the _required_ inputs are different than the `VisualQuestionAnsweringPipeline`. In options 2 or 3, that might not be the case. I don't have a strong opinion about this but just wanted to clarify my understanding/thinking.<|||||>Ok, thanks for all the explanation ! Now I think I am starting to understand it and all the use cases you displayed really make sense ! I think we can ignore layoutlmv1 not requiring the image so we can keep the number of cases rather small. (If you really know what you're doing you could always send an empty image, or we could just make the code in such a way that sending `None` doesn't break anything without actively trying to sanitize it) Since the OCR is indeed quite costly (or can come from a non image !) I can really understand why we would need those optional `boxes` and `texts`. So let's support them. (We can make the docs extremely clear on that front) I think `example 1` should really be the focus for newcoming users, and we need to be able to support `example 2` and `example 3` to be usable in prod. And if a user sends `boxes + texts` then we can simply skip the OCR part. _Actually, wdyt about having a list of tuples instead of two lists ? Two lists enables having different sized lists which would silently break things, I usually tend to prefer arguments that cannot by design be inconsistent, and lists of tuples cannot have different sizes and will necessarily raise errors when the tuple is unwrapped, so less room for error_ I think all 3 examples could become tests so that we make sure that those cases are maintained through time. I will ping @NielsRogge which is also really involved in vision and might have other insights.<|||||>Awesome, I really appreciate you taking the time to dig into this with me. I'll sketch this all out as a next step. And I agree that we can leave the empty image (or None image) as a performance optimization for advanced users. The one thing we'll need to be careful of is that the LayoutLMv1 model gets upset if you _do_ pass in the image (i.e. it's optional for v2/v3 but not for v1 -- v1 does not accept images at all). So if the workaround is to pass in an empty image, we'll just need to figure out a way to cleverly avoid handing it to the model (e.g. implement a no-op feature extractor that takes the image as input and returns an empty dict). With all of this context in mind, do you have a preference for whether we extend the existing `VisualQuestionAnsweringPipeline` or isolate this logic into a `DocumentQuestionAnsweringPipeline`? 
I'm okay with either, although I am leaning a bit towards the latter so that we can be very clear with the examples/documentation about the use cases (and not muddy the waters with the `VisualQuestionAnsweringPipeline` which operates directly on the image each time). But of course, I'm open either way. > Actually, wdyt about having a list of tuples instead of two lists ? Two lists enables having different sized lists which would silently break things, I usually tend to prefer arguments that cannot by design be inconsistent, and lists of tuples cannot have different sizes and will necessarily raise errors when the tuple is unwrapped, so less room for error_ I have no concerns with this. The runtime "perf hit" of converting one format to the other is trivial compared to the other operations involved. I think it's a smart way to prevent an accidental length mismatch. > I think all 3 examples could become tests so that we make sure that those cases are maintained through time. Great point. I'm happy to contribute these. <|||||>> With all of this context in mind, do you have a preference for whether we extend the existing VisualQuestionAnsweringPipeline or isolate this logic into a DocumentQuestionAnsweringPipeline? I'm okay with either, Go with `DocumentQuestionAnsweringPipeline` for now then. In general we try to avoid adding pipelines when we can and when the set of input/output is the same as it makes discoverability and usability on hf.co easier/more consistent. But you made great points explaining core differences (especially the pdf example for instance), IMO. If we decide to revisit later or other members have different opinions, we might revisit later (we would do the lifting, and since we're committed to zero breaking change you would still be able to use your code regardless of internal decisions)<|||||>Okay great, as a next step I'll rework this PR to sketch out `DocumentQuestionAnsweringPipeline` and address some of your comments on the original change (but may not do all in the first pass, just to optimize for getting feedback sooner). Thanks again for the back and forth and look forward to the next iteration!<|||||>I just pushed an update that moves the logic into a new `DocumentQuestionAnsweringPipeline`. I still need to do a few major things: - Integrate OCR - Figure out padding (specifically -- using "return_tensors" basically requires padding, so I could either enforce it or do the `unsqueeze` trick used in the qa pipeline) - Integrate the post-processing from the QA pipeline. I did some sanity testing with a model we've trained and can confirm that it is starting to work! I think we're headed in the right direction.<|||||>> Figure out padding (specifically -- using "return_tensors" basically requires padding, so I could either enforce it or do the unsqueeze trick used in the qa pipeline) Not sure I understand, in the pipelines the padding should be done by the pipeline itself, not by the `preprocess` (It just allows for more flexible control over how things are executed). `preprocess` only processes 1 input at a time, so padding shouldn't be necessary (it might be activable, like truncating, but I don't think it should be the default)<|||||>> Not sure I understand, in the pipelines the padding should be done by the pipeline itself, not by the preprocess (It just allows for more flexible control over how things are executed). 
preprocess only processes 1 input at a time, so padding shouldn't be necessary (it might be activable, like truncating, but I don't think it should be the default) If I'm understanding the QA pipeline code correctly, the reason padding is relevant is that if you stride a document (e.g. one with more than 512 words), then one item that you preprocess might result multiple inputs to the model that get concatenated together in one big tensor. The question answering pipeline has to solve for this too, and it seems to do that by (a) _not_ returning tensors from `tokenize()`, and then (b) while constructing the final output, using `tensor.unsqueeze(0)` ([here](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L355)) to effectively pad each element to the same size. I'm happy to do it that way if you prefer -- my working assumption was that the "padding" argument to the tokenizer accomplishes the same thing (but certainly may be missing some interesting implementation detail). <|||||>> If I'm understanding the QA pipeline code correctly, the reason padding is relevant is that if you stride a document (e.g. one with more than 512 words), then one item that you preprocess might result multiple inputs to the model that get concatenated together in one big tensor. The question answering pipeline has to solve for this too, and it seems to do that by (a) _not_ returning tensors from `tokenize()`, and then (b) while constructing the final output, using `tensor.unsqueeze(0)` ([here](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L355)) to effectively pad each element to the same size. Ok, this is what I alluded to, QA solves this by using `return_overflowing_tokens` (and the padding is set to `do_no_pad` by default). https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L283 QA solves this by using `ChunkPipeline`. If you want to tackle this, you're more than welcome to it, but IMO it's going to be easier to do it in two steps, and two separate PRs. As a first step I would recommend simply not treating padding, and let sequences too long go to to the model, which will then crash. It's aligned with the "don't hide" anything policy. Some models can handle long range, most cannot, so not trying to hide that fact is IMO a good thing. We can add an auto chunking in a follow UP PR. <|||||>> As a first step I would recommend simply not treating padding, and let sequences too long go to to the model, which will then crash. It's aligned with the "don't hide" anything policy. Some models can handle long range, most cannot, so not trying to hide that fact is IMO a good thing. We can add an auto chunking in a follow UP PR. That plan works for me! I'll provide a new update shortly.<|||||>@Narsil I just updated the PR with a few changes that remove the padding/striding stuff (for now) and add some docs. The next steps are to integrate OCR and then refactor/borrow the post-processing code from the QA pipeline. I'll keep working on that but wanted to post an intermediate update in case you had a chance to take a quick look.<|||||>@Narsil another thought / question I had while working on the OCR stuff... Currently, both LayoutLMv2 and v3 have a feature extractor which _by default_ applies OCR. By incorporating OCR into the pipeline itself (which I'm doing by just borrowing their code), we essentially take over that functionality. 
So, a user may have to do something like this: ```python pipe = pipeline(task="visual-question-answering", model="layoutlmv3-xxx", tokenizer="layoutlmv3-xxx", feature_extractor=AutoFeatureExtractor.from_pretrained("layoutlmv3-xxx", apply_ocr=False)) out = pipe(image=Image.load("id_card.jpg"), question="What is this person's address ?") # out == [{answer: "24 nerdy street", score:0.8}, {"answer": "John travolta", "score": 0.1}] ``` Essentially, we'll want to rely on the pipeline's OCR, not the feature extractor's. However as a result, we make the user experience a bit awkward (since they have to provide "apply_ocr" `False` in one place). I can think of a few solutions to this: 1. We could rely on the user providing a feature extractor as input, and then invoke the feature extractor in `preprocess()`, essentially following the conventions that `LayoutLMv2Processor`/`LayoutLMv3Processor` do (call the feature extractor and then the tokenizer). If they provide neither a feature extractor nor words, we can provide a helpful error message that they must provide a feature extractor that returns words. One major downside to this approach is that users of models like LayoutLMv1 will _not_ ever get OCR run for them by the pipeline, but I'm open to implementing a feature extractor for LayoutLMv1 to solve this. 2. If they provide a feature extractor, we could try to check whether it'll run OCR (e.g. by checking whether its "apply_ocr" attribute is `True`). If it will, then we can rely on the feature extractor to provide words/boxes. If not, and they haven't passed in words to the pipeline, then we can run OCR. I think the major downside is depending on a non-standard flag (`apply_ocr`) in the generic pipeline code. I'm not sure how you all think about this tradeoff -- it may be fine to do. A slight variant of this is to test whether _after_ running the feature extractor, we have `words` and `boxes` available in its output. 3. We could just ignore this altogether and let the user be the expert. I.e. if they pass in a feature extractor and have not specified `apply_ocr=False`, it will run OCR twice (once in the pipeline and once in the feature extractor), which is an unnecessary perf hit, but makes no assumptions about the feature extractor itself. Let me know your thoughts.<|||||>@Narsil I think I've implemented all of what we talked about (and apologies in advance if I missed anything). To summarize: - Padding/truncation are gone. I've left them commented out, because we plan to address them as a follow up (in this or a fast-follow PR), but I'm happy to remove those comments too if you prefer. - OCR is integrated. Regarding my question just above, I went down the route of option 2, and check whether the feature extractor returned words/boxes before trying to run OCR, which the pipeline natively supports. - I refactored the tricky postprocessing parts of the QA pipeline into helper functions which I call from the document question answering pipeline. - I've copied the relevant subsets of the code (including PR #18407) and published it [here](https://huggingface.co/impira/layoutlm-document-qa) with some examples. Feel free to play around with it! As a next step, I'd appreciate a quick review from you on these major points to verify whether we're on the right track. I'd like to add the tests and more documentation next (pending your feedback on if we are in a good place with the interface/overall design). 
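As a side note, the "option 2" check described above can be sketched roughly as follows. This is only an illustrative outline, not the pipeline's actual code: the key names assume the `words`/`boxes` output of the LayoutLMv2/v3 feature extractors when `apply_ocr=True`, and `ocr_fn` stands in for the pipeline's tesseract-based fallback.

```python
from typing import Callable, List, Optional, Tuple

Box = Tuple[int, int, int, int]


def resolve_words_and_boxes(
    image,
    word_boxes: Optional[List[Tuple[str, Box]]] = None,
    feature_extractor=None,
    ocr_fn: Optional[Callable] = None,
):
    """Decide where words/boxes come from, in priority order (sketch only)."""
    # 1. Caller-provided (word, box) pairs always win: pre-run OCR, or documents
    #    with embedded text (PDF, Word, Excel, ...).
    if word_boxes is not None:
        words, boxes = map(list, zip(*word_boxes))
        return words, boxes

    # 2. If a feature extractor was passed and it already ran OCR (apply_ocr=True),
    #    reuse its output instead of running OCR a second time.
    if feature_extractor is not None:
        features = feature_extractor(images=image)
        if "words" in features and "boxes" in features:
            return features["words"][0], features["boxes"][0]

    # 3. Otherwise fall back to the pipeline's own OCR (tesseract in this discussion).
    if ocr_fn is None:
        raise ValueError("No word_boxes were provided and no OCR is available to extract them.")
    return ocr_fn(image)
```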
I also have a few questions regarding the tests/docs: - The existing tests for both question-answering and visual-question-answering use models published on HF. There aren't (currently that I could find) any reliablen doc qa models. I have published one [here](https://huggingface.co/impira/layoutlm-document-qa), but there's a bit of a soft dependency on PR #18407 because the model we're publishing uses LayoutLMv1. You can [access the model w/ remote code enabled](https://huggingface.co/docs/transformers/main/en/custom_models#using-a-model-with-custom-code), but I'm not sure that's advisable for a test in the repo. It'd also be good to have tests that span multiple models (e.g. v1-v3) because there are some differences in their tokenizers. - Is there any way to use a processor in a pipeline? The reason I ask is that LayoutLMv2 and v3 have some interesting logic encapsulated in their processors (e.g. LayoutLMv2 renames the input to the model from `image` to `pixel_values` and v3 to `image_features`). It'd be great to reuse the logic in those classes within the pipeline. Alternatively, I could just support LayoutLMv1 to start with and we can work on adding support for the other versions in a follow up PR. - Should I add docs anywhere other than the code itself (which I assume would show up [here](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.QuestionAnsweringPipeline))? For example a place like [here](https://huggingface.co/docs/transformers/main/en/task_summary#question-answering)) as a tutorial for how document question answering works? <|||||>@Narsil gentle nudge in case this slipped from your queue :)<|||||>> So rather than extending the VQA pipeline, it seems that the design has been updated to create a separate DocumentQuestionAnswering pipeline? Yes that's correct. > Also, I'd like to note that there's a new model I'm working on called Donut which solved DocVQA in a generative manner. Donut is generative T5-like model, which simply generates the answer given a question. Would this pipeline be able to support that model as well? The interface should support it. As input, you provide an image+question (and optional word/box pairs if you've pre-run OCR) and as output you receive an answer + start/end words. For a generative model, I could imagine either omitting the start/end or the pipeline doing its best to find it in the document if it exists. Code-wise, there may be some refactoring _within_ the pipeline implementation to best support a model like Donut. Very happy to collaborate with you on that.<|||||>@NielsRogge congrats on pushing Donut -- I just saw it come through. I've integrated it into the pipeline, and it works! The code gets a bit splintered _inside_ the pipeline which now handles the `VisionEncoderDecoderModel` case a bit differently. I would definitely appreciate feedback on how the control flow handles both cases (CC @Narsil too). I think one thing that would help is if pipelines could accept processors as input. We could potentially capture some of the LayoutLM-specific tokenization logic into a `LayoutLMProcessor` (similar to `LayoutLMv2Processor`), and then simply invoke processor-specific commands for each type of model within the pipeline. Let me know your thoughts. And feel free to take it for a spin! 
For example, the following commands work: ```python from transformers import AutoTokenizer, pipeline nlp = pipeline('document-question-answering', model='naver-clova-ix/donut-base-finetuned-docvqa', tokenizer=AutoTokenizer.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa"), feature_extractor='naver-clova-ix/donut-base-finetuned-docvqa') nlp("https://templates.invoicehome.com/invoice-template-us-neat-750px.png", "What is the invoice total?") ```<|||||>Hi @Narsil, thanks for the feedback. I will address your comments. I appreciate your willingness to pull down the code and get your hands dirty. Please let me know if I can help at all with that. I need to quickly rebase and fix one or two bugs which I will do ASAP (I broke a couple things while adding support for Donut). Let me roll up a couple of high level questions that are open. I would greatly appreciate your feedback on these: 1 - Is it possible to use `Processor`s in pipelines? I think _some_ (but not a whole lot) of the logic for Donut, and a whole lot of the logic for LayoutLMv2-3 is present in their processor class and would need to be duplicated here otherwise. Likewise, we could probably create a processor for LayoutLMv1 and place some of the logic there. 2 - While working some more on this in real-world scenarios, I realized that for models like Donut, which operate _only_ on the images, grouping things together by page is actually really important (so you can pass in one input per page). I think it might be useful to change the input format to either be a list of images, or something like `[(image, [(word, box)])]`, where each tuple has an image and a list of word/boxes. WDYT?<|||||>@ankrgyl Here are some tests we can integrate If you're ok (feel free to modify, the important part is to have exact values in the asserts everywhere except `run_pipeline_test`. https://github.com/huggingface/transformers/pull/18732/commits/2e8b01cd5e65aa64956e0f5e56e29ea8391c3955<|||||>> 1 - Is it possible to use Processors in pipelines? I think some (but not a whole lot) of the logic for Donut, and a whole lot of the logic for LayoutLMv2-3 is present in their processor class and would need to be duplicated here otherwise. Likewise, we could probably create a processor for LayoutLMv1 and place some of the logic there. In general, `processor` should be extremely shallow, and the real logic should actually be in `feature_extractor`. Leveraging it is not only encouraged but extremely welcome as they can contain model specific details that the pipeline then doesn't have to care aobut. > 2 - While working some more on this in real-world scenarios, I realized that for models like Donut, which operate only on the images, grouping things together by page is actually really important (so you can pass in one input per page). I think it might be useful to change the input format to either be a list of images, or something like [(image, [(word, box)])], where each tuple has an image and a list of word/boxes. WDYT? You mean sharing the question throughout the pipeline ? I don't know, I think we should focus on the simple solution first, see later for different use cases. Sending it at every step is not too hard, and the optimizations should be dwarfed compared to other issues that might occur (like feeding the GPU fast enough and image processing). 
Happy to be proven wrong I haven't checked (but in my experience tokenizer is rarely a bottleneck) Anything list, Dataset or generator should probably be handled by the base class, not by the pipeline directly. <|||||>@ankrgyl could you rename the PR to better describe what it does (as it doesn't seem to extend the existing VQA pipeline)? I'll do a second round of review soon. <|||||> > In general, `processor` should be extremely shallow, and the real logic should actually be in `feature_extractor`. Leveraging it is not only encouraged but extremely welcome as they can contain model specific details that the pipeline then doesn't have to care aobut. Okay got it. That makes sense > > 2 - While working some more on this in real-world scenarios, I realized that for models like Donut, which operate only on the images, grouping things together by page is actually really important (so you can pass in one input per page). I think it might be useful to change the input format to either be a list of images, or something like [(image, [(word, box)])], where each tuple has an image and a list of word/boxes. WDYT? > > You mean sharing the question throughout the pipeline ? I don't know, I think we should focus on the simple solution first, see later for different use cases. Sending it at every step is not too hard, and the optimizations should be dwarfed compared to other issues that might occur (like feeding the GPU fast enough and image processing). Happy to be proven wrong I haven't checked (but in my experience tokenizer is rarely a bottleneck) > > Anything list, Dataset or generator should probably be handled by the base class, not by the pipeline directly. No, I'm talking about the case where you're working with a document that has multiple pages. Each page consists of an image and potentially word/box pairs (if OCR is pre-run). In document processing, it's a common request to try to find an answer from more than one page (e.g. find the total from a 2 page invoice). Right now, as constructed, you can only pass one page at a time, since you can pass in at most one image. That means as a user, you'd have to run the pipeline on each page, and then pick the highest confidence answer. Ideally, this logic should live in the pipeline, because the pipeline can have some logic that picks the best answer across pages. The main reason I'm wondering about it now is that it affects the input shape. For example, if you have a 3 page document, the code could look like: ```python pages = [] for page in my_pdf.pages(): pages.append({"image": Image.load(page.image()), "word_boxes": tesseract(page.image())}) pipe(image=pages, question="What is this person's address ?") ``` I'm ok with addressing this in a follow up too, where we can extend `images` to also be an array (and expect it to be this shape). I just wanted to flag the scenario sooner than later.<|||||>> @ankrgyl Here are some tests we can integrate If you're ok (feel free to modify, the important part is to have exact values in the asserts everywhere except `run_pipeline_test`. > > [2e8b01c](https://github.com/huggingface/transformers/commit/2e8b01cd5e65aa64956e0f5e56e29ea8391c3955) Thanks @Narsil. I've incorporated these tests into a new test suite in the change and am working through them. I will work on expanding the tests next. It would be really helpful to land PR #18407 so I can include tests for LayoutLMv1 too. 
A couple things came up while I was integrating the tests: - I think there's something wrong with `hf-internal-testing/tiny-random-layoutlmv2`. Specifically, if you run the following (w/out any of these changes), you should see an error: ```python from transformers import AutoModel, AutoProcessor from PIL import Image processor = AutoProcessor.from_pretrained("hf-internal-testing/tiny-random-layoutlmv2") model = AutoModel.from_pretrained("hf-internal-testing/tiny-random-layoutlmv2") encoding = processor(Image.open('tests/fixtures/tests_samples/COCO/000000039769.png').convert("RGB"), "What is the meaning of life?", return_tensors="pt") o = model(**encoding) # ValueError: 'p5' is not in list ``` However, if you run with `microsoft/layoutlmv2-base-uncased` instead of `hf-internal-testing/tiny-random-layoutlmv2`, the above code works. Could there be something incorrectly configured with this model? - I was able to get the `test_large_model_pt_layoutlmv2` model to structurally work, however, the model's outputs are so low confidence that the results are inconsistent from run to run (there are several answers with the minimum possible score). I think it might be worth using a fine-tuned one like `tiennvcs/layoutlmv2-base-uncased-finetuned-docvqa` with a pinned revision. Is that ok?<|||||>> # ValueError: 'p5' is not in list The small model is just configured with much less layers, and `p5` is not expected to be there. (The goal is to have a tiny random model) I don't know what `Processor` does, but the feature_extractor was working properly if I recall. Therer still was an error in the test but further down in the forward pass because some keys were missing. Feel free to modify the tiny model as you see fit locally and propose a PR on it (or use a small tiny random model you own and we'll port it back into `hf-internal-testing`. I did remove some vision layers (including `p5`) if something is failing I would consider it a bug, but I am not super familiar with this model's internals.<|||||>> I did remove some vision layers (including `p5`) if something is failing I would consider it a bug, but I am not super familiar with this model's internals. Yes it was failing inside of the forward pass, not in the processor. I only used the processor to demonstrate that the repro did not have to do with the new code in the PR (it's not running in the test, either). I can explore the mini model but am unfortunately not very familiar with that stuff myself, either. I will take a look but may need some help if I get stuck.<|||||>@Narsil I _think_ I was able to update the mini model (see [PR](https://huggingface.co/hf-internal-testing/tiny-random-layoutlmv2/discussions/1)). I verified locally that with this update + some expected test changes, the test passes.<|||||>BTW, I'm working on a space, which illustrates the pipeline. It's currently using a frozen version of the pipeline I have saved in another repo, but once we land this PR and PR #18407 I'll update it. You can play with it here: https://huggingface.co/spaces/impira/docquery. One cool thing to notice is that with the LayoutLM model, the app reuses the OCR'd tokens across questions, so it is very fast to ask multiple questions. Here is a quick video that demonstrates: https://www.loom.com/share/61a09fb6364142c3b85dd880d8a36890.<|||||>Awesome, let me clean everything up, and update the tests as well. 
I agree, there's a lot more to do, and I'll keep the momentum going in follow up PRs.<|||||>@NielsRogge @Narsil @sgugger @patrickvonplaten I think we are pretty much there -- I just updated to address the outstanding comments, and the failing tests look like they are flaky (unrelated to this change). Let me know if there's anything else needed on this PR.<|||||>Thanks @sgugger. Should I open one issue with all of them or a separate issue for each TODO? Happy to do that.<|||||>I think one issue with the list of TODOs is fine.<|||||>Here is the task with follow ups: https://github.com/huggingface/transformers/issues/18926.<|||||>Thanks again for your contribution!
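To recap the usage patterns that came out of this thread, here is a hedged sketch of calling the resulting pipeline, once with on-the-fly OCR and once with pre-computed (word, box) pairs. The `document-question-answering` task name and the `impira/layoutlm-document-qa` checkpoint come from the examples above; treat the `word_boxes` argument name and the sample values as illustrative rather than a guaranteed final signature.

```python
from transformers import pipeline

pipe = pipeline("document-question-answering", model="impira/layoutlm-document-qa")

# Scenario 1: let the pipeline run OCR on the image (requires pytesseract).
pipe(image="invoice.png", question="What is the invoice total?")

# Scenario 2: reuse OCR results that were computed once up front, so repeated
# questions against the same document skip the costly OCR step.
word_boxes = [
    ("Total:", (620, 900, 680, 920)),   # boxes normalized to the 0-1000 range
    ("$154.06", (690, 900, 760, 920)),
]
pipe(image="invoice.png", question="What is the invoice total?", word_boxes=word_boxes)
```

Scenario 2 is what makes the interactive, ask-many-questions use case above fast: the document is OCR'd once and every subsequent question reuses the same word/box pairs.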
transformers
18,413
open
Transformers documentation translation to Japanese 🇯🇵
Hi! Let's bring the documentation to all the Japanese-speaking community :) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: - Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). - Please translate in a gender-neutral way. - Add your translations to the folder called `ja` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). - Register your translation in `ja/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). - Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @omarespejel and @sgugger for review. - 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [x] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) | https://github.com/huggingface/transformers/pull/21186 - [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx). - [x] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx) | https://github.com/huggingface/transformers/pull/21241 ## Tutorial section - [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx) - [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.mdx) - [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx) - [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx) - [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) - [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx) - [x] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx) | https://github.com/huggingface/transformers/pull/21084
08-02-2022 05:35:25
08-02-2022 05:35:25
cc @younesbelkada as we talked about that last week<|||||>That's great 🔥 ! Linking this issue to the one on HF course: https://github.com/huggingface/course/issues/114 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I would like to translate the 3 mdx files in the Get Started section. (index.mdx, quicktour.mdx, installation.mdx) If there is no person in charge yet.<|||||>Thanks @kambehmw for your interest in this! Sure, feel free to start working on it as no one is in charge of that yet! <|||||>@younesbelkada Thanks for the reply. Then I will be in charge of translating those three documents. Once the translation document is ready, I will make a pull request.<|||||>Thank you very much, looking forward to it!!<|||||>Hi @younesbelkada! Still working on [this PR request](https://github.com/huggingface/optimum/pull/542) but I would like to work on this issue too because I'm Japanese 🇯🇵 Can I work on [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)? By the way, the link for [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) is broken. I think [this](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.mdx) is the correct one 🙏 <|||||>I would like to work on this as well if it is OK.<|||||>Sure yes @rustinwelter , that would be great, let us know what topic would you like to pick up for translation!<|||||>Thank you @younesbelkada! Let me just try [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx) as I'm a still a bit nervous. But if it goes well and I feel comfortable with it, will you let me do others as well?<|||||>Of course yes! Don't worry all will go very well 💪 And looking forward to your contributions!<|||||>Thank you! I have sent my pull request! :)<|||||>Awesome! Thanks a lot for your contribution!<|||||>Hey all! As some people were interested in a place to discuss about translations, we opened a category in the [HF Discord server](http://hf.co/join/discord) with a category for internationalization and translation efforts, including a Japanese channel!
transformers
18,412
closed
fix: keras fit tests for segformer tf and minor refactors.
Fixes the issues as noticed in: https://github.com/huggingface/transformers/runs/7485048615?check_suite_focus=true. I don't have access to an instance having multiple GPUs at the moment, but I figured out the root cause of the issue. https://github.com/huggingface/transformers/blob/df5e4232f59e6fea08911eddd0adc965d1b59c15/tests/models/segformer/test_modeling_tf_segformer.py#L346 ^ I wasn't calling the model on some sample inputs, which is why the weights retrieved from `get_weights()` were zero. That has been fixed in this PR. I tested it locally in isolation with the following snippet (I acknowledge that it's not super clean): ```py from transformers import TFSegformerForImageClassification, TFSegformerForSemanticSegmentation, SegformerConfig import tensorflow as tf from tests.test_modeling_tf_common import floats_tensor, ids_tensor import numpy as np batch_size = 13 image_size = 64 num_channels = 3 num_encoder_blocks = 4 depths = [2, 2, 2, 2] sr_ratios = [8, 4, 2, 1] hidden_sizes = [16, 32, 64, 128] downsampling_rates = [1, 4, 8, 16] num_attention_heads = [1, 2, 4, 8] is_training = True use_labels = True hidden_act = "gelu" hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 initializer_range = 0.02 num_labels = 3 def get_config(): return SegformerConfig( image_size=image_size, num_channels=num_channels, num_encoder_blocks=num_encoder_blocks, depths=depths, hidden_sizes=hidden_sizes, num_attention_heads=num_attention_heads, hidden_act=hidden_act, hidden_dropout_prob=hidden_dropout_prob, attention_probs_dropout_prob=attention_probs_dropout_prob, initializer_range=initializer_range, num_labels=num_labels ) def prepare_config_and_inputs(for_semseg=True): pixel_values = floats_tensor([batch_size, num_channels, image_size, image_size]) if for_semseg: labels = ids_tensor([batch_size, image_size, image_size], num_labels) else: labels = tf.zeros((batch_size)) config = get_config() return config, pixel_values, labels model_classes = (TFSegformerForImageClassification, TFSegformerForSemanticSegmentation) for model_class in model_classes: if model_class == TFSegformerForSemanticSegmentation: config, pixel_values, labels = prepare_config_and_inputs(for_semseg=True) else: config, pixel_values, labels = prepare_config_and_inputs(for_semseg=False) input_for_model_fit = {"pixel_values": pixel_values, "labels": labels} model = model_class(config) model(model.dummy_inputs) model_weights = model.get_weights() model.compile(optimizer=tf.keras.optimizers.SGD(0.0), run_eagerly=True) history1 = model.fit( input_for_model_fit, validation_data=input_for_model_fit, steps_per_epoch=1, validation_steps=1, shuffle=False, ) val_loss1 = history1.history["val_loss"][0] label_names = {"labels"} labels = {key: val for key, val in input_for_model_fit.items() if key in label_names} inputs_minus_labels = {key: val for key, val in input_for_model_fit.items() if key not in label_names} # We reinitialize the model here even though our learning rate was zero # because BatchNorm updates weights by means other than gradient descent. model.set_weights(model_weights) history2 = model.fit( inputs_minus_labels, labels, validation_data=(inputs_minus_labels, labels), steps_per_epoch=1, validation_steps=1, shuffle=False, ) val_loss2 = history2.history["val_loss"][0] print(np.allclose(val_loss1, val_loss2, atol=1e-2, rtol=1e-3)) ``` @amyeroberts @Rocketknight1 @sgugger
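A minimal sketch of the fix described above: the model has to be built (called once on dummy inputs) before its weights are snapshotted, otherwise `get_weights()` does not return a usable copy to restore later. The class names match the snippet above; the rest is illustrative.

```python
import tensorflow as tf
from transformers import SegformerConfig, TFSegformerForSemanticSegmentation

config = SegformerConfig(num_labels=3)
model = TFSegformerForSemanticSegmentation(config)

# Keras only creates the variables on the first call, so snapshotting before any
# forward pass does not capture the real initial weights (the description above
# reports getting zeros).
model(model.dummy_inputs)  # build the model once on dummy inputs

model_weights = model.get_weights()  # meaningful snapshot
# ... run model.fit(...) with lr=0, then model.set_weights(model_weights) to reset
# before the second fit, as in the test above.
```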
08-02-2022 03:38:34
08-02-2022 03:38:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>Pinging @gante as this week's TF reviewer!<|||||>> (question: why is test_keras_fit entirely overwritten?) 1. The `TFSegFormerModel` class doesn't support the fit test since we can't compute loss on embeddings. 2. The labels for the rest of the two classes (semantic segmentation and image classification) have different label shapes. So, it made sense to test them in isolation. <|||||>> > (question: why is test_keras_fit entirely overwritten?) > > 1. The `TFSegFormerModel` class doesn't support the fit test since we can't compute loss on embeddings. The line `if getattr(model, "hf_compute_loss", None):` should already take care of this case, I think. > 2. The labels for the rest of the two classes (semantic segmentation and image classification) have different label shapes. Does the main issue come from the fact that `_prepare_for_class` in `tests/test_modeling_tf_common.py` lack the label preparation for `segmentation`? <|||||>> Does the main issue come from the fact that _prepare_for_class in tests/test_modeling_tf_common.py lack the label preparation for segmentation? I think so, yes. <|||||>Looks like the new `test_keras_fit()` in the base `test_modeling_tf_common` takes care of the nuances I faced when I was overriding `test_keras_fit()` (at the time of writing `modeling_tf_segformer.py`. So, I incorporated the latest changes, bypassing the complete rewrite. @ydshieh @amyeroberts @gante up for another review. <|||||>Thanks for flagging this to me! @ydshieh @gante okay to merge? <|||||>Let gante push the final approval button 😄 <|||||>@sayakpaul our CI failed in the reworked test -- can you confirm that it runs correctly? :) https://github.com/huggingface/transformers/runs/7655675934?check_suite_focus=true<|||||>@gante taking a quick look [here](https://github.com/huggingface/transformers/runs/7655675934?check_suite_focus=true#step:9:139), seems like it's happening because of the second point [here](https://github.com/huggingface/transformers/pull/18412#issuecomment-1202957292). If this is the case, I will sync with @ydshieh to add support for segmentation labels in the necessary places. Sounds good?
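For reference, the two label shapes being discussed differ only in rank, which is why a shared label-preparation helper needs to know which head it is building labels for. A small sketch using the sizes from the PR description above (only the shapes are shown; the helper itself is hypothetical):

```python
import tensorflow as tf

batch_size, image_size, num_labels = 13, 64, 3

# Image classification: one integer label per example -> shape (batch_size,)
classification_labels = tf.zeros((batch_size,), dtype=tf.int32)

# Semantic segmentation: one label per pixel -> shape (batch_size, height, width)
segmentation_labels = tf.random.uniform(
    (batch_size, image_size, image_size), maxval=num_labels, dtype=tf.int32
)

print(classification_labels.shape)  # (13,)
print(segmentation_labels.shape)    # (13, 64, 64)
```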
transformers
18,411
closed
`assertion failed: stride < max_len` when using tokenizer with text_pair
### System Info transformers 4.20.1, 4.21.0 (tested with both), MacOS 12.4, Python 3.8.12 ### Who can help? @SaulLu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The above occurs when using truncation and overflowing tokens with a sentence pair. Maybe I'm doing something stupid but here is the code that reproduces it: ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-distilbert-dot-v5') sentences = [ "This sentence is not too long but we are going to split it anyway.", "This sentence is shorter but will still get split.", ] inputs = tokenizer( sentences[0], sentences[1], truncation=True, return_overflowing_tokens=True, max_length=6, stride=2 ) ``` When using the same sentences like this ``` inputs = tokenizer( sentences, truncation=True, return_overflowing_tokens=True, max_length=6, stride=2 ) ``` all is fine. ### Expected behavior I would expect that the truncation succeeds as max_length > stride.
08-02-2022 00:54:24
08-02-2022 00:54:24
I'd like to provide an update on my own ticket. Although I still have no idea what causes this behavior and despite probably not being relevant for most practical applications, it seems like the truncation is enforcing that the shortest of the two texts after the truncation is at least 3 tokens long. The resulting error message (see title of this ticket) is only adding to this confusion. In the following, I'll describe the experiments I ran and the outcomes that led me to the aforementioned conclusion. First, I determined the lengths of the above sentences after tokenization. ``` len(tokenizer(sentences[0])['input_ids']) # 17 when `add_special_tokens=True`, 15 otherwise len(tokenizer(sentences[1])['input_ids']) # 12 when `add_special_tokens=True`, 10 otherwise ``` Then I played around with the `max_length` parameter to the tokenizer's `__call__` method to determine if there's a setting which lets it complete the tokenization. For ``` inputs = tokenizer( sentences[0], sentences[1], truncation="longest_first", return_overflowing_tokens=True, max_length=6, stride=2, add_special_tokens=True ) ``` it was `max_length = 6` and for ``` inputs = tokenizer( sentences[0], sentences[1], truncation="longest_first", return_overflowing_tokens=True, max_length=9, stride=2, add_special_tokens=True ) ``` it was `max_length = 9`, showing the impact of the 3 special tokens added before, between, and after the two sequences. Then I tried the same experiment for `truncation="only_second"` which has different `max_length` thresholds, as the first sequence needs to fit completely. This resulted in settings of `max_length = 18` and `max_length = 21` for `add_special_tokens=False` and `add_special_tokens=True`, respectively. Based on these observations, I concluded, as mentioned in the introduction, that the shortest sequence must have at least 3 tokens after the truncation for it to be successful. I'm wondering about 2 things now: 1. Should the documentation be adjusted to make this specific? 2. Should the error message be improved as it was misleading, at least to me. Thanks for your support!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
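Based on the thresholds reported above, here is a hedged sketch of a sentence-pair call that goes through: with `longest_first` truncation and the 3 special tokens counted in, `max_length=9` leaves each sequence at least 3 tokens, which is the empirical minimum found in the experiments, whereas `max_length=6` still fails with `assertion failed: stride < max_len`.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-distilbert-dot-v5")

sentences = [
    "This sentence is not too long but we are going to split it anyway.",
    "This sentence is shorter but will still get split.",
]

inputs = tokenizer(
    sentences[0],
    sentences[1],
    truncation="longest_first",
    return_overflowing_tokens=True,
    max_length=9,   # >= 3 tokens per sequence after truncation, per the analysis above
    stride=2,
)
print(len(inputs["input_ids"]))  # number of overflowing windows produced
```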
transformers
18,410
closed
Sharded Multi-GPU MT5 training with the Seq2SeqTrainer fails (4.21.0)
### System Info transformers version: 4.21.0 Platform: Linux Python version: 3.7.6 Huggingface_hub version: 0.8.1 PyTorch version (GPU?): 1.10.2 (Yes) Tensorflow version (GPU?): not installed (NA) Flax version (CPU?/GPU?/TPU?): not installed (NA) Jax version: not installed JaxLib version: not installed Using GPU in script?: Yes (2+ Tesla V100) Using distributed or parallel set-up in script?: Yes When trying to fine-tune a MT5ForConditionalGeneration model using a Seq2SeqTrainer, while using multiple GPUs, I get a InternalAssert error. I am running the script using `torchrun --nproc=$NUM_GPUS script.py`. The issue appears when `$NUM_GPUS` is greater than 1. Also, it only appears when the argument `sharded_ddp: ["zero_dp_3"]` is passed to the trainer. ``` Traceback (most recent call last): File "script.py", line 475, in <module> fire.Fire(main) File "/miniconda/lib/python3.7/site-packages/fire/core.py", line 141, in Fire component_trace = _Fire(component, args, parsed_flag_args, context, name) File "/miniconda/lib/python3.7/site-packages/fire/core.py", line 471, in _Fire target=component.__name__) File "/miniconda/lib/python3.7/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace component = fn(*varargs, **kwargs) File "script.py", line 447, in main train_model(model, tokenizer, cli_arguments) File "script.py", line 357, in train_model trainer.train() File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 1502, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 1740, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/miniconda/lib/python3.7/site-packages/transformers/trainer.py", line 2488, in training_step loss.backward() File "/miniconda/lib/python3.7/site-packages/torch/_tensor.py", line 307, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/miniconda/lib/python3.7/site-packages/torch/autograd/__init__.py", line 156, in backward allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag RuntimeError: grad.numel() == bucket_view.numel()INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1640811797118/work/torch/csrc/distributed/c10d/reducer.cpp":328, please report a bug to PyTorch. 
0%| | 0/100000 [00:06<?, ?it/s] WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 660 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 662 closing signal SIGTERM WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 663 closing signal SIGTERM ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 661) of binary: /miniconda/bin/python Traceback (most recent call last): File "/miniconda/bin/torchrun", line 33, in <module> sys.exit(load_entry_point('torch==1.10.2', 'console_scripts', 'torchrun')()) File "/miniconda/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper return f(*args, **kwargs) File "/miniconda/lib/python3.7/site-packages/torch/distributed/run.py", line 719, in main run(args) File "/miniconda/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run )(*cmd_args) File "/miniconda/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/miniconda/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent failures=result.failures, torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ script.py FAILED ------------------------------------------------------------ ``` The issue fails on `transformers[deepspeed]==4.21.0` but there are no issues in `transformers[deepspeed]==4.20.1`. The versions of Deepspeed and Fairscale are `deepspeed==0.6.5` or `deepspeed==0.6.7` and `fairscale=0.4.6` and this code was run in a Linux machine. ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python # The simplified contents of script.py # Running torchrun --nproc_per_node=1 script.py should work # Running torchrun --nproc_per_node=4 script.py should fail with a RuntimeError: grad.numel() == bucket_view.numel()INTERNAL ASSERT FAILED error. 
from __future__ import annotations import functools import typing as tp import datasets import transformers from transformers import ( DataCollatorForSeq2Seq, PreTrainedTokenizer, Seq2SeqTrainingArguments, Seq2SeqTrainer, ) increment_en = [ {"input": "One", "target": "Two"}, {"input": "Three", "target": "Four"}, {"input": "Five", "target": "Six"}, {"input": "Seven", "target": "Eight"}, {"input": "Nine", "target": "Ten"}, ] increment_en = increment_en * 100 def lod_to_dol(list_of_dicts: tp.List[tp.Dict[str, tp.Any]]) -> tp.Dict[str, list]: dict_of_lists = { key: [dct[key] for dct in list_of_dicts] for key in list_of_dicts[0] } return dict_of_lists increment_en = lod_to_dol(increment_en) def preprocess_function_( examples, tokenizer: PreTrainedTokenizer, max_input_length: int, max_target_length: int, ): inputs = examples["input"] targets = examples["target"] model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True) # Setup the tokenizer for targets with tokenizer.as_target_tokenizer(): labels = tokenizer(targets, max_length=max_target_length, truncation=True) model_inputs["labels"] = labels["input_ids"] return model_inputs def main(): tokenizer = transformers.MT5Tokenizer.from_pretrained("google/mt5-base") model = transformers.MT5ForConditionalGeneration.from_pretrained("google/mt5-base") args = Seq2SeqTrainingArguments( "script_debug", per_device_train_batch_size=4, per_device_eval_batch_size=4, fp16=False, push_to_hub=False, sharded_ddp=["zero_dp_3"], max_steps=10000, logging_steps=5000, save_steps=5000 ) data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, padding=True) dataset = datasets.DatasetDict( { "train": datasets.Dataset.from_dict(increment_en), "test": datasets.Dataset.from_dict(increment_en), } ) preprocess_function = functools.partial( preprocess_function_, tokenizer=tokenizer, max_input_length=512, max_target_length=512 ) processed_ds = dataset.map(preprocess_function, batched=True) processed_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "labels"] ) trainer = Seq2SeqTrainer( model, args, train_dataset=processed_ds["train"], eval_dataset=processed_ds["test"], data_collator=data_collator, tokenizer=tokenizer, ) trainer.train() if __name__ == "__main__": main() ``` ### Expected behavior The training code should not crash.
08-01-2022 23:17:40
08-01-2022 23:17:40
It still fails when I install `transformers` directly from the GitHub repository (as of today). Here's the traceback: ``` Traceback (most recent call last): File "script.py", line 102, in <module> main() File "script.py", line 98, in main trainer.train() File "/mnt/task_runtime/transformers/src/transformers/trainer.py", line 1506, in train ignore_keys_for_eval=ignore_keys_for_eval, File "/mnt/task_runtime/transformers/src/transformers/trainer.py", line 1744, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/mnt/task_runtime/transformers/src/transformers/trainer.py", line 2492, in training_step loss.backward() File "/miniconda/lib/python3.7/site-packages/torch/_tensor.py", line 307, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/miniconda/lib/python3.7/site-packages/torch/autograd/__init__.py", line 156, in backward allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag RuntimeError: grad.numel() == bucket_view.numel()INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1640811797118/work/torch/csrc/distributed/c10d/reducer.cpp":328, please report a bug to PyTorch. 0%| | 0/10000 [00:00<?, ?it/s] ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 48181) of binary: /miniconda/bin/python Traceback (most recent call last): File "/miniconda/bin/torchrun", line 33, in <module> sys.exit(load_entry_point('torch==1.10.2', 'console_scripts', 'torchrun')()) File "/miniconda/lib/python3.7/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper return f(*args, **kwargs) File "/miniconda/lib/python3.7/site-packages/torch/distributed/run.py", line 719, in main run(args) File "/miniconda/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run )(*cmd_args) File "/miniconda/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/miniconda/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent failures=result.failures, torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ script.py FAILED ------------------------------------------------------------ Failures: [1]: time : 2022-08-02_15:26:28 host : bolt-imq45r3c3y-8dfzr73qqa.bolt-pods.turi-bolt.svc.int.usmsc39.applecloud.io rank : 1 (local_rank: 1) exitcode : 1 (pid: 48182) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html ------------------------------------------------------------ Root Cause (first observed failure): [0]: time : 2022-08-02_15:26:28 host : bolt-imq45r3c3y-8dfzr73qqa.bolt-pods.turi-bolt.svc.int.usmsc39.applecloud.io rank : 0 (local_rank: 0) exitcode : 1 (pid: 48181) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html ============================================================ ```<|||||>Related issue: https://discuss.pytorch.org/t/multi-gpu-model-parallelism-device-error/117854/9 This issue seems to be related to how DDP is set up in a constructor somewhere, probably in the trainer's constructor when adding DDP.<|||||>Hello @shermansiu , I am unable to reproduce the error with transformers==4.22.0.dev0 main branch and fairscale==0.4.6. `sharded_ddp` has nothing to do with DeepSpeed. 
I get another error and it is unrelate with the integration. Therefore, please open the issue with `Fairscale` and follow it there. The issue I face is below which is different from the one you face: ```bash Traceback (most recent call last): File "script.py", line 109, in <module> main() File "script.py", line 103, in main trainer.train() File "/home/sourab/transformers/src/transformers/trainer.py", line 1502, in train return inner_training_loop( File "/home/sourab/transformers/src/transformers/trainer.py", line 1744, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/home/sourab/transformers/src/transformers/trainer.py", line 2492, in training_step loss.backward() File "/home/sourab/dev/lib/python3.8/site-packages/torch/_tensor.py", line 396, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/home/sourab/dev/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backw ard Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass RuntimeError: Function SplitWithSizesBackward0 returned an invalid gradient at index 0 - got [582401 280] but expected shape compatible with [291200640] ``` Also, if you want to leverage Fully Sharded Data Parallelism, you can use the production focused PyTorch FSDP integration in transformers by having following args: ```diff args = Seq2SeqTrainingArguments( "script_debug", per_device_train_batch_size=4, per_device_eval_batch_size=4, fp16=False, - sharded_ddp=["zero_dp_3"], + fsdp=["full_shard", "auto_wrap"], + fsdp_transformer_layer_cls_to_wrap="T5Block", max_steps=100, logging_steps=5000, save_steps=5000 ) ``` which gives below output: ```bash ***** Running training ***** Num examples = 500 Num Epochs = 2 Instantaneous batch size per device = 4 Total train batch size (w. parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 100 Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true" ... 100%|█████████████████████████████████████████████████████████████| 100/100 [00:26<00:00, 3.72it/s] Training completed. Do not forget to share your model on huggingface.co/models =) FullyShardedDataParallel( (_fsdp_wrapped_module): FlattenParamsWrapper( (_fpw_module): MT5ForConditionalGeneration( (shared): Embedding(250112, 768) (encoder): T5Stack( (embed_tokens): Embedding(250112, 768) (block): ModuleList( (0): FullyShardedDataParallel( (_fsdp_wrapped_module): FlattenParamsWrapper( (_fpw_module): T5Block( (layer): ModuleList( (0): T5LayerSelfAttention( (SelfAttention): T5Attention( (q): Linear(in_features=768, out_features=768, bias=False) (k): Linear(in_features=768, out_features=768, bias=False) (v): Linear(in_features=768, out_features=768, bias=False) (o): Linear(in_features=768, out_features=768, bias=False) (relative_attention_bias): Embedding(32, 12) ) (layer_norm): T5LayerNorm() (dropout): Dropout(p=0.1, inplace=False) ) (1): T5LayerFF( (DenseReluDense): T5DenseGatedActDense( ... ``` On transformers[deepspeed]==4.20.1, I don't the issue as you mentioned. I will look into it further by this week or next.<|||||>Thanks! The weird thing is that changing the fairscale version doesn't affect whether the bug appears. As you just said, I can make the bug appear by first running `pip install transformers==4.21.0` and disappear by running `pip install transformers==4.20.1`. 
I'll file a bug report in the FairScale repository anyway.<|||||>I was able to reproduce your `RuntimeError: Function SplitWithSizesBackward0 returned an invalid gradient at index 0 - got [582401280] but expected shape compatible with [145600320]` error by upgrading PyTorch (cudatoolkit=11.3) from `1.10.2` to `1.12.0`. I think it's still the same bug because running `torchrun --nproc_per_node=1 script.py` with `pytorch==1.12.0` works. After upgrading PyTorch to 1.12.0, I applied your FSDP patch and the code started to work. Thanks!<|||||>(FSDP is only available for PyTorch versions 1.12 and later)<|||||>Hello @shermansiu , I found the bug and raised above PR which should fix it. Can you try the above PR and confirm?<|||||>> (FSDP is only available for PyTorch versions 1.12 and later) Yes<|||||>Post applying PR, the output logs for `sharded_ddp`: ```bash 100%|█████████████████████████████████████████████████████████████| 100/100 [00:25<00:00, 3.93it/s] Training completed. Do not forget to share your model on huggingface.co/models =) {'train_runtime': 26.4257, 'train_samples_per_second': 30.274, 'train_steps_per_second': 3.784, 'tra in_loss': 17.26375, 'epoch': 1.59} FullyShardedDataParallel( world_size=2, flatten_parameters=True, mixed_precision=False, (_fsdp_wrapped_module): FlattenParamsWrapper( (_fpw_module): MT5ForConditionalGeneration( (shared): Embedding(250112, 768) (encoder): T5Stack( (embed_tokens): Embedding(250112, 768) (block): ModuleList( (0): T5Block( (layer): ModuleList( (0): T5LayerSelfAttention( ... ``` <|||||>Yes, I can confirm that it works! ``` Training completed. Do not forget to share your model on huggingface.co/models =) {'train_runtime': 48.4985, 'train_samples_per_second': 32.991, 'train_steps_per_second': 2.062, 'train_loss': 18.418689575195312, 'epoch': 3.12} 100%|██████████████████████| 100/100 [00:48<00:00, 2.06it/s] ``` I guess I don't need to file a FairScale issue after all!<|||||>Wait... am I supposed to keep the issue open until the PR is merged?<|||||>Probably, I suppose. > [pacman100](https://github.com/pacman100) linked a pull request [1 hour ago ](https://github.com/huggingface/transformers/issues/18410#ref-pullrequest-1326351330)that will close this issue
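A minimal sketch of the FSDP configuration from the diff above, written out as plain `Seq2SeqTrainingArguments` rather than a diff (the argument values are the ones shown in the thread; this path requires PyTorch 1.12 or later and a distributed launch such as `torchrun`):
```python
from transformers import Seq2SeqTrainingArguments

# Same arguments as the diff above, with the fairscale sharded_ddp line removed.
args = Seq2SeqTrainingArguments(
    "script_debug",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    fp16=False,
    fsdp=["full_shard", "auto_wrap"],              # PyTorch FSDP instead of fairscale sharded_ddp
    fsdp_transformer_layer_cls_to_wrap="T5Block",  # auto-wrap each T5/MT5 block
    max_steps=100,
    logging_steps=5000,
    save_steps=5000,
)
```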
transformers
18,409
closed
Fine-tuning a pretrained model did not follow as expected from the blog posting
### System Info - `transformers` version: 4.21.0 - Platform: Linux-5.13.0-52-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @LysandreJik, @sgugger, @stevhliu ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I was following the [blog](https://huggingface.co/docs/transformers/training) with BERT for sequential classification on my own dataset. Here is the snippet I used for the fine-tuning: ```python def tokenize_function(element): """ batchfy the tokenization :param element: a frament of dataset :type element: transformers.Dataset """ return tokenizer( element["tran"], return_attention_mask=True, add_special_tokens=True, truncation=True, max_length=CONTEXT_LENGTH, padding=True) def prepare_dataset(file_path): """ tokenize the dataset :param file_path: the location to the file :type file_path: str """ dt = pd.read_csv(file_path) # preprocessing omitted here dt = dt[["file", "tran", "label"]] dt = dt.groupby(["file", "label"])["tran"].apply(". ".join).reset_index() dt["tran"] = dt["tran"].str.lower() dt = Dataset.from_pandas(dt) tokenized_dt = dt.map(tokenize_function, batched=True) return tokenized_dt def compute_metrics(eval_pred): """ compute accuracy for the fine-tuned BERT model :param eval_pred: _description_ :type eval_pred: _type_ :return: _description_ :rtype: _type_ """ metric = load_metric("accuracy") logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") tokenized_train = prepare_dataset( "training_set.csv") tokenized_test = prepare_dataset( "test.csv") data_collator = DataCollatorWithPadding(tokenizer=tokenizer) print(tokenized_train[4].keys()) model = AutoModelForSequenceClassification.from_pretrained( "bert-base-uncased", num_labels=2) training_args = TrainingArguments( output_dir="../outputs/", num_train_epochs=EPOCHS, warmup_steps=500, fp16=True, learning_rate=1e-4, per_device_train_batch_size=BATCH_SIZE, per_device_eval_batch_size=BATCH_SIZE, evaluation_strategy='epoch', save_strategy="epoch", logging_strategy="epoch", # prediction_loss_only=True, do_train=True, do_eval=True, max_grad_norm=1.0, seed=RANDOM_SEED, data_seed=RANDOM_SEED, save_total_limit=1, load_best_model_at_end=True, report_to="none" ) if training_args.do_train: trainer = Trainer( model=model, args=training_args, data_collator=data_collator, tokenizer=tokenizer, train_dataset=tokenized_train, eval_dataset=tokenized_test, compute_metrics=compute_metrics ) trainer.train() ``` The output of `print(tokenized_train[4]` is: ```python dict_keys(['tran', 'label', 'input_ids', 'token_type_ids', 'attention_mask']) ``` ### Expected behavior The script is basically what the blog has. As expected, it should begin the fine-tuning process. Instead, I got this error: `ValueError: Target size (torch.Size([8])) must be the same as input size (torch.Size([8, 2]))`. 
I checked some online resources; they suggested something like `torch.unsqueeze()`, but I wonder how to make that happen inside the `Trainer`. Also, I'm a little bit confused - did I miss something from the blog? Thanks in advance!
08-01-2022 21:01:44
08-01-2022 21:01:44
Please use the [forums](https://discuss.huggingface.co/) to debug your code as we keep issues for feature requests and bugs (in the library) only. It's hard to know what went wrong just from your code since we don't have access to the files you use, but my guess would be that your labels are floats instead of ints, so by default the model thinks you have one-hot encoded labels instead of numbers.<|||||>Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
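A small sketch of the kind of check the reply above suggests, assuming the labels live in the `label` column of the CSV used in the issue's script (the file path and column name are taken from that script):
```python
import pandas as pd
from datasets import Dataset

dt = pd.read_csv("training_set.csv")   # path from the issue's script
print(dt["label"].dtype)               # if this is a float dtype, the model treats the
                                       # labels as one-hot / regression targets

dt["label"] = dt["label"].astype(int)  # sequence classification expects integer class ids
ds = Dataset.from_pandas(dt)
print(ds.features["label"])            # should now report an integer dtype
```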
transformers
18,408
closed
fix: create a copy for tokenizer object
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> To not have the object in the PreTrainedTokenizerFast and not impact its padding/truncating attribute we can just have a deep copy of the object Fixes # ([18406](https://github.com/huggingface/transformers/issues/18406)) ## Who can review? @LysandreJik @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-01-2022 18:52:43
08-01-2022 18:52:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>@YBooks , this PR creates another issue, since a deepcopy of the tokenizer object is not possible when a custom pre-tokenizer is used:
```python
tokenizer.pre_tokenizer = Custom()
```
See the open issue in tokenizers here: https://github.com/huggingface/tokenizers/issues/581 I suggest using another serialization/loading scheme instead of copying.<|||||>Would you like to open a PR with a fix?<|||||>Yes, I will give it a try.
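One possible serialization-based alternative to `copy.deepcopy`, sketched under the assumption that the backend tokenizer is a plain `tokenizers.Tokenizer` (note that a custom Python `pre_tokenizer` cannot be serialized this way either, so it would have to be re-attached after cloning):
```python
from tokenizers import Tokenizer

def clone_tokenizer(original: Tokenizer) -> Tokenizer:
    # Round-trip through the JSON representation instead of copy.deepcopy.
    return Tokenizer.from_str(original.to_str())
```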
transformers
18,407
closed
Add LayoutLMForQuestionAnswering model
# What does this PR do? This PR adds a `LayoutLMForQuestionAnswering` class that follows the implementations of `LayoutLMv2ForQuestionAnswering` and `LayoutLMv3ForQuestionAnswering`, so that `LayoutLM` can be fine-tuned for the question answering task. Fixes #18380 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case: https://github.com/huggingface/transformers/issues/18380 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @Narsil
08-01-2022 18:49:05
08-01-2022 18:49:05
@Narsil I've left a few TODOs -- (1) supporting tensorflow, (2) filling in docs, (3) filling in tests -- which I'll gladly do. I just wanted to post sooner than later to start getting feedback on the approach.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Ok, for this part I will let @NielsRogge comment as I am not the best person to answer how it should be done.<|||||>@NielsRogge @Narsil gentle nudge on this PR. I plan to fix the tests + write docs as a next step but wanted to get some quick feedback about whether this approach is acceptable for including `LayoutLMForQuestionAnswering`. Appreciate your consideration!<|||||>Thanks @NielsRogge! We're discussing the pipeline part in [pull request 18414](https://github.com/huggingface/transformers/pull/18414). Would love your feedback there too!<|||||>@NielsRogge @Narsil I just updated it to include tests+documentation. If it's okay, I'd like to defer the tensorflow implementation for now (due to some personal lack of familiarity). I am failing a consistency check, however, as a result: ``` File "/Users/ankur/projects/transformers/transformers/utils/check_inits.py", line 298, in <module> check_all_inits() File "/Users/ankur/projects/transformers/transformers/utils/check_inits.py", line 238, in check_all_inits raise ValueError("\n\n".join(failures)) ValueError: Problem in src/transformers/models/layoutlm/__init__.py, both halves do not define the same objects. Differences for tf backend: LayoutLMForQuestionAnswering in _import_structure but not in TYPE_HINT. ``` Could you help me resolve this?<|||||>@NielsRogge @Narsil, I went ahead and implemented support for TensorFlow and the checks are now passing. Would appreciate a re-review.<|||||>@NielsRogge gentle nudge on this PR :)<|||||>> Thanks @NielsRogge! I just updated with your comments, added to the list of doc tests, and verified locally that they are (now) passing.<|||||>Up to you guys on that one! <|||||>@NielsRogge @Narsil I did some thinking over the weekend and think it makes sense to include them in `AutoModelForQuestionAnswering` to be consistent with `LayoutLMv2` and `v3`. We can move around the auto mapping in PR #18414. Let me know if you have any concerns with that thinking. If not, I'll proceed with merging the change in.<|||||>@Narsil @NielsRogge did you have any further questions on this PR, or is it ready to merge in?<|||||>Also happy to hold off, since we have some traction with PR #18414, and just wait to include it in the `AutoModelForDocumentQuestionAnswering` there?<|||||>Hi @Narsil @NielsRogge just wanted to bump on this -- based on the most recent round of comments on PR #18414, we removed `LayoutLMv2ForQuestionAnswering` and `LayoutLMv3ForQuestionAnswering` from `AutoModelForQuestionAnswering`, so I think it makes sense to not add `LayoutLMForQuestionAnswering` to the auto mapping, if we are about to remove it. I will go ahead and remove it and update the PR. Please let me know if it's ready to move forward. It would be very helpful to rebase PR #18414 against it for testing purposes.<|||||>This PR seems almost ready, I'd just update: * all code examples to use either `LayoutLMTokenizer` or `AutoTokenizer` * add a working code example of `LayoutLMForQuestionAnswering`/`TFLayoutLMForQuestionAnswering`, with an expected output<|||||>I actually don't have a pre-trained `TFLayoutLMForQuestionAnswering` (i.e. one with tensorflow weights), but I could use the same code and just reference the base model? 
I'll make the other updates now.<|||||>> I actually don't have a pre-trained TFLayoutLMForQuestionAnswering (i.e. one with tensorflow weights), but I could use the same code and just reference the base model? The Transformers library makes sure that any PyTorch model also works in the other framework, and vice versa, due to the same variable names being used. So you can just do: ``` from transformers import TFLayoutLMForQuestionAnswering model = TFLayoutLMForQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", from_pt=True) ``` and it should work (this should also normally be tested with the PT-TF cross equivalence test). You can then perhaps do `model.push_to_hub("impira/layoutlm-document-qa")` to upload the TF weights to the same repo. This way, you can remove the `from_pt` statement.<|||||>Wow, that is super cool! Okay let me give it a try.<|||||>Ok @NielsRogge I've made all of these changes. It was a really nice idea to put a fully working example in there. I've also pushed the TF weights to the hub.<|||||>@NielsRogge @Narsil the test failures are now occurring because LayoutLMForQuestionAnswering is not in any sort of auto mapping (for example, `tests/test_modeling_tf_common.py:_prepare_for_class` uses the auto mapping to determine what the expected output labels are. I'm not sure what the best way to proceed with this is. Perhaps we include it in the QuestionAnswering mapping just to keep the commit (a) consistent with LayoutLMv2-3 and (b) passing tests, and then solve the auto mapping issue properly in PR #18414?<|||||>@ankrgyl normally if you run `make fixup` and it complains about a model not being in any auto mapping, you can add it to utils/check_repo.py in the IGNORE_NON_AUTO_CONFIGURED mapping. Then, in #18414, you can remove it from this mapping and add it to the auto mapping instead.<|||||>> @ankrgyl normally if you run `make fixup` and it complains about a model not being in any auto mapping, you can add it to utils/check_repo.py in the IGNORE_NON_AUTO_CONFIGURED mapping. @NielsRogge I actually have already added it here, and it still fails the tests :(. The reason is that I've included it in `tests/models/layoutlm/test_modeling_layoutlm.py:LayoutLMModelTest.all_model_classes`. I feel like there's a tradeoff here: I can either exclude it from all tests, or put it into the QuestionAnswering auto class and then remove it shortly in PR #18414. Let me know what you think is best.<|||||>Following up on this @NielsRogge @Narsil @sgugger, could you please advise on how to proceed? It seems that if something _has_ tests then it _must_ be in an Auto model list (the failing tests are the due to `LayoutLMForQuestionAnswering` not being part of any Auto model). Please correct me if I'm wrong, but my understanding is that we have the following options for how to proceed: 1. Add `LayoutLMForQuestionAnswering` to the `AutoModelForQuestionAnswering` pipeline, which will make the tests pass. I'll remove it shortly after in https://github.com/huggingface/transformers/pull/18414. 2. Remove all tests about `LayoutLMForQuestionAnswering` and add them in https://github.com/huggingface/transformers/pull/18414. 3. Add `AutoModelForDocumentQuestionAnswering` in this PR, and then simply extend/use it in PR #18414. <|||||>To make all tests pass, you need to overwrite the `_prepare_for_class` method defined in `test_modeling_common.py`, to make sure the targets are prepared correctly for `LayoutLMForQuestionAnswering`. 
It can be defined as follows in `test_modeling_layoutlm.py`: ``` def _prepare_for_class(self, inputs_dict, model_class, return_labels=False): inputs_dict = copy.deepcopy(inputs_dict) if return_labels: if model_class in get_values(MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING): inputs_dict["labels"] = torch.zeros( self.model_tester.batch_size, dtype=torch.long, device=torch_device ) elif model_class in [ *get_values(MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING), *get_values(MODEL_FOR_MASKED_LM_MAPPING), ]: inputs_dict["labels"] = torch.zeros( (self.model_tester.batch_size, self.model_tester.seq_length), dtype=torch.long, device=torch_device ) elif model_class.__name__ == "LayoutLMForQuestionAnswering": inputs_dict["start_positions"] = torch.zeros( self.model_tester.batch_size, dtype=torch.long, device=torch_device ) inputs_dict["end_positions"] = torch.zeros( self.model_tester.batch_size, dtype=torch.long, device=torch_device ) return inputs_dict ``` This can then be removed once the model is added to an Auto mapping. The same needs to happen for the TF model.<|||||>Regarding the failing tests - you might need to rebase with the main branch. Also note that sometimes, tests which are totally unrelated to your PR fail, in which case you can ignore them.<|||||>Thanks @NielsRogge just rebased <|||||>@NielsRogge I believe all outstanding comments have been addressed. Are we ready to merge this in?<|||||>I've pinged @sgugger for a final review, however he's off this week so will be merged next week :)<|||||>Thank you for merging it in! @LysandreJik or @NielsRogge are you planning to do any sort of announcement? I'm asking because we're going to publicly announce the project we've been working on (https://github.com/impira/docquery) in the next few days, and it would be great to collaborate.<|||||>I'd like to communicate on that once the pipeline is merged, because the Space above is using that right? Also, the doc tests don't seem to pass: ``` _ [doctest] transformers.models.layoutlm.modeling_layoutlm.LayoutLMForQuestionAnswering.forward _ 1328 ... bbox.append([0] * 4) 1329 >>> encoding["bbox"] = torch.tensor([bbox]) 1330 1331 >>> word_ids = encoding.word_ids(0) 1332 >>> outputs = model(**encoding) 1333 >>> loss = outputs.loss 1334 >>> start_scores = outputs.start_logits 1335 >>> end_scores = outputs.end_logits 1336 >>> start, end = word_ids[start_scores.argmax(-1)], word_ids[end_scores.argmax(-1)] 1337 >>> print(" ".join(words[start : end + 1])) Expected: M. Hamann P. Harper, P. Martinez Got: J. S. Wigand /__w/transformers/transformers/src/transformers/models/layoutlm/modeling_layoutlm.py:1337: DocTestFailure _ [doctest] transformers.models.layoutlm.modeling_tf_layoutlm.TFLayoutLMForQuestionAnswering.call _ [15](https://github.com/huggingface/transformers/runs/8125145111?check_suite_focus=true#step:9:16)53 ... bbox.append([0] * 4) 1554 >>> encoding["bbox"] = tf.convert_to_tensor([bbox]) 1555 1556 >>> word_ids = encoding.word_ids(0) 1557 >>> outputs = model(**encoding) 1558 >>> loss = outputs.loss 1559 >>> start_scores = outputs.start_logits 1560 >>> end_scores = outputs.end_logits 1561 >>> start, end = word_ids[tf.math.argmax(start_scores, -1)[0]], word_ids[tf.math.argmax(end_scores, -1)[0]] 1562 >>> print(" ".join(words[start : end + 1])) Expected: M. Hamann P. Harper, P. Martinez Got: <BLANKLINE> ```<|||||>Hi @ankrgyl Thanks a lot for adding `(TF)LayoutLMForQuestionAnswering` ! For the doctest: - `TFLayoutLMForQuestionAnswering` seems to have issue loading the weights for `qa_outputs`. 
Could you check if the TF checkpoint in `impira/layoutlm-document-qa` has weights for this part, or see if you can find what goes wrong? The warning message is ```bash Some layers of TFLayoutLMForQuestionAnswering were not initialized from the model checkpoint at impira/layoutlm-document- qa and are newly initialized: ['qa_outputs'] ``` and I actually got some random results for this test. - `LayoutLMForQuestionAnswering` weight loading looks fine, but the output is different from the expected value. Could you take a look here? Here is how you can run the doctest First ```python python utils/prepare_for_doc_test.py src/transformers/utils/doc.py ``` Then for `LayoutLMForQuestionAnswering`: ```python python utils/prepare_for_doc_test.py src/transformers/models/layoutlm/modeling_layoutlm.py pytest --doctest-modules src/transformers/models/layoutlm/modeling_layoutlm.py -sv --doctest-continue-on-failure ``` For `TFLayoutLMForQuestionAnswering`: ```python python utils/prepare_for_doc_test.py src/transformers/models/layoutlm/modeling_tf_layoutlm.py pytest --doctest-modules src/transformers/models/layoutlm/modeling_tf_layoutlm.py -sv --doctest-continue-on-failure ``` Thank you again! If you have trouble on debugging this, let me know :-)<|||||>Hi @NielsRogge @ydshieh I'm very sorry about that -- what happened is that we've updated the weights on the underlying model and it's returning a different name from the same document (the question itself is slightly ambiguous). I've confirmed that if I pin the revision in the tests, they pass. I've just submitted https://github.com/huggingface/transformers/pull/18854 to resolve that. I'll investigate the weights in `impira/layoutlm-document-qa` in parallel.<|||||>> I'd like to communicate on that once the pipeline is merged, because the Space above is using that right? @NielsRogge the Space is indeed using the pipeline (and incorporates `Donut` too). It makes sense to do the announcement after that lands. We'll still do ours today but simply mention that we are working to upstream changes. Let me know if y'all have any concerns about that.<|||||>> Hi @NielsRogge @ydshieh I'm very sorry about that -- what happened is that we've updated the weights on the underlying model and it's returning a different name from the same document (the question itself is slightly ambiguous). No problem, thanks for the fix. > I've confirmed that if I pin the revision in the tests, they pass. I've just submitted #18854 to resolve that. Great!
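A sketch of the revision pinning mentioned in the last comment; the commit hash below is a placeholder, since the actual value used in #18854 is not shown in this thread:
```python
from transformers import LayoutLMForQuestionAnswering

# Pin the checkpoint to a fixed revision so later weight updates on the Hub
# cannot change the doctest's expected answer. The hash here is hypothetical.
model = LayoutLMForQuestionAnswering.from_pretrained(
    "impira/layoutlm-document-qa", revision="<commit-sha>"
)
```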
transformers
18,406
closed
PreTrainedTokenizerFast with tokenizer object is acting on original tokenizer object
### System Info - `transformers` version: 4.21.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.2 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.8.1+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction - To reproduce this error, create a tokenizer and wrap it in `PreTrainedTokenizerFast`:
```
from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, trainers

data = [
    "My first sentence",
    "My second sentence",
    "My third sentence is a bit longer",
    "My fourth sentence is longer than the third one"
]

tokenizer = Tokenizer(models.WordLevel(unk_token="<unk>"))
trainer = trainers.WordLevelTrainer(vocab_size=10, special_tokens=["<unk>", "<pad>"])
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
tokenizer.train_from_iterator(data, trainer=trainer)
tokenizer.enable_padding(pad_token="<pad>", pad_id=tokenizer.token_to_id("<pad>"))
tokenizer.enable_truncation(max_length=5)

print(tokenizer.encode(data[-1]).ids, tokenizer.padding)
```
This gives an output of length 5 and an explicit padding object. - On the other hand, if we load our tokenizer into the `PreTrainedTokenizerFast` class and print the same thing as before:
```
from transformers import PreTrainedTokenizerFast

fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
fast_tokenizer(data)

print(tokenizer.encode(data[-1]).ids, tokenizer.padding)
```
This gives an output of length > 5 and `None` for padding. ### Expected behavior The tokenizer should behave the same as it did before being loaded into the `PreTrainedTokenizerFast` wrapper; wrapping it should not affect its padding and truncation settings.
08-01-2022 18:39:49
08-01-2022 18:39:49
cc @SaulLu <|||||>Hi @YBooks Thank you very much for the detailed issue :hugs: ! I see that you have already proposed a fix that has been merged and that solves the problem you are pointing out. If you are happy with it, is it OK if we close this issue?<|||||>Hey @SaulLu Yes, sure. My pleasure.<|||||>@YBooks , @SaulLu , @sgugger can we reopen this issue, since https://github.com/huggingface/transformers/pull/18408 creates another one?
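A user-side workaround along the same lines as the merged fix, continuing from the snippet in the issue above: pass a copy of the backend tokenizer so that `PreTrainedTokenizerFast` cannot mutate the original's padding/truncation settings (this is only a sketch of the idea; it fails for tokenizers with a custom Python pre-tokenizer, which is exactly the follow-up problem raised above):
```python
import copy
from transformers import PreTrainedTokenizerFast

fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=copy.deepcopy(tokenizer))
fast_tokenizer(data)

# The original tokenizer keeps its own padding/truncation configuration.
print(tokenizer.encode(data[-1]).ids, tokenizer.padding)
```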
transformers
18,405
closed
Incorrect assertion in pipeline test test_dbmdz_english()
### System Info - `transformers` version: 4.22.0.dev0 - Platform: macOS-12.4-x86_64-i386-64bit - Python version: 3.9.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.11.0 (False) - Tensorflow version (GPU?): 2.9.1 (False) - Flax version (CPU?/GPU?/TPU?): 0.5.2 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: n - Using distributed or parallel set-up in script?: n ### Who can help? @Narsil ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `RUN_SLOW=1 RUN_PIPELINE_TESTS=yes pytest tests/pipelines/test_pipelines_token_classification.py::TokenClassificationPipelineTests::test_dbmdz_english` Fails with two notable diffs: the "UN" entity offsets in the assertion don't match the offsets in the input string itself (off by two characters), and the `index` doesn't match. Output: ``` ======================================= FAILURES ======================================== __________________ TokenClassificationPipelineTests.test_dbmdz_english __________________ self = <tests.pipelines.test_pipelines_token_classification.TokenClassificationPipelineTests testMethod=test_dbmdz_english> @require_torch @slow def test_dbmdz_english(self): # Other sentence NER_MODEL = "dbmdz/bert-large-cased-finetuned-conll03-english" model = AutoModelForTokenClassification.from_pretrained(NER_MODEL) tokenizer = AutoTokenizer.from_pretrained(NER_MODEL, use_fast=True) sentence = """Enzo works at the UN""" token_classifier = pipeline("ner", model=model, tokenizer=tokenizer) output = token_classifier(sentence) > self.assertEqual( nested_simplify(output), [ {"entity": "I-PER", "score": 0.997, "word": "En", "start": 0, "end": 2, "index": 1}, {"entity": "I-PER", "score": 0.996, "word": "##zo", "start": 2, "end": 4, "index": 2}, {"entity": "I-ORG", "score": 0.999, "word": "UN", "start": 22, "end": 24, "index": 7}, ], ) E AssertionError: Lists differ: [{'en[24 chars] 0.998, 'index': 1, 'word': 'En', 'start': 0, [179 chars] 20}] != [{'en[24 chars] 0.997, 'word': 'En', 'start': 0, 'end': 2, 'i[179 chars]: 7}] E E First differing element 0: E {'ent[15 chars]'score': 0.998, 'index': 1, 'word': 'En', 'start': 0, 'end': 2} E {'ent[15 chars]'score': 0.997, 'word': 'En', 'start': 0, 'end': 2, 'index': 1} E E [{'end': 2, E 'entity': 'I-PER', E 'index': 1, E - 'score': 0.998, E ? ^ E E + 'score': 0.997, E ? ^ E E 'start': 0, E 'word': 'En'}, E {'end': 4, E 'entity': 'I-PER', E 'index': 2, E - 'score': 0.997, E ? ^ E E + 'score': 0.996, E ? ^ E E 'start': 2, E 'word': '##zo'}, E - {'end': 20, E ? ^ E E + {'end': 24, E ? ^ E E 'entity': 'I-ORG', E - 'index': 6, E ? ^ E E + 'index': 7, E ? ^ E E 'score': 0.999, E - 'start': 18, E ? ^^ E E + 'start': 22, E ? ^^ E E 'word': 'UN'}] tests/pipelines/test_pipelines_token_classification.py:284: AssertionError ``` ### Expected behavior [a green dot]
08-01-2022 17:05:20
08-01-2022 17:05:20
@davidbenton ~~Thanks for the info, I was able to reproduce on torch `1.11.0` but this is fixed on `1.12.0`.~~ ~~We're aware of some modifications within `nn.Linear` between those two versions. The errors are relatively minor actually, but since this is a random model (meaning not trained on real data) it's much more sensitive to those tiny fluctuations. That's why the test fails on `1.11.0` while it works on `1.12.0`.~~ EDIT: I thought I had tested this and that it worked on `1.12.0`, but if my explanation were correct, then a diff should have occurred in the test itself. I found out that this commit touched the file: 95113d136508dfef192a29d23344e941735d1a1d This commit actually changed the string and makes this slow test fail. I am guessing this is an automated change. Looking at the diff, it seems only this slow test was affected. We can either roll back the string or update the values. So the `1.11.0` vs `1.12.0` difference doesn't seem to explain the failure here. @ydshieh can you maybe confirm? <|||||>~~Hmm, it fails for me on 1.12.0 also (CPU, env otherwise same as above). How could `end: 24` be correct for a string with length 20? Those offsets should be in `sentence` indices, right?~~ I see you're on the right track now.<|||||>Pinging @sgugger too here. Thanks for reporting @davidbenton !<|||||>Since that commit, we do have this failure in the Slack CI report. From the changes, I think it tries to fix all occurrences of `the the`. So reverting the change on the expected value is fine with me. Thank you, @davidbenton <|||||>Yes, it slipped through the cracks, as the contributor was trying to fix typos and I didn't notice that this occurrence was intentional.
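For reference, the entities the pipeline actually reports for the shortened sentence `"Enzo works at the UN"` (values taken directly from the failure diff above); if the string were kept instead of rolled back, the assertion would need something along these lines:
```python
[
    {"entity": "I-PER", "score": 0.998, "index": 1, "word": "En", "start": 0, "end": 2},
    {"entity": "I-PER", "score": 0.997, "index": 2, "word": "##zo", "start": 2, "end": 4},
    {"entity": "I-ORG", "score": 0.999, "index": 6, "word": "UN", "start": 18, "end": 20},
]
```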
transformers
18,404
closed
GPT-J evaluation with multiple GPUs crashes
### System Info - `transformers` version: 4.21.0 - Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.4 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes (2+ RTX A6000) - Using distributed or parallel set-up in script?: Yes The issue appears when parallelizing with `python -m torch.distributed.launch --nproc_per_node=2` and also when parallelizing with `deepspeed` ### Who can help? I hope @patil-suraj, @stas00, or @sgugger. ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Run the `run_clm.py` script from the examples directory: `python -m torch.distributed.launch --nproc_per_node=4 /path/to/transformers/examples/pytorch/language-modeling/run_clm.py --model_name_or_path "EleutherAI/gpt-j-6B" --do_eval --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --output_dir "${output_dir}/output_fine_tune" --eval_steps 1 --evaluation_strategy steps --per_device_eval_batch_size 4 --block_size 2048` 2. The script crashes with the following error: ``` 08/01/2022 08:51:08 - INFO - __main__ - *** Evaluate *** [INFO|trainer.py:2891] 2022-08-01 08:51:22,867 >> ***** Running Evaluation ***** [INFO|trainer.py:2893] 2022-08-01 08:51:22,868 >> Num examples = 119 [INFO|trainer.py:2896] 2022-08-01 08:51:22,868 >> Batch size = 4 Traceback (most recent call last): File "/path/to/transformers/examples/pytorch/language-modeling/run_clm.py", line 579, in <module> Traceback (most recent call last): File "/path/to/transformers/examples/pytorch/language-modeling/run_clm.py", line 579, in <module> main() File "/path/to/transformers/examples/pytorch/language-modeling/run_clm.py", line 545, in main main() File "/path/to/transformers/examples/pytorch/language-modeling/run_clm.py", line 545, in main metrics = trainer.evaluate() File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer.py", line 2758, in evaluate metrics = trainer.evaluate() File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer.py", line 2758, in evaluate output = eval_loop( File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer.py", line 2960, in evaluation_loop output = eval_loop( File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer.py", line 2960, in evaluation_loop logits = self._nested_gather(logits) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer.py", line 3072, in _nested_gather logits = self._nested_gather(logits) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer.py", line 3072, in _nested_gather tensors = distributed_concat(tensors) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in distributed_concat tensors = distributed_concat(tensors) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 
178, in distributed_concat return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in <genexpr> return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in <genexpr> return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in distributed_concat return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in distributed_concat return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in <genexpr> return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in <genexpr> return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in distributed_concat File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in distributed_concat return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in <genexpr> return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 178, in <genexpr> return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 181, in distributed_concat return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/transformers/trainer_pt_utils.py", line 181, in distributed_concat dist.all_gather(output_tensors, tensor) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2068, in all_gather dist.all_gather(output_tensors, tensor) File "/nas/minlp/users/cwc/manuelc/miniconda3/envs/dsaiodocs/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2068, in all_gather work = default_pg.allgather([tensor_list], [tensor]) RuntimeError: Tensors must be contiguous work = default_pg.allgather([tensor_list], [tensor]) RuntimeError: Tensors must be contiguous ``` ## Some debugging * The crash only appears when the `compute_metrics` argument to `Trainer` is not `None`. 
In other words, replacing the line `compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None,` with `compute_metrics=None` prevents the script from crashing. * It looks like the logits on Trainer line 3181 https://github.com/huggingface/transformers/blob/a9eee2ffecc874df7dd635b2c6abb246fdb318cc/src/transformers/trainer.py#L3181 are not contiguous. * If I force the tensors to be contiguous with the patch below, run_clm no longer crashes. I do not think the issue is in `Trainer`, so the patch below is not a fix. I include it only to help with debugging. ## Patch to make tensors contiguous ```diff 2850,2875d2849 < def check_contiguous(self, tensor) -> Tuple[int, int]: < if tensor is None: < return 0, 0 < if isinstance(tensor, (list, tuple)): < first = 0 < total = 0 < for t in tensor: < f, t = self.check_contiguous(t) < first += f < total += t < return first, total < else: < f = 0 < t = 1 < if tensor.is_contiguous(): < f = 1 < return f, t < < def make_contiguous(self, tensor): < if tensor is None: < return None < if isinstance(tensor, (list, tuple)): < return tuple(self.make_contiguous(t) for t in tensor) < else: < return tensor.contiguous() < 3208,3216d3181 < cont, total = self.check_contiguous(logits) < if cont != total: < print( < f"[DebugTrainer] prediction_step, no sm, outputs dict logits (cont, total)" < f"{(cont, total)}") < logits= self.make_contiguous(logits) < print( < f"[DebugTrainer] prediction_step, no sm, outputs dict, after contiguous, logits (cont, total)" < f"{self.check_contiguous(logits)}") ``` ### Expected behavior The script should finish running and report the evaluation results (loss and accuracy).
08-01-2022 16:47:48
08-01-2022 16:47:48
The issue is probably in the modeling code missing some `.contiguous()` calls.<|||||>I can reproduce the error with GPT-J. This also happens with Salesforce/codegen-16B-nl and EleutherAI/gpt-neox-20b. In all cases the error is `RuntimeError: Tensors must be contiguous`. The problem doesn't occur with gpt2-xl and facebook/opt-13b. This is on Transformers 4.21.1, also using 2x RTX A6000 GPUs. The problem was also reproduced by another dev training gpt-neox-20b on 2x A6000. Could this be A6000-related?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>+1 for this issue, still having problems with the `Tensors must be contiguous` error in evaluation.<|||||>I have the same problem.
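A condensed version of the workaround from the issue's debugging patch: force the gathered logits to be contiguous before `torch.distributed.all_gather` sees them. This only illustrates where the error comes from; as noted above, the proper fix would be the missing `.contiguous()` calls in the modeling code.
```python
import torch

def make_contiguous(tensors):
    # Recursively call .contiguous() on every tensor in a (possibly nested) tuple/list,
    # mirroring the make_contiguous helper in the issue's debug patch.
    if tensors is None:
        return None
    if isinstance(tensors, (list, tuple)):
        return type(tensors)(make_contiguous(t) for t in tensors)
    return tensors.contiguous()
```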
transformers
18,403
closed
Cannot restore `sequences_scores` from `scores` and `beam_indices` returned by `t5-base`
### System Info - `transformers` version: 4.18.0 - Platform: Linux-5.10.25-nvidia-gpu-x86_64-with-glibc2.31 - Python version: 3.9.12 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.10.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <No> - Using distributed or parallel set-up in script?: <No> ### Who can help? I was trying to restore the logit of each generated token in `t5-base` by `beam_search`. However, I found that the `sequences_scores` cannot be computed from the generated token indices, the `beam_indices`, and the `scores` returned by `model.generate()`. Here is my script with annotations: ```python from transformers import T5ForConditionalGeneration, T5Tokenizer # load model model = T5ForConditionalGeneration.from_pretrained("t5-base") tokenizer = T5Tokenizer.from_pretrained("t5-base") inputs = tokenizer(["I love hugging face", "I love deep learning"], return_tensors="pt", truncation=True, padding="max_length", max_length=10) max_length = 10 min_length = 10 num_beams = 10 num_return_seq = 2 outputs = model.generate( input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, do_sample=False, max_length=max_length, num_beams=num_beams, num_return_sequences=num_return_seq, return_dict_in_generate=True, output_scores=True ) sequences = outputs.sequences.transpose(0, 1) # num_step + 1, batch_size * num_return_seq beam_indices = torch.tensor(outputs.beam_indices).view(-1, max_length - 1) # batch_size * num_return_seq, num_step scores = torch.stack(outputs.scores, dim=0) # num_step, batch_size * num_beams, vocab_size beam_indices = beam_indices.transpose(-1, -2) # num_step, batch_size * num_return_seq # get the associated logits over the vocabulary at each step selected_distribution = scores.gather(dim=-2, index=beam_indices.unsqueeze(-1).expand(*beam_indices.shape, scores.shape[-1])) # num_step, batch_size * num_return_seq, vocab_size # get the associated logit of the selected token at each step selected_score = selected_distribution.gather(dim=-1, index=sequences[1:].unsqueeze(-1)) # output cumulative scores and sequences_scores >>> selected_score.squeeze().mean(0) <<< tensor([-0.2241, -0.3926, -0.1859, -0.4169]) >>> outputs.sequences_scores <<< tensor([-0.2017, -0.3534, -0.1674, -0.3752]) ``` Why these two scores are unequal? Are there any specific notes for computing sequence scores? Also, I think returning the token score along with the `Generate Outputs` is **useful**. Thanks in advance for your reply: @patrickvonplaten, @Narsil, @gante. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Just use the above code. ### Expected behavior I prefer to know how to get the logit of each generated token in `T5`, from which I can properly restore `sequences_scores` returned by the model. Further, I think returning the logit of each generated token is useful for developing systems trying to use that score.
08-01-2022 16:27:59
08-01-2022 16:27:59
Hi @namespace-Pt 👋 Does [this function](https://github.com/huggingface/transformers/blob/dbd9641c8c0e146c078cbee11cdefcf556f6c817/src/transformers/generation_utils.py#L804) solve your issue? I noticed that it is undocumented, so it is hard to find 😬 <|||||>@gante Thanks, the function worked. However, in the above example, how can I get the `sequences_scores` given the returned transition scores?<|||||>@namespace-Pt check these two threads: 1. https://github.com/huggingface/transformers/issues/16413 2. https://github.com/huggingface/transformers/issues/15869 TL;DR you can't directly atm unless you specify a length penalty of 0<|||||>Got that. Thank you.
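A sketch of using the undocumented helper linked above, continuing from the script in the issue (in releases around 4.18–4.21 the method is called `compute_transition_beam_scores`; later releases expose it as `compute_transition_scores`):
```python
# Per-token transition scores for each returned beam.
transition_scores = model.compute_transition_beam_scores(
    sequences=outputs.sequences,
    scores=outputs.scores,
    beam_indices=outputs.beam_indices,
)

# As discussed in the two linked issues, the per-sequence sum of these scores is only
# directly comparable to outputs.sequences_scores when generation uses length_penalty=0.0;
# with the default length penalty the final score is additionally length-normalized.
print(transition_scores.sum(dim=-1))
print(outputs.sequences_scores)
```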
transformers
18,402
closed
Update pipeline word heuristic to work with whitespace in token offsets
# What does this PR do? This change checks for whitespace in the input string at either the character preceding the token or in the first character of the token. This works with tokenizers that return offsets excluding whitespace between words or with offsets including whitespace. Fixes #18111 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-01-2022 16:11:51
08-01-2022 16:11:51
_The documentation is not available anymore as the PR was closed or merged._<|||||>The test failures seem to have nothing to do with this PR. @LysandreJik @ydshieh maybe ?<|||||>@LysandreJik @ydshieh @Narsil I think the test failures are because I set up CircleCI to track my fork, just to see if the tests would pass there. I didn't expect it to show up here on the upstream project PR, but I think that might be what we're seeing. I've disabled that for any future commits. I'm guessing failures on a hosted, free CircleCI project are expected, right? Sorry for the CI spam.<|||||> > I'm guessing failures on a hosted, free CircleCI project are expected, right? Sorry for the CI spam. Probably yes: I saw you have `Docker / [Docker Medium]` while `transformers` uses `Docker / [Docker X-Large]`. Now we have to make the tests run under Hugging Face's CircleCI plan 😄 <|||||>Ugh, so that stopped the HF CircleCI from firing? I can add a "maybe this time" commit, as is traditional with CI workflows...<|||||>OK, all that's left before merging is a final approval from a core maintainer. @sgugger maybe ?
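A standalone sketch of the heuristic described in the PR text (not the exact library code): a token is treated as a sub-word continuation only if there is no whitespace either at the character just before its start offset or at its first character, which covers both tokenizers whose offsets include the leading space and those whose offsets exclude it.
```python
def is_subword(sentence: str, start_ind: int) -> bool:
    # The first token of the sentence always starts a new word.
    if start_ind == 0:
        return False
    # Word boundary if there is whitespace right before the token's start offset
    # or at the token's first character; otherwise it is a sub-word.
    return not (sentence[start_ind - 1].isspace() or sentence[start_ind].isspace())
```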
transformers
18,401
closed
Can't run the example in https://huggingface.co/transformers/v4.9.2/model_doc/blenderbot.html#transformers.BlenderbotModel
Hi, I just encountered some issue while running this example code: from transformers import BlenderbotSmallTokenizer, BlenderbotSmallForConditionalGeneration mname = 'facebook/blenderbot_small-90M' model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname) tokenizer = BlenderbotSmallTokenizer.from_pretrained(mname) UTTERANCE = "My friends are cool but they eat too many carbs." print("Human: ", UTTERANCE) inputs = tokenizer([UTTERANCE], return_tensors='pt') inputs.keys() inputs.pop("token_type_ids") reply_ids = model.generate(**inputs) print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]) REPLY = "I'm not sure" print("Human: ", REPLY) NEXT_UTTERANCE = ( "My friends are cool but they eat too many carbs." ) inputs = tokenizer([NEXT_UTTERANCE], return_tensors='pt') inputs.pop("token_type_ids") next_reply_ids = model.generate(**inputs) print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0]) Then I got this error: KeyError Traceback (most recent call last) Input In [1], in <cell line: 9>() 7 inputs = tokenizer([UTTERANCE], return_tensors='pt') 8 inputs.keys() 9 inputs.pop("token_type_ids") 10 # inputs.pop("input_ids") 11 reply_ids = model.generate(**inputs) File /opt/conda/lib/python3.8/_collections_abc.py:795, in MutableMapping.pop(self, key, default) 791 '''D.pop(k[,d]) -> v, remove specified key and return the corresponding value. 792 If key is not found, d is returned if given, otherwise KeyError is raised. 793 ''' 794 try: --> 795 value = self[key] 796 except KeyError: 797 if default is self.__marker: File /opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py:236, in BatchEncoding.__getitem__(self, item) 229 """ 230 If the key is a string, returns the value of the dict associated to `key` ('input_ids', 'attention_mask', 231 etc.). 232 233 If the key is an integer, get the `tokenizers.Encoding` for batch item with index `key`. 
234 """ 235 if isinstance(item, str): --> 236 return self.data[item] 237 elif self._encodings is not None: 238 return self._encodings[item] KeyError: 'token_type_ids' I found the keys in input are: dict_keys(['input_ids', 'attention_mask']) then i change 'token_type_ides ' to 'input_ids', then i got this error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Input In [2], in <cell line: 11>() 9 # inputs.pop("token_type_ids") 10 inputs.pop("input_ids") ---> 11 reply_ids = model.generate(**inputs) 12 print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]) 14 REPLY = "I'm not sure" File /opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py:27, in _DecoratorContextManager.__call__.<locals>.decorate_context(*args, **kwargs) 24 @functools.wraps(func) 25 def decorate_context(*args, **kwargs): 26 with self.clone(): ---> 27 return func(*args, **kwargs) File /opt/conda/lib/python3.8/site-packages/transformers/generation_utils.py:1182, in GenerationMixin.generate(self, inputs, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, typical_p, repetition_penalty, bad_words_ids, force_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, logits_processor, renormalize_logits, stopping_criteria, constraints, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, exponential_decay_length_penalty, **model_kwargs) 1175 model_kwargs["attention_mask"] = self._prepare_attention_mask_for_generation( 1176 inputs_tensor, pad_token_id, eos_token_id 1177 ) 1179 if self.config.is_encoder_decoder and "encoder_outputs" not in model_kwargs: 1180 # if model is encoder decoder encoder_outputs are created 1181 # and added to `model_kwargs` -> 1182 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation( 1183 inputs_tensor, model_kwargs, model_input_name 1184 ) 1186 # 4. Prepare `input_ids` which will be used for auto-regressive generation 1187 if self.config.is_encoder_decoder: File /opt/conda/lib/python3.8/site-packages/transformers/generation_utils.py:525, in GenerationMixin._prepare_encoder_decoder_kwargs_for_generation(self, inputs_tensor, model_kwargs, model_input_name) 523 encoder_kwargs["return_dict"] = True 524 encoder_kwargs[model_input_name] = inputs_tensor --> 525 model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs) 527 return model_kwargs File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs) 1106 # If we don't have any hooks, we want to skip the rest of the logic in 1107 # this function, and just call forward. 
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] File /opt/conda/lib/python3.8/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py:780, in BlenderbotSmallEncoder.forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict) 773 layer_outputs = torch.utils.checkpoint.checkpoint( 774 create_custom_forward(encoder_layer), 775 hidden_states, 776 attention_mask, 777 (head_mask[idx] if head_mask is not None else None), 778 ) 779 else: --> 780 layer_outputs = encoder_layer( 781 hidden_states, 782 attention_mask, 783 layer_head_mask=(head_mask[idx] if head_mask is not None else None), 784 output_attentions=output_attentions, 785 ) 787 hidden_states = layer_outputs[0] 789 if output_attentions: File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs) 1106 # If we don't have any hooks, we want to skip the rest of the logic in 1107 # this function, and just call forward. 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] File /opt/conda/lib/python3.8/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py:311, in BlenderbotSmallEncoderLayer.forward(self, hidden_states, attention_mask, layer_head_mask, output_attentions) 299 """ 300 Args: 301 hidden_states (`torch.FloatTensor`): input to the layer of shape `(seq_len, batch, embed_dim)` (...) 308 returned tensors for more detail. 309 """ 310 residual = hidden_states --> 311 hidden_states, attn_weights, _ = self.self_attn( 312 hidden_states=hidden_states, 313 attention_mask=attention_mask, 314 layer_head_mask=layer_head_mask, 315 output_attentions=output_attentions, 316 ) 317 hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) 318 hidden_states = residual + hidden_states File /opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py:1110, in Module._call_impl(self, *input, **kwargs) 1106 # If we don't have any hooks, we want to skip the rest of the logic in 1107 # this function, and just call forward. 
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] File /opt/conda/lib/python3.8/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py:225, in BlenderbotSmallAttention.forward(self, hidden_states, key_value_states, past_key_value, attention_mask, layer_head_mask, output_attentions) 223 if attention_mask is not None: 224 if attention_mask.size() != (bsz, 1, tgt_len, src_len): --> 225 raise ValueError( 226 f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}" 227 ) 228 attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask 229 attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) ValueError: Attention mask should be of size (1, 1, 1, 1), but is torch.Size([1, 1, 12, 12]) Guess should be triggered by the version of torch or transformers? Could you give me the version that can run this code properly? Best regards, Zijia
08-01-2022 15:28:48
08-01-2022 15:28:48
Remove this line: `inputs.pop("token_type_ids")` As for the `inputs.keys()`, the keys aren't assigned to any variable or used for any form of auxiliary calculation, so it effectively does nothing. In other words, just run this: ```python from transformers import BlenderbotSmallTokenizer, BlenderbotSmallForConditionalGeneration mname = 'facebook/blenderbot_small-90M' model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname) tokenizer = BlenderbotSmallTokenizer.from_pretrained(mname) UTTERANCE = "My friends are cool but they eat too many carbs." print("Human: ", UTTERANCE) inputs = tokenizer([UTTERANCE], max_length=512, truncation=True, return_tensors='pt') reply_ids = model.generate(**inputs) print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]) ```<|||||>Thanks @shermansiu, the problem is solved, thanks for your prompt reply :-)<|||||>You're welcome! :smile:
transformers
18,400
closed
pytest with --forked
# What does this PR do? pytest with --forked
08-01-2022 14:53:53
08-01-2022 14:53:53
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18400). All of your documentation changes will be reflected on that endpoint.
transformers
18,399
closed
[LayoutLMv3] Fix docs
# What does this PR do? This PR fixes a non-working link and adds a link to the fine-tuning scripts.
08-01-2022 13:55:42
08-01-2022 13:55:42
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,398
closed
Fix ROUGE add example check and update README
# What does this PR do? This PR contains a few fixes for the examples all linked to #18381 - it adds the version check of Transformers to the `no_trainer` examples, so that the user is not surprised when the example script that uses main fails if they use a different version - it expands the table of examples for each version to get to the current version - it fixes the ROUGE metric return type since that metric was broken in https://github.com/huggingface/evaluate/pull/158 Fixes #18381
08-01-2022 13:55:27
08-01-2022 13:55:27
_The documentation is not available anymore as the PR was closed or merged._<|||||>Failures are flaky, so merging.
transformers
18,397
closed
Fix DETR doc test
# What does this PR do? This PR fixes the doc test of `DetrForObjectDetection`.
08-01-2022 13:42:54
08-01-2022 13:42:54
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,396
closed
Add evaluate to test dependencies
# What does this PR do? This PR should fix all current failures on main coming from the examples being updated to use evaluate for the metrics. The problem is that some tests use those example scripts without installing the test requirements.
08-01-2022 12:48:00
08-01-2022 12:48:00
Seem to fix most tests so merging.<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
18,395
open
GFT: Generative Fundamental Training
### Model description Hi, this is just an announcement of our GFT. It has a novel attention head that fuses the attention and MLP layers, and an ultra-thin, ultra-deep architecture that maximizes model performance with minimal parameters. More interestingly, it is equipped with a novel decoder called the top-E (entropy) algorithm. Our model has 81 layers but only 300M parameters. Both Chinese and English (biomedical) models are available. It is coded from scratch in CUDA and C++; no DL framework is needed. Enjoy! ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://github.com/wangyi-fudan/GFT Chinese Model Download link: https://pan.baidu.com/s/1HKw83YWttCIPdnZCZ-hdOA password: 0r1j English Medical Model Download link: https://pan.baidu.com/s/1yazmdB8xFzLwXbKmXWslRA password: rqdc
08-01-2022 09:50:58
08-01-2022 09:50:58
transformers
18,394
closed
Add mt5 onnx config
# What does this PR do? - Added the MT5 ONNX config, copied from the T5 ONNX config - Updated the docs and test files Related to #16308 and [optimum#321](https://github.com/huggingface/optimum/issues/321) @patrickvonplaten, @patil-suraj, and @lewtun
08-01-2022 09:45:44
08-01-2022 09:45:44
I tried to convert the `google/mt5-base` model to Onnx on my local machine and it worked. ```bash python -m transformers.onnx --model=google/mt5-base convert-onnx/ 2022-08-01 12:50:24.992612: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2022-08-01 12:50:24.992654: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. /home/workstation/miniconda3/envs/transformers/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py:434: UserWarning: The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers. In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these unknown tokens into a sequence of byte tokens matching the original piece of text. warnings.warn( Some weights of the model checkpoint at google/mt5-base were not used when initializing MT5Model: ['lm_head.weight'] - This IS expected if you are initializing MT5Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing MT5Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Using framework PyTorch: 1.11.0+cu102 Overriding 1 configuration item(s) - use_cache -> False /home/workstation/miniconda3/envs/transformers/lib/python3.8/site-packages/transformers/modeling_utils.py:679: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if causal_mask.shape[1] < attention_mask.shape[1]: Validating ONNX model... -[✓] ONNX model output names match reference model ({'last_hidden_state'}) - Validating ONNX Model output "last_hidden_state": -[✓] (2, 8, 768) matches (2, 8, 768) -[✓] all values close (atol: 1e-05) All good, model saved at: convert-onnx/model.onnx ```<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>**[Update]** Sorry, I just realized that MT5 is newly added to the ONNX test. So updating the expected values (somewhere) should work fine. PR opened: https://github.com/huggingface/transformers/pull/18560 ---- Hi @ChainYo. First of all, thank you for the contribution. It looks like this PR is (probably) the cause of 2 new test failures in our CI ``` FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_seq2seq_with_past_52_mt5_seq2seq_lm FAILED tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_seq2seq_with_past_53_mt5_seq2seq_lm_with_past ``` see [job run page](https://github.com/huggingface/transformers/runs/7759166084?check_suite_focus=true). The difference of the outputs and expected outputs are small, but it worked before. So I am wondering if this PR could have any impact on the outputs. We also have a few other test failures, so I am not 100% sure yet. But if you are interested and have some time, it would be great if you could take a look 🙏 . 
Otherwise, no problem! We could work on it internally :-) Thank you.
transformers
18,393
closed
Fix DeiT doc tests
# What does this PR do? Fixes the failing doctest for DeiT caused by copy-pasting from the PT model. The predicted class is deterministic with the set seed. However, as the RNGs for PyTorch and TensorFlow are different, it's a different class than in the PyTorch DeiT doctest. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
08-01-2022 09:33:43
08-01-2022 09:33:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>From the description & the change, it seems to me that DeiT has some random op(s) used during inference. Is this the case (and if yes, could you share which line that random op is on 🙏 )? Thank you.<|||||>Sure :) `DeiTForImageClassification` loaded from the official facebook hub weights will have its head randomly initialized. You can't see it directly in this diff, but there's a comment above on line L928 mentioning this.
transformers
18,392
closed
Fixing issue where generic model types wouldn't load properly with the pipeline
# What does this PR do? When this occurs https://github.com/huggingface/transformers/issues/17929 we can provide a better error message since this is detectable at load time and the fix should happen within `transformers`. Found out 3 odd cases which have been dealt with differently: - `translation` actually uses `translation_XX_to_YY` and also relies on `task_specific_params` for some model configs. I tried cleaning that up and using `task_specific_params` only once, but the rabbit hole is deep, and it would have meant more code changes that this PR should hold. Waiting for a subsequent PR. The issue is that `translation_XX_to_YY` is not a normalized task name and is not within `NO_TOKENIZER_TASKS` nor `NO_FEATURE_EXTRACTION_TASKS` so the configuration on wether we should load or not doesn't work. - `feature-extraction`. That one is extremely special, since ALL models could in theory use that pipeline, and so we cannot enforce or detect anything statically on what should be loaded or not. - `automatic-speech-recognition` has this `speech-encoder-decoder` type of model, which do not define any `tokenizer` class, so the `type(config)` is NOT within `TOKENIZER_MAPPING` (correctly), but the first version of the check would fail when deciding staticly if we should load the tokenizer or not. The fix was to check if the user passed a tokenizer or not (if tokenizer is passed we should never try to do anything anyway) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-01-2022 09:11:00
08-01-2022 09:11:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger It seems I had completely misunderstood what was going on there. I thought it was misconfiguration while it's more of a normal state of things (Wasn't aware we had added those generic models for vision too). My new proposed PR then actually fixes the underlying issue initially created #17929 . The way I did it is keep some manual bookkeeping for these "multi model" configurations (is the name right) ? Then if we are actually using one of these models, attempt to load the `Tokenizer`/`feature_Extractor` looking exclusively at whether or not the task requires one. This should fix the original issue, and it so happens we had a test that could easily be updated to support those use cases. What do you think of this approach ? If a user created a model and forgot to upload either one of the necessary components, the pipeline will simply fail to load attempting to load one of them. I think that sort of failure mode should be OK to understand and users should be able to recover on their own. So no need for error messages now. I am still keeping the regular way to detect if we need the tokenizer for other types of configs, but then we will still fail if the AutoTokenizer/FeatureExtractor is not correctly configured. I think maybe switching entirely to `NO_TOKENIZER_TASKS` detection seems easier in the long run but I didn't want to do such a change in a small PR. (`feature-extraction` will still be a corner case)
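A rough Python sketch of the kind of static check described in the comment above; the function and variable names here are illustrative only and do not claim to match the actual diff — only `NO_TOKENIZER_TASKS` and the tokenizer mapping are taken from the discussion itself.
```python
# Illustrative sketch only -- not the actual pipeline code from this PR.
# The constants below stand in for the NO_TOKENIZER_TASKS / TOKENIZER_MAPPING objects
# mentioned above; their real definitions live inside transformers.
NO_TOKENIZER_TASKS = {"image-classification", "object-detection"}  # placeholder contents
MULTI_MODEL_CONFIG_NAMES = {"SpeechEncoderDecoderConfig", "VisionEncoderDecoderConfig"}  # placeholder

def should_load_tokenizer(task, model_config, user_tokenizer, tokenizer_mapping):
    """Decide statically whether the pipeline should try to load a tokenizer."""
    if user_tokenizer is not None:
        # The user passed a tokenizer explicitly: use it, never guess.
        return False
    if type(model_config).__name__ in MULTI_MODEL_CONFIG_NAMES:
        # Generic multi-model configs: rely purely on whether the task needs text input.
        return task not in NO_TOKENIZER_TASKS
    # Regular configs: the auto mapping tells us whether a tokenizer class is registered.
    return type(model_config) in tokenizer_mapping and task not in NO_TOKENIZER_TASKS
```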
transformers
18,391
closed
Is run_clip.py an example of fine-tune or an example of training a vision-text model from scratch?
### System Info - `transformers` version: 4.21.0 - Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.12 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? Hi @patil-suraj I am new to CLIP, and while browsing [run_clip.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/run_clip.py), I have a question: is `run_clip.py` an example of fine-tuning or an example of training a vision-text model from scratch? Is it possible to fine-tune CLIP on my own dataset using `run_clip.py`? ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction python run_clip.py ### Expected behavior fine-tune
08-01-2022 09:10:17
08-01-2022 09:10:17
Hey @gongshaojie12, is the README available [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text) answering your questions? In the [code sample](https://github.com/huggingface/transformers/tree/main/examples/pytorch/contrastive-image-text#train-the-model), you'll see it's using @ydshieh's "coco_dataset_script" as a dataset; feel free to replace the dataset here with a similar dataset of yours to use the script on your own data.<|||||>Hi! The script is mostly for fine-tuning. But it's also possible to train from scratch - you just need to create a model, save it, and use it for the argument `model_name_or_path` - if this is what you'd like to do.<|||||>Hi, @LysandreJik @ydshieh Thank you very much for your reply. Maybe the first sentence in the README (`The following example showcases how to train a CLIP-like vision-text dual encoder model using a pre-trained vision and text encoder.`) confused me. This sentence makes me think that `run_clip.py` trains a CLIP-like model from scratch, rather than fine-tuning the existing CLIP model. If there is something wrong with my understanding, please correct me, thank you!<|||||>The doc mentions `using a pre-trained vision and text encoder.`, so I think there is no ambiguity here. It doesn't necessarily fine-tune the original CLIP checkpoints, though. You can use any text and image encoders. Closing this issue now. Don't hesitate if you have further questions, @gongshaojie12
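To illustrate the "create a model, save it, and pass it as `model_name_or_path`" suggestion above, here is a minimal sketch; the checkpoint names and the save path are just examples, and it assumes you want a CLIP-style dual encoder built from pre-trained text and vision backbones, as the example's README describes.
```python
from transformers import (
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
    AutoTokenizer,
    AutoFeatureExtractor,
)

# Build a fresh dual-encoder model from pre-trained vision and text backbones (example names).
model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "openai/clip-vit-base-patch32", "roberta-base"
)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
feature_extractor = AutoFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32")
processor = VisionTextDualEncoderProcessor(feature_extractor, tokenizer)

# Save everything locally, then point run_clip.py at this directory:
#   --model_name_or_path ./clip-roberta-from-scratch
model.save_pretrained("./clip-roberta-from-scratch")
processor.save_pretrained("./clip-roberta-from-scratch")
```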
transformers
18,390
closed
Incorrect learning rate when using 'cosine_with_restarts' scheduler type
### System Info - `transformers` version: 4.20.1 - Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Start training a Roberta model for masked LM using the following parameters. All parameters are shared for the sake of context, but the parameter that fails is "lr_scheduler_type". The learning rate remains fixed at a default value. ```py training_args = TrainingArguments( output_dir='./roberta', overwrite_output_dir=True, evaluation_strategy = 'steps', num_train_epochs=100, lr_scheduler_type='cosine_with_restarts', warmup_steps=100, weight_decay=0.01, per_device_train_batch_size=20, per_device_eval_batch_size=20, save_steps=2048, eval_steps=2048, save_total_limit=3, report_to="wandb", ignore_data_skip=True, gradient_accumulation_steps=4, gradient_checkpointing=True, fp16=True ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=val_dataset, #prediction_loss_only=True, ) ``` ### Expected behavior The learning rate should behave like configured and like documented for 'cosine_with_restarts', instead of remaining fixed through the epochs.
08-01-2022 09:01:31
08-01-2022 09:01:31
Could you explain to us how you are seeing the learning rate being constant? I just tried your code, adding a print statement for the learning rate at every step, and I see something that changes.<|||||>I'm using Weights & Biases at wandb.ai to keep track of my metrics ![image](https://user-images.githubusercontent.com/5707158/182159636-2c728deb-6f23-43eb-836c-dd6ae5041a7b.png) <|||||>training_args = TrainingArguments( output_dir='./roberta', overwrite_output_dir=True, evaluation_strategy = 'steps', num_train_epochs=100, learning_rate=1e-4, weight_decay=0.01, per_device_train_batch_size=20, per_device_eval_batch_size=20, save_steps=2048, eval_steps=2048, save_total_limit=3, report_to="wandb", ignore_data_skip=True, gradient_accumulation_steps=4, gradient_checkpointing=True, fp16=True ) Results in ![image](https://user-images.githubusercontent.com/5707158/182160682-3c14ecab-57f2-4526-acb4-a24e1d127e7c.png) <|||||>So this only shows the learning rates at steps that are multiples of 500, since that is the default logging step. I wouldn't trust a curve with 6 points on it.<|||||>Could you please share how you add print statements for each step?<|||||>By changing the source code of the Trainer. You can also log more frequently by changing the `logging_steps` argument.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
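A small sketch of the two suggestions above (values are illustrative): either lower `logging_steps` so the scheduler's learning rate is reported more often, or attach a simple callback that prints it every step, so there is no need to edit the Trainer source.
```python
from transformers import TrainingArguments, TrainerCallback

# 1) Log (and report to wandb) every 10 optimizer steps instead of the default 500.
training_args = TrainingArguments(
    output_dir="./roberta",
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=100,
    logging_steps=10,
    report_to="wandb",
)

# 2) Or print the current learning rate at every step with a callback.
class PrintLRCallback(TrainerCallback):
    def on_step_end(self, args, state, control, **kwargs):
        lr_scheduler = kwargs.get("lr_scheduler")
        if lr_scheduler is not None:
            print(f"step {state.global_step}: lr={lr_scheduler.get_last_lr()[0]:.3e}")

# trainer = Trainer(..., args=training_args, callbacks=[PrintLRCallback()])
```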
transformers
18,389
closed
Add a check regarding the number of occurrences of ```
# What does this PR do? We have a `TFOPTForCausalLM` doctest that failed due to a wrong expected value. The file `prepare_for_doc_test.py` didn't change that model file. This comes from the existence of **``decoder_input_ids```**. This PR adds a check, and also fixes all problematic places found with this new check.
08-01-2022 08:38:56
08-01-2022 08:38:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,388
closed
Mismatch between logits from generate and forward with an attention mask for most GPT models
### System Info - `transformers` version: 4.21.0 - Platform: Linux-3.10.0-1160.45.1.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.8 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.10.0+cu113 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @patil-suraj, @patrickvonplaten, @LysandreJik ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` """ MWE showing that logits from generate match those from forward, except for the first token? """ from transformers import AutoTokenizer, AutoModelForCausalLM from torch.distributions import Categorical import torch as t #Broken: model_name = "distilgpt2" #model_name = "gpt2" #model_name = "EleutherAI/gpt-neo-125M" #model_name = "EleutherAI/gpt-neo-1.3B" #Working: #model_name = "EleutherAI/gpt-j-6B" lm = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name, padding='right') tokenizer.pad_token = tokenizer.eos_token prompt = tokenizer(["big unpadded five token prompt ", "padded three token "], return_tensors='pt', padding=True, add_special_tokens=True) #generate with plain sampling (https://huggingface.co/blog/how-to-generate) result = lm.generate(prompt["input_ids"], attention_mask=prompt["attention_mask"], do_sample=True, output_scores=True, return_dict_in_generate=True, top_k=0, max_length=10) x, logits_gen = result.sequences, result.scores logits_gen = t.stack(logits_gen, 1) x_attention_mask = (x != tokenizer.eos_token_id).to(dtype=t.int64) position_ids = x_attention_mask.cumsum(-1)-1 print("Attention mask for prompt + generated text") print(x_attention_mask) print("Position IDs") print(position_ids) logits_for = lm(x, attention_mask=x_attention_mask, position_ids=position_ids).logits #we drop the last element, and the first prompt_length-1 elements to get #logits from forward to match those from generate logits_for = logits_for[:, (prompt["input_ids"].shape[-1]-1):-1] P_for = Categorical(logits = logits_for) P_gen = Categorical(logits = logits_gen) #Take only generated tokens x = x[..., prompt['input_ids'].shape[-1]:] log_prob_for = P_for.log_prob(x) log_prob_gen = P_gen.log_prob(x) print("log-probs from forward") print(log_prob_for) print("log-probs from generate") print(log_prob_gen) ``` ### Expected behavior I'm trying to get logits or log-probabilities from `generate` to match those from `forward` in the presence of a padded prompt. For GPT models, I managed to get almost everything working, by setting the `position_ids` for `forward` (see MWE script). However, there still seems to be a slight mismatch with the first token, if the prompt has an attention mask. 
You can see this in the returned output, from this script, which is: ``` Attention mask for prompt + generated text tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 0, 0, 1, 1, 1]]) Position IDs tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 4, 4, 4, 5, 6, 7]]) log-probs from forward tensor([[ -8.3152, -5.5587, -3.0973], [ -2.6509, -10.6300, -7.5426]], grad_fn=<SqueezeBackward1>) log-probs from generate tensor([[ -8.3152, -5.5587, -3.0973], [ -2.7818, -10.6300, -7.5426]]) ``` Note the slightly mismatch between the bottom-left log-prob, which doesn't happen for any other log-probability. I've tried a few GPT flavour models: we get problem for `distilgpt2`, `gpt2`, `EleutherAI/gpt-neo-125M` and `EleutherAI/gpt-neo-1.3B`. But the log-probs all match for `EleutherAI/gpt-j-6B`.
08-01-2022 08:34:12
08-01-2022 08:34:12
cc @gante for generate :)<|||||>Hi @LaurenceA 👋 With decoder-only models, such as the ones you mentioned, padding should be done on the left. This is because the output is a continuation of the input prompt -- there would be gaps in the output without left padding. Our code to automatically prepare the position IDs for a given attention mask in decoder-only models has left-sided padding in mind and differs from the one you wrote in your example, hence the output mismatch :) Not being aware that left-sided padding should be used for these models is a common issue. I'm leaving this issue open as a reminder that we should add some form of warning for users. ___________________________ 👉 [example of code to prepare the position IDs](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_gpt2.py#L1014) Here's your example, with left padding and the same position IDs creation method: ```python """ MWE showing that logits from generate match those from forward, except for the first token? """ from transformers import AutoTokenizer, AutoModelForCausalLM from torch.distributions import Categorical import torch as t #Broken: model_name = "distilgpt2" #model_name = "gpt2" #model_name = "EleutherAI/gpt-neo-125M" #model_name = "EleutherAI/gpt-neo-1.3B" #Working: #model_name = "EleutherAI/gpt-j-6B" lm = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left") tokenizer.pad_token = tokenizer.eos_token prompt = tokenizer(["big unpadded five token prompt ", "padded three token "], return_tensors='pt', padding=True, add_special_tokens=True) #generate with plain sampling (https://huggingface.co/blog/how-to-generate) result = lm.generate(prompt["input_ids"], attention_mask=prompt["attention_mask"], do_sample=True, output_scores=True, return_dict_in_generate=True, top_k=0, max_length=10) x, logits_gen = result.sequences, result.scores logits_gen = t.stack(logits_gen, 1) x_attention_mask = (x != tokenizer.eos_token_id).to(dtype=t.int64) position_ids = x_attention_mask.cumsum(-1)-1 position_ids.masked_fill_(x_attention_mask == 0, 1) print("Attention mask for prompt + generated text") print(x_attention_mask) print("Position IDs") print(position_ids) logits_for = lm(x, attention_mask=x_attention_mask, position_ids=position_ids).logits #we drop the last element, and the first prompt_length-1 elements to get #logits from forward to match those from generate logits_for = logits_for[:, (prompt["input_ids"].shape[-1]-1):-1] P_for = Categorical(logits = logits_for) P_gen = Categorical(logits = logits_gen) #Take only generated tokens x = x[..., prompt['input_ids'].shape[-1]:] log_prob_for = P_for.log_prob(x) log_prob_gen = P_gen.log_prob(x) print("log-probs from forward") print(log_prob_for) print("log-probs from generate") print(log_prob_gen) ```<|||||>@LaurenceA if you run `generate` from the current main, you should see a warning if you don't use left-padding with decoder-only models like GPT2 :) (#19067)
transformers
18,387
closed
Fix from_pretrained kwargs forward
I don't know whether `use_auth_token`, `cache_dir` and `local_files_only` should be passed to `(cls.slow_tokenizer_class)._from_pretrained`, but I guess they should. Please correct me if anything is wrong. # What does this PR do? Fixes #18385 @sgugger @LysandreJik BTW, I found #13523 and #14508 addressing similar problems, which shows the current implementation is vulnerable to this kind of problem; a refactor might be needed. A context manager might be suitable as a long-range dependency injector.
08-01-2022 07:47:35
08-01-2022 07:47:35
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,386
closed
Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
### System Info macOS, PyCharm, Python 3.7, transformers 4.20.1, torch 1.12.0, torch-scatter 2.0.9, tensorflow 2.3.0 ### Who can help? @NielsRogge ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When I run `from transformers import TapasForQuestionAnswering`, I get this error: Process finished with exit code 139 (interrupted by signal 11: SIGSEGV) ### Expected behavior I expect the import to succeed with nothing unusual happening.
08-01-2022 07:33:57
08-01-2022 07:33:57
It's likely that you have a mismatch between your torch and torch-scatter installs. Can you import torch-scatter on its own without the SIGSEGV? <|||||>> It's likely that you have a mismatch between your torch and torch-scatter installs. Can you import torch-scatter on its own without the SIGSEGV? Hi, I have installed the right version of torch-scatter (2.0.9) as shown in [https://pytorch-geometric.com/whl/](url), but I imported torch_scatter as you said and hit the SIGSEGV.<|||||>Sorry, I had the wrong version of torch; the problem is solved, thank you for your reply.<|||||>Great, happy you could solve the problem!<|||||>I faced the same issue and also fixed it by updating the version of PyTorch.
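Since the thread concludes that a torch / torch-scatter version mismatch was the culprit, here is a minimal sanity check; the wheel-index URL in the comment follows the pattern documented by PyTorch Geometric and should be adapted to your exact torch/CUDA versions.
```python
# Quick sanity check for a torch / torch-scatter mismatch (the usual cause of this SIGSEGV).
import torch

print("torch:", torch.__version__, "| cuda:", torch.version.cuda)

# If the next import crashes or segfaults, reinstall torch-scatter against the torch
# version printed above, e.g. (adapt the versions to your setup):
#   pip install torch-scatter -f https://data.pyg.org/whl/torch-1.12.0+cu102.html
import torch_scatter

print("torch_scatter:", torch_scatter.__version__)
```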
transformers
18,385
closed
`local_files_only` is not passed to `_from_pretrained` in `PreTrainedTokenizerBase.from_pretrained`
I run ```python processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32", local_files_only=True) ``` But a request is still sent to get the etag. https://github.com/huggingface/transformers/blob/b2e4b091f08f1aaf21855d588c6c8d284baba9eb/src/transformers/tokenization_utils_base.py#L1653-L1813 `local_files_only` is dropped when calling `_from_pretrained`, whether it is explicitly passed or implicitly set by the `is_offline_mode()` check. I think this behavior is buggy. Fortunately, `transformers` checks `is_offline_mode()` in `utils/hub.py`'s `cached_path`, so I can globally and permanently force `local_files_only` as a workaround in my use case. Only fixing `PreTrainedTokenizerBase.from_pretrained` is not enough; `_from_pretrained` doesn't pass `local_files_only` to `AutoConfig.from_pretrained` either. I'm working on a fix for it.
08-01-2022 07:05:10
08-01-2022 07:05:10
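As a stop-gap, the global workaround hinted at in the issue above (forcing offline mode so `is_offline_mode()` kicks in inside `utils/hub.py`) can be expressed like this; the environment variable is the standard `TRANSFORMERS_OFFLINE` switch, and the checkpoint is just the one from the report.
```python
# Workaround sketch: force offline mode globally before transformers is imported,
# so every hub lookup respects local files even where local_files_only is dropped.
import os

os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import CLIPProcessor

# Works as long as the files are already in the local cache.
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
```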
transformers
18,384
closed
HPO could be enabled by an HPO configuration file (yaml or json) instead of adding code explicitly in example.py
### Feature request now, the HPO should be enabled by adding some code in example.py(like run_glue.py) to indicate the HPO backend, metric, hp space. code like ``` def glue_hp_space(trial): return [ {"bounds": {"min": 1e-6, "max": 1e-4}, "name": "learning_rate", "type": "double"}, { "categorical_values": ["16", "32", "64", "128"], "name": "per_device_train_batch_size", "type": "categorical", }, ] def model_init(trial): return AutoModelForSequenceClassification.from_pretrained( model_args.model_name_or_path, from_tf=bool(".ckpt" in model_args.model_name_or_path), config=config, cache_dir=model_args.cache_dir, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) # Initialize our Trainer trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, compute_metrics=compute_metrics, model_init=model_init, tokenizer=tokenizer, data_collator=data_collator, ) best_trial = trainer.hyperparameter_search( direction="maximize", backend="sigopt", hp_space=glue_hp_space, n_trials=2 ) ``` could we add a configuration file and pass it as an argument to trainer for the easy usage of the HPO? yaml could like ``` HPO.yaml name: hg_glue_optimization backend: sigopt metrics: -name: accuracy strategy: optimize objective: maximize parameters: -name: learning_rate bounds: min: 1e-6 max: 1e-4 type: double -name: per_device_train_batch_size categorical_values: - 16 - 32 - 64 - 128 type: categorical_values trials: 10 ``` training_args could be responsible for the yaml parse and convert to input space format according to the different backend (Optuna, Sigopt, Wandb...) ### Motivation I am always frustrated when I need to modify the example code to enable HPO, and change it according to the different HPO backend ### Your contribution I could help submit PR after all are aligned on this point
08-01-2022 02:02:26
08-01-2022 02:02:26
@sgugger @yao-matrix @kding1 any comment?<|||||>The examples are meant to stay simple and readable so users can change them to their need (cause as mentioned in several places, they are just examples, not production apps). That's why we set the bar at training + evaluation and they don't support hyperparameters search out of the box.<|||||>> Agree that examples should stay simple and clear for easily ramp up. @sgugger , our question is actually "is it needed to unify HF's HPO configuration and run across different backends(Optuna, SigOpt etc.) ?", by supplying an unified configure interface(one example is the yaml style @sywangyi proposed). For data scientist, they can decouple their applications code from specific HPO tool; for HPO tool developer, easier for them to integrate into HF ecosystem. <|||||>@sgugger thanks for your comment, do you think it's necessary to apply the same yaml configuration in case the user is not quite familiar with so much different HPO backend?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
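To make the proposal a bit more concrete, here is a rough sketch of how such a yaml could be turned into the `hp_space` callable that `trainer.hyperparameter_search` already accepts today (Optuna-style shown here, assuming PyYAML is installed); the key names loosely mirror the yaml drafted in the feature request and are not an agreed-upon schema.
```python
import yaml

def hp_space_from_yaml(path):
    """Build an Optuna-style hp_space function from a config like the one proposed above."""
    with open(path) as f:
        cfg = yaml.safe_load(f)

    def hp_space(trial):
        space = {}
        for p in cfg["parameters"]:
            name = p["name"]
            if p["type"] == "double":
                # log-uniform is a common choice for learning rates
                space[name] = trial.suggest_float(name, p["bounds"]["min"], p["bounds"]["max"], log=True)
            elif p["type"] == "categorical":
                space[name] = trial.suggest_categorical(name, p["categorical_values"])
        return space

    return hp_space

# best_trial = trainer.hyperparameter_search(
#     direction="maximize", backend="optuna",
#     hp_space=hp_space_from_yaml("HPO.yaml"), n_trials=10,
# )
```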
transformers
18,383
closed
Encoder Decoder Model gives same generation results after finetuning
❓ Questions & Help Hi, everyone. I am using transformers(v 4.20.1) and try to build a seq2seq model for multi-label classification. However, I found the model always gives same generation results after finetuning. I found two related issues in github but there seems to exist no solution. Here's the code. Main logic is located in init and train method. ```python class Model(pl.LightningModule): def __init__(self, decoder_tokenizer, lr=1e-4, beam_size=1, num_decoder_layers=12,): super().__init__() self.pad_id = decoder_tokenizer.pad_token_id self.bos_id = decoder_tokenizer.bos_token_id self.eos_id = decoder_tokenizer.eos_token_id self.lr = lr self.beam_size = beam_size self.decoder_tokenizer = decoder_tokenizer encoder_config = RobertaConfig.from_pretrained('roberta-base') decoder_config = RobertaConfig(bos_token_id=self.bos_id, eos_token_id=self.eos_id, pad_token_id=self.pad_id) decoder_config.num_hidden_layers = num_decoder_layers self.config = EncoderDecoderConfig.from_encoder_decoder_configs( encoder_config, decoder_config) self.model = EncoderDecoderModel(self.config) self.decoder = self.model.get_decoder() self.decoder.resize_token_embeddings(decoder_tokenizer.vocab_size) self.model.config.vocab_size = self.model.config.decoder.vocab_size nn.init.xavier_uniform_(self.decoder.resize_token_embeddings().weight) self.model.config.decoder_start_token_id = self.bos_id self.model.config.pad_token_id = self.pad_id def training_step(self, batch, batch_idx): '''batch, a dict contains input_ids: ids for the input sequence attention_mask: mask for the input sequence labels: ids for the output sequence ''' self.model.train() loss = self.model(**batch).loss self.log("train_loss", loss, on_step=True, on_epoch=True, prog_bar=True, logger=True) self.model.eval() with torch.no_grad(): predictions = self.model.generate( input_ids=batch['input_ids'], attention_mask=batch['attention_mask'], num_beams=self.beam_size, min_length=7, max_length=7, no_repeat_ngram_size=1, do_sample=False).cpu().numpy().tolist() labels=batch['labels'].cpu().numpy().tolist() for pred,label in zip(predictions,labels): logger.info(f'pred: {pred}, label: {label}') return loss def configure_optimizers(self): optimizer = torch.optim.AdamW(self.parameters(), lr=self.lr) return optimizer ``` The phenomenon is: - At the begin of the training, a sanity check starts. I sample some generation results(demonstrated below), it can be seen that the initialized-model is able to generate different predictions. ``` INFO: idx: 0, pred: [101, 1595, 1438, 985, 3304, 3195, 800], label: [101, 122, 153, 174, 1161, 1618, 102, -100, -100, -100] INFO: idx: 6, pred: [101, 1595, 1438, 985, 3304, 3195, 800], label: [101, 338, 498, 587, 2905, 102, -100, -100, -100, -100] INFO: idx: 7, pred: [101, 1595, 1438, 985, 3304, 3195, 800], label: [101, 109, 112, 143, 164, 278, 973, 102, -100, -100] INFO: idx: 9, pred: [101, 1595, 1438, 985, 3304, 3195, 800], label: [101, 109, 112, 116, 137, 174, 260, 102, -100, -100] INFO: idx: 10, pred: [101, 1595, 1438, 1135, 3886, 3698, 1406], label: [101, 107, 115, 119, 123, 310, 431, 102, -100, -100] ``` - After finetuning only 1 update, the model starts to generate same results, for both trained samples and unseen validation samples, until the end of the training(30 epochs). 
``` INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 107, 119, 123, 168, 243, 306, 102, -100] INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 105, 109, 195, 230, 587, 1617, 2375, 102] INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 105, 107, 123, 559, 716, 1376, 102, -100] INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 111, 130, 168, 183, 256, 102, -100, -100] INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 122, 142, 222, 336, 2072, 2248, 102, -100] INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 105, 147, 159, 355, 795, 102, -100, -100] INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 111, 113, 232, 261, 651, 849, 102, -100] INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 149, 150, 730, 1356, 2940, 102, -100, -100] INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 113, 179, 211, 523, 996, 1366, 102, -100] INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 154, 1002, 1040, 102, -100, -100, -100, -100] INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 254, 984, 1238, 102, -100, -100, -100, -100] INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 132, 289, 504, 730, 895, 2450, 102, -100] INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 105, 109, 137, 260, 303, 461, 888, 102] INFO: pred: [101, 425, 1385, 348, 2779, 3703, 1902], label: [101, 107, 131, 161, 205, 259, 763, 102, -100] ``` I've been stuck at the problem for almost a week, finding no solutions yet. I checked the model architecture and did find cross attention layers in decoder. I've also checked the data format and related logic, all works well, so I omitted this part for simplicity. Therefore I think the bug might exists in the model side, but I haven't found useful info from the docs or google results. Any kind of help is appreciated. Thanks very much!
08-01-2022 01:08:36
08-01-2022 01:08:36
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>> Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? > > Thanks! Of course. I'll migrate it and close this issue. Thanks for your reply!
transformers
18,382
closed
Fix custom config loading for clip model
# What does this PR do? Fixes # (issue) In the CLIP model, CLIPTextTransformer and CLIPVisionTransformer load the original CLIP config file even though custom parameters are given in the new config file.
07-31-2022 22:21:03
07-31-2022 22:21:03
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18382). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @avinashsai, I don't understand what you're trying to do: two lines above your changes is a `self.config = config`, so you're working with the same object. Could you shed some light on what doesn't work so I can help you out? Thanks :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
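For readers landing here, the usual way to give CLIP custom text/vision parameters through the config (which is what the PR description seems to be about) looks roughly like this; the hyper-parameter values are arbitrary examples, not a recommended configuration.
```python
from transformers import CLIPConfig, CLIPTextConfig, CLIPVisionConfig, CLIPModel

# Custom (non-default) sub-configs; values are arbitrary examples.
text_config = CLIPTextConfig(hidden_size=256, num_hidden_layers=4, num_attention_heads=4)
vision_config = CLIPVisionConfig(hidden_size=384, num_hidden_layers=6, num_attention_heads=6)

config = CLIPConfig.from_text_vision_configs(text_config, vision_config)
model = CLIPModel(config)

# The sub-modules should pick up the custom values rather than the original CLIP defaults.
print(model.config.text_config.hidden_size)    # 256
print(model.config.vision_config.hidden_size)  # 384
```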
transformers
18,381
closed
Summarisation example fails to run on given example. Missing positional argument TypeError
### System Info ``` - `transformers` version: 4.21.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.12.0+cu113 (True) - Tensorflow version (GPU?): 2.8.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ``` ### Who can help? @sgugger @pati ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am trying to fine tune my own summarisation model based on the example in `transformers/examples/pytorch/summarization/run_summarization_no_trainer.py` but it when I first tried on the example given in the repository. link to [Google Colab to reproduce error](https://colab.research.google.com/drive/1Jk7-1hC6wAac8Ejh57URcalcRzzrC2Nd?usp=sharing) ``` !accelerate launch /content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir ~/tmp/tst-summarization ``` I'm getting the following error ``` Traceback (most recent call last): File "/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py", line 763, in <module> main() File "/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py", line 493, in main desc="Running tokenizer on dataset", File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 790, in map for k, dataset in self.items() File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 790, in <dictcomp> for k, dataset in self.items() File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2405, in map desc=desc, File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 524, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py", line 480, in wrapper out = func(self, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2779, in _map_single offset=offset, File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2347, in decorated result = f(decorated_item, *args, **kwargs) File "/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py", line 474, in preprocess_function labels = tokenizer(text_target=targets, max_length=max_target_length, padding=padding, truncation=True) TypeError: __call__() missing 1 required positional argument: 'text' Traceback (most recent call last): File "/usr/local/bin/accelerate", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main args.func(args) File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 826, in 
launch_command simple_launcher(args) File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 358, in simple_launcher raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd) subprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py', '--model_name_or_path', 't5-small', '--dataset_name', 'cnn_dailymail', '--dataset_config', '3.0.0', '--source_prefix', 'summarize: ', '--output_dir', '/root/tmp/tst-summarization']' returned non-zero exit status 1. ``` ### Expected behavior The model should start training
07-31-2022 16:40:48
07-31-2022 16:40:48
Aha, that's one for @sgugger, linked to https://github.com/huggingface/transformers/pull/18325<|||||>You need to use the main version of Transformers to use the main version of the example scripts. You can find the examples for v4.21.0 [here](https://github.com/huggingface/transformers/tree/v4.21.0/examples).<|||||>Thank you @sgugger @LysandreJik , it works perfectly now<|||||>hey, sorry to bother you again @sgugger , but, this is the output I'm getting when I'm running the script on my own dataset ``` All the weights of BartForConditionalGeneration were initialized from the model checkpoint at ainize/bart-base-cnn. If your task is similar to the task the model of the checkpoint was trained on, you can already use BartForConditionalGeneration for predictions without further training. Running tokenizer on dataset: 100% 1/1 [00:00<00:00, 215.96ba/s] Running tokenizer on dataset: 100% 1/1 [00:00<00:00, 342.76ba/s] 08/01/2022 12:58:50 - INFO - __main__ - Sample 27 of the training set: {'input_ids': [0, 6323, 34638, 251, 2788, 2], 'attention_mask': [1, 1, 1, 1, 1, 1], 'labels': [0, 12465, 765, 2788, 2]}. 08/01/2022 12:58:52 - INFO - __main__ - ***** Running training ***** 08/01/2022 12:58:52 - INFO - __main__ - Num examples = 32 08/01/2022 12:58:52 - INFO - __main__ - Num Epochs = 3 08/01/2022 12:58:52 - INFO - __main__ - Instantaneous batch size per device = 8 08/01/2022 12:58:52 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 8 08/01/2022 12:58:52 - INFO - __main__ - Gradient Accumulation steps = 1 08/01/2022 12:58:52 - INFO - __main__ - Total optimization steps = 12 33% 4/12 [00:01<00:01, 4.60it/s]08/01/2022 12:58:54 - INFO - absl - Using default tokenizer. Traceback (most recent call last): File "/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py", line 764, in <module> main() File "/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py", line 711, in main result = {key: value.mid.fmeasure * 100 for key, value in result.items()} File "/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py", line 711, in <dictcomp> result = {key: value.mid.fmeasure * 100 for key, value in result.items()} AttributeError: 'numpy.float64' object has no attribute 'mid' 33% 4/12 [00:01<00:03, 2.11it/s] Traceback (most recent call last): File "/usr/local/bin/accelerate", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main args.func(args) File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 826, in launch_command simple_launcher(args) File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 358, in simple_launcher raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd) subprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py', '--model_name_or_path', 'ainize/bart-base-cnn', '--train_file', '/content/test.csv', '--validation_file', '/content/test.csv', '--summary_column', 'Summary', '--text_column', 'Text', '--output_dir', '/content/model']' returned non-zero exit status 1. 
``` The code I'm using to launch the script is ``` !accelerate launch /content/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py \ --model_name_or_path ainize/bart-base-cnn \ --train_file /content/test.csv \ --validation_file /content/test.csv \ --summary_column Summary \ --text_column Text \ --output_dir /content/model ``` the test.csv file is below [test.csv](https://github.com/huggingface/transformers/files/9234094/test.csv) <|||||>Yes, it looks like `evaluate` decided to break the rouge metric. Sending a fix!
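For anyone hitting the same `AttributeError` before the fix lands, a sketch of the adjustment: newer `evaluate` versions return plain aggregated floats from `rouge.compute`, so the `.mid.fmeasure` post-processing can be replaced along these lines (the tiny inputs below just stand in for the script's decoded predictions and labels).
```python
import evaluate

metric = evaluate.load("rouge")

# Tiny illustrative inputs standing in for the script's decoded predictions/labels.
decoded_preds = ["the cat sat on the mat"]
decoded_labels = ["the cat is on the mat"]

result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
# Newer evaluate versions return plain aggregated floats, so `.mid.fmeasure` is gone:
result = {key: round(value * 100, 4) for key, value in result.items()}
print(result)
```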
transformers
18,380
closed
LayoutLM-based visual question answering model, weights, and pipeline
### Feature request Question answering is an important problem for both text and documents. The question-answering pipeline makes it very easy to work with plain text and includes helpful utilities (like [post-processing start/end candidates](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/question_answering.py#L510)). It'd be amazing for question answering on documents to be _that_ easy. The primary goal of this feature request is to extend either the question answering or visual question answering pipeline to be as easy to use as, for example, the [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) model. LayoutLM is a great model architecture for solving this problem and @NielsRogge's [notebook example](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/DocVQA/Fine_tuning_LayoutLMv2ForQuestionAnswering_on_DocVQA.ipynb) even shows you how to fine tune the model for this use case. I think it'd be very powerful for a number of use cases if it were _as easy_ to use LayoutLM for document question answering as it is to use BERT-like models for text question answering. This will require a few additions, all of which I have working code for that I'd be happy to contribute: 1. Extend the `QuestionAnsweringPipeline` or `VisualQuestionAnsweringPipeline` pipeline to support document inputs. I _think_ the latter would be the right pipeline, since it already takes an image as input, but ideally could also take a list of words+bounding boxes as input (in case users want to run their own OCR). 2. Hook up `LayoutLMv2ForQuestionAnswering` and `LayoutLMv3ForQuestionAnswering` to the pipeline. Ideally, there would also be `LayoutLMForQuestionAnswering`, since v2 and v3 are not licensed for commercial use. 2. Publish pre-trained model weights with an easy-to-follow model card. I found a few examples of fine-tuned layoutlm for QA models (e.g. [this](https://huggingface.co/tiennvcs/layoutlmv2-base-uncased-finetuned-docvqa)), but could not get them to run easily. For example, the "hosted inference API" UI throws an error when you try to run it. I think the visual question answering UI (which lets you load an image) might be a better fit. But I am very open to discussion on what the best experience would be. ### Motivation When we started using transformers, we saw the `question-answering` pipeline and we're blown away by how easy it was to use for text-based extractive QA. We were hoping it'd be "that easy" for document QA, but couldn't find pre-trained weights or a pipeline implementation. Thanks to [this tutorial](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/DocVQA/Fine_tuning_LayoutLMv2ForQuestionAnswering_on_DocVQA.ipynb), however, we were able to fine tune our own model and get it running. That inspired us to wonder -- could we make it _that_ easy for Document QA too? ### Your contribution We have working code for all of the proposed feature requests that we'd be happy to contribute. We also have a pre-trained model that we're happy to upload along with an easy-to-follow model card. Since there are a few changes proposed here, it might be worthwhile to break this into multiple issues/PRs, or we can do it all at once (however works best within your processes).
07-31-2022 16:09:35
07-31-2022 16:09:35
cc @Narsil as well as @NielsRogge <|||||>Thank you for this proposal ! It is really well thought out and everything you mention is pertinent. Adding support would be really awesome ! - We probably need to use VisualQuestionAnswering for this one. What defines a pipeline is the set of input/output so as far as I understand that would fit (image+question_text, output is a list of strings with scores attached, in decreasing order of `top_k`). Actually for this one, we might be able to return the bbox in addition so that we could visually show where the information is in the original document. (Optionally extra information is OK, but pipelines can't change the core input/output so that users can easily switch between models/architectures). - As far as I understand, the main reason we haven't already included the pipeline is because of the OCR. I think we actually can include it in the pipeline if it's easy to install (single dependency addition) and if we provide a clear error message when it's missing. We're already using `ffmpeg` for audio pipelines when it's missing, and `kenlm` when there's an n-gram layer with the model. Those are all pipeline specific so not necessary for `transformers` but they do make users' lives easier. - For differentiating between layout and other models, we tend not to focus on actual model names (like `layoutLM`) but more on the model's `ForXX` name (`ForDocumentQuestionAnswering` maybe @NielsRogge ?), as they should have a consistent API. So when a new model comes around and implements the same API, there's no additional work for the pipeline (99% of the time at least). Feel free to start the PRs and ping me as early on as you want (so I can help with the details). Here is the doc on adding new pipelines; most of it is not necessary since `vqa` already exists, but it should help with the overall design. https://huggingface.co/docs/transformers/v4.21.0/en/add_new_pipeline#adding-it-to-the-list-of-supported-tasks2 Cheers, and thanks for the proposal !<|||||>@Narsil that's great to hear! I will start sending pieces as PRs and tag you for feedback.<|||||>Re-opening this as we're still working on the pipeline.
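To make the goal of the proposal concrete, here is a rough sketch of the kind of call the contributors are describing. The task name and checkpoint are illustrative assumptions for this discussion, not an interface that existed at the time of the request:

```python
# Sketch only: assumed task name and checkpoint, shown to illustrate the
# desired "as easy as text QA" experience for documents.
from transformers import pipeline

doc_qa = pipeline(
    "document-question-answering",        # assumed task name
    model="impira/layoutlm-document-qa",  # assumed LayoutLM checkpoint fine-tuned for QA
)

# The pipeline would OCR the image (or accept user-provided words + boxes)
# and run extractive QA over the document tokens.
result = doc_qa(image="invoice.png", question="What is the invoice number?")
print(result)  # e.g. [{"answer": "...", "score": 0.98, "start": ..., "end": ...}]
```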
transformers
18,379
closed
raise RuntimeError("Failed to load audio from {}".format(filepath))
### System Info i want to run run_speech_recognition_ctc.py but i got the error when run the Single GPU CTC script. `python run_speech_recognition_ctc.py \ --dataset_name="common_voice" \ --model_name_or_path="facebook/wav2vec2-large-xlsr-53" \ --dataset_config_name="tr" \ --output_dir="./wav2vec2-common_voice-tr-demo" \ --overwrite_output_dir \ --num_train_epochs="15" \ --per_device_train_batch_size="16" \ --gradient_accumulation_steps="2" \ --learning_rate="3e-4" \ --warmup_steps="500" \ --evaluation_strategy="steps" \ --text_column_name="sentence" \ --length_column_name="input_length" \ --save_steps="400" \ --eval_steps="100" \ --layerdrop="0.0" \ --save_total_limit="3" \ --freeze_feature_encoder \ --gradient_checkpointing \ --chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \ --fp16 \ --group_by_length \ --push_to_hub \ --do_train --do_eval ` The ERROR : ` raise RuntimeError("Failed to load audio from {}".format(filepath))` `RuntimeError: Failed to load audio from /root/.cache/huggingface/datasets/downloads/extracted``/05be0c29807a73c9b099873d2f5975dae6d05e9f7d577458a2466ecb9a2b0c6b/cv-corpus-6.1-2020-12-11/tr/clips``/common_voice_tr_17346025.mp3` ### Who can help? @patrickvonplaten @anton-l ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction i just run the steps written on example folder ### Expected behavior i just want to get the result
07-31-2022 12:36:29
07-31-2022 12:36:29
> feedback response with error code I didn't get what do you mean? <|||||>Hey @mehrdad78, could you share the full stack trace?<|||||>> Hey @mehrdad78, could you share the full stack trace? Yes,sure. here is my colab notebook:[https://colab.research.google.com/drive/1jNdztD-Kkk8MCkzPLlLXVr0Z2jSgpkM8?usp=sharing](url) and the stack trace: ``` `_n_gpu=1, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_pin_memory=True, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, debug=[], deepspeed=None, disable_tqdm=False, do_eval=True, do_predict=False, do_train=True, eval_accumulation_steps=None, eval_delay=0, eval_steps=100, evaluation_strategy=steps, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=2, gradient_checkpointing=True, greater_is_better=None, group_by_length=True, half_precision_backend=auto, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=<HUB_TOKEN>, ignore_data_skip=False, include_inputs_for_metrics=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0003, length_column_name=input_length, load_best_model_at_end=False, local_rank=-1, log_level=-1, log_level_replica=-1, log_on_each_node=True, logging_dir=/content/transformers/examples/pytorch/speech-recognition/wav2vec2-common_voice-ru-demo/runs/Aug01_10-37-50_87323b63b7db, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=500, logging_strategy=steps, lr_scheduler_type=linear, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, no_cuda=False, num_train_epochs=15.0, optim=adamw_hf, output_dir=/content/transformers/examples/pytorch/speech-recognition/wav2vec2-common_voice-ru-demo, overwrite_output_dir=True, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=16, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=<PUSH_TO_HUB_TOKEN>, ray_scope=last, remove_unused_columns=True, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/content/transformers/examples/pytorch/speech-recognition/wav2vec2-common_voice-ru-demo, save_on_each_node=False, save_steps=400, save_strategy=steps, save_total_limit=3, seed=42, sharded_ddp=[], skip_memory_metrics=True, tf32=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_ipex=False, use_legacy_prediction_loop=False, warmup_ratio=0.0, warmup_steps=500, weight_decay=0.0, xpu_backend=None, ) Downloading builder script: 26.4kB [00:00, 24.1MB/s] Downloading metadata: 174kB [00:00, 88.1MB/s] Downloading and preparing dataset common_voice/ru (download: 3.40 GiB, generated: 4.88 GiB, post-processed: Unknown size, total: 8.29 GiB) to /root/.cache/huggingface/datasets/common_voice/ru/6.1.0/a1dc74461f6c839bfe1e8cf1262fd4cf24297e3fbd4087a711bd090779023a5e... Downloading data: 100% 3.66G/3.66G [01:57<00:00, 31.0MB/s] Dataset common_voice downloaded and prepared to /root/.cache/huggingface/datasets/common_voice/ru/6.1.0/a1dc74461f6c839bfe1e8cf1262fd4cf24297e3fbd4087a711bd090779023a5e. Subsequent calls will reuse this data. 
08/01/2022 10:43:49 - WARNING - datasets.builder - Reusing dataset common_voice (/root/.cache/huggingface/datasets/common_voice/ru/6.1.0/a1dc74461f6c839bfe1e8cf1262fd4cf24297e3fbd4087a711bd090779023a5e) remove special characters from datasets: 100% 23444/23444 [00:03<00:00, 7780.78ex/s] remove special characters from datasets: 100% 8007/8007 [00:01<00:00, 7715.10ex/s] https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp2g_x442y Downloading config.json: 100% 1.73k/1.73k [00:00<00:00, 2.68MB/s] storing https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/config.json in cache at /root/.cache/huggingface/transformers/8508c73cd595eb416a1d517b90762416c0bc6cfbef529578079aeae4d8c14336.7581ed2ee0c677f1e933180df51bd1a668c4a2b6d5fd1297d32069373dac097c creating metadata file for /root/.cache/huggingface/transformers/8508c73cd595eb416a1d517b90762416c0bc6cfbef529578079aeae4d8c14336.7581ed2ee0c677f1e933180df51bd1a668c4a2b6d5fd1297d32069373dac097c loading configuration file https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/8508c73cd595eb416a1d517b90762416c0bc6cfbef529578079aeae4d8c14336.7581ed2ee0c677f1e933180df51bd1a668c4a2b6d5fd1297d32069373dac097c Model config Wav2Vec2Config { "_name_or_path": "facebook/wav2vec2-large-xlsr-53", "activation_dropout": 0.0, "adapter_kernel_size": 3, "adapter_stride": 2, "add_adapter": false, "apply_spec_augment": true, "architectures": [ "Wav2Vec2ForPreTraining" ], "attention_dropout": 0.1, "bos_token_id": 1, "classifier_proj_size": 256, "codevector_dim": 768, "contrastive_logits_temperature": 0.1, "conv_bias": true, "conv_dim": [ 512, 512, 512, 512, 512, 512, 512 ], "conv_kernel": [ 10, 3, 3, 3, 3, 2, 2 ], "conv_stride": [ 5, 2, 2, 2, 2, 2, 2 ], "ctc_loss_reduction": "sum", "ctc_zero_infinity": false, "diversity_loss_weight": 0.1, "do_stable_layer_norm": true, "eos_token_id": 2, "feat_extract_activation": "gelu", "feat_extract_dropout": 0.0, "feat_extract_norm": "layer", "feat_proj_dropout": 0.1, "feat_quantizer_dropout": 0.0, "final_dropout": 0.0, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout": 0.1, "hidden_size": 1024, "initializer_range": 0.02, "intermediate_size": 4096, "layer_norm_eps": 1e-05, "layerdrop": 0.1, "mask_channel_length": 10, "mask_channel_min_space": 1, "mask_channel_other": 0.0, "mask_channel_prob": 0.0, "mask_channel_selection": "static", "mask_feature_length": 10, "mask_feature_min_masks": 0, "mask_feature_prob": 0.0, "mask_time_length": 10, "mask_time_min_masks": 2, "mask_time_min_space": 1, "mask_time_other": 0.0, "mask_time_prob": 0.075, "mask_time_selection": "static", "model_type": "wav2vec2", "num_adapter_layers": 3, "num_attention_heads": 16, "num_codevector_groups": 2, "num_codevectors_per_group": 320, "num_conv_pos_embedding_groups": 16, "num_conv_pos_embeddings": 128, "num_feat_extract_layers": 7, "num_hidden_layers": 24, "num_negatives": 100, "output_hidden_size": 1024, "pad_token_id": 0, "proj_codevector_dim": 768, "tdnn_dilation": [ 1, 2, 3, 1, 1 ], "tdnn_dim": [ 512, 512, 512, 512, 1500 ], "tdnn_kernel": [ 5, 3, 3, 1, 1 ], "transformers_version": "4.22.0.dev0", "use_weighted_layer_sum": false, "vocab_size": 32, "xvector_output_dim": 512 } 100% 1/1 [00:00<00:00, 2.69ba/s] 100% 1/1 [00:00<00:00, 8.21ba/s] Didn't find file 
/content/transformers/examples/pytorch/speech-recognition/wav2vec2-common_voice-ru-demo/tokenizer_config.json. We won't load it. Didn't find file /content/transformers/examples/pytorch/speech-recognition/wav2vec2-common_voice-ru-demo/added_tokens.json. We won't load it. Didn't find file /content/transformers/examples/pytorch/speech-recognition/wav2vec2-common_voice-ru-demo/special_tokens_map.json. We won't load it. loading file /content/transformers/examples/pytorch/speech-recognition/wav2vec2-common_voice-ru-demo/vocab.json loading file None loading file None loading file None Adding <s> to the vocabulary Adding </s> to the vocabulary Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/preprocessor_config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpwqmsvu6p Downloading preprocessor_config.json: 100% 212/212 [00:00<00:00, 360kB/s] storing https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/preprocessor_config.json in cache at /root/.cache/huggingface/transformers/281aea0033110ab616ee4c2840ee83ed30496bb549916b8aec6c5668109f9e79.d4484dc1c81456a2461485e7168b04347a7b9a4e3b1ef3aba723323b33e12326 creating metadata file for /root/.cache/huggingface/transformers/281aea0033110ab616ee4c2840ee83ed30496bb549916b8aec6c5668109f9e79.d4484dc1c81456a2461485e7168b04347a7b9a4e3b1ef3aba723323b33e12326 loading feature extractor configuration file https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/preprocessor_config.json from cache at /root/.cache/huggingface/transformers/281aea0033110ab616ee4c2840ee83ed30496bb549916b8aec6c5668109f9e79.d4484dc1c81456a2461485e7168b04347a7b9a4e3b1ef3aba723323b33e12326 Feature extractor Wav2Vec2FeatureExtractor { "do_normalize": true, "feature_extractor_type": "Wav2Vec2FeatureExtractor", "feature_size": 1, "padding_side": "right", "padding_value": 0, "return_attention_mask": true, "sampling_rate": 16000 } https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/pytorch_model.bin not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpio5rku8q Downloading pytorch_model.bin: 100% 1.18G/1.18G [00:19<00:00, 65.5MB/s] storing https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/pytorch_model.bin in cache at /root/.cache/huggingface/transformers/5d2a20b45a1689a376ec4a6282b9d9be42f931cdf8daf07c3668ba1070a059d9.622b46163a38532eae8ac5423b0481dfc0b9ea401af488b5141772bdff889079 creating metadata file for /root/.cache/huggingface/transformers/5d2a20b45a1689a376ec4a6282b9d9be42f931cdf8daf07c3668ba1070a059d9.622b46163a38532eae8ac5423b0481dfc0b9ea401af488b5141772bdff889079 loading weights file https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/5d2a20b45a1689a376ec4a6282b9d9be42f931cdf8daf07c3668ba1070a059d9.622b46163a38532eae8ac5423b0481dfc0b9ea401af488b5141772bdff889079 Some weights of the model checkpoint at facebook/wav2vec2-large-xlsr-53 were not used when initializing Wav2Vec2ForCTC: ['project_hid.bias', 'project_hid.weight', 'quantizer.weight_proj.weight', 'quantizer.weight_proj.bias', 'project_q.weight', 'project_q.bias', 'quantizer.codevectors'] - This IS expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-large-xlsr-53 and are newly initialized: ['lm_head.weight', 'lm_head.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. preprocess datasets: 0% 0/23444 [00:00<?, ?ex/s] Traceback (most recent call last): File "run_speech_recognition_ctc.py", line 769, in <module> main() File "run_speech_recognition_ctc.py", line 628, in main desc="preprocess datasets", File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 790, in map for k, dataset in self.items() File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 790, in <dictcomp> for k, dataset in self.items() File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2405, in map desc=desc, File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 524, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py", line 480, in wrapper out = func(self, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2756, in _map_single example = apply_function_on_filtered_inputs(example, i, offset=offset) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2347, in decorated result = f(decorated_item, *args, **kwargs) File "run_speech_recognition_ctc.py", line 609, in prepare_dataset sample = batch[audio_column_name] File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 123, in __getitem__ value = decode_nested_example(self.features[key], value) if value is not None else None File "/usr/local/lib/python3.7/dist-packages/datasets/features/features.py", line 1260, in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None File "/usr/local/lib/python3.7/dist-packages/datasets/features/audio.py", line 144, in decode_example array, sampling_rate = self._decode_mp3(file if file else path) File "/usr/local/lib/python3.7/dist-packages/datasets/features/audio.py", line 293, in _decode_mp3 array, sampling_rate = torchaudio.load(path_or_file, format="mp3") File "/usr/local/lib/python3.7/dist-packages/torchaudio/backend/sox_io_backend.py", line 227, in load return _fallback_load(filepath, frame_offset, num_frames, normalize, channels_first, format) File "/usr/local/lib/python3.7/dist-packages/torchaudio/backend/sox_io_backend.py", line 29, in _fail_load raise RuntimeError("Failed to load audio from {}".format(filepath)) RuntimeError: Failed to load audio from 
/root/.cache/huggingface/datasets/downloads/extracted/707cd877a91cbe3455d83b9f62c3656e094f633f257743683372c05f4620af3b/cv-corpus-6.1-2020-12-11/ru/clips/common_voice_ru_18849051.mp3` ```<|||||>Have you ever encountered this error @albertvillanova @mariosasko ?<|||||>Hi @mehrdad78, thanks for reporting (and thanks @LysandreJik for drawing my attention to this). I have manually checked the TAR file, its content and specifically the MP3 file raising the error: `cv-corpus-6.1-2020-12-11/ru/clips/common_voice_ru_18849051.mp3` I can load it without any problem (our Datasets library, under the hood uses `torchaudio` for mp3 files): ```python In [1]: import torchaudio In [2]: path = "./data/common_voice/ru/cv-corpus-6.1-2020-12-11/ru/clips/common_voice_ru_18849051.mp3" In [3]: data = torchaudio.load(path, format="mp3") In [4]: data Out[4]: (tensor([[ 0.0000e+00, 0.0000e+00, 0.0000e+00, ..., -2.6095e-04, 3.2425e-05, 8.8751e-05]]), 48000) ``` This makes me think that maybe the source of your issue is `sox`. This is a non-Python dependency that must be installed manually using your operating system package manager, e.g. ```shell sudo apt-get install sox ``` You have the installation instruction of Datasets with support for Audio in our docs: [Installation > Audio](https://huggingface.co/docs/datasets/installation#audio)<|||||>Issue opened in Datasets to raise a more actionable error message: - https://github.com/huggingface/datasets/issues/4776<|||||>> Hi @mehrdad78, thanks for reporting (and thanks @LysandreJik for drawing my attention to this). > > I have manually checked the TAR file, its content and specifically the MP3 file raising the error: `cv-corpus-6.1-2020-12-11/ru/clips/common_voice_ru_18849051.mp3` > > I can load it without any problem (our Datasets library, under the hood uses `torchaudio` for mp3 files): > > ```python > In [1]: import torchaudio > > In [2]: path = "./data/common_voice/ru/cv-corpus-6.1-2020-12-11/ru/clips/common_voice_ru_18849051.mp3" > > In [3]: data = torchaudio.load(path, format="mp3") > > In [4]: data > Out[4]: > (tensor([[ 0.0000e+00, 0.0000e+00, 0.0000e+00, ..., -2.6095e-04, > 3.2425e-05, 8.8751e-05]]), > 48000) > ``` > > This makes me think that maybe the source of your issue is `sox`. This is a non-Python dependency that must be installed manually using your operating system package manager, e.g. > > ```shell > sudo apt-get install sox > ``` > > You have the installation instruction of Datasets with support for Audio in our docs: [Installation > Audio](https://huggingface.co/docs/datasets/installation#audio) Thank you. I try it and report the result. <|||||>I have just read that apparently there is a backend change in latest `torchaudio` release. Therefore, `torchaudio` version should be restricted so that it continues using `sox` backend, as expected by `datasets`. ``` pip install "torchaudio<0.12.0" ``` We should address this issue to support latest torchaudio.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
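A small sanity-check sketch related to the fix suggested above (the path is a placeholder for any local mp3 clip): verify which backend torchaudio picked and that it can decode an mp3 on its own, before involving `datasets`. On torchaudio < 0.12 the `sox_io` backend is the one expected to handle mp3 here:

```python
# Sanity check: does torchaudio alone decode mp3 with the expected backend?
import torchaudio

print(torchaudio.__version__)
print(torchaudio.get_audio_backend())  # expect "sox_io" for mp3 support in this setup

path = "common_voice_tr_17346025.mp3"  # placeholder: any local mp3 clip
waveform, sampling_rate = torchaudio.load(path, format="mp3")
print(waveform.shape, sampling_rate)
```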
transformers
18,378
closed
NLLB-200 is too slow
### Feature request When I use the model 'facebook/nllb-200-distilled-600M' to translate, it takes 0.5 seconds, and it does not look async. I want to make it take 0.1 seconds and make it async. Is there anyone who can help me? Thanks! ### Motivation When I use the model 'facebook/nllb-200-distilled-600M' to translate, it takes 0.5 seconds, and it does not look async. I want to make it take 0.1 seconds and make it async. Is there anyone who can help me? Thanks! ### Your contribution When I use the model 'facebook/nllb-200-distilled-600M' to translate, it takes 0.5 seconds, and it does not look async. I want to make it take 0.1 seconds and make it async. Is there anyone who can help me? Thanks!
07-31-2022 09:32:20
07-31-2022 09:32:20
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
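On the "make it async" part of the request: generation in `transformers` is a blocking call, so a common pattern is to offload it to a worker thread from an async application. A minimal sketch, assuming Python 3.9+ (for `asyncio.to_thread`); the language codes are just an example pair:

```python
# Sketch: run the blocking translation pipeline in a thread so an async
# application (e.g. a web server) is not blocked while NLLB generates.
import asyncio
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    # pass device=0 here if a GPU is available; it helps latency a lot
)

async def translate(text: str) -> str:
    result = await asyncio.to_thread(
        translator, text, src_lang="eng_Latn", tgt_lang="fra_Latn", max_length=128
    )
    return result[0]["translation_text"]

print(asyncio.run(translate("I want to make this call non-blocking.")))
```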
transformers
18,377
closed
Getting Torchvision Transforms of `feature_extractor`s
### Feature request Currently, if I were to add any transforms to my training pipeline, it's not quite obvious how to do so. My usual process is to read through the source code and hope to find what I'm after. What I'm after is something like in `timm`, where you can do ``` config = resolve_data_config({}, model=model) transform = create_transform(**config) ``` With the above I can append any torchvision transforms at will by inspecting the returned transform. While I'm here: it seems that the mean and standard deviations are 0.5 each for `ViTFeatureExtractor`, and the same for BEiT. Was this intentional? It might be incorrect if the models were trained on ImageNet data. ### Motivation See above. ### Your contribution Happy to contribute, but not sure where and how to start on unifying `FeatureExtractor` classes to return `torchvision.transforms`.
07-31-2022 00:17:41
07-31-2022 00:17:41
cc @amyeroberts <|||||>Thanks for adding this request @sachinruk :) Regarding the mean and standard deviation values, can you raise a separate issue? We're currently going through an update of the feature extractor class for images. At the moment, it's not possible to compose the individual transformations we apply, like `create_transform` does. It's something we were thinking about doing down the road and it's great to hear there's support for it! To control what is and isn't applied by a `FeatureExtractor`, you can toggle flags like e.g. `do_normalize` on the call. Note: there are some known bugs with this logic we're looking to address soon (see: [#15055](https://github.com/huggingface/transformers/issues/15055)). If you want to add transformations that aren't already applied by the `FeatureExtractor`, or work completely within `torchvision.transforms`, there's a great example of a custom pipeline in our [example notebooks here](https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb). <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
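Until composition is supported natively, one way to work entirely in `torchvision.transforms` is to reuse the values the feature extractor already stores. A small sketch (the checkpoint is just an example, and `size` is an int for this extractor in current versions):

```python
# Sketch: build a torchvision Compose from an existing feature extractor's
# stored resize size, mean and std, so extra transforms can be appended freely.
from transformers import AutoFeatureExtractor
from torchvision import transforms

feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224")

transform = transforms.Compose([
    transforms.Resize(feature_extractor.size),
    transforms.CenterCrop(feature_extractor.size),
    # insert any augmentations (e.g. transforms.RandomHorizontalFlip()) here
    transforms.ToTensor(),
    transforms.Normalize(mean=feature_extractor.image_mean, std=feature_extractor.image_std),
])
```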
transformers
18,376
closed
Potential memory leakage of TensorFlow Swin model on kaggle!
### System Info Info: ``` Framework: TensorFlow 2 (Keras) Version: 2.6 OS: Kaggle ``` ### Who can help? [Swin Model Card](https://huggingface.co/microsoft/swin-small-patch4-window7-224) @amyeroberts TensorFlow: @Rocketknight1 Vision: @NielsRogge, @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction In a recent [Kaggle competition](https://www.kaggle.com/competitions/google-universal-image-embedding) (hosted by Google), I tried to use a pretrained `tf` Swin transformer model from Hugging Face, but even with the base model I consistently received an out-of-memory error. Below is the submission status with a `base_tf_swin` model. ![image](https://user-images.githubusercontent.com/17668390/181924503-159801b3-f74a-418b-983f-2f68d31f87b2.png) Some notes: - Other frameworks like PyTorch work fine here. - Other than this model, much larger models like `tf_convnext_xlarge` are able to run without OOM. So I'm assuming there might be some potential memory leakage in the `tf_swin` implementation. Below is the code I use to build the complete model. ```python id = "microsoft/swin-base-patch4-window7-224-in22k" from transformers import AutoFeatureExtractor, TFSwinModel feature_extractor = AutoFeatureExtractor.from_pretrained(id) ```
07-30-2022 16:00:37
07-30-2022 16:00:37
Hi @innat, thanks for flagging this! In order to help figure out what's causing the problem and possible solutions, could you please answer the following questions: * Could you give the version of transformers you're using and any other relevant packages? * Does the notebook run successfully before entering it as a submission? If not, what line of code causes the failure? * Could you give details on the checkpoint used for convnext? Can you confirm the convnext model works with the exact same pipeline? * When you said other frameworks work fine - can you confirm that you were able to use the equivalent Swin PyTorch model on the same swin checkpoint? What would help most and answer all of these would be a saved kaggle notebook that you could share.<|||||>Can you help me build an app On Mon, Aug 1, 2022, 6:47 PM amyeroberts ***@***.***> wrote: > Hi @innat <https://github.com/innat>, thanks for flagging this! > > In order to help figure out what's causing the problem and possible > solutions, could you please answer the following questions: > > - Could you give the version of transformers you're using and any > other relevant packages? > - Does the notebook run successfully before entering it as a > submission? If not, what line of code causes the failure? > - Could you give details on the checkpoint used for convnext? Can you > confirm the convnext model works with the exact same pipeline? > - When you said other frameworks work fine - can you confirm that you > were able to use the equivalent Swin PyTorch model on the same swin > checkpoint? > > What would help most and answer all of these would be a saved kaggle > notebook that you could share. > > — > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/18376#issuecomment-1201519353>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AXV67Y5E7M3352TID62TFHLVXAETFANCNFSM55DNTU2A> > . > You are receiving this because you were mentioned.Message ID: > ***@***.***> > <|||||>Hello @amyeroberts; thanks for checking. To answer all of your query, > Could you give the version of transformers you're using and any other relevant packages? 1. It can be done, we will share a notebook file. Shortly, ```python tf.__version__, tfa.__version__, transformers.__version__ ('2.6.4', '0.14.0', '4.22.0.dev0') ``` > Does the notebook run successfully before entering it as a submission? If not, what line of code causes the failure? 2. I hardly used hugging face vision model. It's kind of my first look of these vision models for current [on-going kaggle competition](https://www.kaggle.com/competitions/google-universal-image-embedding). > Could you give details on the checkpoint used for convnext? Can you confirm the convnext model works with the exact same pipeline? 3. Regarding the convnext checkpoint, yes, I can give you the exact file and reproducible code. And I confirm that hugging face convnext (larger one) runs fine whereas tiny swin gives OOM. > When you said other frameworks work fine - can you confirm that you were able to use the equivalent Swin PyTorch model on the same swin checkpoint? 4. I should have elaborate more. I'm not PyTorch 1st user. Swin PyTorch model works fine is reported by other practitioners. --- > What would help most and answer all of these would be a saved kaggle notebook that you could share. [Notebook Files](https://gist.github.com/innat/edf5d2c64d55e341efaee2884a8536e8) It contains TensorFlow ConvNeXt and Swin Model pipelines and relevant package's version. 
The modeling strategy, saving, and submission process is followed according to the rules. The [evaluation page](https://www.kaggle.com/competitions/google-universal-image-embedding/overview/evaluation) also describes how they evaluate both framework and expected modeling approach. Hope it helps. <|||||>Thank you for your response. On Tue, Aug 2, 2022, 12:53 PM Mohammed Innat ***@***.***> wrote: > Hello @amyeroberts <https://github.com/amyeroberts>; thanks for checking. > To answer all of your query, > > Could you give the version of transformers you're using and any other > relevant packages? > > > 1. It can be done, we will share a notebook file. Shortly, > > tf.__version__, tfa.__version__, transformers.__version__ > ('2.6.4', '0.14.0', '4.22.0.dev0') > > Does the notebook run successfully before entering it as a submission? If > not, what line of code causes the failure? > > > 1. I hardly used hugging face vision model. It's kind of my first look > of these vision models for current on-going kaggle competition > <https://www.kaggle.com/competitions/google-universal-image-embedding>. > > Could you give details on the checkpoint used for convnext? Can you > confirm the convnext model works with the exact same pipeline? > > > 1. Regarding the convnext checkpoint, yes, I can give you the exact > file and reproducible code. And I confirm that hugging face convnext > (larger one) runs fine whereas tiny swin gives OOM. > > When you said other frameworks work fine - can you confirm that you were > able to use the equivalent Swin PyTorch model on the same swin checkpoint? > > > 1. I should have elaborate more. I'm not PyTorch 1st user. Swin > PyTorch model works fine is reported by other practitioners. > > ------------------------------ > > What would help most and answer all of these would be a saved kaggle > notebook that you could share. > > Notebook Files > <https://gist.github.com/innat/edf5d2c64d55e341efaee2884a8536e8> > > It contains TensorFlow ConvNeXt and Swin Model pipelines and relevant > package's version. The modeling strategy, saving, and submission process is > followed according to the rules. The evaluation page > <https://www.kaggle.com/competitions/google-universal-image-embedding/overview/evaluation> > also describes how they evaluate both framework and expected modeling > approach. Hope it helps. > > — > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/18376#issuecomment-1202385475>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AXV67Y2MCP2O5YEEYZPM7TDVXED2DANCNFSM55DNTU2A> > . > You are receiving this because you were mentioned.Message ID: > ***@***.***> > <|||||>Hi @innat, thank you for all of your detailed responses and for sharing the notebook. I ran the notebook in kaggle and was able to save out the model with the checkpoint you used in your first example: `"microsoft/swin-base-patch4-window7-224-in22k"` The notebook is here: https://www.kaggle.com/code/aeroberts4444/test-swin-saving/notebook Are you able to run the notebook you shared on kaggle? Or do you still hit the OOM?<|||||>@amyeroberts Thanks for running the code. Yes, if you run the code that I shared, you won't see any OOM effect instant. As I said, I tried to submit two model from hugging-face (`"microsoft/swin-tiny-patch4-window7-224"` and `"facebook/convnext-large-224-22k-1k"`) to [this](https://www.kaggle.com/competitions/google-universal-image-embedding/overview/evaluation) competition. 
The convnext is comparatively much larger than tiny swin, but in the inference time, the submission status always exceed the allowed compute resource for tiny swin but works fine for large convnext model. That's why I kind of have **weak assumption** that, there may be some issue with swin implementation. Also, later I realized that pytorch practitioners use `timm` version of swin model, and not from `huggingface` and no issue found about OOM with that. This competition is unique (no training or test data is provided), so it might be hard to debug the root cause. Please let me know if its out of scope to address such issue. <|||||>Hi @innat, thanks for clarifying. It's certainly a problem if there's a memory leak and one we'd want to address. I'm going to continue to look into this. As you said, because of the nature of kaggle and the competition it can be hard to debug. As such, it might take some time before I manage to figure out if there's a problem, what it is and how to solve. <|||||>@amyeroberts Thanks for your cordial support. I also informed competition host (googler), [HERE](https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/336534#1883421), but no response yet. cc @kfrancischen <|||||>Hi @innat. As mentioned above it's quite hard to debug without know what's happening during submission and logs from the kaggle notebook. My current best guess is it's due to the size of the saved Swin model. Using your script to create and save out a model, I looked at the sizes across different checkpoints: ``` "microsoft/resnet-50" # 23,561,152 params "google/vit-base-patch16-224-in21k" # 86,389,248 params "microsoft/swin-base-patch4-window7-224-in22k" # 86,743,224 params "microsoft/swin-tiny-patch4-window7-224" # 27,519,354 params "facebook/convnext-large-224-22k-1k" # 196,230,336 params ``` ``` tf_hf_classifier_convnext_large_224_22k_1k: total 25712 drwxr-xr-x 6 amyroberts staff 192B 10 Aug 13:13 . drwxr-xr-x 24 amyroberts staff 768B 10 Aug 13:13 .. drwxr-xr-x 2 amyroberts staff 64B 10 Aug 13:13 assets -rw-r--r-- 1 amyroberts staff 510K 10 Aug 13:13 keras_metadata.pb -rw-r--r-- 1 amyroberts staff 12M 10 Aug 13:13 saved_model.pb drwxr-xr-x 4 amyroberts staff 128B 10 Aug 13:13 variables tf_hf_classifier_resnet_50: total 12048 drwxr-xr-x 6 amyroberts staff 192B 10 Aug 12:51 . drwxr-xr-x 24 amyroberts staff 768B 10 Aug 13:13 .. drwxr-xr-x 2 amyroberts staff 64B 10 Aug 12:51 assets -rw-r--r-- 1 amyroberts staff 488K 10 Aug 12:51 keras_metadata.pb -rw-r--r-- 1 amyroberts staff 5.4M 10 Aug 12:51 saved_model.pb drwxr-xr-x 4 amyroberts staff 128B 10 Aug 12:51 variables tf_hf_classifier_swin_base_patch4_window7_224_in22k: total 179216 drwxr-xr-x 6 amyroberts staff 192B 10 Aug 13:00 . drwxr-xr-x 24 amyroberts staff 768B 10 Aug 13:13 .. drwxr-xr-x 2 amyroberts staff 64B 10 Aug 12:59 assets -rw-r--r-- 1 amyroberts staff 7.4M 10 Aug 13:00 keras_metadata.pb -rw-r--r-- 1 amyroberts staff 80M 10 Aug 13:00 saved_model.pb drwxr-xr-x 4 amyroberts staff 128B 10 Aug 12:59 variables tf_hf_classifier_swin_tiny_patch4_window7_224: total 83944 drwxr-xr-x 6 amyroberts staff 192B 10 Aug 13:09 . drwxr-xr-x 24 amyroberts staff 768B 10 Aug 13:13 .. drwxr-xr-x 2 amyroberts staff 64B 10 Aug 13:09 assets -rw-r--r-- 1 amyroberts staff 474K 10 Aug 13:09 keras_metadata.pb -rw-r--r-- 1 amyroberts staff 41M 10 Aug 13:09 saved_model.pb drwxr-xr-x 4 amyroberts staff 128B 10 Aug 13:09 variables tf_hf_classifier_vit_base_patch16_224_in21k: total 21328 drwxr-xr-x 6 amyroberts staff 192B 10 Aug 12:53 . 
drwxr-xr-x 24 amyroberts staff 768B 10 Aug 13:13 .. drwxr-xr-x 2 amyroberts staff 64B 10 Aug 12:53 assets -rw-r--r-- 1 amyroberts staff 162K 10 Aug 12:53 keras_metadata.pb -rw-r--r-- 1 amyroberts staff 10M 10 Aug 12:53 saved_model.pb drwxr-xr-x 4 amyroberts staff 128B 10 Aug 12:53 variables ``` I haven't dug much into why the model is so much larger. A cursory glance at the model graphs didn't reveal anything particularly surprising. <|||||>Randomly jumping in this thread :-) - Are you able to reproduce this issue in a machine with similar spec as Kaggle machines? - One way to narrow down to the root cause is to gradually remove some parts of code - From the provided notebook, we can't have any conclusion on memory leak. Memory leak refers to the memory usage increase during a repetition of the same call to a particular code block. - Suggestion: try to see if this issue occurs during model saving, or the memory usage increases during inference time.<|||||>@amyeroberts Thanks for checking. I'll quickly check the size of these models in torch version. @kfrancischen Your feedback is really much appreciate here. ([more info](https://www.kaggle.com/competitions/google-universal-image-embedding/discussion/336534#1859160))<|||||>I would suggest debug this in a VM outside Kaggle though. I remembered there is limited GPU/TPU hours per week on Kaggle. Don't waste your quota :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
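For reference, a rough sketch of how one might reproduce the parameter-count and SavedModel-size comparison above outside Kaggle (the checkpoints are the ones discussed in this thread; the export uses the model's default signature rather than the custom serving module from the notebook):

```python
# Sketch: compare parameter counts and on-disk SavedModel sizes for the
# checkpoints discussed above.
import os
import tensorflow as tf
from transformers import TFAutoModel

def saved_model_size_mb(path):
    total = sum(
        os.path.getsize(os.path.join(root, f))
        for root, _, files in os.walk(path)
        for f in files
    )
    return total / 1024 ** 2

for ckpt in ["microsoft/swin-tiny-patch4-window7-224", "facebook/convnext-large-224-22k-1k"]:
    model = TFAutoModel.from_pretrained(ckpt)
    print(ckpt, f"{model.count_params():,} params")
    export_dir = ckpt.split("/")[-1]
    tf.saved_model.save(model, export_dir)
    print(export_dir, f"{saved_model_size_mb(export_dir):.1f} MB")
```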
transformers
18,375
closed
Correct the spelling of bleu metric
# What does this PR do? This PR corrects a simple spelling error. From `blue` to `bleu` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
07-30-2022 14:37:54
07-30-2022 14:37:54
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,374
closed
Implement pytorch model with BetterTransformer in pytorch1.12?
### Feature request BetterTransformer is a fastpath for the PyTorch Transformer API. The fastpath is a native, specialized implementation of key Transformer functions for CPU and GPU that applies to common Transformer use cases. ### Motivation Faster. ### Your contribution Pytorch official blog: https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/
07-30-2022 12:34:04
07-30-2022 12:34:04
Hey @luoling1993, we're currently in talks with PyTorch to see how to best approach this. cc @erichan1<|||||>Hey there @luoling1993! I'm from the PyTorch team - we are working on this. My working PR https://github.com/erichan1/transformers/pull/2<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @luoling1993 ! This has been implemented now. Please check: https://medium.com/pytorch/bettertransformer-out-of-the-box-performance-for-huggingface-transformers-3fbe27d50ab2
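For readers landing here later, the integration referenced in the last comment is exposed through the `optimum` package rather than `transformers` itself. A minimal sketch (assumes `pip install optimum` and a model architecture supported by BetterTransformer, such as BERT):

```python
# Sketch: swap supported encoder layers for the native PyTorch fastpath kernels.
import torch
from transformers import AutoModel, AutoTokenizer
from optimum.bettertransformer import BetterTransformer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

model = BetterTransformer.transform(model)  # returns the converted model

inputs = tokenizer("BetterTransformer speeds up inference.", return_tensors="pt")
with torch.inference_mode():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```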
transformers
18,373
closed
Transformers 4.21.0: Can't load XLMRoberta checkpoints
### System Info Transformers 4.21.0 ### Who can help? @LysandreJik @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. cont_pre_model = XLMRobertaForMaskedLM.from_pretrained('xlm-roberta-base') 2. cont_pre_training_args = TrainingArguments( output_dir=temp_dir, num_train_epochs=40, per_device_train_batch_size=4, save_steps=5000, logging_steps=50, save_total_limit=3, prediction_loss_only=True, evaluation_strategy='no', learning_rate=2e-5, warmup_steps=cont_pre_warmup_steps, dataloader_num_workers=0, disable_tqdm=False, gradient_accumulation_steps=8, fp16=True ) 3. cont_pre_trainer = Trainer( model=cont_pre_model, args=cont_pre_training_args, train_dataset=cont_pre_dataset, data_collator=cont_pre_collator ) Then start to train, cont_pre_trainer.train(), save some checkpoints 4. Corrupt it, try to continue from checkpoint cont_pre_trainer.train(resume_from_checkpoint=True) ### Expected behavior Describe: I used XLMRobertaForMaskedLM.from_pretrained('xlm-roberta-base') to continue pretraining, the training process was too long, so I save checkpoints regularly. I did this in Google Colab. Several days ago, I can't load any saved checkpoints by using "cont_pre_trainer.train(resume_from_checkpoint=True)", there is always such an error: RuntimeError: Error(s) in loading state_dict for XLMRobertaForMaskedLM: Missing key(s) in state_dict: "lm_head.decoder.weight", "lm_head.decoder.bias". The reason: XLMRobertaForMaskedLM doesn't have "lm_head.decoder.weight", "lm_head.decoder.bias". And state_dict of a PyTorch module is an OrderedDict and it complains about missing keys. Maybe you should use such a command somewhere: load_state_dict(state_dict, strict=False) How to solve it by myself: I rollbacked transformers to version 4.20.1 and it worked then. Problem Conclusion: Transformers version 4.21.0 can't load checkpoints that trained on both version 4.20.1 and version 4.21.0. (Transformers version 4.20.1 works normally, I use it to process checkpoints trained on version 4.20.1 or version 4.21.0)
07-30-2022 10:10:57
07-30-2022 10:10:57
cc @sgugger <|||||>We don't support resuming training with a different version of Transformers than the one that initiated it, as it would require just freezing the whole `Trainer` forever: any bug fix or feature added in it won't work with a resumed checkpoint.<|||||>I am facing the same error with Transformers version 4.21.0 - the model was trained on the same transformers version and loading the best model after training gives this error. I am using `xlm-roberta-base` with `AutoModelForMaskedLM` `RuntimeError: Error(s) in loading state_dict for XLMRobertaForMaskedLM: Missing key(s) in state_dict: "lm_head.decoder.weight", "lm_head.decoder.bias".` <|||||>Thanks for reporting @harshit-sethi09, with the initial report I thought this was a change in the XLM-RoBERTa model that was causing problems across versions, but the whole reload is broken in 4.21.0 because of the changes in #18221 . The PR mentioned above should fix it and we will soon make a patch release with it.
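For anyone stuck on 4.21.0 before the patch release mentioned above, here is a workaround sketch only (not the library fix): the missing keys are tied weights, so the checkpoint can be loaded non-strictly and the tie re-applied. The checkpoint path below is a placeholder for a saved `Trainer` checkpoint:

```python
# Workaround sketch: load a 4.21.0 checkpoint that is missing the tied
# lm_head.decoder.* keys, then re-tie them to the input embeddings.
import torch
from transformers import XLMRobertaForMaskedLM

model = XLMRobertaForMaskedLM.from_pretrained("xlm-roberta-base")
state_dict = torch.load("checkpoint-5000/pytorch_model.bin", map_location="cpu")  # placeholder path

missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing:", missing)        # expected: only the tied lm_head.decoder.* keys
print("unexpected:", unexpected)

model.tie_weights()  # re-tie lm_head.decoder to the word embeddings
```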
transformers
18,372
closed
change shape to support dynamic batch input in tf.function XLA generate for tf serving
# What does this PR do? support dynamic input for tf.function + generate (XLA). needed for batch tf serving export: ``` import tensorflow as tf from transformers import TFAutoModelForSeq2SeqLM class MyOwnModel(tf.Module): def __init__(self, model_path="t5-small"): super(MyOwnModel, self).__init__() self.model = TFAutoModelForSeq2SeqLM.from_pretrained(model_path) @tf.function(input_signature=(tf.TensorSpec((None, 32), tf.int32, name="input_ids"), tf.TensorSpec((None, 32), tf.int32, name="attention_mask")), jit_compile=True) def serving(self, input_ids, attention_mask): outputs = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, max_new_tokens=32, return_dict_in_generate=True) return {"sequences": outputs["sequences"]} model = MyOwnModel() export_dir = "./" tf.saved_model.save( model, export_dir, signatures={ "serving_default": model.serving }) ``` tf model run ``` import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM export_dir = "./" model = tf.saved_model.load(export_dir) tokenizer = AutoTokenizer.from_pretrained("t5-small") tokenization_kwargs = {"pad_to_multiple_of": 32, "padding": True, "return_tensors": "tf"} input_prompts = [ f"translate English to {language}: I have four cats and three dogs." for language in ["German", "French", "Romanian"] ] def generate_text(inputs): tokenized_inputs = tokenizer(inputs, **tokenization_kwargs) generated_texts = model.signatures["serving_default"](**tokenized_inputs) for text in generated_texts["sequences"]: print(tokenizer.decode(text, skip_special_tokens=True)) # The first prompt will be slow (compiling), the others will be very fast! generate_text(input_prompts[:2]) generate_text(input_prompts[:3]) ``` xla_run ``` import tensorflow as tf from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("t5-small") model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small") # Main changes with respect to the original generate workflow: `tf.function` and `pad_to_multiple_of` xla_generate = tf.function(model.generate, jit_compile=True) tokenization_kwargs = {"pad_to_multiple_of": 32, "padding": True, "return_tensors": "tf"} # The first prompt will be slow (compiling), the others will be very fast! input_prompts = [ f"translate English to {language}: I have four cats and three dogs." for language in ["German", "French", "Romanian"] ] tokenized_inputs = tokenizer(input_prompts, **tokenization_kwargs) generated_texts = xla_generate(**tokenized_inputs, max_new_tokens=32) for text in generated_texts: print(tokenizer.decode(text, skip_special_tokens=True)) ``` this also works for beam search by changing exported code as ``` def serving(self, input_ids, attention_mask): outputs = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, max_new_tokens=32, return_dict_in_generate=True, num_beams=3, num_return_sequences=3) return {"sequences": outputs["sequences"]} ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" 
below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #18357 Fixes #16823 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> cc @gante @patrickvonplaten
07-30-2022 01:49:37
07-30-2022 01:49:37
_The documentation is not available anymore as the PR was closed or merged._<|||||>Cc @gante @patrickvonplaten <|||||>Hi @nlpcat 👋 I see the change is needed because an unknown batch size is specified (hence the need for dynamic shapes). I'm going to double-check a few cases against this branch and, if all goes well, I may propose a few changes. In general, I'm in favor of adding the change, thank you for the PR :) <|||||>(edited the PR header to link more issues this PR fixes :) )<|||||>@gante @sgugger i have added the test . https://github.com/huggingface/transformers/pull/18372/commits/596ecf4003ed8fdf46dda220062f1bba08bec689. Can you help review and merge this PR if it looks good? Thanks.<|||||>@nlpcat this is fantastic! Thank you so much for your contribution 🙏 <|||||>The whole idea of Tensorflow in Huggingface is very complicated and a pain. @nlpcat - You better look into https://github.com/legacyai/tf-transformers/blob/main/docs/source/model_usage/text_generation_using_t5.ipynb <|||||>I was testing this code, but I have found an issue with my model: I think the file [tf_logits_process.py](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/tf_logits_process.py), also needs to use the `shape_list` function to support dynamic batch input.<|||||>@rafaellemay can you open an issue with the problem that you found (and a snippet containing an example)? It would help us ensure the library works well in all cases :)
transformers
18,371
closed
Bump mistune from 0.8.4 to 2.0.3 in /examples/research_projects/visual_bert
Bumps [mistune](https://github.com/lepture/mistune) from 0.8.4 to 2.0.3. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/lepture/mistune/releases">mistune's releases</a>.</em></p> <blockquote> <h2>Version 2.0.2</h2> <p>Fix <code>escape_url </code> via <a href="https://github-redirect.dependabot.com/lepture/mistune/pull/295">lepture/mistune#295</a></p> <h2>Version 2.0.1</h2> <p>Fix XSS for image link syntax.</p> <h2>Version 2.0.0</h2> <p>First release of Mistune v2.</p> <h2>Version 2.0.0 RC1</h2> <p>In this release, we have a <strong>Security Fix</strong> for harmful links.</p> <h2>Version 2.0.0 Alpha 1</h2> <p>This is the first release of v2. An alpha version for users to have a preview of the new mistune.</p> </blockquote> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/lepture/mistune/blob/master/docs/changes.rst">mistune's changelog</a>.</em></p> <blockquote> <h2>Changelog</h2> <p>Here is the full history of mistune v2.</p> <p>Version 2.0.4</p> <pre><code> Released on Jul 15, 2022 <ul> <li>Fix <code>url</code> plugin in <code>&amp;lt;a&amp;gt;</code> tag</li> <li>Fix <code>*</code> formatting</li> </ul> <p>Version 2.0.3 </code></pre></p> <p>Released on Jun 27, 2022</p> <ul> <li>Fix <code>table</code> plugin</li> <li>Security fix for CVE-2022-34749</li> </ul> <p>Version 2.0.2</p> <pre><code> Released on Jan 14, 2022 <p>Fix <code>escape_url</code></p> <p>Version 2.0.1 </code></pre></p> <p>Released on Dec 30, 2021</p> <p>XSS fix for image link syntax.</p> <p>Version 2.0.0</p> <pre><code> Released on Dec 5, 2021 <p>This is the first non-alpha release of mistune v2.</p> <p>Version 2.0.0rc1 </code></pre></p> <p>Released on Feb 16, 2021</p> <p>Version 2.0.0a6</p> <pre><code> &lt;/tr&gt;&lt;/table&gt; </code></pre> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/lepture/mistune/commit/3f422f1e84edae0f39756c45be453ecde534b755"><code>3f422f1</code></a> Version bump 2.0.3</li> <li><a href="https://github.com/lepture/mistune/commit/a6d43215132fe4f3d93f8d7e90ba83b16a0838b2"><code>a6d4321</code></a> Fix asteris emphasis regex CVE-2022-34749</li> <li><a href="https://github.com/lepture/mistune/commit/5638e460459cb59ceb20e4ce4716c802d4d73c53"><code>5638e46</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lepture/mistune/issues/307">#307</a> from jieter/patch-1</li> <li><a href="https://github.com/lepture/mistune/commit/0eba47196a81453bafe1f2492748a87475063dff"><code>0eba471</code></a> Fix typo in guide.rst</li> <li><a href="https://github.com/lepture/mistune/commit/61e9337884e20f9f8fdc0b7788d319afdd259729"><code>61e9337</code></a> Fix table plugin</li> <li><a href="https://github.com/lepture/mistune/commit/76dec68c4514c2612ef9263b49c6ec7f4d77bd14"><code>76dec68</code></a> Add documentation for renderer heading when TOC enabled</li> <li><a href="https://github.com/lepture/mistune/commit/799cd118cc5e664b72e98410ce1b68645f1a38c0"><code>799cd11</code></a> Version bump 2.0.2</li> <li><a href="https://github.com/lepture/mistune/commit/babb0cfa57a983ead615286a2b7c8f6885c46721"><code>babb0cf</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lepture/mistune/issues/295">#295</a> from dairiki/bug.escape_url</li> <li><a href="https://github.com/lepture/mistune/commit/fc2cd53d7698e432ab5b250ffac53458263a49e2"><code>fc2cd53</code></a> Make mistune.util.escape_url less aggressive</li> <li><a href="https://github.com/lepture/mistune/commit/3e8d35215120ac82176f300dd5e20c0bea5464ea"><code>3e8d352</code></a> Version bump 2.0.1</li> <li>Additional commits viewable in <a href="https://github.com/lepture/mistune/compare/v0.8.4...v2.0.3">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=mistune&package-manager=pip&previous-version=0.8.4&new-version=2.0.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
07-29-2022 23:31:51
07-29-2022 23:31:51
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,370
closed
Bump mistune from 0.8.4 to 2.0.3 in /examples/research_projects/lxmert
Bumps [mistune](https://github.com/lepture/mistune) from 0.8.4 to 2.0.3. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/lepture/mistune/releases">mistune's releases</a>.</em></p> <blockquote> <h2>Version 2.0.2</h2> <p>Fix <code>escape_url </code> via <a href="https://github-redirect.dependabot.com/lepture/mistune/pull/295">lepture/mistune#295</a></p> <h2>Version 2.0.1</h2> <p>Fix XSS for image link syntax.</p> <h2>Version 2.0.0</h2> <p>First release of Mistune v2.</p> <h2>Version 2.0.0 RC1</h2> <p>In this release, we have a <strong>Security Fix</strong> for harmful links.</p> <h2>Version 2.0.0 Alpha 1</h2> <p>This is the first release of v2. An alpha version for users to have a preview of the new mistune.</p> </blockquote> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/lepture/mistune/blob/master/docs/changes.rst">mistune's changelog</a>.</em></p> <blockquote> <h2>Changelog</h2> <p>Here is the full history of mistune v2.</p> <p>Version 2.0.4</p> <pre><code> Released on Jul 15, 2022 <ul> <li>Fix <code>url</code> plugin in <code>&amp;lt;a&amp;gt;</code> tag</li> <li>Fix <code>*</code> formatting</li> </ul> <p>Version 2.0.3 </code></pre></p> <p>Released on Jun 27, 2022</p> <ul> <li>Fix <code>table</code> plugin</li> <li>Security fix for CVE-2022-34749</li> </ul> <p>Version 2.0.2</p> <pre><code> Released on Jan 14, 2022 <p>Fix <code>escape_url</code></p> <p>Version 2.0.1 </code></pre></p> <p>Released on Dec 30, 2021</p> <p>XSS fix for image link syntax.</p> <p>Version 2.0.0</p> <pre><code> Released on Dec 5, 2021 <p>This is the first non-alpha release of mistune v2.</p> <p>Version 2.0.0rc1 </code></pre></p> <p>Released on Feb 16, 2021</p> <p>Version 2.0.0a6</p> <pre><code> &lt;/tr&gt;&lt;/table&gt; </code></pre> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/lepture/mistune/commit/3f422f1e84edae0f39756c45be453ecde534b755"><code>3f422f1</code></a> Version bump 2.0.3</li> <li><a href="https://github.com/lepture/mistune/commit/a6d43215132fe4f3d93f8d7e90ba83b16a0838b2"><code>a6d4321</code></a> Fix asteris emphasis regex CVE-2022-34749</li> <li><a href="https://github.com/lepture/mistune/commit/5638e460459cb59ceb20e4ce4716c802d4d73c53"><code>5638e46</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lepture/mistune/issues/307">#307</a> from jieter/patch-1</li> <li><a href="https://github.com/lepture/mistune/commit/0eba47196a81453bafe1f2492748a87475063dff"><code>0eba471</code></a> Fix typo in guide.rst</li> <li><a href="https://github.com/lepture/mistune/commit/61e9337884e20f9f8fdc0b7788d319afdd259729"><code>61e9337</code></a> Fix table plugin</li> <li><a href="https://github.com/lepture/mistune/commit/76dec68c4514c2612ef9263b49c6ec7f4d77bd14"><code>76dec68</code></a> Add documentation for renderer heading when TOC enabled</li> <li><a href="https://github.com/lepture/mistune/commit/799cd118cc5e664b72e98410ce1b68645f1a38c0"><code>799cd11</code></a> Version bump 2.0.2</li> <li><a href="https://github.com/lepture/mistune/commit/babb0cfa57a983ead615286a2b7c8f6885c46721"><code>babb0cf</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/lepture/mistune/issues/295">#295</a> from dairiki/bug.escape_url</li> <li><a href="https://github.com/lepture/mistune/commit/fc2cd53d7698e432ab5b250ffac53458263a49e2"><code>fc2cd53</code></a> Make mistune.util.escape_url less aggressive</li> <li><a href="https://github.com/lepture/mistune/commit/3e8d35215120ac82176f300dd5e20c0bea5464ea"><code>3e8d352</code></a> Version bump 2.0.1</li> <li>Additional commits viewable in <a href="https://github.com/lepture/mistune/compare/v0.8.4...v2.0.3">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=mistune&package-manager=pip&previous-version=0.8.4&new-version=2.0.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
07-29-2022 23:20:29
07-29-2022 23:20:29
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,369
closed
Migrate metric to Evaluate in Pytorch examples
# What does this PR do? As metrics are being deprecated in Datasets, they need to be moved to Evaluate. This PR migrates calls to `load_metric` in Datasets over to `load` in Evaluate across the PyTorch examples. Fixes #18306 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
07-29-2022 23:06:00
07-29-2022 23:06:00
_The documentation is not available anymore as the PR was closed or merged._
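For readers following the migration described in this PR, here is a hedged sketch of the kind of one-line change it applies throughout the PyTorch examples (exact call sites differ per script):

```python
# Before: metric loading through the Datasets library (now deprecated there).
# from datasets import load_metric
# metric = load_metric("accuracy")

# After: the same metric loaded through the Evaluate library.
import evaluate

metric = evaluate.load("accuracy")
result = metric.compute(predictions=[0, 1, 1], references=[0, 1, 0])
print(result)  # e.g. {'accuracy': 0.666...}
```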
transformers
18,368
closed
Fix uninitialized parameter in conformer relative attention.
`torch.Tensor` creates an uninitialized tensor (as `torch.empty` does); this leads to nondeterministic behavior, poor initialization, and NaNs if you have an unlucky init. The paper does not specify the initialization for the bias terms, so I guess zero is a good choice - no bias initially. `torch.Tensor` is usually populated with zeros, so this fix will be close to the intended behavior:
```
>>> torch.Tensor(100, 100).sum()
tensor(0.)
>>> torch.Tensor(100, 100).sum()
tensor(nan)
>>> torch.Tensor(100, 100).sum()
tensor(0.)
```
## Who can review? @patrickvonplaten
07-29-2022 20:56:00
07-29-2022 20:56:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @PiotrDabkowski, won't these be overridden by the weights when loaded with `from_pretrained`? Edit: ah, understood - you'd like to have a better initialization, at zeros instead of random values. Will let @patrickvonplaten or @sanchit-gandhi comment; we try to stay as close as possible to the original implementation, so this will likely be the deciding factor.<|||||>Hey @PiotrDabkowski! That's a great catch! It looks like the initialisation of the (u, v) bias terms in the relative positional encoding transformation was copied one-to-one from the original modelling code in the fairseq repo: https://github.com/facebookresearch/fairseq/blob/4fe8583396191c22011350248119db98ec1b5cb8/fairseq/modules/espnet_multihead_attention.py#L127-L128 Indeed, using `torch.Tensor` leads to undesirable behaviour for the initialisation of tensors, and `torch.zeros` is an apt replacement. I've opened an issue on fairseq: https://github.com/facebookresearch/fairseq/issues/4622 and a PR with the corresponding fix: https://github.com/facebookresearch/fairseq/pull/4623 If it's good with you, we'll wait for the original authors to respond and then hopefully update the two in tandem!
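As a hedged sketch of what the fix amounts to (the attribute and dimension names here are illustrative, not copied from the modeling file), the change swaps the uninitialized constructor for an explicit zero initialization:

```python
import torch
import torch.nn as nn

num_heads, head_size = 8, 64

# Before: uninitialized memory; may contain garbage values or NaNs.
pos_bias_u = nn.Parameter(torch.Tensor(num_heads, head_size))

# After: deterministic zero initialization ("no bias" until training updates it).
pos_bias_u = nn.Parameter(torch.zeros(num_heads, head_size))
```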
transformers
18,367
closed
UnicodeDecodeError when using run_mlm_flax.py
Hi, I want to develop a BERT model from scratch using a Turkish text corpus. First I created a tokenizer and loaded the text data from my local drive as shown below:
```
from tokenizers import BertWordPieceTokenizer
import glob

tokenizer = BertWordPieceTokenizer(
    clean_text=True,
    handle_chinese_chars=False,
    strip_accents=False,
    lowercase=False,
)
files = glob.glob('/content/drive/MyDrive/Scorpus.txt')
trainer = tokenizer.train(
    files,
    vocab_size=32000,
    min_frequency=2,
    show_progress=True,
    special_tokens=['[PAD]', '[UNK]', '[CLS]', '[SEP]', '[MASK]'],
    limit_alphabet=1000,
    wordpieces_prefix="##"
)
tokenizer.save_model("/content/bert")

from datasets import load_dataset
# load dataset
datasetr = load_dataset('text', data_files={'train': ['/content/drive/MyDrive/Scorpus.txt']}, encoding='utf-8')
```
Then I run `run_mlm_flax.py`:
```
!python run_mlm_flax.py \
    --output_dir="/content/bert" \
    --model_type="bert" \
    --config_name="/content/bert" \
    --tokenizer_name="/content/bert" \
    --line_by_line=True \
    --dataset_name="text" \
    --dataset_config_name="default-b06526c46e9384b1" \
    --max_seq_length="512" \
    --weight_decay="0.01" \
    --per_device_train_batch_size="128" \
    --learning_rate="3e-4" \
    --overwrite_output_dir \
    --num_train_epochs="16" \
    --adam_beta1="0.9"
```
And I get this error:
```
[19:02:31] - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='/content/bert', overwrite_output_dir=True, do_train=False, do_eval=False, per_device_train_batch_size=128, per_device_eval_batch_size=8, learning_rate=0.0003, weight_decay=0.01, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, adafactor=False, num_train_epochs=16.0, warmup_steps=0, logging_steps=500, save_steps=500, eval_steps=None, seed=42, push_to_hub=False, hub_model_id=None, hub_token=None)
[19:02:31] - WARNING - datasets.builder - Using custom data configuration default-b06526c46e9384b1-d2418f61cbe4411a
Downloading and preparing dataset text/default-b06526c46e9384b1 to /root/.cache/huggingface/datasets/text/default-b06526c46e9384b1-d2418f61cbe4411a/0.0.0/21a506d1b2b34316b1e82d0bd79066905d846e5d7e619823c0dd338d6f1fa6ad...
Downloading data files: 100% 1/1 [00:00<00:00, 5190.97it/s] Extracting data files: 100% 1/1 [00:00<00:00, 543.80it/s] Traceback (most recent call last): File "run_mlm_flax.py", line 880, in <module> main() File "run_mlm_flax.py", line 430, in main use_auth_token=True if model_args.use_auth_token else None, File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 1751, in load_dataset use_auth_token=use_auth_token, File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 705, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 1275, in _prepare_split generator, unit=" tables", leave=False, disable=(not logging.is_progress_bar_enabled()) File "/usr/local/lib/python3.7/dist-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/usr/local/lib/python3.7/dist-packages/datasets/packaged_modules/text/text.py", line 77, in _generate_tables batch = f.read(self.config.chunksize) File "/usr/lib/python3.7/codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte ``` Note: I use google colab
07-29-2022 19:24:31
07-29-2022 19:24:31
If you first serialize your dataset to local files and then use that path as the dataset name, does it work then?<|||||>Thank you for the answer. I am new to using the Hugging Face datasets library. My corpus only contains sentences, with each line being a sentence. How can I serialize my dataset?<|||||>Let me cc @albertvillanova, who might have seen this error before :)<|||||>Hi @hazalturkmen, thanks for reporting. When looking at your script call, I see you pass "default-b06526c46e9384b1" as `dataset_config_name`, which then points to the cache (containing the Arrow file instead of the text file). I guess this is the cause of the issue. When running the script `run_mlm_flax.py` with local data files, you should instead pass the parameter `train_file` (and `validation_file` if applicable). In summary, when calling `run_mlm_flax.py`: - you should not pass `dataset_name` nor `dataset_config_name` - you should pass `train_file`
```shell
!python run_mlm_flax.py \
    --output_dir="/content/bert" \
    --model_type="bert" \
    --config_name="/content/bert" \
    --tokenizer_name="/content/bert" \
    --line_by_line=True \
    --train_file="/content/drive/MyDrive/Scorpus.txt" \
    --max_seq_length="512" \
    --weight_decay="0.01" \
    --per_device_train_batch_size="128" \
    --learning_rate="3e-4" \
    --overwrite_output_dir \
    --num_train_epochs="16" \
    --adam_beta1="0.9"
```
Please let me know if this fixes your issue.<|||||>Thank you @albertvillanova ! It fixed the issue :+1:
transformers
18,366
closed
Rewrite push_to_hub to use upload_files
# What does this PR do? As asked in #18299 this PR rewrites the `PushToHubMixin` completely to use `upload_file` behind the scenes. This is breaking since there is no more git repository involved, so while the user might have been left with a proper repository in the past, this is not the case anymore. The trade-off is that there is no need to clone anything now, so the upload should be faster. Another breaking change is to default `use_temp_files` to None now, which then defaults to `True` if there is a local folder of the same name, `False` otherwise. This felt way more natural with this change, but we can discuss and revisit if needed. For the rest, normally backward compatibility is ensured with proper deprecation warnings when needed. Using `push_to_hub=True` in `save_pretrained` is adapted to this rewrite but with 0 breaking change normally. You can see in the tests how this simplifies a lot of things at the end of the day, so I'm quite happy with the final API. Oh and one last change I made is to move the code specific to TensorFlow models in the TF modeling utils file.
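As a hedged illustration of what this means in practice (a sketch, not the PR's implementation), pushing a model now amounts to serializing it locally and uploading the files directly through `huggingface_hub`, with no git clone involved; the repository id and file paths below are illustrative:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-cased")

# The high-level call stays the same for users; under the hood it now uploads
# files directly instead of cloning a git repository first.
model.push_to_hub("my-user/my-bert-copy")  # illustrative repo id

# Roughly equivalent low-level flow with huggingface_hub (shown commented out):
# from huggingface_hub import upload_file
# upload_file(
#     path_or_fileobj="local_dir/pytorch_model.bin",
#     path_in_repo="pytorch_model.bin",
#     repo_id="my-user/my-bert-copy",
# )
```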
07-29-2022 18:42:56
07-29-2022 18:42:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,365
closed
Fix OPT doc tests
# What does this PR do? Fixes the doctest for OPT by adding slight modifications.
07-29-2022 16:59:27
07-29-2022 16:59:27
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @ArthurZucker Do you know why the expected value has an extra ``` ? `OPTForCausalLM` has the same expected value, but it doesn't have a test failure. Seems strange to me 😕
```
1003 >>> # Generate
1004 >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
1005 >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
Expected: "Hey, are you consciours? Can you talk to me?\nI'm not consciours, but I can talk to you."
```
Got: "Hey, are you consciours? Can you talk to me?\nI'm not consciours, but I can talk to you."
```<|||||>Actually it seems that the automatic doc preparation that adds a line does not affect the `TFOPTForCausalLM` forward doc. Thus I removed the manually created example and defaulted to using **add_code_sample**, which worked in that case. Now, why adding the extra line does not work is another mystery, and I could not set up a debugging instance to check that.<|||||>Thank you. I will check this on Monday.<|||||>Regarding the mystery, it is due to this line > ``decoder_input_ids``` https://github.com/huggingface/transformers/blob/b2e4b091f08f1aaf21855d588c6c8d284baba9eb/src/transformers/models/opt/modeling_tf_opt.py#L601 ~~We can fix this in this PR too :-) It should be simply~~ > \`decoder_input_ids\`<|||||>I forgot to change the name in the `add_docstring`, will fix that
transformers
18,364
closed
[WIP] Accelerate gpt2
null
07-29-2022 16:38:56
07-29-2022 16:38:56
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_18364). All of your documentation changes will be reflected on that endpoint.<|||||>Hope you will also check how it will convert to ONNX and how it will be optimized by ONNX Runtime after these changes in Attention. N.B.: attention optimizations (FusedAttention) in ONNX Runtime when converting to fp16 greatly help with inference time.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
18,363
closed
Add FX support for torch.baddbmm and torch.Tensor.baddbmm
This doesn't quite fix the issue where the model isn't torch scriptable on torch 1.10, but this will help fx support on torch 1.10 at the very least. I'll merge this after #18344
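For readers unfamiliar with the op, here is a small hedged sketch (not this PR's test) of what `torch.baddbmm` computes and what plain symbolic tracing over it looks like; the PR itself extends this kind of support to the `HFTracer` path used for transformers models:

```python
import torch
import torch.nn as nn
import torch.fx

class BatchedAttentionScores(nn.Module):
    def forward(self, bias, query, key):
        # baddbmm(bias, batch1, batch2) == bias + batch1 @ batch2, batched over dim 0.
        return torch.baddbmm(bias, query, key.transpose(1, 2))

bias = torch.zeros(4, 10, 10)
query = torch.randn(4, 10, 64)
key = torch.randn(4, 10, 64)

traced = torch.fx.symbolic_trace(BatchedAttentionScores())
assert torch.allclose(traced(bias, query, key), bias + query @ key.transpose(1, 2))
```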
07-29-2022 15:20:28
07-29-2022 15:20:28
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,362
closed
Fix TFSegformerForSemanticSegmentation doctest
# What does this PR do? Fix the `TFSegformerForSemanticSegmentation` doctest. With @amyeroberts uploading `nvidia/mit-b0`, all doctests for TF Segformer pass now. (I was waiting for a TF checkpoint and had not fixed this one in the previous PR; this model (SemanticSegmentation) already had a TF checkpoint, however.)
07-29-2022 14:13:25
07-29-2022 14:13:25
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,361
closed
[JAX] Replace all jax.tree_* calls with jax.tree_util.tree_*
# What does this PR do? Newer versions of JAX moved all tree utility methods to `jax.tree_util` and emit warnings for using the old locations under `jax`. This PR refactors all `jax.tree_*` calls to `jax.tree_util.tree_*`, thus disabling the warnings with no changes to the code functionality. ```python import jax inputs = jax.numpy.ones(10) # Define identity function with tree_map -> FutureWarning jax.tree_map(lambda x: x, inputs) # Define identity function with tree_util.tree_map -> no warning! jax.tree_util.tree_map(lambda x: x, inputs) ``` ``` FutureWarning: jax.tree_map is deprecated, and will be removed in a future release. Use jax.tree_util.tree_map instead. ``` All `jax.tree_*` calls in Flax were updated in this PR: https://github.com/google/flax/pull/2325. However, the `FutureWarning` message will still be raised if using newer versions of JAX and older versions of Flax. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
07-29-2022 12:41:15
07-29-2022 12:41:15
_The documentation is not available anymore as the PR was closed or merged._<|||||>Sorry for being so late here! Let's merge it
transformers
18,360
closed
How can I use DDP mode without `torch.distributed.launch` command when I use transformers.Trainer
I checked the historical issues and read part of the source code, but I didn't find a way to use DDP without `torch.distributed.launch`. Is there a way to use DeepSpeed or DDP without modifying the launch command? For example, a field named `strategy: str` to select whether I use DDP.
07-29-2022 10:30:23
07-29-2022 10:30:23
cc @sgugger<|||||>No, you need to launch your Python script with the torch/DeepSpeed launcher.<|||||>Thank you. I got it.
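For reference, a hedged sketch of the standard way to launch a `Trainer` script under DDP or DeepSpeed (the script name and argument values below are placeholders):

```shell
# torchrun (the modern replacement for torch.distributed.launch); the Trainer
# picks up the distributed environment variables the launcher sets.
torchrun --nproc_per_node=4 run_glue.py --model_name_or_path bert-base-cased --output_dir out

# DeepSpeed launcher, passing a DeepSpeed config through to the Trainer.
deepspeed --num_gpus 4 run_glue.py --deepspeed ds_config.json --output_dir out
```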
transformers
18,359
closed
Fix some doctests
# What does this PR do? The doctests are back up and running, and we have 16 test failures now. This PR fixes 3 of them.
07-29-2022 10:17:49
07-29-2022 10:17:49
_The documentation is not available anymore as the PR was closed or merged._
transformers
18,358
closed
fix FSDP ShardedGradScaler
# What does this PR do? 1. Fixes #18350 by renaming the import for the FSDP ShardedGradScaler so that it doesn't change the scope of the globally imported Fairscale ShardedGradScaler. 2. Another minor change: raise an error when the transformer auto-wrap class name is not found in the model. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
07-29-2022 10:00:35
07-29-2022 10:00:35
_The documentation is not available anymore as the PR was closed or merged._
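To make the scoping fix above concrete, here is a hedged sketch of the import-aliasing pattern it moves to (the module paths are the usual ones for Fairscale and PyTorch FSDP around this release, but double-check them against your installed versions):

```python
# Fairscale's sharded-DDP scaler, imported under its plain name at module scope.
from fairscale.optim.grad_scaler import ShardedGradScaler

# PyTorch FSDP's scaler, imported under an alias so it no longer shadows the
# Fairscale class that other code paths in the trainer may still reference.
from torch.distributed.fsdp.sharded_grad_scaler import (
    ShardedGradScaler as FSDPShardedGradScaler,
)

scaler = FSDPShardedGradScaler()  # illustrative: used for FSDP mixed-precision training
```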