repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k)
created_at: stringlengths (19 to 19)
closed_at: stringlengths (19 to 19)
comments: stringlengths (0 to 293k)
transformers
20,869
closed
Add new model entries in Hindi for the Hindi README
# What does this PR do? 1. Adds new model entries in Hindi for the Hindi README.
12-22-2022 08:55:52
12-22-2022 08:55:52
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,868
closed
Add Onnx Config for PoolFormer
# What does this PR do? Fixes #16308 (https://github.com/huggingface/transformers/issues/16308) Adds the changes needed to make PoolFormer models available for ONNX conversion. Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ChainYo
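For context, ONNX export support in `transformers.onnx` is typically added through an `OnnxConfig` subclass that declares the model inputs and their dynamic axes. The sketch below shows the usual shape of such a config for a vision model like PoolFormer; the class body and the validation tolerance are illustrative, not the exact code from this PR.

```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


# Sketch of what an ONNX config for a vision backbone typically looks like in
# transformers.onnx (the atol value and any extra overrides in the actual PR may differ).
class PoolFormerOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Image models expose a single `pixel_values` input with dynamic axes.
        return OrderedDict(
            [("pixel_values", {0: "batch", 1: "num_channels", 2: "height", 3: "width"})]
        )

    @property
    def atol_for_validation(self) -> float:
        # Tolerance used when validating the exported graph against the PyTorch model.
        return 1e-4
```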
12-22-2022 05:11:42
12-22-2022 05:11:42
_The documentation is not available anymore as the PR was closed or merged._<|||||>@ChainYo @michaelbenayoun I have mistakenly closed that previous pull request. Created this with resolved conflicts. <|||||>Yeah I will do that<|||||>Thank you
transformers
20,867
closed
Default rescale for ImageProcessing
### System Info - `transformers` version: 4.26.0.dev0 - Platform: Linux-5.4.0-1096-gcp-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.5 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Distributed ### Who can help? @amyeroberts Is there any reason that the default value for `do_rescale` is set to `True`? Does it mess with current ViT pre-trained model such as Google-ViT? I am trying to pre-train my model and saw that there is this discrepancy that arguably could impact accuracy. I am just trying to figure out the rationale behind this. ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. from transformers import AutoFeatureExtractor 2. Autofeature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224") 3. Autofeature_extractor ``` ViTImageProcessor { "do_normalize": true, "do_rescale": true, "do_resize": true, "image_mean": [ 0.5, 0.5, 0.5 ], "image_processor_type": "ViTImageProcessor", "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "rescale_factor": 0.00392156862745098, "size": { "height": 224, "width": 224 } } ``` ### Expected behavior ``` ViTImageProcessor { "do_normalize": true, "do_resize": true, "image_mean": [ 0.5, 0.5, 0.5 ], "image_processor_type": "ViTImageProcessor", "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": { "height": 224, "width": 224 } } ```
12-22-2022 01:45:08
12-22-2022 01:45:08
@yazdanbakhsh Thanks for creating the issue and putting in so much detail. If I've understood correctly, your question is about differences in the configuration seen on the hub e.g. [here](https://huggingface.co/google/vit-base-patch16-224/blob/main/preprocessor_config.json) and the object representation when it's loaded and whether this affects training your model. I'll answer for this, but let me know if there's anything I've missed. TLDR; This will not affect training your model unless your input images are numpy arrays and `do_resize=False`. The feature extractors have recently been deprecated in place of image processors. The feature extractors now act as an alias for the image processor, and using `AutoFeatureExtractor` will load an image processor under the hood. At the moment, the previous feature extractor configurations are loaded and converted to the equivalent image processor configuration. For example, you'll notice that [`size` in `preprocessor_config.json`](https://huggingface.co/google/vit-base-patch16-224/blob/main/preprocessor_config.json) is an int `224`, whereas it's a dictionary in the image processor. In time, these configurations will be updated on the hub. With respect to `do_rescale`, this flag has been added in order to separate the concerns of certain processing logic. [Rescaling still happened in the old feature extractors](https://github.com/huggingface/transformers/blob/4fd89e49788f60b021b4e2c578a1fb12f1e900e4/src/transformers/image_utils.py#LL380C7-L380C7), this just makes the step more explicit and controllable by the user. Previously, images would have their pixel values divided by 255 if `do_normalize=True` and the input image was a `PIL.Image.Image` or `do_resize=True`, however they wouldn't be rescaled if the input was a numpy array and `do_resize=False`. This ensures consistent rescaling behaviour, regardless of the input type. As a result, there may be differences in the resulting images between the old feature extractors and new image processors if your input images are numpy arrays and `do_resize=False`, however, the resulting image should be consistent across input types for all flag combinations with the new image processors. <|||||>@amyeroberts Thank you so much for the detailed explanation. That all make sense to me. I just wanted to share this to ensure that this is an expected behavior from the implementation. Based on your explanation, I think we should be able to close this issue.
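As a concrete illustration of the rescaling step discussed above, here is a minimal sketch (with made-up pixel values) of what `do_rescale` with the default `rescale_factor` of `1/255` does to a `uint8` numpy image, independently of resizing or normalization:

```python
import numpy as np

# A dummy 2x2 RGB image with uint8 pixel values (illustrative only).
image = np.array(
    [[[0, 128, 255], [64, 64, 64]],
     [[255, 255, 255], [10, 20, 30]]],
    dtype=np.uint8,
)

rescale_factor = 1 / 255  # 0.00392156862745098, as shown in the config above

# do_rescale=True simply multiplies pixel values by rescale_factor, mapping [0, 255] -> [0, 1].
rescaled = image.astype(np.float32) * rescale_factor
print(rescaled.min(), rescaled.max())  # 0.0 1.0
```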
transformers
20,866
closed
Update image processor parameters if creating with kwargs
# What does this PR do? Ensures backwards compatibility with previous feature extractor creation when using `from_pretrained` and `from_dict`. **Updates attribute before instantiation:** Previously, `size` attributes were stored as an int/tuple. This has been updated to a dictionary to reduce ambiguity of whether the int represents the shortest edge, height or width. If the feature extractor / image processor is created with `size` as an int, it is converted to the appropriate dictionary with a `logging.info` message. However, if the image processor is created using `from_pretrained` or `from_dict` with `size` as a kwarg, the class is first instantiated and then the `size` kwarg overwrites the class parameter. In this case, `image_processor.size` is an int, and is not be converted to the correct dictionary format. This PR makes sure the dict creating the instance has the updated value, which is then converted to a dict if necessary. **Renames attribute before instantiation** Some feature extractor instance attributes have been removed when updating to image processors. For example `reduce_labels` became `do_reduce_labels` to ensure naming consistency and `max_size` is now part of the `size` dictionary as `size["longest_edge"]`. In `from_dict`, if the instance doesn't have the attribute, and the attribute is passed in as a kwarg with its old name, the instance won't have it added as an attribute. This PR will update the name of the attribute in `from_dict` to the new name if necessary. Previously: ```python >>> image_processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50", size=600, max_size=800) >>> image_processor DetrImageProcessor { "do_normalize": true, "do_pad": true, "do_rescale": true, "do_resize": true, "feature_extractor_type": "DetrFeatureExtractor", "format": "coco_detection", "image_mean": [ 0.485, 0.456, 0.406 ], "image_processor_type": "DetrImageProcessor", "image_std": [ 0.229, 0.224, 0.225 ], "resample": 2, "rescale_factor": 0.00392156862745098, "size": 600 } ``` Now: ```python >>> image_processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50", size=600, max_size=800) >>> image_processor DetrImageProcessor { "do_normalize": true, "do_pad": true, "do_rescale": true, "do_resize": true, "feature_extractor_type": "DetrFeatureExtractor", "format": "coco_detection", "image_mean": [ 0.485, 0.456, 0.406 ], "image_processor_type": "DetrImageProcessor", "image_std": [ 0.229, 0.224, 0.225 ], "resample": 2, "rescale_factor": 0.00392156862745098, "size": { "longest_edge": 800, "shortest_edge": 600 } } ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
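A minimal sketch of the kind of normalization described above, following the DETR example (an int `size` maps to the shortest edge; other processors map an int to height/width instead). The helper name and exact logic are illustrative, not the code added in the PR:

```python
# Hypothetical helper mirroring the behaviour described above: rewrite legacy kwargs
# into the new dictionary format before the image processor class is instantiated.
def normalize_legacy_kwargs(kwargs: dict) -> dict:
    kwargs = dict(kwargs)
    size = kwargs.get("size")
    max_size = kwargs.pop("max_size", None)
    if isinstance(size, int):
        # An int previously meant "shortest edge"; max_size becomes size["longest_edge"].
        new_size = {"shortest_edge": size}
        if max_size is not None:
            new_size["longest_edge"] = max_size
        kwargs["size"] = new_size
    if "reduce_labels" in kwargs:
        # Renamed attribute: reduce_labels -> do_reduce_labels.
        kwargs["do_reduce_labels"] = kwargs.pop("reduce_labels")
    return kwargs


print(normalize_legacy_kwargs({"size": 600, "max_size": 800}))
# {'size': {'shortest_edge': 600, 'longest_edge': 800}}
```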
12-21-2022 21:53:32
12-21-2022 21:53:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,865
closed
change strings to f-strings in image_processing_utils.py
# What does this PR do? not top priority, it's just that there's a Python string which should be an f-string :) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts
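For illustration, this is the kind of change involved; the actual string and message in `image_processing_utils.py` may differ:

```python
size = {"height": 224, "width": 224}

# Before: without the f-prefix the placeholder is printed verbatim.
message = "size should be a dictionary, got {size}."
# After: with an f-string the value is interpolated as intended.
message = f"size should be a dictionary, got {size}."
print(message)
```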
12-21-2022 19:33:13
12-21-2022 19:33:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,864
closed
Adding support for `fp16` for asr pipeline.
# What does this PR do? Fixes #20862 Many things were considered before settling on this design. - `feature_extractor(return_tensors="pt", torch_dtype=torch_dtype)`. This would have the advantage of being consistent, but not all feature extractors define this, so it would affect all of them. Then why would we use `torch_dtype` instead of the more commonplace `dtype`, which could be applied to TF and Flax as well? Also it feels a bit redundant to specify both `return_tensors` and `torch_dtype`; they would be good candidates to fuse into a single parameter (but that is outside the scope of this PR). - `AutoFeatureExtractor.from_pretrained(..., torch_dtype=torch_dtype)`. This would have the advantage of being set globally, so users don't need to respecify it on each call. However we can't specify `return_tensors="pt"` there either, so for consistency I didn't try to put it there. - `ffmpeg_read(..., dtype=dtype)`. This would be nice, loading the waveform directly in fp16 and just letting fp16 flow through the feature_extractor. However, Whisper in particular uses a mel spectrogram, so using fp16 sound might actually damage performance. In the end, this solution is the simplest I could come up with: let `torch_dtype` flow to the pipeline, use it as a regular parameter, and convert the output of the feature_extractor afterwards. This does incur a potential extra copy, but there's no risk of damaging the quality of the input. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
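A simplified sketch of the chosen design; `preprocess_audio` is an illustrative stand-in for the pipeline's preprocessing step, not the literal diff. The feature extractor still runs in fp32, and only its output tensors are cast to the requested `torch_dtype` afterwards:

```python
import torch

# Illustrative stand-in for the pipeline's preprocess step: extract features in fp32,
# then cast the floating-point tensors to the pipeline's torch_dtype. This is the
# "convert the output of the feature_extractor afterwards" approach described above.
def preprocess_audio(feature_extractor, waveform, sampling_rate, torch_dtype=None):
    processed = feature_extractor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    if torch_dtype is not None:
        # Extra copy, but the mel spectrogram itself is never computed in half precision.
        processed = {
            key: value.to(torch_dtype) if torch.is_floating_point(value) else value
            for key, value in processed.items()
        }
    return processed
```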
12-21-2022 17:54:15
12-21-2022 17:54:15
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,863
closed
Update `HubertModelIntegrationTest.test_inference_keyword_spotting`
# What does this PR do? Our CI has been updated to use torch 1.13 (which also uses CUDA 11.6) instead of torch 1.12 (CUDA 11.3), and this test now fails. The tolerance `2e-2` is not enough anymore, but the test passes with `3e-2`.
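For illustration, this is the kind of tolerance check involved; the values below are made up, not the reference logits from the actual test:

```python
import torch

expected_slice = torch.tensor([6.1186, -0.4740, 1.1060])  # made-up reference values
output_slice = expected_slice + 0.025                      # simulated drift between torch/CUDA versions

assert not torch.allclose(output_slice, expected_slice, atol=2e-2)  # old tolerance fails
assert torch.allclose(output_slice, expected_slice, atol=3e-2)      # relaxed tolerance passes
```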
12-21-2022 17:25:49
12-21-2022 17:25:49
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,862
closed
Run `AutomaticSpeechRecognitionPipeline` with FP16
### Feature request Hi @Narsil, I would like to run inference with `AutomaticSpeechRecognitionPipeline` in FP16 using some large models (e,g, whisper). But I don't believe it's supported in current version (please correct me if I'm wrong here). ### Reproduction Below is a code snippet to reproduce the behavior. ```python import torch from transformers import pipeline pipe = pipeline(model="openai/whisper-base", device=0, torch_dtype=torch.float16) pipe("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac") ``` When running this we see the following stack trace: ``` RuntimeError Traceback (most recent call last) Cell In[1], line 5 2 from transformers import pipeline 4 pipe = pipeline(model="openai/whisper-base", device=0, torch_dtype=torch.float16) ----> 5 pipe("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/1.flac") File ~/transformers/src/transformers/pipelines/automatic_speech_recognition.py:232, in AutomaticSpeechRecognitionPipeline.__call__(self, inputs, **kwargs) 191 def __call__( 192 self, 193 inputs: Union[np.ndarray, bytes, str], 194 **kwargs, 195 ): 196 """ 197 Transcribe the audio sequence(s) given as inputs to text. See the [`AutomaticSpeechRecognitionPipeline`] 198 documentation for more information. (...) 230 `"".join(chunk["text"] for chunk in output["chunks"])`. 231 """ --> 232 return super().__call__(inputs, **kwargs) File ~/transformers/src/transformers/pipelines/base.py:1074, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs) 1072 return self.iterate(inputs, preprocess_params, forward_params, postprocess_params) 1073 else: -> 1074 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File ~/transformers/src/transformers/pipelines/base.py:1096, in ChunkPipeline.run_single(self, inputs, preprocess_params, forward_params, postprocess_params) 1094 all_outputs = [] 1095 for model_inputs in self.preprocess(inputs, **preprocess_params): -> 1096 model_outputs = self.forward(model_inputs, **forward_params) 1097 all_outputs.append(model_outputs) 1098 outputs = self.postprocess(all_outputs, **postprocess_params) File ~/transformers/src/transformers/pipelines/base.py:990, in Pipeline.forward(self, model_inputs, **forward_params) 988 with inference_context(): 989 model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device) --> 990 model_outputs = self._forward(model_inputs, **forward_params) 991 model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device("cpu")) 992 else: File ~/transformers/src/transformers/pipelines/automatic_speech_recognition.py:370, in AutomaticSpeechRecognitionPipeline._forward(self, model_inputs) 364 # we need to pass `processed.get("attention_mask")` here since audio encoder 365 # attention mask length is different from expected text decoder `encoder_attention_mask` length 366 # `generate` magic to create the mask automatically won't work, we basically need to help 367 # it here. 368 attention_mask = model_inputs.pop("attention_mask", None) 369 tokens = self.model.generate( --> 370 encoder_outputs=encoder(inputs, attention_mask=attention_mask), 371 attention_mask=attention_mask, 372 ) 374 out = {"tokens": tokens} 376 else: File ~/anaconda3/envs/asr/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs) 1186 # If we don't have any hooks, we want to skip the rest of the logic in 1187 # this function, and just call forward. 
1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1189 or _global_forward_hooks or _global_forward_pre_hooks): -> 1190 return forward_call(*input, **kwargs) 1191 # Do not call functions when jit is used 1192 full_backward_hooks, non_full_backward_hooks = [], [] File ~/transformers/src/transformers/models/whisper/modeling_whisper.py:654, in WhisperEncoder.forward(self, input_features, attention_mask, head_mask, output_attentions, output_hidden_states, return_dict) 650 output_hidden_states = ( 651 output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states 652 ) 653 return_dict = return_dict if return_dict is not None else self.config.use_return_dict --> 654 inputs_embeds = nn.functional.gelu(self.conv1(input_features)) 655 inputs_embeds = nn.functional.gelu(self.conv2(inputs_embeds)) 657 inputs_embeds = inputs_embeds.permute(0, 2, 1) File ~/anaconda3/envs/asr/lib/python3.8/site-packages/torch/nn/modules/module.py:1190, in Module._call_impl(self, *input, **kwargs) 1186 # If we don't have any hooks, we want to skip the rest of the logic in 1187 # this function, and just call forward. 1188 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1189 or _global_forward_hooks or _global_forward_pre_hooks): -> 1190 return forward_call(*input, **kwargs) 1191 # Do not call functions when jit is used 1192 full_backward_hooks, non_full_backward_hooks = [], [] File ~/anaconda3/envs/asr/lib/python3.8/site-packages/torch/nn/modules/conv.py:313, in Conv1d.forward(self, input) 312 def forward(self, input: Tensor) -> Tensor: --> 313 return self._conv_forward(input, self.weight, self.bias) File ~/anaconda3/envs/asr/lib/python3.8/site-packages/torch/nn/modules/conv.py:309, in Conv1d._conv_forward(self, input, weight, bias) 305 if self.padding_mode != 'zeros': 306 return F.conv1d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode), 307 weight, bias, self.stride, 308 _single(0), self.dilation, self.groups) --> 309 return F.conv1d(input, weight, bias, self.stride, 310 self.padding, self.dilation, self.groups) RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same ``` ### System Info ``` - `transformers` version: 4.26.0.dev0 ``` ### Motivation To accelerate inference and take less memory when using the pipeline with large models ### Your contribution Right now I just force the casting to fp16 after calling `feature_extractor` to make the pipeline run for whisper models (I'm not yet using chunked inference). https://github.com/huggingface/transformers/blob/3090e708577e2d0145ab81d0e2362e3235aebbd9/src/transformers/pipelines/automatic_speech_recognition.py#L338-L340 ```python processed["input_features"] = processed["input_features"].to(self.model.config.torch_dtype) ``` But I understand should better wrap it in another function which considers input names for different models. Willing to make a PR if you can guide me here :)
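A hypothetical helper along the lines suggested above (the function name and the set of input names are assumptions, not transformers API): cast whichever floating-point feature the extractor produced, so the same code path works for Whisper-style `input_features` and Wav2Vec2-style `input_values`.

```python
import torch

# Hypothetical helper (not part of transformers): cast the audio features produced by the
# feature extractor to the requested dtype, whatever the model calls its input tensor.
def cast_audio_features(processed, dtype):
    for key in ("input_features", "input_values"):
        if key in processed and torch.is_floating_point(processed[key]):
            processed[key] = processed[key].to(dtype)
    return processed

# Usage in the spirit of the snippet above, assuming `processed` is the feature extractor output:
# processed = cast_audio_features(processed, torch.float16)
```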
12-21-2022 15:15:06
12-21-2022 15:15:06
Good catch! Thanks for the tip. I started considering a few options for how this could be done, and I ended up doing the PR myself, please review if you can (sorry for doing it, I usually like it when contributors can do it, but as I wasn't sure what exactly should be done and I was exploring, I ended up doing it :D ) https://github.com/huggingface/transformers/pull/20864<|||||>No problem! Thanks for the explanation in the PR! Just left some comments
transformers
20,861
closed
[Past CI] 🔥 Leave Past CI failures in the past 🔥
# What does this PR do? Make Past CI (with `torch 1.8`) cleaner
12-21-2022 14:47:18
12-21-2022 14:47:18
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,860
closed
[`MobileNet-v2`] Fix ONNX typo
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/20856 To determine which models are exportable to ONNX, we first need to pre-process the model type by replacing `_` with `-` before checking with `FeaturesManager`. This was forgotten for the `mobilenet` family of models, which leads to errors such as the one described in #20856. It seems that the proper way to call the method `get_supported_features_for_model_type` is to first apply this pre-processing, as is done [here](https://github.com/huggingface/transformers/blob/d87e381f9303c7d6a8aa7333dc09ed767de7395f/src/transformers/onnx/features.py#L723). This patch has also been added to `tests/onnx/test_onnx_v2.py`. This PR fixes these issues. Also, I would like to hear from @lewtun as it's my first ONNX-related PR. cc @sgugger @ArthurZucker
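As a concrete illustration of the normalization described above (a sketch of the lookup, not the PR's diff):

```python
from transformers import AutoModelForImageClassification
from transformers.onnx import FeaturesManager

model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v2_1.0_224")

# FeaturesManager keys model types with "-" while config.model_type uses "_",
# so the type has to be normalized before the lookup: "mobilenet_v2" -> "mobilenet-v2".
model_type = model.config.model_type.replace("_", "-")
supported_features = FeaturesManager.get_supported_features_for_model_type(model_type)
print(sorted(supported_features))  # the tasks the ONNX export supports for this model type
```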
12-21-2022 10:06:02
12-21-2022 10:06:02
_The documentation is not available anymore as the PR was closed or merged._<|||||>I can confirm the script below works: ``` from transformers import AutoModelForImageClassification from transformers.onnx import FeaturesManager model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v2_1.0_224") model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model) ``` which yields to successfully retrieving the MobileNet ONNX config, however running `optimum-cli export onnx --model google/mobilenet_v2_1.0_224 onnx/` gives an error ``` KeyError: "mobilenet-v2 is not supported yet. Only {'xlm', 'deberta', 'distilbert', 'electra', 'mobilebert', 'm2m-100', 'segformer', 'convbert', 'longt5', 'data2vec-text', 'flaubert', 'gptj', 'detr', 'layoutlmv3', 'mobilevit', 'groupvit', 'levit', 'mbart', 'big-bird', 'albert', 'bloom', 't5', 'swin', 'roberta', 'blenderbot', 'bert', 'yolos', 'marian', 'deit', 'layoutlm', 'perceiver', 'xlm-roberta', 'vit', 'gpt-neo', 'mt5', 'bigbird-pegasus', 'codegen', 'clip', 'whisper', 'data2vec-vision', 'squeezebert', 'convnext', 'deberta-v2', 'ibert', 'roformer', 'blenderbot-small', 'bart', 'beit', 'resnet', 'camembert', 'gpt2'} are supported. If you want to support mobilenet-v2 please propose a PR or open up an issue." ``` Mobilenet is not listed in `optimum.exporters.tasks`, probably by mistake or for some other reason. Opened a PR to support `Mobilenet` in optimum, https://github.com/huggingface/optimum/pull/633 I can confirm the model can be safely exported after checking out this PR! <|||||>Now the PR https://github.com/huggingface/optimum/pull/633 being merged the export works as expected, merging
transformers
20,859
closed
Fix past CI by skipping `LevitModelTest.test_problem_types`
# What does this PR do? Fix past CI by Skipping `LevitModelTest.test_problem_types` for `PyTorch 1.9`. This test failed with torch 1.9 with some CUDA error, but it passes with `torch 1.8` and `torch >= 1.10`. The error is ```bash input = <[RuntimeError('CUDA error: an illegal memory access was encountered\nCUDA kernel errors might be asynchronously repor...incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.') raised in repr()] Tensor object at 0x7fcb37dbd900> weight = <[RuntimeError('CUDA error: an illegal memory access was encountered\nCUDA kernel errors might be asynchronously repor...orrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.') raised in repr()] Parameter object at 0x7fcb36c935c0>, bias = None FAILED tests/models/levit/test_modeling_levit.py::LevitModelTest::test_problem_types - RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)` ```
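A minimal sketch of a version-conditional skip (illustrative; the actual PR may rely on transformers' own test decorators and utilities instead):

```python
import unittest

import torch
from packaging import version

# Parse the installed torch version, ignoring local build tags such as "+cu111".
TORCH_VERSION = version.parse(torch.__version__.split("+")[0])
IS_TORCH_1_9 = version.parse("1.9.0") <= TORCH_VERSION < version.parse("1.10.0")


class LevitModelTest(unittest.TestCase):
    @unittest.skipIf(IS_TORCH_1_9, "Hits a CUDA error on torch 1.9; passes on 1.8 and >= 1.10")
    def test_problem_types(self):
        ...  # the existing test body
```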
12-21-2022 09:57:59
12-21-2022 09:57:59
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,858
closed
Remove more unused attributes in config classes
# What does this PR do? Removes more unused attributes in config classes.
12-21-2022 09:38:22
12-21-2022 09:38:22
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,857
closed
Use `config.num_channels` in CLIP-like modeling files
# What does this PR do? `config.num_channels` is not used in some CLIP-like modeling files. Unlike previous PRs like #20596 or #20844, in this PR we use this attribute in the modeling files. The only breaking case is when a user previously set `config.num_channels=X` with `X != 3`, which is super unlikely IMO. (Even if they did so, the actual Conv2D layer still uses `3`, as it is hard-coded in the current `main` branch.)
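To make the change concrete, here is a minimal sketch of the pattern being fixed. The attribute names follow the usual CLIP-style configs (`num_channels`, `hidden_size`, `patch_size`) and are illustrative, not the exact diff:

```python
import torch
from torch import nn


class PatchEmbeddings(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.projection = nn.Conv2d(
            config.num_channels,        # previously hard-coded as 3
            config.hidden_size,
            kernel_size=config.patch_size,
            stride=config.patch_size,
            bias=False,
        )

    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        # (batch, num_channels, height, width) -> (batch, hidden_size, grid, grid)
        return self.projection(pixel_values)
```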
12-21-2022 08:13:31
12-21-2022 08:13:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,856
closed
transformers.onnx mobilenet_v2 not supported but exists in supported list
### System Info - `transformers` version: 4.25.1 - Platform: Linux-6.0.6-76060006-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.13.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes (using notebooks for training) - Using distributed or parallel set-up in script?: no `KeyError: "mobilenet-v2 is not supported yet. Only ['albert', 'bart', 'beit', 'bert', 'big-bird', 'bigbird-pegasus', 'blenderbot', 'blenderbot-small', 'bloom', 'camembert', 'clip', 'codegen', 'convbert', 'convnext', 'data2vec-text', 'data2vec-vision', 'deberta', 'deberta-v2', 'deit', 'detr', 'distilbert', 'electra', 'flaubert', 'gpt2', 'gptj', 'gpt-neo', 'groupvit', 'ibert', 'imagegpt', 'layoutlm', 'layoutlmv3', 'levit', 'longt5', 'longformer', 'marian', 'mbart', 'mobilebert', 'mobilenet_v1', 'mobilenet_v2', 'mobilevit', 'mt5', 'm2m-100', 'owlvit', 'perceiver', 'resnet', 'roberta', 'roformer', 'segformer', 'squeezebert', 'swin', 't5', 'vision-encoder-decoder', 'vit', 'whisper', 'xlm', 'xlm-roberta', 'yolos'] are supported. If you want to support mobilenet-v2 please propose a PR or open up an issue."` ### Who can help? @amyeroberts @NielsRogge ### Information - [X] My own modified scripts ### Tasks - [x] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoModelForImageClassification from transformers.onnx import FeaturesManager model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v2_1.0_224") model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model) ``` ### Expected behavior Should be able to get model_onnx_config for export.
12-21-2022 06:21:00
12-21-2022 06:21:00
It looks like it's just a typo in the list. cc @younesbelkada or @ArthurZucker if you want to make a quick fix.<|||||>on it!<|||||>Hi @ashim-mahara You should now be able to export mobilenet to ONNX using the main branch of optimum and transformers<|||||>Hi @younesbelkada thank you! that was fast, kudos to you folks!
transformers
20,855
closed
Gradient accumulation trick and Activation Checkpointing feature
### Feature request 1. Adds gradient accumulation trick to https://github.com/huggingface/transformers/blob/main/examples/flax/summarization/run_summarization_flax.py 2. Adds [Activation Checkpointing feature](https://github.com/microsoft/DeepSpeed/issues/2302#issuecomment-1320728107) ### Motivation For the GPU memory issue as well as a faster training process. In the `Your contribution` section below, might I ask whether the extra `if-else` block makes sense, OR do we even need `optax.apply_every()` for gradient accumulation? ### Your contribution The following `jax` code is [modified](https://gist.github.com/buttercutter/34597783d681ce6407ff26ec3b76e56e/49cf1b815fce39ea9d192d5d916a51243e71a2c3#file-run_summarization_flax-py-L913-L919) from the [original huggingface version](https://github.com/huggingface/transformers/blob/main/examples/flax/summarization/run_summarization_flax.py) ``` batch_size_per_update = train_batch_size * training_args.gradient_accumulation_steps # add gradient accumulation if training_args.gradient_accumulation_steps > 1: optimizer = optax.chain( optax.apply_every(batch_size_per_update), optimizer ) # Setup train state state = TrainState.create(apply_fn=model.__call__, params=model.params, tx=optimizer, dropout_rng=dropout_rng) ``` ``` if len(accumulated_gradients) < training_args.gradient_accumulation_steps: accumulated_gradients.append(grad) new_state = state else: grad = jax.tree_multimap(lambda *x: jnp.sum(jnp.stack(x), axis=0), *accumulated_gradients) new_state = state.apply_gradients(grads=grad, dropout_rng=new_dropout_rng) accumulated_gradients = [] ```
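One commonly suggested alternative to `optax.apply_every` for exact gradient accumulation is `optax.MultiSteps` (discussed further in the comments below). A minimal sketch, reusing `training_args`, `model`, `dropout_rng` and the custom `TrainState` from the snippet above; the inner optimizer is illustrative:

```python
import optax

# optax.MultiSteps accumulates gradients for `every_k_schedule` calls to update(),
# and only applies the inner optimizer on the final call (intermediate updates are zeros).
base_optimizer = optax.adamw(learning_rate=1e-4)  # illustrative inner optimizer
optimizer = optax.MultiSteps(
    base_optimizer, every_k_schedule=training_args.gradient_accumulation_steps
)

# MultiSteps exposes the usual init/update interface, so it can be passed as `tx`
# exactly like the plain optimizer in the snippet above.
state = TrainState.create(
    apply_fn=model.__call__, params=model.params, tx=optimizer, dropout_rng=dropout_rng
)
```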
12-21-2022 04:29:52
12-21-2022 04:29:52
cc @sanchit-gandhi <|||||>Hey @buttercutter! It looks like there are two different feature requests going on here! Let's focus on the JAX gradient accumulation one since this more relevant to the 'motivation' and code snippet you've provided. Feel free to open a separate issue for DeepSpeed activation checkpointing. Unfortunately, gradient accumulation in JAX isn't as straightforward as using `optax.apply_every`! If you dig through the source code, you'll actually find that using `apply_every` with a batch size of N/2 and 2 accumulation steps is not necessarily equivalent to not using `apply_every` with a batch size of N. See https://optax.readthedocs.io/en/latest/api.html#optax.apply_every There is an alternative in `optax.MultiSteps`: https://optax.readthedocs.io/en/latest/api.html#optax.MultiSteps. This will give correct gradient equivalence between using gradient accumulation and not using gradient accumulation. However in my experiments, I found it to be not super memory efficient, and consequently quite an unreliable means of using gradient accumulation. For this reason, I took the decision not to add it to the examples scripts. Feel free to experiment with using `optax.MultiSteps` in your code! If you're able to get nice performance, we can explore adding it to the examples scripts! It'd be cool to benchmark the maximum permissible batch size you get without gradient accumulation, and then the maximum effective batch size you get with gradient accumulation! In my experiments, the most memory efficient way of implementing gradient accumulation was to to write a custom loop: https://github.com/sanchit-gandhi/seq2seq-speech/blob/669e51452c396b3b8605c9ac7511da8abe31038f/run_flax_speech_recognition_seq2seq.py#L1352 Now while this is the most memory efficient way, it's the most complicated in terms of code understanding! For this reason, it's also not a good fit for the Transformers examples scripts, which we try and keep as clean and lightweight as possible.<|||||>I am using your [custom loop](https://gist.github.com/buttercutter/34597783d681ce6407ff26ec3b76e56e/35f9ae01f85745143f56a5b049596ebe3c57a145#file-run_summarization_flax-py-L1174) for `train_step()`, but I have the following error: Note: In my code, ` training_args.per_device_gradient_accumulation_steps = 10` , and `training_args.per_device_train_batch_size = 8` and `batch` has shape of `(8, 3600)` ``` Traceback (most recent call last): File "run_summarization_flax.py", line 1338, in <module> main() File "run_summarization_flax.py", line 1264, in main state, train_metric = p_train_step(state, batch) File "/home/moe/.local/lib/python3.8/site-packages/chex/_src/fake.py", line 175, in wrapped_fn output = vmapped_fn(*call_args) File "run_summarization_flax.py", line 1173, in train_step batch = jax.tree_map( File "run_summarization_flax.py", line 1174, in <lambda> lambda x: x.reshape( File "/home/moe/.local/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 793, in _reshape return lax.reshape(a, newshape, None) jax.core.InconclusiveDimensionOperation: Cannot divide evenly the sizes of shapes (8, 8, 3600) and (8, 10, 8, 3600) ``` <|||||>@sanchit-gandhi When I run [your original python script without any modifications](https://github.com/sanchit-gandhi/seq2seq-speech/blob/669e51452c396b3b8605c9ac7511da8abe31038f/run_flax_speech_recognition_seq2seq.py), it gave `free(): invalid pointer` ? 
And when I use [run_librispeech.sh](https://github.com/sanchit-gandhi/seq2seq-speech/blob/2765278c6a37d642d99bda8e52dfc9d8a983b4ed/scripts/seq2seq/run_librispeech.sh) , it gave similar error on `free()`again. ``` sh run_librispeech.sh src/tcmalloc.cc:332] Attempt to free invalid pointer 0x7fc48dd90558 Aborted (core dumped) ```<|||||>@sanchit-gandhi I am not able to use your original python script, hence I proceed with [my own python script with the following slight modification ](https://gist.github.com/buttercutter/34597783d681ce6407ff26ec3b76e56e/revisions#diff-a8b873e9d2d0489c80ac16d9b2dbd0706efea6bc1947ae235eef864ee5c7b050L1175)to get it past the dimension runtime error. Note that the `-1` in the `reshape` operation means that the size of the last dimension will be inferred from the size of `x` and the other dimensions. Hence the following modification will reshape `batch` to have shape `(8, 10, 3600)` ``` # add a first dimension over gradient_accumulation_steps for minibatch slices batch = jax.tree_map( lambda x: x.reshape( training_args.per_device_train_batch_size, training_args.per_device_gradient_accumulation_steps, -1 #*x.shape[1::] ), batch, ) ```<|||||>Hey @buttercutter! Sorry for the late reply here! The shape mismatch error you are experiencing is likely due to a difference in the number of accelerator devices. I purposed my script for a TPU v3-8 (8 devices), whereas it looks like you're testing on a single GPU (1 device). With multiple devices, we shard the data across devices by prepending an extra dimension to the start of the data: `(num_devices, per_device_train_batch_size, input_shape)`. We don't get this extra dimension with one device: since we run everything on a single GPU, there is no need for any data sharding. This is probably the reason for the shape mis-match we are seeing here (your data is of shape `(per_device_train_batch_size, input_shape)`). The workaround with setting `-1` in the reshape operation looks valid in this case! Glad to see the script is working now! Let me know if you encounter any further issues - more than happy to help here!<|||||>Hey @sanchit-gandhi How to properly [modify line 1208 till line 1230 for enabling gradient accumulation trick](https://gist.github.com/buttercutter/34597783d681ce6407ff26ec3b76e56e/4d4b958675c6c8e2f8b988227e2bc5330d8c5312#file-run_summarization_flax-py-L1208-L1230) ? ![image](https://user-images.githubusercontent.com/3324659/210921313-3da8df59-ce18-44b8-bfd9-549347d51f16.png) <|||||>I have turned off `training_args.gradient_checkpointing` option for now because of the following runtime error. Could you also help to advise on this as well ? ``` All the weights of FlaxLongT5ForConditionalGeneration were initialized from the model checkpoint at google/long-t5-tglobal-base. If your task is similar to the task the model of the checkpoint was trained on, you can already use FlaxLongT5ForConditionalGeneration for predictions without further training. Traceback (most recent call last): File "run_summarization_flax.py", line 1340, in <module> main() File "run_summarization_flax.py", line 605, in main model.enable_gradient_checkpointing() AttributeError: 'FlaxLongT5ForConditionalGeneration' object has no attribute 'enable_gradient_checkpointing' ```<|||||>It seems that `AttributeError: 'FlaxLongT5ForConditionalGeneration' object has no attribute 'enable_gradient_checkpointing'` is gone after forced reinstall of transformers library. 
The only issue left is the [gradient accumulation](https://github.com/huggingface/transformers/issues/20855#issuecomment-1373082511)<|||||>@sanchit-gandhi [these code changes](https://gist.github.com/buttercutter/34597783d681ce6407ff26ec3b76e56e/revisions#diff-a8b873e9d2d0489c80ac16d9b2dbd0706efea6bc1947ae235eef864ee5c7b050) at least **bypass** the gradient accumulation runtime error for now. ![image](https://user-images.githubusercontent.com/3324659/211711068-b51c20e3-7eca-4b84-888e-30130b85ab33.png) <|||||>Hey @buttercutter, For such specific questions, it really helps to provide a reproducible code-snippet, such that the maintainer looking into the issue can replicate the error being faced and dig into the code on their end locally. In this case, I created one that uses a ['tiny random' version](https://huggingface.co/sshleifer/bart-tiny-random) of the BART model so that the forward/backward passes are fast, and a ['mini' version](https://huggingface.co/datasets/iohadrubin/mini_xsum) of the XSUM dataset such that the dataset download and preparation time is small: ``` python run_summarization_flax.py \ --output_dir="./" \ --model_name_or_path="sshleifer/bart-tiny-random" \ --tokenizer_name="sshleifer/bart-tiny-random" \ --dataset_name="iohadrubin/mini_xsum" \ --do_train \ --do_eval \ --predict_with_generate \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --overwrite_output_dir \ --max_source_length="64" \ --max_target_length 32 \ ``` I would highly recommend this approach of using tiny/mini versions of the model/dataset when debugging to give a fast feedback loop! Having tiny/mini versions is also good practice when sharing your code, as it allows others to try the code out locally without enormous download and wait times. The easiest thing to do would be to remove all the layer/grad norm logs if you don't need them (L1208-1225). Otherwise, you can follow this fix. Upon inspection, the keys for the `layer_grad_norm` and `layer_param_norm` need to be changed for the BART model to include an extra key. 
The layer grad norm values then need to be made into a `jnp.array`: ```diff logs = { "layer_grad_norm": layer_grad_norm, - "encoder_grad_norm": jnp.linalg.norm(jax.tree_util.tree_leaves(layer_grad_norm["encoder"])), + "encoder_grad_norm": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm["model"]["encoder"]))), - "decoder_grad_norm": jnp.linalg.norm(jax.tree_util.tree_leaves(layer_grad_norm["decoder"])), + "decoder_grad_norm": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm["model"]["decoder"]))), } ``` Here's the full corrected code snippet: ```python # compute gradient norms over all layers, total encoder, total decoder and global for detailed monitoring layer_grad_norm = jax.tree_map(jnp.linalg.norm, grad) logs = { "layer_grad_norm": layer_grad_norm, "encoder_grad_norm": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm["model"]["encoder"]))), "decoder_grad_norm": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm["model"]["decoder"]))), } logs["grad_norm"] = jnp.linalg.norm([logs["encoder_grad_norm"], logs["decoder_grad_norm"]]) # compute parameter norms over all layers, total encoder, total decoder and global for detailed monitoring layer_param_norm = jax.tree_map(jnp.linalg.norm, new_state.params) logs["layer_param_norm"] = layer_param_norm logs["encoder_param_norm"] = jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_param_norm["model"]["encoder"]))) logs["decoder_param_norm"] = jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_param_norm["model"]["decoder"]))) logs["param_norm"] = jnp.linalg.norm([logs["encoder_param_norm"], logs["decoder_param_norm"]]) ``` Hope that helps! <|||||>@sanchit-gandhi `model` key seems not found ? Let me also do some debugging at the same time. 
```python Traceback (most recent call last): File "run_summarization_flax.py", line 1341, in <module> main() File "run_summarization_flax.py", line 1270, in main state, train_metric = p_train_step(state, batch) File "/home/moe/.local/lib/python3.8/site-packages/jax/_src/traceback_util.py", line 162, in reraise_with_filtered_traceback return fun(*args, **kwargs) File "/home/moe/.local/lib/python3.8/site-packages/jax/_src/api.py", line 2253, in cache_miss execute = pxla.xla_pmap_impl_lazy(fun_, *tracers, **params) File "/home/moe/.local/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 974, in xla_pmap_impl_lazy compiled_fun, fingerprint = parallel_callable( File "/home/moe/.local/lib/python3.8/site-packages/jax/linear_util.py", line 303, in memoized_fun ans = call(fun, *args) File "/home/moe/.local/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 1245, in parallel_callable pmap_computation = lower_parallel_callable( File "/home/moe/.local/lib/python3.8/site-packages/jax/_src/profiler.py", line 314, in wrapper return func(*args, **kwargs) File "/home/moe/.local/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 1414, in lower_parallel_callable jaxpr, consts, replicas, parts, shards = stage_parallel_callable( File "/home/moe/.local/lib/python3.8/site-packages/jax/interpreters/pxla.py", line 1321, in stage_parallel_callable jaxpr, out_sharded_avals, consts = pe.trace_to_jaxpr_final( File "/home/moe/.local/lib/python3.8/site-packages/jax/_src/profiler.py", line 314, in wrapper return func(*args, **kwargs) File "/home/moe/.local/lib/python3.8/site-packages/jax/interpreters/partial_eval.py", line 2065, in trace_to_jaxpr_final jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic( File "/home/moe/.local/lib/python3.8/site-packages/jax/interpreters/partial_eval.py", line 1998, in trace_to_subjaxpr_dynamic ans = fun.call_wrapped(*in_tracers_) File "/home/moe/.local/lib/python3.8/site-packages/jax/linear_util.py", line 167, in call_wrapped ans = self.f(*args, **dict(self.params, **kwargs)) File "run_summarization_flax.py", line 1214, in train_step "encoder_grad_norm": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm["model"]["encoder"]))), jax._src.traceback_util.UnfilteredStackTrace: KeyError: 'model' The stack trace below excludes JAX-internal frames. The preceding is the original exception that occurred, unmodified. -------------------- The above exception was the direct cause of the following exception: Traceback (most recent call last): File "run_summarization_flax.py", line 1341, in <module> main() File "run_summarization_flax.py", line 1270, in main state, train_metric = p_train_step(state, batch) File "run_summarization_flax.py", line 1214, in train_step "encoder_grad_norm": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm["model"]["encoder"]))), KeyError: 'model' ```<|||||>@sanchit-gandhi I did a print on `layer_grad_norm`, and it seems that `model` is not one of the key. Could you advise ? 
```python layer_grad_norm = {'decoder': {'block': {'0': {'layer': {'0': {'SelfAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'relative_attention_bias': {'embedding': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '1': {'layer': {'0': {'SelfAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '10': {'layer': {'0': {'SelfAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': 
Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '11': {'layer': {'0': {'SelfAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '2': {'layer': {'0': {'SelfAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '3': {'layer': {'0': {'SelfAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': 
{'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '4': {'layer': {'0': {'SelfAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '5': {'layer': {'0': {'SelfAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '6': {'layer': 
{'0': {'SelfAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '7': {'layer': {'0': {'SelfAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '8': {'layer': {'0': {'SelfAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': 
{'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '9': {'layer': {'0': {'SelfAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'EncDecAttention': {'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '2': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}}, 'final_layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'encoder': {'block': {'0': {'layer': {'0': {'TransientGlobalSelfAttention': {'global_input_layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'global_relative_attention_bias': {'embedding': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'relative_attention_bias': {'embedding': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '1': {'layer': {'0': {'TransientGlobalSelfAttention': {'global_input_layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': 
Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '10': {'layer': {'0': {'TransientGlobalSelfAttention': {'global_input_layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '11': {'layer': {'0': {'TransientGlobalSelfAttention': {'global_input_layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '2': {'layer': {'0': {'TransientGlobalSelfAttention': {'global_input_layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '3': {'layer': {'0': {'TransientGlobalSelfAttention': {'global_input_layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'k': {'kernel': 
Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '4': {'layer': {'0': {'TransientGlobalSelfAttention': {'global_input_layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '5': {'layer': {'0': {'TransientGlobalSelfAttention': {'global_input_layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '6': {'layer': {'0': {'TransientGlobalSelfAttention': {'global_input_layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': 
Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '7': {'layer': {'0': {'TransientGlobalSelfAttention': {'global_input_layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '8': {'layer': {'0': {'TransientGlobalSelfAttention': {'global_input_layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}, '9': {'layer': {'0': {'TransientGlobalSelfAttention': {'global_input_layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'k': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'o': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'q': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'v': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, '1': {'DenseReluDense': {'wi_0': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wi_1': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'wo': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}}}}, 'final_layer_norm': {'weight': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}}, 'lm_head': {'kernel': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}, 'shared': {'embedding': Traced<ShapedArray(bfloat16[])>with<DynamicJaxprTrace(level=0/1)>}} ``` <|||||>Hey @buttercutter, Unless you're really keen for grad/param norms **and** have your logger set-up for this, the cleanest thing to do would be to strip the grad/param norm code 
out of the train step. Otherwise it adds unnecessary computations for results that you won't be analysing! I can't reproduce your code snippet, but it looks like the model you're using has one less `model` key in its params than the dummy one from my code snippet. If you're set on keeping the logging code in, we need to update the dict references accordingly: ```python # compute gradient norms over all layers, total encoder, total decoder and global for detailed monitoring layer_grad_norm = jax.tree_util.tree_map(jnp.linalg.norm, grad) logs = { "layer_grad_norm": layer_grad_norm, "encoder_grad_norm": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm["encoder"]))), "decoder_grad_norm": jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_grad_norm["decoder"]))), } logs["grad_norm"] = jnp.linalg.norm(jnp.array([logs["encoder_grad_norm"], logs["decoder_grad_norm"]])) # compute parameter norms over all layers, total encoder, total decoder and global for detailed monitoring layer_param_norm = jax.tree_util.tree_map(jnp.linalg.norm, new_state.params) logs["layer_param_norm"] = layer_param_norm logs["encoder_param_norm"] = jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_param_norm["encoder"]))) logs["decoder_param_norm"] = jnp.linalg.norm(jnp.array(jax.tree_util.tree_leaves(layer_param_norm["decoder"]))) logs["param_norm"] = jnp.linalg.norm(jnp.array([logs["encoder_param_norm"], logs["decoder_param_norm"]])) ``` <|||||>@sanchit-gandhi I just confirmed that the suggested code changes to properly include `logs["grad_norm"]` and `logs["param_norm"]` actually caused OOM error on TPU. ``` Epoch ... (1/16): 0%| | 0/16 [07:05<?, ?it/s] Traceback (most recent call last): File "run_summarization_flax.py", line 1339, in <module> main() File "run_summarization_flax.py", line 1268, in main state, train_metric = p_train_step(state, batch) ValueError: RESOURCE_EXHAUSTED: Attempting to allocate 382.18M. That was not possible. There are 375.16M free.; (0x0x0_HBM0): while running replica 0 and partition 0 of a replicated computation (other replicas may have failed as well). ```<|||||>That's probably because training is working now and we're managing to run the script past the previous error no? As mentioned, feel free to remove all the logger code if you're not interested in tracking param/grad norms (this will save you a bit of memory). Then you can try reducing your `per_device_train_batch_size` by factors of 2 and increasing `gradient_accumulation_steps` to compensate (i.e. try halving `per_device_train_batch_size` and doubling `gradient_accumulation_steps` until you can run the script without OOMs). We're now into the classic phase of finding a suitable training batch size for our model and accelerator device<|||||>@sanchit-gandhi I had reduced to even the smallest possible value for `per_device_gradient_accumulation_steps=2` with `per_device_train_batch_size=1`, but it still give memory resource exhaustion OOM error. Note: Removing all the logger code you provided earlier cleared this OOM error though.<|||||>Hey @buttercutter! Awesome, if gradient accumulation is working without the logging code it sounds like we're in a good position 🚀 I'll close this issue unless there's anything else regarding grad accumulation you wanted to ask!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,854
closed
XLM-R has extremely low accuracy after fine-tuning on MNLI
### System Info - `transformers` version: 4.25.1 - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17 - Python version: 3.8.15 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @younesbelkada about `xlm-roberta-large` performance on GLUR/MNLI ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Run below command from official example (run_glue.py for text classification): ```bash seed=42 epochs=3 lr=2e-5 max_length=128 batch_size=32 # Parameters for AltCLIP MODEL_NAME_OR_PATH=xlm-roberta-large device=0 for task_name in mnli do if [ $task_name = "mrpc" ]; then epochs=5 fi if [ $task_name = "stsb" ]; then metric=spearmanr elif [ $task_name = "qqp" ] || [ $task_name = "mrpc" ]; then metric=f1 else metric=accuracy fi model_name=${MODEL_NAME_OR_PATH##*/} output_dir=evaluation/$model_name/glue/$task_name/$seed if [ ! -d "$output_dir" ]; then mkdir -p $output_dir else echo "$output_dir does exist" fi CUDA_VISIBLE_DEVICES=$device python glue.py \ --model_name_or_path $MODEL_NAME_OR_PATH \ --task_name $task_name \ --cache_dir cache/$model_name \ --overwrite_cache \ --do_train \ --overwrite_output_dir \ --do_eval \ --do_predict \ --max_seq_length $max_length \ --per_device_train_batch_size $batch_size \ --per_device_eval_batch_size $batch_size \ --evaluation_strategy steps \ --learning_rate $lr \ --num_train_epochs $epochs \ --save_total_limit 2 \ --load_best_model_at_end \ --metric_for_best_model eval_$metric \ --greater_is_better true \ --seed $seed \ --output_dir $output_dir > $output_dir/log.txt 2>&1 done ``` 2. The accuracy is quite low: eval_mnli/acc: 35.44% eval_mnli-mm/acc:35.22% ### Expected behavior Higher performance on MNLI. Running script with `bert-base` or `roberta-base` instead yields around 85+ point.
12-21-2022 02:58:16
12-21-2022 02:58:16
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I think this kind of question is more suited to the `forum`, as it is a discussion rather than a bug.
transformers
20,853
closed
Fix TF generation (especially for `TFMarian`)
# What does this PR do? Fix TF generation (especially for the `TFMarian` generation issue in #18149) Fix #18149
12-20-2022 18:29:42
12-20-2022 18:29:42
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @ydshieh 👋 Thank you for opening this PR, it made me realize a detail that is wrong in *both* frameworks 👀 We know that logprobs is a negative value, and we want to maximize it in beam search (i.e. make it as close to 0 as possible). Since logprobs is always negative, and the final score is the sum of the logprobs, we can anticipate the best possible score and use it to end beam search with no drawback. Well, it turns out that the method to compute the best possible score depends on `length_penalty`, and we are not accounting for that! - Scenario 1, length_penalty > 0.0: In this case, as the sentence grows, the denominator grows as well. This means the score can get closer to 0 (i.e. higher) as the sentence grows, and longer sentences are promoted. In this case, the best possible score can be determined from the maximum sequence length (TF implementation). - Scenario 2, length_penalty < 0.0: In this case, as the sentence grows, the denominator gets smaller. This means the score will get farther away from 0 (i.e. lower) as the sentence grows, and shorter sentences are promoted. In this case, the best possible score can be determined from the current sequence length (PT implementation). On top of this incomplete best score computation on both ends, your PR made me realize that the stopping condition for TF also had a problem (after factoring in the correct length penalty computation, a few tests failed). I'm opening a PR to compare against this one with what I think is the correct solution to this bug 🐛 <|||||>Close in favor of #20901
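For reference, a minimal sketch of the length-penalised score discussed above and of how the best achievable score depends on the sign of `length_penalty`. This is a standalone illustration — the function and variable names are not taken from the library.

```python
# Beam search ranks hypotheses by score = sum_logprobs / (length ** length_penalty),
# where sum_logprobs <= 0. A rough sketch of the "best achievable score" logic:

def best_possible_score(best_sum_logprobs: float, cur_len: int, max_len: int, length_penalty: float) -> float:
    if length_penalty > 0.0:
        # The denominator grows with length, so the score can still improve until
        # the hypothesis reaches the maximum length -> bound with max_len.
        return best_sum_logprobs / (max_len**length_penalty)
    # The denominator shrinks as the hypothesis grows, so the score only gets
    # worse from here on -> bound with the current length.
    return best_sum_logprobs / (cur_len**length_penalty)


# Example: with length_penalty = 1.0, a partial hypothesis with sum_logprobs = -4.0
# at length 8 could still reach -4.0 / 20 = -0.2 if max_len = 20.
print(best_possible_score(-4.0, cur_len=8, max_len=20, length_penalty=1.0))  # -0.2
```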
transformers
20,852
closed
Using TensorFlow XLA with MBart50 will result in a `OperatorNotAllowedInGraphError` error
### System Info I used `pip install transformers>=4.21.0` to upgrade to the latest version. ### Who can help? @gante ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import MBart50Tokenizer, TFMBartForConditionalGeneration model_name = "facebook/mbart-large-50-many-to-many-mmt" model = TFMBartForConditionalGeneration.from_pretrained(model_name, from_pt=True) tokenizer = MBart50Tokenizer.from_pretrained(model_name) # XLA import tensorflow as tf xla_generate = tf.function(model.generate, jit_compile=True) # Translation hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।जीवन एक चॉकलेट बॉक्स की तरह है।जीवन एक चॉकलेट बॉक्स की तरह है।जीवन एक चॉकलेट बॉक्स की तरह है।जीवन एक चॉकलेट बॉक्स की तरह है।" chinese_text = "生活就像一盒巧克力。生活就像一盒巧克力。生活就像一盒巧克力。生活就像一盒巧克力。生活就像一盒巧克力。生活就像一盒巧克力。生活就像一盒巧克力。" tokenizer.src_lang = "hi_IN" encoded_hi = tokenizer([hi_text]*32, padding=True, return_tensors="tf") tokenizer.src_lang = "zh_CN" encoded_zh = tokenizer([chinese_text]*32, padding=True, return_tensors="tf") # translate Hindi to French generated_tokens = xla_generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"]) x = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # translate Chinese to English generated_tokens = xla_generate(**encoded_zh, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"]) y = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) ``` It will result in the following error message: ``` --------------------------------------------------------------------------- OperatorNotAllowedInGraphError Traceback (most recent call last) <timed exec> in <module> /opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 883 884 with OptionalXlaContext(self._jit_compile): --> 885 result = self._call(*args, **kwds) 886 887 new_tracing_count = self.experimental_get_tracing_count() /opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds) 931 # This is the first call of __call__, so we have to initialize. 
932 initializers = [] --> 933 self._initialize(args, kwds, add_initializers_to=initializers) 934 finally: 935 # At this point we know that the initialization is complete (or less /opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to) 758 self._concrete_stateful_fn = ( 759 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access --> 760 *args, **kwds)) 761 762 def invalid_creator_scope(*unused_args, **unused_kwds): /opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs) 3064 args, kwargs = None, None 3065 with self._lock: -> 3066 graph_function, _ = self._maybe_define_function(args, kwargs) 3067 return graph_function 3068 /opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs) 3461 3462 self._function_cache.missed.add(call_context_key) -> 3463 graph_function = self._create_graph_function(args, kwargs) 3464 self._function_cache.primary[cache_key] = graph_function 3465 /opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 3306 arg_names=arg_names, 3307 override_flat_arg_shapes=override_flat_arg_shapes, -> 3308 capture_by_value=self._capture_by_value), 3309 self._function_attributes, 3310 function_spec=self.function_spec, /opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses) 1005 _, original_func = tf_decorator.unwrap(python_func) 1006 -> 1007 func_outputs = python_func(*func_args, **func_kwargs) 1008 1009 # invariant: `func_outputs` contains only Tensors, CompositeTensors, /opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds) 666 # the function a weak reference to itself to avoid a reference cycle. 
667 with OptionalXlaContext(compile_with_xla): --> 668 out = weak_wrapped_fn().__wrapped__(*args, **kwds) 669 return out 670 /opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs) 992 except Exception as e: # pylint:disable=broad-except 993 if hasattr(e, "ag_error_metadata"): --> 994 raise e.ag_error_metadata.to_exception(e) 995 else: 996 raise OperatorNotAllowedInGraphError: in user code: /opt/conda/lib/python3.7/site-packages/transformers/generation_tf_utils.py:590 generate * seed=model_kwargs.pop("seed", None), /opt/conda/lib/python3.7/site-packages/transformers/generation_tf_utils.py:1641 _generate * input_ids, /opt/conda/lib/python3.7/site-packages/transformers/generation_tf_utils.py:2709 beam_search_body_fn * model_outputs = self( /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:703 run_call_with_unpacked_inputs * return func(self, **unpacked_inputs) /opt/conda/lib/python3.7/site-packages/transformers/models/mbart/modeling_tf_mbart.py:1328 call * outputs = self.model( /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:703 run_call_with_unpacked_inputs * return func(self, **unpacked_inputs) /opt/conda/lib/python3.7/site-packages/transformers/models/mbart/modeling_tf_mbart.py:1129 call * decoder_outputs = self.decoder( /opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:703 run_call_with_unpacked_inputs * return func(self, **unpacked_inputs) /opt/conda/lib/python3.7/site-packages/transformers/models/mbart/modeling_tf_mbart.py:948 call * positions = self.embed_positions(input_shape, past_key_values_length) /opt/conda/lib/python3.7/site-packages/transformers/models/mbart/modeling_tf_mbart.py:129 call * bsz, seq_len = input_shape[:2] /opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:520 __iter__ self._disallow_iteration() /opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:513 _disallow_iteration self._disallow_when_autograph_enabled("iterating over `tf.Tensor`") /opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:491 _disallow_when_autograph_enabled " indicate you are trying to use an unsupported feature.".format(task)) OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. ``` ### Expected behavior This blog post shows exactly the same way to use it: https://huggingface.co/blog/tf-xla-generate
12-20-2022 16:33:35
12-20-2022 16:33:35
cc @gante <|||||>Hi @xhluca 👋 I was able to successfully run your example on my end. Can you try to install an updated version of `transformers` to see if it solves the problem? (`pip install -U transformers`)<|||||>Thanks it works now!
transformers
20,851
closed
Supporting `ImageProcessor` in place of `FeatureExtractor` for pipelines
# What does this PR do? ~As a bonus point, it enables `OneFormer` for `image-segmentation`.~ Moved to separate PR. Requires https://github.com/huggingface/transformers/pull/21278 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
12-20-2022 16:33:18
12-20-2022 16:33:18
I don't seem to have direct write access (git is asking for credentials). I opened a PR here: https://github.com/praeclarumjj3/transformers/pull/1.<|||||>Hi @Narsil, please let me know if you need my assistance with the pipeline. I would like to emphasize that we should have a `task_inputs` argument if possible and provide an option to change the task on the model page under the `Hosted Inference API` to demonstrate the task-dynamic nature of a **single** OneFormer model. Thanks! <img width="1152" alt="Screenshot 2022-12-24 at 4 09 46 PM" src="https://user-images.githubusercontent.com/54928629/209432305-793e150b-2dbd-4566-958c-a53ba69c3d75.png"> <|||||>> Hi @Narsil, please let me know if you need my assistance with the pipeline. I would like to emphasize that we should have a `task_inputs` argument if possible and provide an option to change the task on the model page under the `Hosted Inference API` to demonstrate the task-dynamic nature of a **single** OneFormer model. In my modifications (I'll create a PR when this one is merged, the branch already exists) it will be possible to use `subtask` which already exists as a parameter today and will work with oneformer out of the box. On the UI front, I'm not really convinced we should add the complexity of this. This requires either changing segmentation for ALL segmentation models (which most models will only support one form, so it's a source of confusion) or add a new task (and that's a big modification, which imo is not worth it) `panoptic` is just more general than `instance` and `semantic` so it is a sound default IMO. Since the widget is here to be simple, it really seems like a good way to showcase the model's performance. For advanced use cases, using the API with subtask will work, and specific spaces and colab can showcase them further. Just like `text-generation` 's widget doesn't display all the various generation params (which are indeed useful in a lot of cases when tinkering with such a model) I don't think we should display a choice of subtask for this one specific model on the UI. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>cc @Narsil now that OneFormer has been merged, we can update the image segmentation pipeline :)<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Merging since I removed the problematic part.<|||||>HI @amyeroberts @Narsil @sgugger , I notice some dev tests in Optimum that start to fail due to this PR. Notably, this PR break code snippets (that are working on 4.26) as: ```python from transformers import pipeline, AutoModelForImageClassification, AutoFeatureExtractor model_id = "microsoft/resnet-18" model = AutoModelForImageClassification.from_pretrained(model_id) feature_extractor = AutoFeatureExtractor.from_pretrained(model_id) pipe = pipeline(task="image-classification", model=model, feature_extractor=feature_extractor) ``` with error `Exception: Impossible to guess which image processor to use. 
Please provide a PreTrainedImageProcessor class or a path/identifier to a pretrained image processor.` It is only natural to want to pass `feature_extractor` and not `image_processor` from a user perspective, given that a lot of code snippets in README use them: https://huggingface.co/models?pipeline_tag=image-classification&sort=downloads Is this breaking change intended? Given https://github.com/huggingface/transformers/pull/21401 @ydshieh I guess it is?<|||||>`ImageProcessor`s are replacing `FeatureExtractor`s for images (so `FeatureExtractor` will stay, but just for audio). Now the breaking change you've seen is not intended. We should automatically set the image_processor when you send a `FeatureExtractor`. Going forward it's getting extinct, but we should be able to maintain long-term backward compatibility.<|||||>Quick question, any reason you're using this code instead of just `pipeline(model="microsoft/resnet-18")`?<|||||>This line is supposed to fix the backward compatibility: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L795 I created a PR to fix this snippet though.<|||||>Thank you @fxmarty @Narsil for rescuing the backward compatibility, and sorry, the CI didn't detect this edge case while I worked on #21401
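For readers hitting the snippet above, the equivalent call with the image-processor API looks roughly like this (a sketch assuming a `transformers` release that ships `AutoImageProcessor` and accepts `image_processor` in `pipeline(...)`):

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification, pipeline

model_id = "microsoft/resnet-18"
model = AutoModelForImageClassification.from_pretrained(model_id)
# ImageProcessor replaces FeatureExtractor for vision models
image_processor = AutoImageProcessor.from_pretrained(model_id)
pipe = pipeline(task="image-classification", model=model, image_processor=image_processor)
```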
transformers
20,850
closed
Adding `evaluate` to the list of libraries required in generated notebooks
This PR is based on the discussion in [doc-builder/Add custom first cell #50](https://github.com/huggingface/doc-builder/pull/50#issuecomment-1359312952). It modifies the config file that defines the contents of the first cell for the Colab notebooks generated from the doc pages. This change adds `evaluate` to the list of libraries that are installed in the first cell of every generated notebook. Currently, only the `transformers` and `datasets` libraries are installed by default. However, many notebooks also require `evaluate`. See examples: https://huggingface.co/docs/transformers/tasks/sequence_classification https://huggingface.co/docs/transformers/tasks/semantic_segmentation
12-20-2022 15:18:27
12-20-2022 15:18:27
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger No problem! I don't have permissions to merge though :)
transformers
20,849
closed
[Time series] Temporal Fusion Transformer model
# What does this PR do? Adding Temporal Fusion Transformer time series model https://arxiv.org/pdf/1912.09363.pdf
12-20-2022 13:25:41
12-20-2022 13:25:41
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,848
closed
TF AdamWeightDecay fix for 2.11
The TF changelog said that the optimizer had been moved to `tf.keras.optimizer.legacy`, but the true path is `tf.keras.optimizers.legacy`. Because of the conditional in the PR, we didn't notice the error, but it's resolved now! Fixes #20847
12-20-2022 13:10:23
12-20-2022 13:10:23
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,847
closed
Unimplemented error when using AdamWeightDecay in TF
### System Info - `transformers` version: 4.26.0.dev0 - Platform: Linux-4.15.0-200-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.10.1+cu102 (True) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @Rocketknight1 ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Coming from here: #20750. Using the example code but with AdamWeightDecay triggers the error. The code: ```python from transformers import TFAutoModelForSequenceClassification from transformers.optimization_tf import create_optimizer from transformers import AutoTokenizer from tensorflow.keras.optimizers import Adam from datasets import load_dataset import tensorflow as tf import numpy as np dataset = load_dataset("glue", "cola") dataset = dataset["train"] # Just take the training split for now tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") tokenized_data = dict(tokenizer(dataset["sentence"], return_tensors="np", padding=True)) labels = np.array(dataset["label"]) # Label is already an array of 0 and 1 # Load and compile our model model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased") # Lower learning rates are often better for fine-tuning transformers optimizer, _ = create_optimizer(3e-5, 600, 100, weight_decay_rate=0.3) model.compile(optimizer=optimizer, loss='binary_crossentropy') model.fit(tokenized_data, labels) ``` ```python Traceback (most recent call last): File "../test_mirrored.py", line 24, in <module> model.fit(tokenized_data, labels) File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler raise e.with_traceback(filtered_tb) from None File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 52, in quick_execute tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, tensorflow.python.framework.errors_impl.UnimplementedError: Graph execution error: Detected at node 'Cast_1' defined at (most recent call last): File "../test_mirrored.py", line 24, in <module> model.fit(tokenized_data, labels) File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler return fn(*args, **kwargs) File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1650, in fit tmp_logs = self.train_function(iterator) File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1249, in train_function return step_function(self, iterator) File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1233, in step_function outputs = model.distribute_strategy.run(run_step, args=(data,)) File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/engine/training.py", line 1222, in run_step outputs = model.train_step(data) File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1559, in train_step self.optimizer.minimize(loss, 
self.trainable_variables, tape=tape) File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 527, in minimize self.apply_gradients(grads_and_vars) File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/transformers/optimization_tf.py", line 252, in apply_gradients return super(AdamWeightDecay, self).apply_gradients(zip(grads, tvars), name=name, **kwargs) File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1140, in apply_gradients return super().apply_gradients(grads_and_vars, name=name) File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 632, in apply_gradients self._apply_weight_decay(trainable_variables) File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1159, in _apply_weight_decay tf.__internal__.distribute.interim.maybe_merge_call( File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1155, in distributed_apply_weight_decay distribution.extended.update( File "/home/user/bicleaner-ai-trainings/venv/lib/python3.8/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1151, in weight_decay_fn wd = tf.cast(self.weight_decay, variable.dtype) Node: 'Cast_1' 2 root error(s) found. (0) UNIMPLEMENTED: Cast string to float is not supported [[{{node Cast_1}}]] (1) CANCELLED: Function was cancelled before it was started 0 successful operations. 0 derived errors ignored. [Op:__inference_train_function_37329] ``` Setting weight decay to 0.0 does not trigger the error, so I imagine its something with [AdamWeightDecay](https://github.com/huggingface/transformers/blob/d1d3ac94033b6ea1702b203dcd74beab68d42d83/src/transformers/optimization_tf.py#L147). TensorFlow [changelog](https://github.com/tensorflow/tensorflow/releases/tag/v2.11.0) says: > The tf.keras.optimizers.Optimizer base class now points to the new Keras optimizer, while the old optimizers have been moved to the tf.keras.optimizers.legacy namespace. and > Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to tf.keras.optimizer.legacy.XXX (e.g. tf.keras.optimizer.legacy.Adam). > Old optimizer API not found. The new optimizer, tf.keras.optimizers.Optimizer, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer. Could it be related to this? ### Expected behavior Train successfully.
12-20-2022 12:32:41
12-20-2022 12:32:41
Hi @ZJaume, we saw this issue earlier but thought we had fixed it with #20735. I'll investigate now and see if I can reproduce it<|||||>Reproduced. The cause was a typo that's also present in the TF Changelog for 2.11, will push a PR now!<|||||>PR is up at #20848<|||||>@ZJaume Should be fixed now, thanks for the bug report! Let me know if installing the latest version from main doesn't fix your problem.<|||||>Working. Thank you!
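As a hedged illustration of the kind of version guard involved here (not the exact code from `transformers.optimization_tf`), selecting the legacy Keras optimizer base on TF >= 2.11 can look like this:

```python
import tensorflow as tf
from packaging import version

# TF 2.11 moved the old optimizer API to tf.keras.optimizers.legacy (note the
# plural "optimizers", which is where the original typo came from).
if version.parse(tf.__version__) >= version.parse("2.11"):
    AdamBase = tf.keras.optimizers.legacy.Adam
else:
    AdamBase = tf.keras.optimizers.Adam
```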
transformers
20,846
closed
Deprecate `clean_up_tokenization_spaces` for BLOOM
# What does this PR do? Currently in `transformers`: ```python >>> tok.decode(tok.encode("Hello , there")) 'Hello, there' # notice the missing space between "Hello" and "," >>> tok.decode(tok.encode("Hello , there"), clean_up_tokenization_spaces=False) 'Hello , there' ``` In order to prevent issues such as this one: https://huggingface.co/bigscience/bloom/discussions/153#6397907b71eb2455d898e0a4 we suggest adding a warning that encourages users to use `clean_up_tokenization_spaces=False` instead. As the BLOOM tokenizer was developed to be a lossless encoding mechanism, it should make sense to always remove that option IMO; therefore I'm suggesting we deprecate that option from the BLOOM tokenizer. Another option would be to change the default to `True`.
12-20-2022 11:24:13
12-20-2022 11:24:13
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20846). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for your PR! After thinking a little more about it and in terms of user experience, I'm happy to have the warning if you think the use-case is frequent and the default behavior is misleading. However, I'm not too sure about deprecating/updating the value in v5. I think the current behavior isn't necessarily a bug, as the argument to toggle is clearly displayed in the docs (and I have no problem with making it more prominent, such as with the warning). Switching to `False` means that we'll start diverging between BLOOM and other tokenizers (like GPT-2) which work very similarly as of now. I'd be in favor of adding the warning mentioning to toggle it in this PR, and to wait until @sgugger is back so that we have a second opinion on the matter before mentioning that we will move it to `False` by default. Would that be ok for you @thomasw21?<|||||>@LysandreJik Sure! This isn't blocking anything really, the real issue is here: https://github.com/huggingface/text-generation-inference/issues/12 IMO as the tokenizer was build to be lossless, it's weird that by default it isn't. Would it make more sense to move `clean_up_tokenization_spaces` to be in `tokenizer` instead? Something like a special decoder? https://huggingface.co/docs/tokenizers/components#decoders . I understand that this is breaking, but we should be able to slightly migrate to newer setups using deprecation cycles?<|||||>Interesting proposal, WDYT @Narsil?<|||||>I think it's ok to move slowly, but touching `cleanup_tokenization_spaces` and its default are BIG changes. Personally, I think borderline too big to migrate in V5 (it's just a really big change, that's unfortunately probably not worth the effort). That being said, making it modifiable on a tokenizer per tokenizer basis (so updating Bloom alone) is still Ok, and is definitely a good way forward. Personally I would focus on this user's need first, which would be solved by implementing `return_full_text=False`, it seems the lowest hanging fruit to solve the user's need. We can move forward on the "decoder" (or any other type of config change) later. <|||||>Okay so in terms of actions: - [x] Assume it's not a `transformers` bug but a `text-generation-inference` bug right now. https://github.com/huggingface/text-generation-inference/issues/12 - [ ] Start thinking of a way to support `clean_up_tokenization_spaces` and `skip_special_tokens` in the tokenizer directly? Typically you want in order of priority: `user defined argument`, `tokenizer specific config`, `methods default` Would that make sense?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
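As a small usage sketch (assuming the `bigscience/bloom-560m` checkpoint), the lossless round trip is already available today by passing the flag explicitly at decode time:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
ids = tok.encode("Hello , there")

print(tok.decode(ids))                                      # 'Hello, there' (cleanup applied)
print(tok.decode(ids, clean_up_tokenization_spaces=False))  # 'Hello , there' (lossless)
```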
transformers
20,845
closed
[Examples] Update big table
# What does this PR do? This PR updates the "big table of tasks" by - adding a hyperlink to each of the example datasets - add "image pretraining", and the Colab link for semantic segmentation
12-20-2022 10:37:54
12-20-2022 10:37:54
_The documentation is not available anymore as the PR was closed or merged._<|||||>Slightly related, wondering if we shouldn't link from this Big Table to the corresponding task in hf.co/tasks? (cc @merveenoyan)
transformers
20,844
closed
remove unused `use_cache` in config classes
# What does this PR do? `Lilt`, `Longformer` and `Canine` only implement encoder-only task heads (QA, sequence/token classification, etc.), and `use_cache` is not used in their modeling files.
12-20-2022 10:36:41
12-20-2022 10:36:41
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,843
closed
Fix doctest
# What does this PR do? Fixes a bunch of failing doctests
12-20-2022 06:51:43
12-20-2022 06:51:43
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,842
closed
Changes to BART shift_token_right and using the proper shifting index EOS or BOS.
### System Info - `adapter-transformers` version: 3.0.1 - Platform: Linux-4.15.0-72-generic-ppc64le-with-debian-buster-sid - Python version: 3.6.11 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): 2.2.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> Although - this is kinda version independent and general purpose. ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I have been going through the GitHub issues and the history of the [modeling_bart.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py) and I basically found that from PR #9134 , #9135 to #9343 the shift_token_right function was modified to do a bunch of things. For most of "those things" the PR docs and comments are super helpful! (For Instance, the issue with -100 in a label and changing it to the PAD token, and other issues that detail that even if 2 consecutive `PAD PAD` are changed to `PAD` it is fine since the loss does not account for that.) However - one thing that was changed and never described or mentioned in the issues/docs/comments is how after shifting the input right the EOS index used to be appended to the beginning of the shifted input. But after #9343 the BOS index is being appended! I realize that if we're doing BartForConditionalGeneration, it is super rare to pass labels and generate the decoder_input_ids from the labels. And this is mainly for doing MLM. This is where the problem arises. Because example/reproducible projects and models are kinda divided on which shift_tokens_right function to use when preparing the input for BART fine-tuning and inference. Some projects do a simple: `from transformers.models.bart.modeling_bart import shift_tokens_right` and usually that is the newest and current version of shifting the input: ``` def shift_tokens_right_NEW(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int): """ Shift input ids one token to the right. """ shifted_input_ids = input_ids.new_zeros(input_ids.shape) shifted_input_ids[:, 1:] = input_ids[:, :-1].clone() shifted_input_ids[:, 0] = decoder_start_token_id if pad_token_id is None: raise ValueError("self.model.config.pad_token_id has to be defined.") # replace possible -100 values in labels by `pad_token_id` shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) return shifted_input_ids ``` But a lot of projects using BART tend to define and utilize the original shifting code-function: ``` def shift_tokens_right_OLD(input_ids, pad_token_id): """Shift input ids one token to the right, and wrap the last non pad token (usually <eos>).""" prev_output_tokens = input_ids.clone() index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1) prev_output_tokens[:, 0] = input_ids.gather(1, index_of_eos).squeeze() prev_output_tokens[:, 1:] = input_ids[:, :-1] return prev_output_tokens ``` The difference between them is that old version shifts right and appends the EOS token to the beginning. While, the new version appends the BOS token. This has been causing some issues while preparing inputs for my project! 
I just wanted to know - is there a reason why the new code does not use EOS anymore? And - is one correct and one incorrect for Finetuning? Or does it make no difference? I am asking this question from the point-of-view of a seq2seq task so no need for labels and for preparing the target decoder_input_ids for training and fine-tuning. A sample difference between the old and true function can be seen from this example: ``` bos = 0 pad = 100 eos = 1 >>> input_id tensor([ [ 0, 13, 8, 11, 9, 2, 2, 17, 9, 1], [ 0, 14, 10, 7, 6, 10, 3, 1, 100, 100], [ 0, 4, 16, 14, 2, 14, 3, 1, 100, 100], [ 0, 16, 12, 7, 5, 14, 6, 10, 1, 100], [ 0, 3, 12, 7, 9, 1, 100, 100, 100, 100]]) #Here it can be assumed that this is the original random target from the tokenizer after padding and EOS/BOS. #0 = BOS, 1 = EOS, 100 = PAD. Now the prepared target after the old and new shift function will be: >>> shift_tokens_right_OLD(input_id,pad) tensor([ [ 1, 0, 13, 8, 11, 9, 2, 2, 17, 9], [ 1, 0, 14, 10, 7, 6, 10, 3, 1, 100], [ 1, 0, 4, 16, 14, 2, 14, 3, 1, 100], [ 1, 0, 16, 12, 7, 5, 14, 6, 10, 1], [ 1, 0, 3, 12, 7, 9, 1, 100, 100, 100]]) >>> shift_tokens_right_NEW(input_id,pad,bos) tensor([ [ 0, 0, 13, 8, 11, 9, 2, 2, 17, 9], [ 0, 0, 14, 10, 7, 6, 10, 3, 1, 100], [ 0, 0, 4, 16, 14, 2, 14, 3, 1, 100], [ 0, 0, 16, 12, 7, 5, 14, 6, 10, 1], [ 0, 0, 3, 12, 7, 9, 1, 100, 100, 100]]) ``` So the main difference in preparation is the EOS/BOS change in `decoder_input_ids[:,0]` position. ### Expected behavior I was hoping if I could get some guidance on 2 questions: 1) Is there a difference between finetuning/inference with using EOS or BOS index after shifting? from the POV of the BART model? 2) As an extension - is one better than the other and should be preferred? My main reason for this is to somehow combine existing projects/models and the new fine-tuning and models to have the same shift and target preparation approach!
12-20-2022 01:52:00
12-20-2022 01:52:00
I think I have a decent understanding of what is happening, so I am compiling my findings (in case anyone else is confused) and closing this issue. - The decoder_start_token_id is not the BOS token ID for BART; it is the token that starts decoding. If you check the config.json, it is actually set to index 2, which is the EOS token `</s>`. - The previous version of the code searches for the PAD token in the input and treats the index just before it as the position of the EOS token. This way, at the cost of a search, it finds the EOS token and shifts it to the beginning of the shifted decoder_input_ids. - The new version of the code takes the decoder_start_token_id directly, which is 2 (the EOS id) in the BART config, and uses that. Discussion and comments by @sshleifer and the other people who worked on and submitted the BART model can be found here: https://discuss.huggingface.co/t/what-i-know-and-dont-know-about-sequence-to-sequence-batching/1046 https://github.com/huggingface/transformers/issues/7961 https://github.com/huggingface/transformers/issues/5212 https://huggingface.co/facebook/bart-base/blob/main/config.json#L19 (this shows decoder_start_token_id set to index 2)
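For anyone landing here later, a minimal check of the point above (using `facebook/bart-base`, the checkpoint linked above; the printed values are what its config ships with):

```python
from transformers import BartConfig

config = BartConfig.from_pretrained("facebook/bart-base")
print(config.bos_token_id)            # 0  -> <s>
print(config.eos_token_id)            # 2  -> </s>
print(config.decoder_start_token_id)  # 2  -> decoding starts with the EOS token, not BOS
```

So the new `shift_tokens_right` prepends whatever `decoder_start_token_id` is configured, which for the official BART checkpoints is the EOS index, matching what the old function effectively did.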
transformers
20,841
closed
Fix tiny typo
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
12-20-2022 00:35:43
12-20-2022 00:35:43
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,840
closed
Clarify `use_fast` parameter in docstring
This PR addresses the ambiguity of the `use_fast` parameter raised in #20817.
12-19-2022 21:16:22
12-19-2022 21:16:22
_The documentation is not available anymore as the PR was closed or merged._<|||||>> what happens if the architecture supports it but the model doesn't? Hmm, that's a good question! Do you happen to know if any of the other architectures have this issue or if it is just a bug with OPT? I'll remove the suggestion to check the supported framework list so we don't end up confusing anyone.<|||||>It's the first time I've run into this inconsistency across models of the same arch, but I have never needed to ensure it was `fast` before, so who knows, it may have happened a lot and I was none the wiser. So +1 to remove the suggestion unless we somehow can stand behind it.<|||||>This was the bug for OPT, so it shouldn't happen for other models.
transformers
20,839
closed
fix typo output not ouput in bitsandbytes trainer test
# What does this PR do? fixes a typo in the trainer test for bitsandbytes that was causing an error on pytest collection ## Before submitting - [ yes] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
12-19-2022 20:03:41
12-19-2022 20:03:41
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,838
closed
TypeError: TextInputSequence must be str
### System Info - `transformers` version: 4.23.1 - Platform: Linux-4.18.0-305.62.1.el8_4.x86_64-x86_64-with-glibc2.2.5 - Python version: 3.8.8 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. Try to re-run this notebook: https://github.com/suppathak/aicoe-osc-demo/blob/teach-stu/notebooks/demo2/teacher_student_exp.ipynb (Reference: https://github.com/neuralmagic/sparseml/blob/main/integrations/huggingface-transformers/tutorials/sparsifying_bert_using_recipes.md) ### Expected behavior It should run without any error. The notebook was running perfectly until last Friday (3 days ago). When I try to re-run it now, it fails with the error below. Has there been any kind of update in the system? Any helpful feedback is appreciated. Thanks! ![Screenshot from 2022-12-19 05-27-58](https://user-images.githubusercontent.com/30439457/208504300-62a1aa36-23d5-4a5a-bd0c-150cd97acd93.png)
12-19-2022 19:29:57
12-19-2022 19:29:57
Please use the [forums](https://discuss.huggingface.co/) to help debug your code or provide us with a short reproducer we can run. The notebook relies on credentials we do not have.<|||||>Closing the issue. I resolved the error by replacing the dataset: there were some manually added extra values in the old dataset.
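For reference, the underlying cause is usually that the fast tokenizer backend only accepts plain strings, so `None`/NaN values in a text column raise exactly this `TextInputSequence must be str` error. A small sanity check along these lines can surface the offending rows (the file path and column name below are placeholders, not the ones from the notebook):

```python
from datasets import load_dataset

# Placeholders: point this at the actual annotation file / text column used in the notebook.
ds = load_dataset("csv", data_files="annotations.csv")["train"]
bad_rows = ds.filter(lambda example: not isinstance(example["text"], str))
print(f"{len(bad_rows)} rows whose 'text' value is not a string")
```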
transformers
20,837
closed
Avoid collisions in writing metrics via 2 APIs - azureml + mlflow
MLflow tracking API is enabled by default in AzureML and HF MLflow integration is more fully featured. I'd remove the AzureML integration but leaving the current behavior for backwards compatibility (though it should really be removed) # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
12-19-2022 19:11:46
12-19-2022 19:11:46
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger thanks, I suppose I need to get a circleci account first? I can take a look tomorrow or feel free to merge this small change with an account someone already has<|||||>@sgugger I have signed up and connected circleci, and the failed pipeline link doesn't seem to allow me to rerun, should I just bump it with a useless commit to retrigger the checks? <|||||>You can try an empty commit indeed: ``` git commit -m "Trigger CI" --allow-empty ```<|||||>Thanks again for yourcontribution!
transformers
20,836
closed
Remove unused `max_position_embeddings ` in config classes
# What does this PR do? Similar to #20596 and #20554, but here we removed unused `max_position_embeddings`.
12-19-2022 17:59:41
12-19-2022 17:59:41
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,835
closed
[mBART] fix erroneous italics in docstring
# What does this PR do? Corrects tensor dims from italics to code-blocks in the mBART doctoring, as discussed in https://github.com/huggingface/transformers/pull/20787#discussion_r1050620673. The changes as applied to mBART are contained in https://github.com/huggingface/transformers/pull/20835/commits/284e13af61871056f64cbfb7883ea36a4bc70a39. The changes as applied to all other models that inherit using `# Copied from MBart...` are in https://github.com/huggingface/transformers/pull/20835/commits/0a5cd7c66f3be8d8f21dbdb8065cc1d87bd6f405 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
12-19-2022 17:21:23
12-19-2022 17:21:23
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,834
closed
my colab do not load new notebook it has bug
### System Info Hi, I am a university student and I need Colab. Please help me: I cannot open a new notebook in Colab; it does not load. Thanks. [email protected] ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hi, I am a university student and I need Colab. Please help me: I cannot open a new notebook in Colab. ### Expected behavior A new notebook should open, but it does not load. Thanks. [email protected]
12-19-2022 17:09:21
12-19-2022 17:09:21
transformers
20,833
closed
[DETR and friends] Use AutoBackbone as alternative to timm
# What does this PR do? This PR makes it possible to leverage our own backbone classes, like ResNet or Swin Transformer, instead of relying on timm for the following models: - DETR - Conditional DETR - Deformable DETR - Table Transformer This allows people to use these frameworks without having to rely on the timm dependency. I've added an attribute to the config "use_timm_backbone" which is set to True by default, but can be set to False. To do: - [x] fix copies, once design gets approved
12-19-2022 15:30:05
12-19-2022 15:30:05
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@sgugger feel free to approve :)<|||||>Can you make all tests pass before asking for a final review?<|||||>The remaining tests which are failing are due to `make fix-copies`, however I'll only start updating the other models once this design is approved.<|||||>@sgugger thanks for the review, addressed all comments!
transformers
20,832
closed
Fix typing about next_beam_tokens and next_beam_indices
# What does this PR do? Looking at source code they seem to be int not float? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
12-19-2022 13:22:28
12-19-2022 13:22:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20832). All of your documentation changes will be reflected on that endpoint.<|||||>Please make sure to run `make style` on your branch so that the quality tests pass. cc @gante for review.<|||||>Sure, I will do it soon together with another PR in https://github.com/huggingface/transformers/issues/20820<|||||>Please have each of your PR focused on one thing. We don't want to group changes that are not linked to each other in the same PR :-)<|||||>Sure, I mean file two PRs but do the *work* in one single time slot of mine ;) No worries, I am experienced to make PRs (e.g. to Google Flutter https://github.com/flutter/flutter/pulls?q=is%3Apr+author%3Afzyzcjy)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>bump, will do it when having time<|||||>Hi, I wonder what version of `black` is required? I have tried: ``` (dev-transformers) ➜ transformers git:(patch-1) python3 -m black -- examples tests src utils Skipping .ipynb files as Jupyter dependencies are not installed. You can fix this by running ``pip install black[jupyter]`` reformatted src/transformers/utils/model_parallel_utils.py reformatted tests/models/xlm_prophetnet/test_modeling_xlm_prophetnet.py reformatted src/transformers/models/layoutlmv3/tokenization_layoutlmv3.py reformatted src/transformers/models/markuplm/tokenization_markuplm.py reformatted src/transformers/models/layoutlmv2/tokenization_layoutlmv2.py reformatted examples/research_projects/lxmert/modeling_frcnn.py reformatted examples/research_projects/visual_bert/modeling_frcnn.py reformatted src/transformers/models/prophetnet/modeling_prophetnet.py reformatted src/transformers/models/xlm_prophetnet/modeling_xlm_prophetnet.py reformatted src/transformers/models/reformer/modeling_reformer.py reformatted src/transformers/tokenization_utils_base.py All done! ✨ 🍰 ✨ 11 files reformatted, 2138 files left unchanged. ``` and so on. It is changing formats for a dozen of files that I did not touch, such as: ![image](https://user-images.githubusercontent.com/5236035/213868622-34147495-af56-4971-9a33-034ab81c7607.png) ![image](https://user-images.githubusercontent.com/5236035/213868628-1f9884c8-4b54-4862-9a27-b2c0fbdf1488.png) ![image](https://user-images.githubusercontent.com/5236035/213868632-3c24ef72-e74b-4fa4-923c-48c60e11c7ab.png) ![image](https://user-images.githubusercontent.com/5236035/213868664-61ac3b12-03d7-49f2-a00c-4d05d8b196d8.png) (P.S. I did not run `make style` but instead invoke that command directly because my `black` on PATH has a little conflict. 
I have followed https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md and created a brand new conda environment and installed via pip.)<|||||>@fzyzcjy We use black 22.3 (see [here](https://github.com/huggingface/transformers/blob/e1cd78634ae4ba2b0a3d548bd6663c08765a8b4d/setup.py#L101))<|||||>Hmm, it is the correct version ``` python -m black --version python -m black, 22.3.0 (compiled: yes) ```<|||||>@fzyzcjy Suggestion: revert to the first commit (which only touches 2 lines), run `make fixup` (which only touches modified files), then force commit the result :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,831
closed
Fluent API for training arguments
### Feature request Provide a fluent API for defining the training arguments for the Trainer class. Instead of writing:
```
arguments = TrainingArguments(output_dir="output",
    do_train=True,
    do_eval=True,
    evaluation_strategy='epoch',
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=5e-05,
    num_train_epochs=1,
    logging_first_step=True,
    logging_strategy='steps',
    logging_steps=50,
    save_strategy='epoch',
    fp16=True,
)
```
one would be able to write:
```
arguments = TrainingArguments("output").
    evaluate(strategy='epoch', batch_size=16).
    logging(first_step=True, strategy='steps', steps=50)...
```
### Motivation Currently, the arguments submitted to Trainers are defined in a separate class, which is fine. Yet the constructor of that class has a ton of arguments. Some of these arguments naturally stick together. Providing a fluent API, with related arguments provided in one call, would have the following benefits: * One giant call to the constructor would be divided into smaller calls, making the documentation of the method much easier to read; * It would be possible to chain argument construction - i.e. it would be easy to define a default set of options (different from the defaults provided by the library) and then modify them according to some additional requirements, e.g. changing the LR would be just a single call on the predefined arguments object. Currently, it is obtained by changing the value of one of the many arguments, which is harder to spot if that number is large. * Related arguments would be grouped together via a call, rather than a prefix (e.g. logging); * Related arguments could be checked together, e.g. currently `do_evaluate` is ignored if `evaluation_strategy` is not `None`; with a fluent API an exception could be raised if someone sets `evaluate=False` but sets `evaluation_strategy` to some meaningful value (this is actually possible to do currently, but would be much easier to implement if there were a separate call for that). ### Your contribution Implementing the basic fluent API is easy (e.g. each argument has its own corresponding method - no change to the API of the constructor), but does not fulfill all the objectives. Yet I could provide such a PR, as a starting point for a more user-friendly API.
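A minimal sketch of what such a builder could look like as a thin user-side wrapper (the class and method names here are invented for illustration; only the keyword arguments forwarded to `TrainingArguments` are real ones):

```python
from transformers import TrainingArguments


class TrainingArgumentsBuilder:
    """Illustrative fluent wrapper around TrainingArguments; not part of transformers."""

    def __init__(self, output_dir: str):
        self._kwargs = {"output_dir": output_dir}

    def training(self, learning_rate=5e-5, epochs=1, batch_size=16):
        self._kwargs.update(
            learning_rate=learning_rate,
            num_train_epochs=epochs,
            per_device_train_batch_size=batch_size,
        )
        return self

    def evaluation(self, strategy="epoch", batch_size=16):
        self._kwargs.update(evaluation_strategy=strategy, per_device_eval_batch_size=batch_size)
        return self

    def logging(self, strategy="steps", steps=50, first_step=True):
        self._kwargs.update(logging_strategy=strategy, logging_steps=steps, logging_first_step=first_step)
        return self

    def build(self) -> TrainingArguments:
        return TrainingArguments(**self._kwargs)


args = (
    TrainingArgumentsBuilder("output")
    .training(learning_rate=5e-5, epochs=1, batch_size=16)
    .evaluation(strategy="epoch", batch_size=16)
    .logging(strategy="steps", steps=50, first_step=True)
    .build()
)
```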
12-19-2022 11:46:22
12-19-2022 11:46:22
While I understand the idea of grouping related arguments together, the proposed approach is very functional, which is not something we use anywhere in the Transformers library. So this API would be at odds with the rest of Transformers. Happy to explore other ways to group related arguments together however, if you have other ideas.<|||||>That would definitely help in terms of documentation and not having to scroll each time to find a description of the argument. And besides, it would help to shorten the names of the arguments. However, I would add some type of `common` arguments, either in the main constructor or in a dedicated method, because some of the arguments can be shared between different stages, and redefining them in each method could be misleading and a little ambiguous for the library itself, especially if they have different values. It could also help in integrating different argument-passing packages, like `hydra`, in which we can group the arguments in the YAML file, which seems to be a more maintainable solution compared to the built-in `argparse` package. As far as I know, it is widely used at large companies, such as NVIDIA.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I don't understand the argument against that approach, TBH. This is a builder pattern, which is very common in OOP. E.g. `StringBuilder` in Java uses almost the same idea (you modify the object and, as far as I remember, the builder is not returned). It's not functional, since you do not provide a function as an argument (i.e. this is what I understand as "functional"). Anyway, even though that approach is not present in Transformers, maybe it's a good moment to introduce it? Currently, the API and the documentation of the argument objects are harder and harder to use. The arguments are not sorted and there are 96!!! arguments in the TrainingArguments object. I am using the API and I am teaching NLP and OOP. When I introduce the object to my students, they have a very hard time going through and understanding all the options. Many of them are completely unrelated to the training process (from the ML perspective). But the user cannot differentiate between the important and unimportant arguments. So I think the argument classes should be extended with the suggested mechanism or a different mechanism which supports modularisation.<|||||>Hi @apohllo Sorry for the delay on this. Would something like the PR linked above work for you?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,830
closed
Pipeline support for image similarity
### Feature request Given that we have a [tutorial notebook on image similarity](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_similarity.ipynb) and an [upcoming blog post](https://github.com/huggingface/blog/pull/663), and given the usefulness of the use case, it's time we added a pipeline for this task. ### Motivation Image similarity is an important use case in the industry. ### Your contribution Happy to contribute the pipeline. The following describes some of the design decisions I had in mind for this pipeline. By default, we provide the [most downloaded image classification model](https://huggingface.co/models?pipeline_tag=image-classification&sort=downloads) (trained on ImageNet-1k). Image inputs to the `__call__()` of the pipeline would be similar to an [`ImageClassificationPipeline`](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.ImageClassificationPipeline), except that the input needs to be a list of two images / URLs, etc. We return a matrix quantifying the similarity scores (cosine similarity) between all the input images. We might also want to provide recommendations to users of this pipeline. For example, the input images would need to be provided in accordance with the provided model. If you're using a model that was pre-trained / fine-tuned on medical images, then there's no point in passing images of cats and dogs to compute similarity over. Related: https://github.com/huggingface/hub-docs/issues/572
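For reference, a rough sketch of what such a pipeline would do under the hood (embeddings from a vision backbone plus cosine similarity); the checkpoint and the use of the CLS embedding are only illustrative, not a final design decision:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

checkpoint = "google/vit-base-patch16-224-in21k"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

urls = [
    "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png",
    "http://images.cocodataset.org/val2017/000000039769.jpg",
]
images = [Image.open(requests.get(url, stream=True).raw).convert("RGB") for url in urls]

inputs = processor(images=images, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

embeddings = outputs.last_hidden_state[:, 0]  # one CLS embedding per image
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
print(embeddings @ embeddings.T)  # pairwise cosine-similarity matrix
```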
12-19-2022 11:36:28
12-19-2022 11:36:28
Ccing @NielsRogge @osanseviero @nateraw <|||||>We don't currently have a `text-similarity`/`sentence-similarity` pipeline either, right? I think for that task, folks use the `feature-extraction` pipeline to get embeddings, then just compute the similarity. [Here's an example.](https://huggingface.co/optimum/sbert-all-MiniLM-L6-with-pooler) So, with that in mind, maybe the pipeline could be an equivalent `image-feature-extraction` for vision? Unfortunately, the name '<modality>-feature extractor' is quite confusing since that's what the image processing utils are called still (I think?).<|||||>> So, with that in mind, maybe the pipeline could be an equivalent `image-feature-extraction` for vision? Yes, let's do that! > Unfortunately, the name '-feature extractor' is quite confusing since that's what the image processing utils are called still (I think?). @amyeroberts worked on porting the image feature extractors to `***ImageProcessor` ([example](https://huggingface.co/docs/transformers/model_doc/vit#transformers.ViTImageProcessor)). We also throw a warning when users cal `XXXFeatureExtractor` from the library. With that in mind, `image-feature-extraction` does seem alright to me. <|||||>Even with it being legacy, I'm slightly concerned this may become confusing to some users. I'll let some others weight in here! I can live with `image-feature-extraction` if nobody else vetos <|||||>Afaik `feature-extraction` already works for image feature extraction as well (see https://huggingface.co/google/vit-base-patch16-224-in21k for example)<|||||>> Afaik `feature-extraction` already works for image feature extraction as well (see https://huggingface.co/google/vit-base-patch16-224-in21k for example) I need to try it out to verify if it works with the feature extraction pipeline. Will confirm soon. <|||||>Yes I'm not sure there's a need for a new `image-feature-extraction` pipeline, one can leverage the `feature-extraction` pipeline already<|||||>I verified the feature-extraction pipeline, and it seems like it always assumes the preprocessing will use a tokenizer as opposed to an image processor: ```py --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-10-55a624159a32> in <module> 1 image_one = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png" 2 ----> 3 image_feature_extractor(image_one) 3 frames /usr/local/lib/python3.8/dist-packages/transformers/pipelines/feature_extraction.py in __call__(self, *args, **kwargs) 103 A nested list of `float`: The features computed by the model. 
104 """ --> 105 return super().__call__(*args, **kwargs) /usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py in __call__(self, inputs, num_workers, batch_size, *args, **kwargs) 1072 return self.iterate(inputs, preprocess_params, forward_params, postprocess_params) 1073 else: -> 1074 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) 1075 1076 def run_multi(self, inputs, preprocess_params, forward_params, postprocess_params): /usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py in run_single(self, inputs, preprocess_params, forward_params, postprocess_params) 1078 1079 def run_single(self, inputs, preprocess_params, forward_params, postprocess_params): -> 1080 model_inputs = self.preprocess(inputs, **preprocess_params) 1081 model_outputs = self.forward(model_inputs, **forward_params) 1082 outputs = self.postprocess(model_outputs, **postprocess_params) /usr/local/lib/python3.8/dist-packages/transformers/pipelines/feature_extraction.py in preprocess(self, inputs, **tokenize_kwargs) 77 def preprocess(self, inputs, **tokenize_kwargs) -> Dict[str, GenericTensor]: 78 return_tensors = self.framework ---> 79 model_inputs = self.tokenizer(inputs, return_tensors=return_tensors, **tokenize_kwargs) 80 return model_inputs 81 TypeError: 'NoneType' object is not callable ``` The design choice seems reasonable in case image feature extraction was not considered. [Here's](https://colab.research.google.com/gist/sayakpaul/4782955c210397dfcc5306f028cbad3d/feature_extraction_image_similarity.ipynb) my Colab Notebook. @nateraw @NielsRogge @osanseviero <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this as we internally deprioritized this pipeline.
transformers
20,829
closed
[Swin2SR] Add doc tests
# What does this PR do? This PR adds Swin2SR to the doc tests.
12-19-2022 11:12:06
12-19-2022 11:12:06
_The documentation is not available anymore as the PR was closed or merged._<|||||>Yeah I remember a discussion with @patrickvonplaten and @sanchit-gandhi, I used `Swin2SRImageProcessor` here instead of the Auto class to make it more explicit. But happy to change. For me, the Auto API is handy when a model doesn't have its own preprocessing class, and for usage in the pipelines.<|||||>Yes I'm with @LysandreJik here! The conclusion was to move towards `AutoProcessor`/`AutoTokenizer` in the docs (_c.f._ the compelling argument from @LysandreJik https://huggingface.slack.com/archives/C01N44FJDHT/p1667824904971239?thread_ts=1667816128.702279&cid=C01N44FJDHT).
transformers
20,828
closed
GPT Neo - no attention weights scaling in pytorch implementation of GPT Neo
It seems that there is no scaling of the attention weights in the GPT-Neo implementation of self-attention. Probably https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L188 should be modified as follows:
```
attn_weights = torch.matmul(query, key.transpose(-1, -2))
scale = self.head_dim ** -0.5
attn_weights = attn_weights * scale
```
12-19-2022 10:39:49
12-19-2022 10:39:49
Do you have a source available that says they do use scaling here? Though it is common practice, I don't think it is necessarily required.<|||||>Yes, it seems you are right. Originally they use the mesh_tensorflow implementation of multi-head self-attention, in which I don't see such scaling. But at the same time, the Flax implementation (https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neo/modeling_flax_gpt_neo.py#L230) uses flax.linen.attention.dot_product_attention_weights, and in its source code there is scaling. But I don't think that this inconsistency is crucial.
transformers
20,827
closed
add: task guide on video classification model fine-tuning.
This PR adds a task guide on fine-tuning video classification models.
12-19-2022 10:08:51
12-19-2022 10:08:51
@nateraw I leveraged the video classification pipeline with the custom fine-tuned model and a URL from the [UCF-101 subset](https://huggingface.co/datasets/sayakpaul/ucf101-subset) (`avi` format). It worked like a charm! 🔥<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@MKhalusova pushed a few changes. Will add another commit adding you as a co-author when you're back online. <|||||>@amyeroberts thank you! Addressed all your comments. <|||||>@sgugger one small pending part is https://github.com/huggingface/transformers/pull/20827#discussion_r1061285987. Once it's resolved, we should be good to merge. <|||||>Updated the links. Will wait for the tests to pass and will merge afterward.
transformers
20,826
closed
Add-warning-tokenizer
# What does this PR do? Adds a warning when the user wants to use a fast tokenizer but it doesn't exist. Should help with #20817
12-19-2022 10:07:36
12-19-2022 10:07:36
_The documentation is not available anymore as the PR was closed or merged._<|||||>Should I just change the argument? I'm in favor of raising an error rather than a warning here, WDYT? <|||||>No, I was talking about the docstring, but this is actually addressed by another PR. We can't suddenly raise an error for this behavior as it would be breaking.
transformers
20,825
closed
[`FSMT`] Make it compatible with `xxxForConditionalGeneration` models
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/20824 The `FSMT` model is an encoder-decoder model. Most `xxxForConditionalGeneration` models can use `inputs_embeds` and `decoder_inputs_embeds` as replacements for `input_ids` and `decoder_input_ids`, respectively. `FSMT` does not implement this functionality, which breaks some assumptions made by external libraries / APIs; an issue has been flagged and reported in https://github.com/inseq-team/inseq/issues/153 This PR fixes this behavior by adding `inputs_embeds` and `decoder_inputs_embeds` support for `FSMT` to make it consistent with other `xxxForConditionalGeneration` models. The PR also adds `get_encoder` and `get_decoder` methods for `FSMT`, following the implementation of `T5`. `self.embed_positions(input_ids)` and `self.embed_positions(inputs_embeds[:, :, 0])` are equivalent, as the positional embedding is computed with respect to the hidden states shape. But I added an extra check by assuming that all-zero hidden states correspond to padding tokens, as this information is needed later by the position embedding layer. All FSMT slow tests pass.
12-19-2022 09:58:29
12-19-2022 09:58:29
I think that `inputs_embed` cannot be added as T5, since [positional embeddings needs to be computed too](https://github.com/huggingface/transformers/blob/ecd7de3dff7ea5713004b2f05e3869c24b8eb6e2/src/transformers/models/fsmt/modeling_fsmt.py#L721), and this requires having `input_ids`, except if the user can pass `inputs_position_embeds` too <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @gsarti Here is an attempt to fix #20824 , could you double check if this fixes your root issue I ran: ``` import inseq model = inseq.load_model("facebook/wmt19-en-de", "integrated_gradients") out = model.attribute( "The developer argued with the designer because her idea cannot be implemented.", n_steps=100 ) out.show() ``` but getting an error which is different from the one described in https://github.com/inseq-team/inseq/issues/153 ``` ValueError: Both `max_new_tokens` and `max_length` have been set but they serve the same purpose -- setting a limit to the generated output length. Remove one of those arguments. Please refer to the documentation for more information. (https://huggingface.co/docs/transformers/main/en/main_classes/text_generation) ```<|||||>The error you see is a problem of Inseq due to force-setting `max_new_tokens` for generation, you can bypass it by passing a custom `generated_text` argument to `model.attribute` so that no call to model.generate is performed, but only forwards for the attribution. Trying this code now: ```python import inseq model = inseq.load_model("facebook/wmt19-en-de", "integrated_gradients") out = model.attribute( "The developer argued with the designer because her idea cannot be implemented.", "Hallo Welt ich bin Gabriele", n_steps=100 ) out.show() ``` I still encounter an error: ```shell /usr/local/lib/python3.8/dist-packages/transformers/models/fsmt/modeling_fsmt.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, use_cache, output_attentions, output_hidden_states, inputs_embeds, inputs_position_embeds, decoder_inputs_embeds, decoder_inputs_position_embeds, return_dict) 1090 decoder_padding_mask, causal_mask = None, None 1091 -> 1092 assert decoder_input_ids is not None 1093 1094 if encoder_outputs is None: AssertionError: ``` @younesbelkada Is this check needed now? I presume extra tests for the forward call using embeddings as inputs would fail, too, if they were present!<|||||>Thank you very much @gsarti for double checking, I adapted the assert condition based on your suggestion, getting now an error since `inputs_position_embeds` needs to be passed too. I think the next fix should go on `inseq` side to support sending `inputs_position_embeds` too<|||||>Is it standard to have `inputs_position_embeds` as forward inputs? Don't recall seeing them in other models. In principle if the positions are just created from sinusoidals they could be omitted and added in the forward itself, right?<|||||>Thanks for the heads up @gsarti , you are correct, I overlooked at the code and thought the positional embedding was using nn.Embedding, I can confirm the script you sent me above runs now!<|||||>You can refer to other models for the usual docstring that we write for `input_ids` etc. 
Let's keep consistency wherever we can<|||||>I will try to find time to review tomorrow, but I also wanted to point out @patil-suraj's sync https://github.com/huggingface/transformers/pull/11218 which never got merged, but might be a useful reference as I think the same work was done there.
transformers
20,824
closed
FSMT compatibility issues with other `ForConditionalGeneration` models
### Description The implementation of FSMT models like [`facebook/wmt19-en-de`](https://huggingface.co/facebook/wmt19-en-de) is atypical with respect to different aspects that are normally supported in other `ForConditionalGeneration` models. In particular: - Both `FSMTModel` and `FSMTForConditionalGeneration` lack utility methods like `get_encoder` and `get_decoder` - `FSMTForConditionalGeneration` and all its subclasses do not accept `inputs_embeds` and `decoder_inputs_embeds` as possible alternatives to `input_ids` and `decoder_input_ids` for the forward pass. In particular, this renders the model unusable when using feature attribution methods through the `inseq` library (see related issue inseq-team/inseq#153). ### Expected behavior We would expect all models belonging to the same family to expose a consistent API for external usage. ### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NA - Using distributed or parallel set-up in script?: NA ### Who can help? @ArthurZucker @younesbelkada
12-19-2022 09:44:01
12-19-2022 09:44:01
transformers
20,823
closed
[OPT] Adds `GPT2TokenizerFast` to the list of tokenizer to use for OPT.
# What does this PR do? Addresses the issue with OPT where `use_fast=True` does not use the fast GPT2 tokenizer. A follow-up PR should add a warning when the fast tokenizer is not available. This should allow people to do:
```python
>>> from transformers import AutoTokenizer
>>> tok = AutoTokenizer.from_pretrained("facebook/opt-1.3b", use_fast=True)
>>> tok.is_fast
True
```
12-19-2022 09:22:50
12-19-2022 09:22:50
_The documentation is not available anymore as the PR was closed or merged._<|||||>The previous tests were passing but the tokenizer was `slow` where it should have been `fast` 😅 <|||||>a gentle ping here, as the m4 group needs all official OPT models to support fast tokenizers. Thank you, @ArthurZucker!<|||||>Merging ASAP.<|||||>cc @ydshieh the failing tests are related to the length of the dictionary of the tokenizer. Spaces are encoded to `222`, which is then passed to the model, while the vocab and embeddings are smaller. I am probably gonna skip it, WDYT?<|||||>> cc @ydshieh the failing tests are related to the length of the dictionary of the tokenizer. Spaces are encoded to `222`, which is then passed to the model, while the vocab and embeddings are smaller. I am probably gonna skip it, WDYT? Haven't looked at this in much detail, but it looks like some bug exists in fast tokenizers. If this is really the case, I am not sure why we want to go ahead and enable the fast tokenizers (by skipping the failing tests). But if I am missing any context and saying nonsense, please correct me.<|||||>The problem is not from the fast tokenizer (it is a GPT2 tokenizer) but the tiny config test. GPT2TokenizerFast is pretty much foolproof at this point. I was just wondering if you have a quick fix for this (as I said, the tokenizer's vocab length is `258` while the tiny model expects no more than `99`, which is causing this issue). This just means that the initialisation of the tiny tokenizer is not correct for this test (it is using OPT-350m). <|||||>Sir, the PR is based on a quite old commit on `main` (2 weeks ago). Could you rebase (well, better to use merge as you have already done `merge` previously) on `main`, and see if things go better/well. FYI, the pipeline testing has some (non-trivial) change(s) in the merged PR #20426. Hint: it's always nice to rebase (or any way you prefer) to have new commits on `main` in a PR.<|||||>we can also re-do the tiny tokenizers if they don't conform with the needs of the CI.<|||||>As @ydshieh said it might have been fixed; the problem is that the CI is still kind of stuck/not working ... not really sure if it is only this PR, but otherwise it should be good to merge<|||||>- pipeline test is fine - torch test failed with `test_save_load_fast_init_to_base ` which is known to be flaky. @ArthurZucker Could you check the other failing tests - probably they are just flaky ones ..? <|||||>I will run them locally 😉 <|||||>Good to go! <|||||>cc @stas00 sorry for the long wait<|||||>Thank you very much for taking care of this, Arthur!
transformers
20,822
open
Train mobileBERT from scratch for other languages
### Model description Hi, I am thinking of training a mobileBERT model from scratch for the German language. Can I take the [English mobileBERT model from Hugging Face](https://huggingface.co/google/mobilebert-uncased) and apply it to a dataset in another language? It seems to me that I would have to swap the teacher model of mobileBERT for a BERT model of the corresponding language. Unfortunately, I could not find a parameter to change the teacher model. Are there any other ideas on how best to train a mobileBERT model for another language? Many greetings and many thanks! ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
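Not an official answer, but as a starting point: the English checkpoint is tied to an English vocabulary, so for German you would typically train (or reuse) a German WordPiece tokenizer and instantiate MobileBERT from a fresh config; the teacher/distillation setup from the paper is a separate training recipe and is not exposed as a config parameter. A minimal sketch of the "from scratch" part (the tokenizer checkpoint is only an example):

```python
from transformers import AutoTokenizer, MobileBertConfig, MobileBertForMaskedLM

# Reuse an existing German WordPiece vocab for illustration; in practice you may
# train your own tokenizer on your German corpus.
tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")

config = MobileBertConfig(vocab_size=tokenizer.vocab_size)
model = MobileBertForMaskedLM(config)  # randomly initialized, ready for German MLM pre-training
print(model.num_parameters())
```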
12-19-2022 09:11:02
12-19-2022 09:11:02
Hi. I am interested in working on this.
transformers
20,821
closed
Write inference evaluation
### Feature request This feature request is strongly inspired by T5X. They write a log every time they do an evaluation. The log is a JSON Lines file that is saved in inference_eval/task-1000.jsonl, where "task" is the current task and "1000" is the checkpoint. The file is generic and looks like this:
```json
{
  "input": {
    "inputs_pretokenized": "Hello World",
    "inputs": [###,###,###],
    "targets_pretokenized": "Hallo verden",
    "targets": [###,###,###]
  },
  "target": "Hallo verden",
  "output": "Hei verden",
  "prediction": "Hei verden"
}
```
Long inputs, like audio and images, will typically be truncated. ### Motivation This simply makes debugging a lot easier. The jsonl format makes it really easy to open this in another program for a more thorough study. Since the same examples are predicted at every step, it makes it really easy to follow the development of a single target. ### Your contribution I would be glad to give feedback on such an implementation. I am not exactly sure where this feature should be implemented today.
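In case it helps the discussion, this can already be approximated on the user side with a `TrainerCallback`; a rough sketch for a seq2seq model, with file naming following the T5X convention above (pass it via `Trainer(..., callbacks=[...])`):

```python
import json
import os

from transformers import TrainerCallback


class InferenceEvalWriter(TrainerCallback):
    """Sketch: dump predictions for a fixed set of examples at every evaluation."""

    def __init__(self, tokenizer, prompts, targets, output_dir="inference_eval"):
        self.tokenizer = tokenizer
        self.prompts = prompts
        self.targets = targets
        self.output_dir = output_dir
        os.makedirs(output_dir, exist_ok=True)

    def on_evaluate(self, args, state, control, model=None, **kwargs):
        path = os.path.join(self.output_dir, f"task-{state.global_step}.jsonl")
        with open(path, "w") as f:
            for prompt, target in zip(self.prompts, self.targets):
                inputs = self.tokenizer(prompt, return_tensors="pt").to(model.device)
                output_ids = model.generate(**inputs, max_new_tokens=64)
                prediction = self.tokenizer.decode(output_ids[0], skip_special_tokens=True)
                record = {"input": prompt, "target": target, "prediction": prediction}
                f.write(json.dumps(record) + "\n")
```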
12-19-2022 09:00:29
12-19-2022 09:00:29
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,820
closed
(Will make a PR) `BeamScorer` is super slow and takes 2x time compared with model itself, and can speed up by 1000%
### System Info - `transformers` version: 4.25.1 - Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.23 - Python version: 3.10.0 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run `.generate(num_beams=3)` thus use beam search. ### Expected behavior Should be fast, but it is super slow. I am working on a fix and will soon make a PR (today?). Just want to open this issue first so that I can get some early feedbacks (e.g. do you welcome PRs?). One cause I realize is that, the BeamScorer.process etc is working on torch tensors on gpu, *one by one*. I refactored it so it works on numpy arrays on cpu, and it is 3x faster. --- If you are interested, here are some *very early* results: before (scorer takes 7.37s) ![image](https://user-images.githubusercontent.com/5236035/208365686-8d00f36a-364a-4310-a4fc-a2f71a7015b7.png) after (scorer 2.72s, still slow but better) ![image](https://user-images.githubusercontent.com/5236035/208366916-b3773eb1-1f4d-4d9d-be9d-ea2a286b7b1e.png)
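For anyone who wants to reproduce and inspect the overhead themselves, a self-contained profiling snippet (the small model is only to keep the snippet quick; the relative share of `BeamSearchScorer.process` grows with `num_beams` and sequence length):

```python
import cProfile
import pstats

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
inputs = tokenizer("translate English to German: Hello world", return_tensors="pt")

profiler = cProfile.Profile()
profiler.enable()
model.generate(**inputs, num_beams=3, max_new_tokens=64)
profiler.disable()

pstats.Stats(profiler).sort_stats("cumtime").print_stats(20)
```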
12-19-2022 06:52:52
12-19-2022 06:52:52
cc @gante <|||||>Update: I made a vectorized version of BeamScorer. ## Performance results: 1000% faster <details> ### MyBeamSearchScorer: 0.45s ![image](https://user-images.githubusercontent.com/5236035/208571326-1502c8a3-fc7d-429a-a3c5-3addf8aab556.png) ### BeamSearchScorer: 5.72-6.28s ![image](https://user-images.githubusercontent.com/5236035/208571462-fdd1ce15-7eff-4aee-ab20-26a028d707d5.png) </details> ## Code Quite messy currently, but the core is beam_search.py and the rest are glues and benchmarks https://gist.github.com/fzyzcjy/fab4bf82c62f23b3432123c84f14a2c6<|||||>If you are interested please tell me and I will make a PR :)<|||||>Hey @fzyzcjy 👋 We thrive on external contributions, so you're more than welcome to open a PR. In general, these would be the requirements: 1. Has a significant speedup on either CPU or GPU 2. All existing tests pass 3. The code maintains its readability, such that beam search is easy to understand Looking forward to the PR :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>bump, will submit PR when having time (indeed all code are already completed - see link above, just no time to sit down and PR)<|||||>PR: https://github.com/huggingface/transformers/pull/21234<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,819
closed
Add `min_new_tokens` argument in generate() implementation
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #20756 #20814 #20614 (cc @gonced8 @kotikkonstantin) As many said, it is better to add an argument `min_new_tokens` to the `.generate()` method to limit the length of newly generated tokens. The current parameter `min_length` limits the length of `prompt + newly generated tokens`, not the length of `newly generated tokens`. It seems that all tests are passed ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. - @gante <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
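Usage would look like the following (`min_new_tokens` is the argument name proposed in this PR; gpt2 is just an example checkpoint):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello, my name is", return_tensors="pt")

# `min_length` counts prompt + generated tokens, whereas `min_new_tokens`
# (added by this PR) only constrains the newly generated ones.
outputs = model.generate(**inputs, min_new_tokens=20, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```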
12-19-2022 03:03:06
12-19-2022 03:03:06
_The documentation is not available anymore as the PR was closed or merged._<|||||>(Note: this PR depends on the resolution of https://github.com/huggingface/transformers/issues/20814, so I'm waiting for it before I review this one)<|||||>> (Note: this PR depends on the resolution of #20814, so I'm waiting for it before I review this one) Great, let me know if there is anything I can help with.<|||||>Sorry that I have accidentally rebased my PR to include the change of #20892. It seems that my operation will link this PR to all previous commits that have been merged to main. I think it would be more convenient to make a new PR from the main branch. So I am closing this one. @gante <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20819). All of your documentation changes will be reflected on that endpoint.<|||||>I have made a new PR #21044
transformers
20,818
closed
[clip] fix error message
This PR is fixing the error message: ``` You have to specify either input_ids ``` which in the code doesn't have any `either` options that I can see, so probably it was there by mistake. @sgugger
12-18-2022 22:37:50
12-18-2022 22:37:50
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,817
closed
`AutoTokenizer` not enforcing `use_fast=True`
This issue is about `AutoTokenizer` not enforcing `use_fast=True`. This works: ``` $ python -c "from transformers import AutoTokenizer; t=AutoTokenizer.from_pretrained('facebook/opt-13b', use_fast=True); \ assert t.is_fast, 'tokenizer is not fast'; print('Success')" Success ``` now the same code, but a different model 'facebook/opt-1.3b' that doesn't have a fast tokenizer: ``` $ python -c "from transformers import AutoTokenizer; t=AutoTokenizer.from_pretrained('facebook/opt-1.3b', use_fast=True); \ assert t.is_fast, 'tokenizer is not fast'; print('Success')" Traceback (most recent call last): File "<string>", line 1, in <module> AssertionError: tokenizer is not fast ``` now the doc says: ``` use_fast (bool, optional, defaults to True) — Whether or not to try to load the fast version of the tokenizer. ``` so it sort of hints with "try to load" that it won't enforce it. But would you be open to a less ambiguous definition? something like: ``` use_fast (bool, optional, defaults to True) — Will try to load the fast version of the tokenizer if there is one and will quietly fallback onto the normal (slower) tokenizer if the model doesn't provide a fast one. ``` I think the `use_fast` arg name is ambiguous - I'd have renamed it to `try_to_use_fast` since currently if one must use the fast tokenizer one has to additionally check whether `AutoTokenizer.from_pretrained` returned the slow version. not sure, open to suggestions. context: in m4 the codebase currently requires a fast tokenizer. Thank you! cc: @ArthurZucker
12-18-2022 19:15:14
12-18-2022 19:15:14
The name has been around for so long that we won't change it. It's not ideal but it is what it is 🤷‍♂️ We can definitely improve the documentation however! Unrelated: why does OPT not create the fast tokenizer on the fly from the slow one @ArthurZucker ? This seems like a bug.<|||||>It is indeed a bug and people seem to be confused. IMO we should add a warning when `use_fast` is set to `True` but a fast tokenizer does not exist. Will have a look at why OPT does not create the fast tokenizer 😉 <|||||>If you have to use a warning in this situation it's a sign that the API needs to be improved. Warnings rarely work as there are dozens/hundreds of them emitted by most applications and a user is unlikely to notice it. That's just my experience-based opinion, of course. If the old name can't be deprecated, I'd leave it alone and update the doc as I suggested in the OP and add a new arg `require_fast=True` which would assert if the requirement can't be met. So the first one is a preference, the second one is a requirement. That would make for an unambiguous yet flexible API. > Unrelated: why does OPT not create the fast tokenizer on the fly from the slow one @ArthurZucker ? This seems like a bug. some of the OPT models do and some don't, you can see in the OP both examples are OPT models.<|||||>Agreed, the problem is now the inconsistency between two models. If it is only `OPT` related we can leave it as is, otherwise will have a look<|||||>It is indeed a bug, the `facebook/opt-1.3b` tokenizer config is missing the `tokenizer_type` variable. And the use_fast argument is not passed down properly in that case. The fix is here #20823 <|||||>so where are we with this Issue, @ArthurZucker? Thank you! As it will get closed by the stale bot.<|||||>I think the doc has been updated and the OPT model where there was a problem has been fixed, so the issue is ready to be closed no?<|||||>Yes, I re-opened it because I thought we should probably raise an error if the tokenizer is not fast, but feel free to close. <|||||>As was said before here, either raising an error or renaming the argument would be too much of a breaking change for something that has been around for three years.
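As a user-side stopgap while the API discussion above is unresolved, a small wrapper can turn the `use_fast=True` preference into a hard requirement. This is a sketch, not an existing `transformers` API; the helper name `load_fast_tokenizer` is made up for illustration.

```python
from transformers import AutoTokenizer


def load_fast_tokenizer(name_or_path, **kwargs):
    """Load a tokenizer and fail loudly if only a slow implementation exists."""
    tokenizer = AutoTokenizer.from_pretrained(name_or_path, use_fast=True, **kwargs)
    if not tokenizer.is_fast:
        raise ValueError(
            f"{name_or_path} only provides a slow tokenizer, but a fast "
            "(Rust-backed) tokenizer is required here."
        )
    return tokenizer


tok = load_fast_tokenizer("facebook/opt-13b")
print(type(tok).__name__)
```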
transformers
20,816
closed
Add visual prompt to processor of CLIPSeg model
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Currently, integrated CLIPSeg model only supports textual prompts. However, a main advantage of CLIPSeg is that one can provide visual prompts instead of textual prompts in order to do semantic segmentation. For further details, you can refer to the original _Image Segmentation Using Text and Image Prompts (CVPR 2022)_ paper [here](https://openaccess.thecvf.com/content/CVPR2022/html/Luddecke_Image_Segmentation_Using_Text_and_Image_Prompts_CVPR_2022_paper.html). This change can easily be adapted to current `CLIPSegProcessor` by just providing an additional parameter which processes the visual prompt via image processor and returns the embedding with an additional key, i.e. `conditional_pixel_values`. This PR complements the work done in [this](https://github.com/huggingface/transformers/pull/20066) previous pull request. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. -> Not discussed, but only requires a minor change to fully support CLIPSeg model. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? -> Previous tokenizer and image processor tests apply. ## Who can review? Anyone in the community is free to review the PR. Feel free to tag members/contributors who may be interested in your PR. @NielsRogge @sgugger @alaradirik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
12-18-2022 16:08:54
12-18-2022 16:08:54
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @alaradirik, thanks for the review! Added a test to [test_processor_clipseg.py](https://github.com/huggingface/transformers/blob/main/tests/models/clipseg/test_processor_clipseg.py) as well.<|||||>> Hello @sgugger - I am aware that the argument can be passed at the end, but this also opens ways for faulty usage to users who do not know how CLIPSeg model processes their input. Let's see a working example: ``` import torch from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined") model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined") from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) text = ["background", "cat"] images = [image]*2 ``` ``` inputs = processor(text, images, return_tensors="pt") # the processor also returns the text embedding (which should not be used) with torch.no_grad(): outputs = model(**inputs, conditional_pixel_values=inputs.pixel_values) ``` What did the model process in the above line? Is it visual prompt + image or text prompt + image? It seems like it is still processing the textual prompt + image pair. Why? Let's try to fail it: ``` inputs = processor(text, images, return_tensors="pt") visual_prompt_input = processor(images=[image], return_tensors="pt") # Additional prompt with length 1 with torch.no_grad(): outputs = model(**inputs, conditional_pixel_values=visual_prompt_input.pixel_values) ``` Here first processor computes `text` and `images` arguments with length 2. Second one, however, only takes a single image. This does not fail the model as it still processes a text prompt + image pair rather than visual prompt (the one passed via `conditional_pixel_values`). **Side note:** The [processor of OWL-ViT](https://github.com/huggingface/transformers/blob/main/src/transformers/models/owlvit/processing_owlvit.py) also has an additional argument (i.e. `query_images`) in addition to `images` and `text`. An idea might be to add `visual_prompt` as the third argument (as done in OWL-ViT) so that it would not break anything as @NielsRogge suggested. Thanks for taking your time!
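For completeness, here is a sketch of the visual-prompt-only call that the processor change is meant to support. It assumes the model falls back to `conditional_pixel_values` when no `input_ids` are passed (which is how the upstream CLIPSeg code resolves the conditional embedding); treat it as illustrative rather than the finalized processor API, and note the cropped prompt image is just a stand-in.

```python
import torch
import requests
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt_image = image.crop((0, 0, 300, 300))  # placeholder visual prompt

query = processor(images=[image], return_tensors="pt")
prompt = processor(images=[prompt_image], return_tensors="pt")

with torch.no_grad():
    outputs = model(
        pixel_values=query.pixel_values,
        conditional_pixel_values=prompt.pixel_values,  # visual prompt, no text
    )
print(outputs.logits.shape)
```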
transformers
20,815
open
Cannot export Deberta to TorchScript
### System Info `transformers-cli env` ``` - `transformers` version: 4.10.2 - Platform: Linux-3.10.0-1127.18.2.el7.x86_64-x86_64-with-glibc2.23 - Python version: 3.9.13 - PyTorch version (GPU?): 1.9.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am trying to convert the Deberta Model to TorchScript using the instructions provided in the [HF tutorial](https://huggingface.co/docs/transformers/torchscript). `Code:` ``` import torch from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base") model = AutoModel.from_pretrained("microsoft/deberta-base", torchscript=True) tokenized_dict = tokenizer( ["Is this working",], ["Not yet",], return_tensors="pt" ) input_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask']) traced_model = torch.jit.trace(model, input_tuple) torch.jit.save(traced_model, "compiled_deberta.pt") ``` `Error Message:` From torch.jit.save: ``` Could not export Python function call 'XSoftmax'. Remove calls to Python functions before export. Did you forget to add @script or @script_method annotation? If this is a nn.ModuleList, add it to __constants__: ``` ### Expected behavior The Traced model should be successfully saved. After loading, it should have the same functional behavior as the model it was traced from.
12-18-2022 12:22:34
12-18-2022 12:22:34
Yes, this model is not compatible with torchscript, cc @ArthurZucker <|||||>Thanks, will take that into account when refactoring<|||||>Go away stalebot<|||||>Any update here?<|||||>Just started working on this! 😉 <|||||>Sorry! Seems like I had to postpone this! If anyone wants to take over feel free to do it, otherwise it will be my priority once #23909 is merged! <|||||>More delays given the recent sprints! But I think it should calm down during this summer! 😉
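Until the tracing issue is resolved, one workaround that is sometimes viable is exporting through ONNX instead of TorchScript, since the custom `XSoftmax` autograd function defines an ONNX symbolic even though it cannot be serialized by `torch.jit.save`. The sketch below is an assumption-laden alternative (output names, opset, and dynamic axes are illustrative), not a fix for the TorchScript path itself.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = AutoModel.from_pretrained("microsoft/deberta-base", torchscript=True)
model.eval()

inputs = tokenizer(["Is this working"], ["Not yet"], return_tensors="pt")

# Export via ONNX instead of TorchScript; the DeBERTa custom ops provide ONNX symbolics.
torch.onnx.export(
    model,
    (inputs["input_ids"], inputs["attention_mask"]),
    "deberta.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
    },
    opset_version=13,
)
```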
transformers
20,814
closed
`min_new_tokens` argument in generate() implementation
### Feature request As many said: - [link1](https://github.com/huggingface/transformers/issues/20614) cc @silverriver - [link2](https://github.com/huggingface/transformers/issues/20756) cc @gonced8 A new parameter `min_new_tokens` to the `.generate()` method to limit the length of newly generated tokens. The current parameter `min_length` limits the length of `prompt + newly generated tokens`, not the length of newly generated tokens. I've come up with a solution by creating a new logits processor: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM from transformers.generation_utils import MinLengthLogitsProcessor class MinNewTokensLengthLogitsProcessor(MinLengthLogitsProcessor): r""" [`MinLengthLogitsProcessor`] enforcing a min-length of new tokens by setting EOS probability to 0. Args: min_length (`int`): The minimum length below which the score of `eos_token_id` is set to `-float("Inf")`. eos_token_id (`int`): The id of the *end-of-sequence* token. """ def __init__(self, min_length: int, eos_token_id: int): super().__init__(min_length, eos_token_id) self.prompt_length_to_skip = None def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor: if self.prompt_length_to_skip is None: self.prompt_length_to_skip = input_ids.shape[-1] current_length = input_ids.shape[-1] - self.prompt_length_to_skip if current_length < self.min_length: scores[:, self.eos_token_id] = -float("inf") return scores if __name__ == '__main__': device = "cuda" if torch.cuda.is_available() else "cpu" model_name = "inkoziev/rugpt_chitchat" tokenizer = AutoTokenizer.from_pretrained(model_name) tokenizer.add_special_tokens({'bos_token': '<s>', 'eos_token': '</s>', 'pad_token': '<pad>'}) model = AutoModelForCausalLM.from_pretrained(model_name) model.to(device) model.eval() input_text = """<s>- Привет! Что делаешь? - Привет :) В такси еду -""" encoded_prompt = tokenizer.encode(input_text, add_special_tokens=False, return_tensors="pt").to(device) output_sequences = model.generate(input_ids=encoded_prompt, logits_processor=[MinNewTokensLengthLogitsProcessor(16, tokenizer.eos_token_id)], max_length=100, num_return_sequences=1, pad_token_id=tokenizer.pad_token_id ) text = tokenizer.decode(output_sequences[0].tolist(), clean_up_tokenization_spaces=True)[len(input_text)+1:] text = text[: text.find('</s>')] print(f"Length of generated text W/ MinNewTokensLengthLogitsProcessor: {len(text)}") print(text) output_sequences = model.generate(input_ids=encoded_prompt, min_length=16, max_length=100, num_return_sequences=1, pad_token_id=tokenizer.pad_token_id ) text = tokenizer.decode(output_sequences[0].tolist(), clean_up_tokenization_spaces=True)[len(input_text) + 1:] text = text[: text.find('</s>')] print(f"Length of generated text W/O MinNewTokensLengthLogitsProcessor: {len(text)}") print(text) ``` **outcome of script executing**: <img width="1575" alt="image" src="https://user-images.githubusercontent.com/22777646/208291420-9874c4ad-e63e-4ef8-bf8a-ec249748c50f.png"> Used transformers package version: 4.24.0 **But I'd recommend to do it simpler, by requested `min_new_tokens` argument in `.generate()`** ### Motivation **The motivation is to control min length of the newly generated replica** ### Your contribution by submitting a PR
12-18-2022 09:55:04
12-18-2022 09:55:04
I have made a PR #20819 to add this argument to the `generate()` implementation.<|||||>Hey @kotikkonstantin 👋 Can you open a PR with your proposed `MinNewTokensLengthLogitsProcessor`? It looks good to me, except for one detail -- it shouldn't inherit from `MinLengthLogitsProcessor`, as our long-run goal is to deprecate it :) After we merge `MinNewTokensLengthLogitsProcessor`, we can integrate it with `generate` with @silverriver's PR!<|||||>Hi @gante ! Sure) I'm making it for a couple of days. Thank you very much for the feedback!<|||||>Hey @gante 👋 A PR is ready <|||||>Actually, this is not done yet, as #20819 needs to be merged to be usable with `generate` :)<|||||>Hi, I have closed my original PR #20819 and made a new one #21044 to avoid messing with a bunch of other commits when I tried to rebase my commit. #21044 is implemented based on the new `MinNewTokensLengthLogitsProcessor`. @gante Please have a look.<|||||>(should be done now :) )
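Since the thread above settles on a standalone processor (not inheriting from `MinLengthLogitsProcessor`, and taking the prompt length explicitly instead of guessing it on the first call), a rough standalone version could look like this. The constructor signature is illustrative; the version that eventually lands in the library may differ.

```python
import torch
from transformers import LogitsProcessor


class MinNewTokensLengthLogitsProcessor(LogitsProcessor):
    """Suppress EOS until at least `min_new_tokens` tokens have been generated
    after a prompt of length `prompt_length_to_skip`."""

    def __init__(self, prompt_length_to_skip: int, min_new_tokens: int, eos_token_id: int):
        for name, value in [
            ("prompt_length_to_skip", prompt_length_to_skip),
            ("min_new_tokens", min_new_tokens),
            ("eos_token_id", eos_token_id),
        ]:
            if not isinstance(value, int) or value < 0:
                raise ValueError(f"`{name}` has to be a non-negative integer, but is {value}")
        self.prompt_length_to_skip = prompt_length_to_skip
        self.min_new_tokens = min_new_tokens
        self.eos_token_id = eos_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        new_tokens_length = input_ids.shape[-1] - self.prompt_length_to_skip
        if new_tokens_length < self.min_new_tokens:
            scores[:, self.eos_token_id] = -float("inf")
        return scores
```

It would be passed to `model.generate(..., logits_processor=LogitsProcessorList([...]))` with the prompt length taken from the encoded input, until a `min_new_tokens` argument is wired into `generate` itself.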
transformers
20,813
closed
Relative path causes error when calling push_to_hub to upload a custom model
### System Info - `transformers` version: 4.24.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.15 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction By running the following code and fill `<YOUR-REPO>` and `<YOUR-TOKEN>`: ``` from transformers import AutoConfig, AutoModel config = AutoConfig.from_pretrained("Lazyhope/python-clone-detection") model = AutoModel.from_pretrained("Lazyhope/python-clone-detection", config=config) config.register_for_auto_class() model.register_for_auto_class("AutoModel") model.push_to_hub("<YOUR-REPO>", use_auth_token = "<YOUR-TOKEN>") ``` ### Expected behavior Hi, when I was trying to upload a custom model which inherent from transformers.RobertaPreTrainedModel, the following error occurs: ``` FileNotFoundError (note: full exception trace is shown but execution is paused at: _run_module_as_main) [Errno 2] No such file or directory: '/Users/rino/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/models/roberta/.bert.configuration_bert.py' File "/Users/rino/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/dynamic_module_utils.py", line 70, in get_relative_imports with open(module_file, "r", encoding="utf-8") as f: File "/Users/rino/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/dynamic_module_utils.py", line 97, in get_relative_import_files new_imports.extend(get_relative_imports(f)) File "/Users/rino/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/dynamic_module_utils.py", line 439, in custom_object_save for needed_file in get_relative_import_files(object_file): File "/Users/rino/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/configuration_utils.py", line 441, in save_pretrained custom_object_save(self, save_directory, config=self) File "/Users/rino/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1579, in save_pretrained model_to_save.config.save_pretrained(save_directory) File "/Users/rino/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/utils/hub.py", line 790, in push_to_hub self.save_pretrained(work_dir, max_shard_size=max_shard_size) File "/Users/rino/Desktop/RepoAnalysis/huggingface/register.py", line 9, in <module> model.push_to_hub("Lazyhope/python-clone-detection", user_auth_token=<Hidden>) ``` It seems that relative imports like https://github.com/huggingface/transformers/blob/7032e0203262ebb2ebf55da8d2e01f873973e835/src/transformers/models/roberta/modeling_roberta.py#L27 were directly turned into paths like `/miniconda3/envs/CloneDetection/lib/python3.9/site-packages/transformers/models/roberta/..modeling_outputs.py` and opening this kind of path in https://github.com/huggingface/transformers/blob/7032e0203262ebb2ebf55da8d2e01f873973e835/src/transformers/dynamic_module_utils.py#L70 would caused a FileNotFound error.
12-18-2022 08:49:24
12-18-2022 08:49:24
I'm not too sure I understand your use case. The code sample you provide is indeed not supported as you are just re-using the code of the library, so you should just remove the line registering your new model. When defining a custom model, the modeling file will be exported in the repo and shouldn't indeed contain any relative imports (you'll need to convert them to regular imports)<|||||>> I'm not too sure I understand your use case. The code sample you provide is indeed not supported as you are just re-using the code of the library, so you should just remove the line registering your new model. When defining a custom model, the modeling file will be exported in the repo and shouldn't indeed contain any relative imports (you'll need to convert them to regular imports) I have a custom RobertaRBERT model stored in my local directory `custom/robertarbert.py`, previously I use the following code to load the checkpoint: ``` config = AutoConfig.from_pretrained(PLM, use_auth_token=access_token) config.dropout_rate = 0.1 model = RobertaRBERT.from_pretrained(PLM, config=config, use_auth_token=access_token) ``` Now I want to upload this local `RobertaRBERT` class definition to the hub, and hopefully use `AutoModel.from_pretained` to load it directly without loading config again, so I followed this document: https://huggingface.co/docs/transformers/custom_models, and running the following code: ``` from custom.robertarbert import RobertaRBERT from transformers import AutoConfig config = AutoConfig.from_pretrained("Lazyhope/python-clone-detection") model = RobertaRBERT.from_pretrained("Lazyhope/python-clone-detection", config=config) config.register_for_auto_class() model.register_for_auto_class("AutoModel") model.push_to_hub("Lazyhope/new_model", use_auth_token = access_token) ``` caused the error I mentioned above, is it because my code was incorrect?<|||||>Like I said, the custom modeling file shouldn't contain any relative imports.<|||||>> Like I said, the custom modeling file shouldn't contain any relative imports. Here is my custom model file: ``` import torch.nn as nn from transformers import ( RobertaPreTrainedModel, RobertaModel, ) from transformers.modeling_outputs import SequenceClassifierOutput class RobertaRBERT(RobertaPreTrainedModel): ``` I think it doesn't contain any relative imports<|||||>@sgugger You could also reproduce the error by calling `get_relative_import_files('src/transformers/models/roberta/configuration_roberta.py')` which is defined in https://github.com/huggingface/transformers/blob/f76518e56a5ef0836a780630de6f5b4456e9aa4a/src/transformers/dynamic_module_utils.py#L81 I suppose there is a bug in the relative import extracting function?<|||||>It's a known limitation: the relative imports are only permitted at one level and no more for custom models.<|||||>> It's a known limitation: the relative imports are only permitted at one level and no more for custom models. Is there a workaround? As you can see above my custom model file doesn't contain any relative import but it still doesn't work.<|||||>Was a little bit confused by the doc, turns out I just need to use the following code: ``` CloneDetectionModel.register_for_auto_class("AutoModel") custom_model = CloneDetectionModel.from_pretrained("<PLM>", config=config) custom_model.push_to_hub("<DIR>") ```
transformers
20,812
closed
Add visual prompt to clipseg processor
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Currently, integrated CLIPSeg model only supports textual prompts. However, a main advantage of CLIPSeg is that one can provide visual prompts instead of textual prompts in order to do semantic segmentation. For further details, you can refer to the original _Image Segmentation Using Text and Image Prompts (CVPR 2022)_ paper [here](https://openaccess.thecvf.com/content/CVPR2022/html/Luddecke_Image_Segmentation_Using_Text_and_Image_Prompts_CVPR_2022_paper.html). This change can easily be adapted to current `CLIPSegProcessor` by just providing an additional parameter which processes the visual prompt via image processor and returns the embedding with an additional key, i.e. `conditional_pixel_values`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. -> Not discussed, but only requires a minor change to fully support CLIPSeg model. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? -> Previous tokenizer and image processor tests apply. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
12-17-2022 23:41:21
12-17-2022 23:41:21
_The documentation is not available anymore as the PR was closed or merged._<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
20,811
closed
Add visual prompt to processor
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Currently, integrated CLIPSeg model only supports textual prompts. However, a main advantage of CLIPSeg is that one can provide visual prompts instead of textual prompts in order to do semantic segmentation. For further details, you can refer to the original _Image Segmentation Using Text and Image Prompts (CVPR 2022)_ paper [here](https://openaccess.thecvf.com/content/CVPR2022/html/Luddecke_Image_Segmentation_Using_Text_and_Image_Prompts_CVPR_2022_paper.html). This change can easily be adapted to current `CLIPSegProcessor` by just providing an additional parameter which processes the visual prompt via image processor and returns the embedding with an additional key, i.e. `conditional_pixel_values`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. -> Not discussed, but only requires a minor change to fully support CLIPSeg model. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? -> Previous tokenizer and image processor tests apply. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
12-17-2022 23:05:04
12-17-2022 23:05:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,810
closed
group_by_length in Seq2SeqTrainer
### System Info Hey, huggingface team! - `transformers` version: 4.24.0 - Platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes, but error is does not depend on it - Using distributed or parallel set-up in script?: Yes, but error is does not depend on it ### Who can help? @ArthurZucker @younesbelkada @sg ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I was working on a project and tried `group_by_length` in [Seq2SeqTrainer](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.Seq2SeqTrainer) and observed spikes in the loss function. I attribute it to the implementation of the `get_length_grouped_indices` function. This function groups samples in `megabatches` of size `mega_batch_mult` and reorders samples w.r.t. to length inside of the `megabatche`. Here is an example of why it may not be the right way to do that: ``` import numpy as np import torch from transformers.trainer_pt_utils import get_length_grouped_indices lengths = np.random.permutation(list(range(20))).tolist() batch_size = 2 ids = get_length_grouped_indices(lengths=lengths, mega_batch_mult=3, batch_size=batch_size) [lengths[i] for i in ids] ``` The output is like ``` [19, 14, 11, 10, 4, 2, 17, 13, 12, 8, 7, 5, 18, 16, 9, 6, 1, 0, 15, 3] ``` And after sequential batching: ``` batches = [ [19, 14], [11, 10], [4, 2], [17, 13], [12, 8], [7, 5], [18, 16], [9, 6], [1, 0], [15, 3] ] ``` So after tokenization, we will have a very spiky `max_length` of the batch. In the example: `[19, 11, 4, 17, 12, 7, 18, 9, 1, 15]`. On a bigger scale, it will result in spikes in the loss function (`mega_batch_mult` has a default value of 50, so `max_length` of batch gradually decreases for 50 steps, and so the spike happens every 50 steps, example in the comments) ### Expected behavior Maybe one more additional shuffling inside of the `megabatch` is required
12-17-2022 21:15:24
12-17-2022 21:15:24
![image](https://user-images.githubusercontent.com/47659865/208266592-1338018c-cec5-42fb-a90b-f605f792225d.jpeg) Spiking loss example <|||||>I am not too sure where the bug lies here. The training loss with samples of varying length will always be noisy. If we had a shuffle you would see random spikes instead of regular ones, but it won't make this noisy behavior disappear.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I met the same issue. I think we should reopen this issue.
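To make the proposed fix concrete, here is a sketch that reuses the example from the issue and adds one extra shuffle of whole batches inside each megabatch, so the per-batch max length no longer decays monotonically across ~50 consecutive steps. It is illustrative only, not the Trainer's sampler implementation.

```python
import random

import numpy as np
from transformers.trainer_pt_utils import get_length_grouped_indices

lengths = np.random.permutation(list(range(20))).tolist()
batch_size = 2
mega_batch_mult = 3

ids = get_length_grouped_indices(lengths=lengths, batch_size=batch_size, mega_batch_mult=mega_batch_mult)

# Shuffle whole batches within each megabatch so consecutive batches are not
# strictly ordered from longest to shortest.
random.seed(0)
mega_size = mega_batch_mult * batch_size
shuffled = []
for start in range(0, len(ids), mega_size):
    batches = [ids[i : i + batch_size] for i in range(start, min(start + mega_size, len(ids)), batch_size)]
    random.shuffle(batches)
    shuffled.extend(i for batch in batches for i in batch)

# Max length per batch before vs. after the extra shuffle.
print([max(lengths[i] for i in ids[s : s + batch_size]) for s in range(0, len(ids), batch_size)])
print([max(lengths[i] for i in shuffled[s : s + batch_size]) for s in range(0, len(shuffled), batch_size)])
```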
transformers
20,809
closed
[WIP] RWKV4Neo the RNN and GPT Hybrid Model
# What does this PR do? Adds the model from issue Fixes # (https://github.com/huggingface/transformers/issues/20737) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @younesbelkada @ArthurZucker
12-17-2022 20:50:12
12-17-2022 20:50:12
Hi @ArEnSc ! Thanks for starting over the PR 💪 Let us know whenever you need help with @ArthurZucker ! <|||||>> Hi @ArEnSc ! Thanks for starting over the PR 💪 Let us know whenever you need help with @ArthurZucker ! Will do still doing some research, just figured out how the training notebook works, model executes in notebook so that's a positive<|||||>Update: tracing the model and came up with a state based api for the RNN inference mode on my own code base to experiment with<|||||>Thanks a lot for the status update! Feel free to ping whenever you need help<|||||>Sometimes I look at working on this a little. Here are my notes and possible tasks, started 2023-01-16. - The template appears to be from a T5 style model. The RWKV state could be the encoder hidden state (a little intuitive) and/or the past key values (normative generation). It will take some algebra and tests to add input state to the GPT training form from the RNN inference form. - [ ] The tensorflow loading code appears complicating to me. I might move it out to another file for now. - [ ] The embeddings can likely be adjusted to reflect parts "i" and "ii" of the high level outline below - [ ] It could be helpful to organize the file to retain layout similarity with blinkdl’s files. - [ ] For below outline, next step is reviewing timemix. Draft of architecture (maybe leave out optional parts to start). High level: 1. word embeddings `emb` 2. layernorm `ln0` - optional 2-axis trained position embeddings seen in training code for image modeling `pos_emb_x` `pos_emb_y`. this is converted to 1-axis `pos_emb` and used prior to ln0 in inference. 3. layers of blocks 1. layernorm `ln1` 2. timemix self attention `time_mix_k`, `time_mix_v`, `time_mix_r`, `time_first`, `time_decay`, `key`, `value`, `receptance`, `output`. `time_first` and `time_decay` are kept as float32 in inference. 3. layernorm `ln2` 4. feedforward channelmix `time_mix_k`, `time_mix_r`, `key`, `value`, `receptance` (see channelmix section below) - timemix self attention optionally replaced with feedforward channelmix for block 0 in training code - for one optional block, tiny attention `tiny_ln`, `tiny_q`, `tiny_k`, `tiny_v`, `tiny_mask` seen in training code, inference code in development - optionally inference code uses what looks like a numeric stability trick to extract a factor of 2 from the weights every 6 layere 7. layernorm `ln_out` - optional "copy" attention `head_q`, `head_k`, `copy_mask` then summed to head in training code, inference code in development 8. linear language modeling `head` - for training loss, blink presently has a function after cross entropy called `L2Wrap` to reduce magnitudes GPT(training) and RNN (inference) equivalence: - i think special training initialization values may be used in timemix, channelmix - for inference `time_decay` = -exp(time_decay) is factored out when loaded, but for training this is done in the forward pass. - 5 state elements per layer: - 0 = ChannelMix/FF `xx` - 1 = TimeMix/SA `xx` - 2 = `aa` - 3 = `bb` - 4 = `pp` in inference, `o` in training TimeMix: 1. the previous state is shifted into the `x` vector to make `xx`. in training this is done by "time shifting" with `nn.ZeroPad2d((0, 0, 1, -1))`; in single token inference it is passed as state element 1, which is then replaced by `x`. 2. linear interpolation between the old state xx and the new state x, weighting `x` by a ratio of `time_mix_k`, `time_mix_v`, and `time_mix_r` to make `xk`, `xv`, and `xr` respectivly. 3. k = key @ xk 4. v = value @ xv 5. 
sr = sigmoid(receptance @ xr) # called simply `r` in inference code - the GPT training form of this is now handed off to a hand-written cuda kernel, compiled on first run, from cuda/wkv_cuda.cu - kernel parameters: `B` = batchsize; `T` = sequence length; `C` = channel count; `_w` = `time_decay`; `_u` = `time_first`; `_k` = `k`; `_v` = `v`; `_y` = `wkv`. - i think this used to be a convolution; i'm not sure whether it still is - `o` and `no` appear to be running values for magnitude management in exponential space, initialized to -1e38; p and q are initialized to 0 - `k` and `v` are indexed by thread so the `token` offset may represent different subregions. i'm not quite clear on that and should test or ask. 1. no = max(o, time_first[channel] + k[token]) 2. A = exp(o - no) # this is e1 in the RNN form 3. B = exp(time_first[channel] + k[token] - no) # this is e2 in RNN 4. wkv[token] = (A * p + B * v[token]) / (A * q + B) 5. no = max(time_decay[channel] + o, k[token]) 6. A = exp(time_decay[channel] + o - no) 7. B = exp(k[token] - no) 8. p = A * p + B * v[token] 9. q = A * q + B 10. o = no; token += 1 - ... here would be the remaining core algebra and code inspection - WIP unified summary of wkv kernel between inference and training: 1. ww = time_first + k[token] 2. next_pp = max(pp, ww) 3. A = exp(pp - next_pp ... - rwkv = sr * wkv - return output @ rwkv ChannelMix: 1. the previous state is shifted into the `x` vector to make `xx`. in training this is done by "time shifting" with `nn.ZeroPad2d((0, 0, 1, -1))`; in single token inference it is passed as state element 0, which is then replaced by `x`. 3. linear interpolation between the old state xx and the new state x, weighting `x` by a ratio of `time_mix_k` and `time_mix_r` to make `xk` and `xr` respectivly. 4. r = sigmoid(receptance @ xr) 5. k = square(relu(key @ xk)) 7. kv = value @ k 8. rkv = r * kv 9. return rkv - [ ] review or improve model file further <|||||>@ArEnSc do you need any help?<|||||>> @ArEnSc do you need any help? if you want to help pm me! on discord, otherwise I should have something end of week minor update<|||||>Hi @ArEnSc, Can you share with us your discord handle? Thanks!<|||||>> Hi @ArEnSc, Can you share with us your discord handle? Thanks! ARENSC#5905 yeah still working on it haha it will be a while <|||||>Working on having GPT Encoder to generate the context and RNN mode inference and sharing weights<|||||>Deleted a bunch of not needed stuff<|||||>Added the [WIP] Label to prevent the bot from coming back 😉 <|||||>@ArEnSc Please let us know if you won't have time to finish this PR. The model is heavily requested as you may see from the linked issue, do you want us to take over this PR and finish this?<|||||>> @ArEnSc Please let us know if you won't have time to finish this PR. The model is heavily requested as you may see from the linked issue, do you want us to take over this PR and finish this? Sure yes, sorry been busy at the hospital these days! I think it's probably important that you guys take this on =)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
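As a sanity check on the ChannelMix outline above, here is a small PyTorch sketch of its single-token (RNN-form) step. Weight names mirror the checkpoint keys mentioned in the notes; the shapes and the demo sizes are assumptions made purely for illustration.

```python
import torch


def channel_mix(x, state_xx, time_mix_k, time_mix_r, key, value, receptance):
    """RNN-form ChannelMix for one token, following the outline above.

    x:        (C,) current hidden vector after ln2
    state_xx: (C,) previous token's x (state element 0)
    key:      (H, C), value: (C, H), receptance: (C, C) weight matrices
    Returns (rkv, new_state_xx).
    """
    xk = x * time_mix_k + state_xx * (1 - time_mix_k)  # lerp with the old state
    xr = x * time_mix_r + state_xx * (1 - time_mix_r)

    r = torch.sigmoid(receptance @ xr)
    k = torch.square(torch.relu(key @ xk))
    kv = value @ k
    return r * kv, x  # x becomes the stored state for the next token


if __name__ == "__main__":
    C, H = 8, 32  # toy sizes
    out, new_xx = channel_mix(
        torch.randn(C), torch.zeros(C),
        torch.rand(C), torch.rand(C),
        torch.randn(H, C), torch.randn(C, H), torch.randn(C, C),
    )
    print(out.shape, new_xx.shape)
```

The GPT (training) form replaces the stored `state_xx` with a time-shift of the whole sequence, as described in the TimeMix/ChannelMix notes.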
transformers
20,808
closed
KeyError: overflow_to_sample_mapping - LayoutLMv3
### System Info Running on private server with no public internet access transformers version: 4.25.1 Platform: RHEL 7.9 Python version: 3.8.12 Huggingface_hub version: 0.11.1 Torch version: 1.13.0 nvidia-cublas-cu11: 11.10.3.66 nvidia-cuda-nvrtc-cu11: 11.7.99 nvidia-cuda-runtime-cu11: 11.7.99 nvidia-cudnn-cu11: 8.5.0.96 Tensorflow version (GPU?): 2.8.2 (True) Using GPU in script?: Yes Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @NielsRogge for LayoutLMv3 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Details regarding the dataset which is used for token classification: ![image](https://user-images.githubusercontent.com/12809547/208253684-71e34cef-69c5-43cb-8f8f-f8b0f42c52bc.png) **Code snippet:** ``` processor = AutoProcessor.from_pretrained("<dir-to-layoutlmv3-base>", apply_ocr=False) processor ``` **Output** LayoutLMv3Processor: - feature_extractor: LayoutLMv3ImageProcessor { "apply_ocr": false, "do_normalize": true, "do_rescale": true, "do_resize": true, "feature_extractor_type": "LayoutLMv3FeatureExtractor", "image_mean": [ 0.5, 0.5, 0.5 ], "image_processor_type": "LayoutLMv3ImageProcessor", "image_std": [ 0.5, 0.5, 0.5 ], "ocr_lang": null, "resample": 2, "rescale_factor": 0.00392156862745098, "size": { "height": 224, "width": 224 }, "tesseract_config": "" } - tokenizer: PreTrainedTokenizerFast(name_or_path='dir-to-layoutlmv3-base', vocab_size=50265, model_max_len=512, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken("< s >", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("< /s >", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'sep_token': AddedToken("< /s >", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'cls_token': AddedToken(" < s > ", rstrip=False, lstrip=False, single_word=False, normalized=True), 'mask_token': AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False, normalized=True)}) ``` def prepare_examples(examples): images = examples["image"] words = examples["value"] boxes = examples["bbox"] word_labels = examples["label"] processor_kwargs = {"return_offsets_mapping": False, "return_overflowing_tokens": True, "stride": 100, "max_length": 512} encoding = processor(images, words, boxes=boxes, word_labels=word_labels, truncation=True, padding="max_length", **processor_kwargs) return encoding features = Features({ 'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)), 'input_ids': Sequence(feature=Value(dtype='int64')), 'attention_mask': Sequence(Value(dtype='int64')), 'bbox': Array2D(dtype="int64", shape=(512, 4)), 'labels': Sequence(feature=Value(dtype='int64')) }) eval_dataset = globaldataset["test"].map( prepare_examples, batched=True, remove_columns=column_names, features=features, ) ``` **Traceback:** --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Cell In[215], line 1 ----> 1 eval_dataset = globaldataset["test"].map( 2 prepare_examples, 3 batched=True, 4 remove_columns=column_names, 5 features=features, 6 ) File 
~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:2585, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2582 disable_tqdm = not logging.is_progress_bar_enabled() 2584 if num_proc is None or num_proc == 1: -> 2585 return self._map_single( 2586 function=function, 2587 with_indices=with_indices, 2588 with_rank=with_rank, 2589 input_columns=input_columns, 2590 batched=batched, 2591 batch_size=batch_size, 2592 drop_last_batch=drop_last_batch, 2593 remove_columns=remove_columns, 2594 keep_in_memory=keep_in_memory, 2595 load_from_cache_file=load_from_cache_file, 2596 cache_file_name=cache_file_name, 2597 writer_batch_size=writer_batch_size, 2598 features=features, 2599 disable_nullable=disable_nullable, 2600 fn_kwargs=fn_kwargs, 2601 new_fingerprint=new_fingerprint, 2602 disable_tqdm=disable_tqdm, 2603 desc=desc, 2604 ) 2605 else: 2607 def format_cache_file_name(cache_file_name, rank): File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:585, in transmit_tasks.<locals>.wrapper(*args, **kwargs) 583 self: "Dataset" = kwargs.pop("self") 584 # apply actual function --> 585 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 586 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 587 for dataset in datasets: 588 # Remove task templates if a column mapping of the template is no longer valid File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:552, in transmit_format.<locals>.wrapper(*args, **kwargs) 545 self_format = { 546 "type": self._format_type, 547 "format_kwargs": self._format_kwargs, 548 "columns": self._format_columns, 549 "output_all_columns": self._output_all_columns, 550 } 551 # apply actual function --> 552 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 553 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 554 # re-apply format to the output File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 476 validate_fingerprint(kwargs[fingerprint_name]) 478 # Call actual function --> 480 out = func(self, *args, **kwargs) 482 # Update fingerprint of in-place transforms + update in-place history of transforms 484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:2999, in Dataset._map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2997 writer.write_table(batch) 2998 else: -> 2999 writer.write_batch(batch) 3000 if update_data and writer is not None: 3001 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_writer.py:533, in 
ArrowWriter.write_batch(self, batch_examples, writer_batch_size) 526 cols = ( 527 [col for col in self.schema.names if col in batch_examples] 528 + [col for col in batch_examples.keys() if col not in self.schema.names] 529 if self.schema 530 else batch_examples.keys() 531 ) 532 for col in cols: --> 533 col_type = features[col] if features else None 534 col_try_type = try_features[col] if try_features is not None and col in try_features else None 535 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col) KeyError: 'overflow_to_sample_mapping **Changed return_offsets_mapping to True with everything else unchanged returns another error:** ``` processor_kwargs = {"return_offsets_mapping": True, "return_overflowing_tokens": True, "stride": 100, "max_length": 512} #Erstellen eines Processors-Objektes encoding = processor(images, words, boxes=boxes, word_labels=word_labels, truncation=True, padding="max_length", **processor_kwargs) ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Cell In[219], line 1 ----> 1 eval_dataset = globaldataset["test"].map( 2 prepare_examples, 3 batched=True, 4 remove_columns=column_names, 5 features=features, 6 ) File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:2585, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2582 disable_tqdm = not logging.is_progress_bar_enabled() 2584 if num_proc is None or num_proc == 1: -> 2585 return self._map_single( 2586 function=function, 2587 with_indices=with_indices, 2588 with_rank=with_rank, 2589 input_columns=input_columns, 2590 batched=batched, 2591 batch_size=batch_size, 2592 drop_last_batch=drop_last_batch, 2593 remove_columns=remove_columns, 2594 keep_in_memory=keep_in_memory, 2595 load_from_cache_file=load_from_cache_file, 2596 cache_file_name=cache_file_name, 2597 writer_batch_size=writer_batch_size, 2598 features=features, 2599 disable_nullable=disable_nullable, 2600 fn_kwargs=fn_kwargs, 2601 new_fingerprint=new_fingerprint, 2602 disable_tqdm=disable_tqdm, 2603 desc=desc, 2604 ) 2605 else: 2607 def format_cache_file_name(cache_file_name, rank): File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:585, in transmit_tasks.<locals>.wrapper(*args, **kwargs) 583 self: "Dataset" = kwargs.pop("self") 584 # apply actual function --> 585 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 586 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 587 for dataset in datasets: 588 # Remove task templates if a column mapping of the template is no longer valid File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:552, in transmit_format.<locals>.wrapper(*args, **kwargs) 545 self_format = { 546 "type": self._format_type, 547 "format_kwargs": self._format_kwargs, 548 "columns": self._format_columns, 549 "output_all_columns": self._output_all_columns, 550 } 551 # apply actual function --> 552 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 553 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 554 # re-apply format to the output 
File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 476 validate_fingerprint(kwargs[fingerprint_name]) 478 # Call actual function --> 480 out = func(self, *args, **kwargs) 482 # Update fingerprint of in-place transforms + update in-place history of transforms 484 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_dataset.py:2999, in Dataset._map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2997 writer.write_table(batch) 2998 else: -> 2999 writer.write_batch(batch) 3000 if update_data and writer is not None: 3001 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file File ~/.conda/envs/maschinellebelegauslesung_dev_gpu/lib/python3.8/site-packages/datasets/arrow_writer.py:533, in ArrowWriter.write_batch(self, batch_examples, writer_batch_size) 526 cols = ( 527 [col for col in self.schema.names if col in batch_examples] 528 + [col for col in batch_examples.keys() if col not in self.schema.names] 529 if self.schema 530 else batch_examples.keys() 531 ) 532 for col in cols: --> 533 col_type = features[col] if features else None 534 col_try_type = try_features[col] if try_features is not None and col in try_features else None 535 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col) KeyError: 'offset_mapping' **Changing the value of the truncation-parameter to False does not alter the KeyErrors.** ### Expected behavior I´d expect a dataset with few more entries than before, because for every entry which has more than 512 token a second entry with the overflowing token + stride-overlap would have been created with the sliding window approach. Regarding this KeyError overflow_to_sample_mapping I found one Git Issue https://github.com/huggingface/transformers/issues/18726, which reports a bug for the LayoutXLM with the non-fast tokenizer. But in my case as shown in the code snippet the processor loads the PreTrainedTokenizerFast for the LayoutLMv3. Thank you very much!
12-17-2022 17:06:54
12-17-2022 17:06:54
@ArthurZucker Hi Arthur, I just saw you self-assigned this ticket, were you already able to reproduce this error? Do you need any further information/ context? Thank you very much! Best regards, Marcel<|||||>Hey, I didn't have the chance to do so yet! If you could provide me with a minimal reproducing script, it would be really great! <|||||>Hey @ArthurZucker, yes sure, please find a google colab notebook here: https://colab.research.google.com/drive/1Ce4H6r7PecaLbohqiGr8Uw-7zBn0-CVr?usp=share_link This is a public notebook from @rajshah4 which I modified a little bit to implement the sliding window approach, which runs into the same error "overflow_to_sample_mapping". The example dataset CORD doesn´t contain documents longer than 512 token, that´s why I set the max_length=100 to artificially create the need for a sliding window approach. (I cannot share the original notebook of mine because it contains sensible data.) Thank you very much! <|||||>Hi @ArthurZucker was the script for reproducing the error helpful? If you need any further information just let me know, thank you very much! <|||||>Hi @Marcel1805 When you activate the sliding window approach a new key is added to the endcoding-dict (overflow_to_sample_mapping) A simple `encoding.pop('overflow_to_sample_mapping', None)` should do it<|||||>Hi @makra89 perfect, it works now as intended, thank you so much! 👍 <|||||>Thanks @makra89 😉 <|||||>Hi, just to clarify since it took me a bit to understand where the new line is needed. The "encoding-dict" is what you get after applying the processor to your data, something like: ``` encoding = processor( images, words, boxes=boxes, word_labels=ner_tags, truncation=True, padding="max_length", return_overflowing_tokens=True, return_offsets_mapping=None, stride=51, ) encoding.pop("overflow_to_sample_mapping") ```
transformers
20,807
closed
How to fine-tune MBART on a single language?
Hello, can anyone please suggest how I can fine-tune MBART for a specific language? I found this Asian BERT repo, https://github.com/hyunwoongko/asian-bart, where they adapted [mBART](https://arxiv.org/abs/2001.08210) to a single language by pruning the embedding layer. I want to do the same, but I am unable to find any good resource on this. Any suggestions? Thanks and regards.
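For reference, the embedding-layer pruning mentioned above can be sketched roughly as below. This is only an illustrative outline, not the actual asian-bart code; `kept_token_ids` is a placeholder (in practice it would be the special/language tokens plus the ids observed when tokenizing a large monolingual corpus), and the tokenizer would also have to be rebuilt so that its ids match the pruned embedding rows.

```python
# Rough sketch (assumptions noted above): keep only the embedding rows for the
# tokens of one language and shrink mBART's vocabulary accordingly.
import torch
from transformers import MBartForConditionalGeneration

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")

# Placeholder: ids you decide to keep (special tokens + ids seen in a monolingual corpus)
kept_token_ids = list(range(10000))

old_embeddings = model.get_input_embeddings().weight.data        # (vocab_size, d_model)
new_embeddings = old_embeddings[kept_token_ids].clone()          # (new_vocab, d_model)

model.resize_token_embeddings(len(kept_token_ids))               # shrink input/output embeddings
model.get_input_embeddings().weight.data.copy_(new_embeddings)   # copy the selected rows back
model.tie_weights()                                              # keep the LM head tied to the embeddings
```

After pruning, the model can be fine-tuned on the single-language corpus as usual; the important part is that the tokenizer's vocabulary is remapped consistently with `kept_token_ids`.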
12-17-2022 06:47:47
12-17-2022 06:47:47
Please use the [forums](https://discuss.huggingface.co/) for questions like this, the whole community will be there to help! We keep issues for bugs and feature requests only :-)<|||||>@sgugger Thank you, I asked the same question in the forum, but got no response there. If anyone here knows, please suggest.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,806
closed
Add AWS Neuron torchrun support
# What does this PR do? This PR adds support for torchrun for AWS Neuron SDK. Existing [HF tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html) for Neuron SDK requires users to modify the HF example script (ie run_glue.py). This change will help minimize the changes required. This change will require future AWS Neuron PyTorch 1.13 support. This is an update to https://github.com/huggingface/transformers/pull/19907 . ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
12-17-2022 00:24:39
12-17-2022 00:24:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>@jeffhataws could you maybe please explain a bit more about how users would benefit from that? I quickly checked the [HF tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html) and with the change you propose users would still need to modify the scripts, e.g., for ```python # Fixup to enable distributed training with XLA from packaging import version from transformers import __version__ if version.parse(__version__) < version.parse("4.20.0"): Trainer._wrap_model = lambda self, model, training=True: model else: Trainer._wrap_model = lambda self, model, training=True, dataloader=None: model # Workaround for NaNs seen with transformers version >= 4.21.0 # https://github.com/aws-neuron/aws-neuron-sdk/issues/593 if os.environ.get("XLA_USE_BF16") or os.environ.get("XLA_DOWNCAST_BF16"): transformers.modeling_utils.get_parameter_dtype = lambda x: torch.bfloat16 ``` <|||||>> Thanks for adding this new integration. The test won't be run on our CI since `torch_neuroncore` is not installed. Is it possible to install it in regular images or do we need to be on an AWS instance> Yes for this test we will need Trainium instance. Over time, once https://github.com/pytorch/xla/pull/3609 is released, we can make it more generic for GPU/XLA. For now, Neuron team will test this. Test is currently passing on Trainium instance.<|||||>> @jeffhataws could you maybe please explain a bit more about how users would benefit from that? I quickly checked the [HF tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html) and with the change you propose users would still need to modify the scripts, e.g., for > > ```python > # Fixup to enable distributed training with XLA > from packaging import version > from transformers import __version__ > if version.parse(__version__) < version.parse("4.20.0"): > Trainer._wrap_model = lambda self, model, training=True: model > else: > Trainer._wrap_model = lambda self, model, training=True, dataloader=None: model > > # Workaround for NaNs seen with transformers version >= 4.21.0 > # https://github.com/aws-neuron/aws-neuron-sdk/issues/593 > if os.environ.get("XLA_USE_BF16") or os.environ.get("XLA_DOWNCAST_BF16"): > transformers.modeling_utils.get_parameter_dtype = lambda x: torch.bfloat16 > ``` The first workaround is for missing DDP support which will be available in Neuron's PyTorch-XLA version 1.13 (future release). The second workaround is already fixed in transformers==4.25.1 by https://github.com/huggingface/transformers/pull/20562.<|||||>Thanks for the precisions. Let's wait until the release of Neuron's PyTorch-XLA version 1.13 to merge this, then?<|||||>> Thanks for the precisions. Let's wait until the release of Neuron's PyTorch-XLA version 1.13 to merge this, then? @sgugger since we already have a workaround for DDP wrapper by overwriting the _wrap_model function, we can actually merge this first. The reason is that 1) we want it in for next transformer release ahead of 1.13, and 2) I will need this change to post another PR for the default compiler flag for transformer model type. Let me know if this is acceptable.<|||||>Thanks for your patience on this.
transformers
20,805
closed
Add Object Detection task tutorial to the transformers documentation
Currently, the transformers documentation has two how-to guides for CV tasks - image classification and semantic segmentation. Transformers supports other CV tasks, such as object detection. This issue describes a proposal to add a how-to guide for object detection, similar in structure to the existing guides, to help community members get started with object detection on their own data using the transformers library. Here’s an approximate outline for the page:

1. Intro: what object detection is, including the video from https://huggingface.co/tasks/object-detection
2. Fine-tuning either DETR or YOLOS on a new dataset (TODO: decide which one to pick).
   2.1. Loading a dataset
   2.2. Preprocessing the dataset
   2.3. Setting up an evaluation metric
   2.4. Training and pushing a model to the hub
3. Using the fine-tuned model for inference (a rough sketch is included below).

Some existing notebooks for reference:
- [Fine-tuning YOLOS on a custom dataset for object detection](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/YOLOS/Fine_tuning_YOLOS_for_object_detection_on_custom_dataset_(balloon).ipynb)
- [Fine-tuning DETR on a custom dataset for object detection](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb)

Related [WIP] PR: https://github.com/huggingface/transformers/pull/20874
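As a rough illustration of step 3, a minimal inference snippet with an off-the-shelf DETR checkpoint could look like the following. The guide would use the checkpoint fine-tuned in step 2 instead; `facebook/detr-resnet-50` and the COCO image URL are just stand-ins here.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# convert logits/boxes to detections above a confidence threshold
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = image_processor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(x, 1) for x in box.tolist()])
```

Depending on the installed transformers version, the older `AutoFeatureExtractor`/`post_process` APIs may be needed instead of the image-processor equivalents shown here.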
12-16-2022 15:58:15
12-16-2022 15:58:15
This looks great to me, looking forward to seeing the guide and let me know if there is anything I can help with! 🙂 I would recommend using DETR since it is a lot more popular than YOLOS (263k downloads versus 28.4k).<|||||>@stevhliu That's a good reason to go with DETR, thanks for the tip! <|||||>The issue is fixed with https://github.com/huggingface/transformers/pull/20925
transformers
20,804
closed
Generate: post-generate config doctest fix
# What does this PR do? Fixes doctests that were broken as a result of the `generation_config` PR merge. Note: the failing pipeline test was fixed by adding a missing field to `gpt2`'s `generate_config.json` (which was created before these recent `generation_config` changes). See [this hub commit](https://huggingface.co/gpt2/commit/e7da7f221d5bf496a48136c0cd264e630fe9fcc8).
12-16-2022 15:49:34
12-16-2022 15:49:34
Just a question, so if users do things in the old way like ``` model.config.pad_token_id = model.config.eos_token_id ``` they might get different results before/after the generation config PR ..?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> Just a question, so if users do things in the old way like > > ``` > model.config.pad_token_id = model.config.eos_token_id > ``` > > they might get different results before/after the generation config PR ..? `generate()` supports control from ad hoc model config changes (it has an extra check and handles differences [here](https://github.com/huggingface/transformers/blob/26dd041c6e45379141302e2d293ab4cd9cf805d4/src/transformers/generation/utils.py#L1131)), for retrocompatibility and ease of use. It is deprecated and will be removed soon. Individual generation methods do not have this check, so they are not supporting control from ad hoc model config changes. The doctests in `main` are failing for this reason. It means that fixing the doctests can be done in two ways: 1. Add the same check to all individual generation methods 2. [current implementation in the PR] Change the doctest itself so they don't rely on ad hoc model config changes I decided to follow 2. since we are deprecating it soon anyways AND calling the methods directly is an advanced feature (users should not be relying on side effects from the model config to support advanced functionality, it's a recipe for disaster 👀 ). WDYT? (cc @sgugger)<|||||>Agreed with option 2!
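For anyone reading along, a short sketch of the replacement pattern the thread refers to: instead of mutating `model.config` ad hoc, set the default on the model's generation config or pass the override directly to `generate()`. The prompt text and `max_new_tokens` value below are arbitrary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# option 1: keep the setting on the generation config attached to the model
model.generation_config.pad_token_id = model.generation_config.eos_token_id

# option 2: pass the override ad hoc to generate() itself
inputs = tokenizer("Hello, my dog is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```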
transformers
20,803
closed
[`Vision`] [Refactor] Initialize weights in the correct place
# What does this PR do? This PR forces some modules to be initialised in the correct place (i.e. in the `_init_weights` method). With more vision models being added, contributors are copying the practice of initialising some weights outside `_init_weights`. I think that we should centralize weight initialisation in the `_init_weights` method, by applying this to the most-copied / most-downloaded models. Related: - https://github.com/huggingface/transformers/pull/20716#discussion_r1049764368
12-16-2022 14:46:25
12-16-2022 14:46:25
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,802
closed
Use `baddbmm` to reduce the number of kernel calls when running T5
# What does this PR do? Reduce the number of kernel calls by using `baddbmm` and built-in `F.softmax(..., dtype=torch.float))` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
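For readers unfamiliar with `baddbmm`, here is a small standalone sketch of the fusion idea (not the T5 modeling code itself; the tensor names and shapes are illustrative): the bias add and the batched matmul become a single kernel call, and `F.softmax(..., dtype=...)` replaces a manual cast-to-float round trip.

```python
import torch
import torch.nn.functional as F

batch, seq, dim = 2, 4, 8
query = torch.randn(batch, seq, dim)
key = torch.randn(batch, seq, dim)
position_bias = torch.randn(batch, seq, seq)

# two kernels: batched matmul, then elementwise add
scores_unfused = torch.bmm(query, key.transpose(1, 2)) + position_bias

# one kernel: baddbmm(input, batch1, batch2) computes input + batch1 @ batch2
scores_fused = torch.baddbmm(position_bias, query, key.transpose(1, 2))
assert torch.allclose(scores_unfused, scores_fused, atol=1e-5)

# softmax computed internally in float32, no explicit .float()/.type_as() pair needed
attn_weights = F.softmax(scores_fused, dim=-1, dtype=torch.float32).to(query.dtype)
```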
12-16-2022 13:34:00
12-16-2022 13:34:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>Arf turns out this doesn't work without having extra copies. Closing as I'm not sure if it's worth it.
transformers
20,801
closed
Add script to convert T5X T5 (v1.0 and v1.1) checkpoints to PyTorch
# What does this PR do? Adds a script that can convert Google T5X (Flax) T5 and T5-v1.1 checkpoints into PyTorch checkpoints. This allows users to convert non-standard checkpoints that have been trained with T5X and use them with the Transformers library in PyTorch. Usage: - In case you don't have `gsutil`, install according to https://cloud.google.com/storage/docs/gsutil_install - Native T5X checkpoints are at https://github.com/google-research/t5x/blob/main/docs/models.md#t5-11-checkpoints. Example: `gsutil -m cp -r gs://t5-data/pretrained_models/t5x/t5_1_1_small $HOME/` - Create a corresponding `config.json` for the downloaded checkpoint. Often one already exists, e.g. here we can use https://huggingface.co/google/t5-v1_1-small/blob/main/config.json - Finally `python3 convert_t5x_checkpoint_to_pytorch.py --t5x_checkpoint_path=$HOME/t5_1_1_small --config_file=config.json --pytorch_dump_path=$HOME/t5_1_1_small_pt` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Discussed with @thomwolf . - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? The code is tested but not part of this PR, since the test requires manually downloading the T5X checkpoints from a cloud bucket. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten @sanchit-gandhi @ArthurZucker @younesbelkada
12-16-2022 13:31:08
12-16-2022 13:31:08
_The documentation is not available anymore as the PR was closed or merged._<|||||>I could use some clarification on the following: I'm missing a configuration option for T5 for the 1.0/original T5 checkpoints to have an `lm_head` that shares parameters with the token embeddings. Currently there is `T5Model` (which returns hidden states) and `T5ForConditionalGeneration` (which returns logits, used for T5 v1.1 models among others). The latter assumes there is an `lm_head` layer, but for the 1.0 checkpoints there is no such thing, it reuses the embedding matrix to map to the vocab space.<|||||>Hey @bastings, when there is no `lm_head` you have to set the `tie_word_embeddings` to `True` <|||||>I added the instructions to the top docstring. Maybe it's ready? :-)<|||||>A last nit and we can merge! Thanks a lot for bearing with me 😄 <|||||>Thanks! Committed your suggestion :)<|||||>Once the quality tests are green (requires `make fixup`) we can merge!<|||||>Oh looks like the suggestion made it fail ;)<|||||>Ah, sorry then ahha, I guess the ` make style`will correct that 😅 <|||||>> Ah, sorry then ahha, I guess the ` make style`will correct that 😅 Fixed! :)
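To make the `tie_word_embeddings` remark above concrete, a minimal sketch of preparing a config for an original (v1.0-style) checkpoint whose decoder reuses the embedding matrix instead of a separate `lm_head` might look like this; the starting checkpoint and output path are placeholders.

```python
from transformers import T5Config

# start from an existing config and mark the embeddings as shared with the LM head
config = T5Config.from_pretrained("t5-small")
config.tie_word_embeddings = True  # v1.1-style configs set this to False (separate lm_head)
config.save_pretrained("./t5x_converted_config")
```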
transformers
20,800
closed
Fix whisper export
# What does this PR do? Fix the export for the whisper model The current export for whisper fails with error `Invalid Feed Input Name:past_key_values.3.encoder.value`, because the cross attention key values are not exported as input in the ONNX model after the new condition introduced in the [transformers@97a51](https://github.com/huggingface/transformers/commit/97a51b0c7d483cdf13ea878a987f9aa1c9eecc91). The error occurs due to incorrect dummy input generation for cross attention key values for export. The PR fixes the same. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @lewtun
12-16-2022 13:03:11
12-16-2022 13:03:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @ArthurZucker <|||||>Hey, would you mind adding a bit more context? What is the issue related to current export? <|||||>> Hey, would you mind adding a bit more context? What is the issue related to current export? Hi @ArthurZucker I have updated the PR description.<|||||>@ArthurZucker could you please merge this. I do not have the permissions. Thanks!
transformers
20,799
closed
ImportError: cannot import name 'AutoModelForMaskedLM' from 'transformers' (unknown location)
### System Info Environment: ``` python 3.8 transformers==4.4.2 ubuntu 20.04 cuda 11.3 nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2021 NVIDIA Corporation Built on Mon_May__3_19:15:13_PDT_2021 Cuda compilation tools, release 11.3, V11.3.109 Build cuda_11.3.r11.3/compiler.29920130_0 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 525.60.11 Driver Version: 525.60.11 CUDA Version: 12.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA GeForce ... Off | 00000000:01:00.0 On | Off | | 36% 57C P2 252W / 450W | 18490MiB / 24564MiB | 91% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 1021 G /usr/lib/xorg/Xorg 35MiB | | 0 N/A N/A 1775 G /usr/lib/xorg/Xorg 72MiB | | 0 N/A N/A 1924 G /usr/bin/gnome-shell 185MiB | | 0 N/A N/A 14033 C python 18176MiB | +-----------------------------------------------------------------------------+ ``` `pip install transformers==4.4.2` ``` Python 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> from transformers import AutoModelForMaskedLM Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name 'AutoModelForMaskedLM' from 'transformers' (unknown location) >>> ``` Is this a bug? When I install the latest version `pip install transformers==4.25.1 `, it shows: ``` Python 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> from transformers import AutoModelForMaskedLM 2022-12-16 19:08:29.243002: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib:/usr/local/cuda/lib64:/usr/local/lib/ 2022-12-16 19:08:29.243021: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. ``` why the latest version need cuda 10.1 ? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction step 1: `pip install transformers==4.4.2` step 2: open the terminal step 3: python step 4: from transformers import AutoModelForMaskedLM ### Expected behavior I want to know how to deal with this problem, I can not use the `transformers` package and its functions.
12-16-2022 11:25:33
12-16-2022 11:25:33
Hi @medlen Thanks for raising the issue! Your installation might be broken, as I managed to import `AutoModelForMaskedLM` with `transformers==4.4.2` using the same hardware specs as you: ``` >>> import transformers >>> transformers.__version__ '4.4.2' >>> from transformers import AutoModelForMaskedLM >>> ``` Can you double check your `transformers` version and let us know?<|||||>Thanks for your quick reply. This is my `transformers` version : `pip show transformers` ``` Name: transformers Version: 4.4.2 Summary: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch Home-page: https://github.com/huggingface/transformers Author: Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Sam Shleifer, Patrick von Platen, Sylvain Gugger, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors Author-email: [email protected] License: Apache Location: /home/jinhl/anaconda3/envs/py38/lib/python3.8/site-packages Requires: tokenizers, packaging, filelock, tqdm, numpy, regex, sacremoses, requests Required-by: simpletransformers ``` ``` Python 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import transformers >>> transformers.__version__ '4.4.2' >>> from transformers import AutoModelForMaskedLM Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name 'AutoModelForMaskedLM' from 'transformers' (unknown location) >>> ``` Additionally, my GPU device is GeForce 4090.<|||||>Are you using conda or python venv? can you run `which python` and `which pip`?<|||||>I am using conda. These are their locations: ``` which conda /home/jinhl/anaconda3/condabin/conda which pip /home/jinhl/anaconda3/envs/py38/bin/pip which python /home/jinhl/anaconda3/envs/py38/bin/python ```<|||||>There is maybe something wrong with my python environment. I create a new conda environment with python=3.8, and install `transformers==4.4.2`. Repeating the above steps. It raise a new version invalid error: ``` >>> import transformers Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jinhl/PythonWorkspace/transformers/__init__.py", line 43, in <module> from . import dependency_versions_check File "/home/jinhl/PythonWorkspace/transformers/dependency_versions_check.py", line 40, in <module> require_version_core(deps[pkg]) File "/home/jinhl/PythonWorkspace/transformers/utils/versions.py", line 94, in require_version_core return require_version(requirement, hint) File "/home/jinhl/PythonWorkspace/transformers/utils/versions.py", line 85, in require_version if want_ver is not None and not ops[op](version.parse(got_ver), version.parse(want_ver)): File "/home/jinhl/anaconda3/envs/py38-test/lib/python3.8/site-packages/packaging/version.py", line 52, in parse return Version(version) File "/home/jinhl/anaconda3/envs/py38-test/lib/python3.8/site-packages/packaging/version.py", line 197, in __init__ raise InvalidVersion(f"Invalid version: '{version}'") packaging.version.InvalidVersion: Invalid version: '0.10.1,<0.11' >>> ``` after my test, this is because tokenizers version invalid. 
in: https://github.com/huggingface/transformers/blob/v4.4.2/src/transformers/utils/versions.py#L85 ``` if want_ver is not None and not ops[op](version.parse(got_ver), version.parse(want_ver)): raise pkg_resources.VersionConflict( f"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}" ) ``` for `tokenizers` , `got_ver=0.10.3` `want_ver=0.10.1,<0.11`, the later cause the error. I am not sure it is a special case for my machine or a bug. After I comment out the version check code, `AutoModelForMaskedLM ` can be correctly imported. <|||||>+1<|||||>+1<|||||>+1 <|||||>You can keep posting `+1`s without any useful information, it will definitely help us fix the issue.<|||||>I encountered the same error: `packaging.version.InvalidVersion: Invalid version: '0.10.1,<0.11'` This error seems to be caused by a problem in handling multiple requirements, which has been resolved by a change in [this PR](https://github.com/huggingface/transformers/pull/11110). Therefore, the problem can be resolved by changing the version of `transfomers` to 4.6 or higher.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
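A minimal standalone reproduction of the parsing problem described above (independent of transformers): `packaging.version.parse` cannot treat a multi-clause requirement like `>=0.10.1,<0.11` as a single version string, which is what the pre-4.6 version check effectively tried to do. A `SpecifierSet` is the right tool for such strings:

```python
from packaging import specifiers, version

print(version.parse("0.10.3"))            # fine: a single version string
# version.parse("0.10.1,<0.11")           # raises InvalidVersion on recent `packaging` releases

# the multi-clause requirement expressed correctly
spec = specifiers.SpecifierSet(">=0.10.1,<0.11")
print("0.10.3" in spec)                   # True -> tokenizers 0.10.3 actually satisfies it
```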
transformers
20,798
closed
Fix object detection2
# What does this PR do? Fixes https://github.com/huggingface/transformers/pull/20776 better. - Reverts the previous PR. - The previous model was using a mix of LayoutLM and LayoutLMv2 leading to a bit of madness. The previous fix was wrong because it made other pipelines try to load the feature extractor which might not have existed :(. This fixes differently, by using another new models, and fixing it's config too. `Narsil/layoutlmv3-finetuned-funsd` is a fork of `nielsr/layoutlmv3-finetuned-funsd` with the `tokenizer_config.json` fixed (to not use a Roberta tokenizer, but it's proper LayoutLMv3Tokenizer). Also modified the README.md to include examples in the widget, and force the pipeline_tag to be `object-detection`. https://huggingface.co/Narsil/layoutlmv3-finetuned-funsd ![Screenshot from 2022-12-16 11-03-23](https://user-images.githubusercontent.com/204321/208074245-8f718e27-3864-42c5-aeae-558ff4fd2e0b.png) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
12-16-2022 09:59:23
12-16-2022 09:59:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks, looks great! > > Since when is LayoutLM supported by the object detection pipeline? :D https://github.com/huggingface/transformers/pull/20143
transformers
20,797
closed
MaskedLM models don't output CLS and weights are not initialized in MaskedLM models
### System Info - `transformers` version: 4.25.1 - Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.31 - Python version: 3.9.15 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction When I try to use a Masked LM model some CLS weights are not initialized, thus when running a text through the model CLS and other special tokens as SEP will not be predicted correctly, producing misleading loss values when evaluating the models. ```python from transformers import AutoModelForMaskedLM, AutoTokenizer model_name = 'bert-base-cased' model = AutoModelForMaskedLM.from_pretrained(model_name) ``` Gives: > Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight'] > - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). > - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). > So then when I run: ```python tokenizer = AutoTokenizer.from_pretrained(model_name) text = "Hello my name [MASK] Jhon, how can I [MASK] you?" inputs = tokenizer(text, return_tensors='pt') out_argmaxes = model(**inputs).logits[0].argmax(-1) print(inputs['input_ids']) print(out_argmaxes) print(tokenizer.decode(out_argmaxes)) ``` I got: ``` tensor([[ 101, 8667, 1139, 1271, 103, 147, 8613, 117, 1293, 1169, 146, 103, 1128, 136, 102]]) tensor([ 119, 119, 1139, 1271, 1110, 147, 8613, 117, 1293, 1169, 146, 1494, 1128, 136, 119]) '.. my name is Jhon, how can I help you?.' ``` This happens with different models, not just **bert-base-uncased**, the only case where this is not happening is with a custom **Roberta MaskedLM** model trained with a **custom tokenizer** where CLS and other special tokens are mapped as the first ids in the tokenizer, such as PAD: 0 / <mask>: 1 / CLS: 2 ecc. ### Expected behavior I would expect that the model would output logits with correctly predicted CLS and SEP tokens as the first and last tokens of the output, as they should be so that they can be evaluated producing a correct loss value. ``` '[CLS]Hello my name is Jhon, how can I help you?[SEP]' ```
12-16-2022 09:52:20
12-16-2022 09:52:20
Hey! This is expected, you should be using the following : ```python from transformers import pipeline unmasker = pipeline('fill-mask', model='bert-base-uncased') unmasker("Hello my name [MASK] Jhon, how can I [MASK] you?") ``` ```python [[{'score': 0.8831212520599365, 'sequence': '[CLS] Hello my name is Jhon, how can I [MASK] you? [SEP]', 'token': 1110, 'token_str': 'is'}, {'score': 0.03171379491686821, 'sequence': '[CLS] Hello my name, Jhon, how can I [MASK] you? [SEP]', 'token': 117, 'token_str': ','}, {'score': 0.020678386092185974, 'sequence': '[CLS] Hello my name? Jhon, how can I [MASK] you? [SEP]', 'token': 136, 'token_str': '?'}, {'score': 0.013670953921973705, 'sequence': '[CLS] Hello my name am Jhon, how can I [MASK] you? [SEP]', 'token': 1821, 'token_str': 'am'}, {'score': 0.009090826846659184, 'sequence': '[CLS] Hello my name was Jhon, how can I [MASK] you? [SEP]', 'token': 1108, 'token_str': 'was'}], [{'score': 0.9777076244354248, 'sequence': '[CLS] Hello my name [MASK] Jhon, how can I help you? [SEP]', 'token': 1494, 'token_str': 'help'}, {'score': 0.006017779931426048, 'sequence': '[CLS] Hello my name [MASK] Jhon, how can I meet you? [SEP]', 'token': 2283, 'token_str': 'meet'}, {'score': 0.00487362127751112, 'sequence': '[CLS] Hello my name [MASK] Jhon, how can I reach you? [SEP]', 'token': 2519, 'token_str': 'reach'}, {'score': 0.0022672810591757298, 'sequence': '[CLS] Hello my name [MASK] Jhon, how can I be you? [SEP]', 'token': 1129, 'token_str': 'be'}, {'score': 0.0018145894864574075, 'sequence': '[CLS] Hello my name [MASK] Jhon, how can I call you? [SEP]', 'token': 1840, 'token_str': 'call'}]] ``` Where you have a list of length `2`, with the predictions and their different scores. This can be seen in the model cards' how to use. <|||||>@ArthurZucker Thanks for your reply, but my goal here is not to extract the predicted tokens, I would like to extract the raw model logits to calculate loss value with respect to the correct labels, for example using `torch.nn.CrossEntropyLoss`, how can I achieve this without incurring in those wrong logits? If I use the bare model without a pipeline I get an unexpected loss value, so maybe I'm using this wrong: ```python text_original = "Hello my name is Jhon, how can I help you?" text = "Hello my name [MASK] Jhon, how can I [MASK] you?" inputs = tokenizer(text, return_tensors='pt') labels = tokenizer(text_original, return_tensors='pt')['input_ids'] out = model(**inputs, labels=labels) out.loss ``` ``` tensor(3.2894, grad_fn=<NllLossBackward0>) ``` Thanks <|||||>Any suggestion on how to use the model to extract the correct loss value? 
Thanks<|||||>Hey, as mentioned in the [model's documentation](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertForMaskedLM.forward.example), you should be using the following : ```python >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] # mask labels of non-[MASK] tokens >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) >>> round(outputs.loss.item(), 2) 0.81 ``` Tell me if this is not fixing the issue 😉 <|||||>Thanks a lot, I also used to not mask all the non-masked tokens in the labels during training, this would give me losses in the range of 0.10 - 0.20 while training my MaskedLM, I don't how this was affecting my models, anyway I updated the code accordingly to your suggestion, leaving only the masked tokens in the labels, now the loss is up to 1.6ish during training, so maybe this will produce better gradients to train the network? I also was able to evaluate my models correctly.
transformers
20,796
closed
lazy import torch._softmax_backward_data for better compatibility
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Dear huggingface team, Thanks for the great library! I'm from [OneFlow](https://github.com/Oneflow-Inc/oneflow), a deep learning framework with PyTorch-compatible APIs and better performance. We want OneFlow users to run 3rd libraries by simply replacing all `torch` with `oneflow`. For `transformers` library, a blocker is the import of internal API `torch._softmax_backward_data` (OneFlow doesn't have the same internal APIs with PyTorch). This PR moves the import from the global scope into the function `softmax_backward_data`, so in most cases it will not be triggered. ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @fxmarty @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
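For context, the lazy-import pattern described above looks roughly like the snippet below. This is a simplified sketch; the real helper in `transformers` also branches on the installed torch version, since the private API's argument list changed across releases.

```python
def softmax_backward_data(parent, grad_output, output, dim, self):
    # The private symbol is imported only when the backward helper is actually called,
    # so simply importing the module no longer requires `torch._softmax_backward_data`
    # to exist in the torch-compatible framework being used.
    from torch import _softmax_backward_data

    return _softmax_backward_data(grad_output, output, parent.dim, self.dtype)
```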
12-16-2022 09:32:04
12-16-2022 09:32:04
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger thanks for your review! Tests all pass now
transformers
20,795
closed
Install `sentencepiece` in `DeepSpeed` CI image
# What does this PR do? Install `sentencepiece` in the `DeepSpeed` CI image.
- The new base image has no `sentencepiece` pre-installed, but it is required for the DeepSpeed CI tests.
- With this PR, the tests all pass (on the single-GPU runner).
12-16-2022 09:16:10
12-16-2022 09:16:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>It's done :-) as suggested
transformers
20,794
closed
When I use the following code on a TPU VM and use model.generate() for inference, the speed is very slow. It seems that the TPU is not used. What is the problem?
### System Info When I use the following code on tpuvm and use model.generate() to infer, the speed is very slow. It seems that the tpu is not used. What is the problem? jax device is exist ```python import jax num_devices = jax.device_count() device_type = jax.devices()[0].device_kind assert "TPU" in device_type from transformers import AutoTokenizer, FlaxAutoModelForCausalLM model = FlaxMT5ForConditionalGeneration.from_pretrained("google/mt5-small") tokenizer = T5Tokenizer.from_pretrained("google/mt5-small") input_context = "The dog" # encode input context input_ids = tokenizer(input_context, return_tensors="np").input_ids # generate candidates using sampling outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True) print(outputs) ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import jax num_devices = jax.device_count() device_type = jax.devices()[0].device_kind assert "TPU" in device_type from transformers import AutoTokenizer, FlaxAutoModelForCausalLM model = FlaxMT5ForConditionalGeneration.from_pretrained("google/mt5-small") tokenizer = T5Tokenizer.from_pretrained("google/mt5-small") input_context = "The dog" # encode input context input_ids = tokenizer(input_context, return_tensors="np").input_ids # generate candidates using sampling outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True) print(outputs) ``` ### Expected behavior Expect it to be fast
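A hedged sketch of the kind of fix discussed in the comments below: JIT-compile `generate` so the whole decoding loop runs as one XLA program on the TPU, and pass a fresh `prng_key` on each call when sampling so repeated calls are not identical. The first call compiles and is slow; subsequent calls with the same input shapes are fast.

```python
import jax
from transformers import FlaxMT5ForConditionalGeneration, T5Tokenizer

model = FlaxMT5ForConditionalGeneration.from_pretrained("google/mt5-small")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")

# arguments that change the traced program must be static
jit_generate = jax.jit(model.generate, static_argnames=["max_length", "top_k", "do_sample"])

input_ids = tokenizer("The dog", return_tensors="np").input_ids
key = jax.random.PRNGKey(0)

for step in range(3):
    key, subkey = jax.random.split(key)  # new key each call -> different samples
    outputs = jit_generate(
        input_ids, max_length=20, top_k=30, do_sample=True, prng_key=subkey
    ).sequences
    print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```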
12-16-2022 09:15:32
12-16-2022 09:15:32
cc @gante and @sanchit-gandhi <|||||>Hey @joytianya! Sorry about the late reply here! Cool to see that you're using the Flax MT5 model! The big speed-up from using JAX on TPU comes from JIT compiling a function: https://jax.readthedocs.io/en/latest/jax-101/02-jitting.html. It's worth reading this guide to get a feel for how JAX + XLA + TPU work in combination to give you fast kernel execution. I've written an ipynb notebook that demonstrates how you can JIT compile the generate method: https://github.com/sanchit-gandhi/codesnippets/blob/main/benchmark_flaxmt5_jit_generate.ipynb Running this using a 'tiny' version of the Flax MT5 model on CPU, I get a 75x speed-up JIT compiling the generate function vs the vanilla generate function! That's fast right! You can adapt the script for the `mt5-small` checkpoint as you require 🤗 You'll need to pass any additional args that use boolean control flow in the generate method under `static_argnames` (as done with `max_length`, `top_k`, `do_sample`). Let me know if you have any other questions, happy to help!<|||||>Thank you very much for your reply, I tried it, it is indeed effective In addition, It reports OOM on the V3-8TPU to use MT5-XXL. do you have any suggestions? Make me can inference MT5-XXL with v3-8 TPU ```shell jax._src.traceback_util.UnfilteredStackTrace: jaxlib.xla_extension.XlaRuntimeError: RESOURCE_EXHAUSTED: Attempting to reserve 320.03M at the bottom of memory. That was not possible. There are 1.20G free, 0B reserved, and 196.31M reservable. If fragmentation is eliminated, the maximum reservable bytes would be 1.20G, so compaction will enable this reservation. The nearest obstacle is at 196.31M from the bottom with size 160.00M. ```<|||||>Hey @joytianya! Glad to hear that JIT'ing the generate function worked well! The MT5-XXL checkpoint is 13 billion params (2.33GB) - this is pretty significant! We have to get pretty advanced to fit such a big model on a single TPU v3-8. There are two things that you can try: 1. Half-precision inference: set the computation dtype and model parameters to bfloat16 (half) precision. This will save a significant amount of memory vs float32 (full) precision and should get you numerically equivalent results 2. Model partitioning: use [`pjit`](https://jax.readthedocs.io/en/latest/jax-101/08-pjit.html) for model parallelism 1 is quite straightforward! 2 is very involved 😅. Let's start with 1! Here's a code snippet on how you can achieve 1: https://github.com/sanchit-gandhi/codesnippets/blob/main/flaxmt5_inference_half_precision.ipynb For pjit, you'll need to modify the code for Flax MT5 to add the sharing annotations. You can see an example for Flax BLOOM here: https://github.com/huggingface/bloom-jax-inference/blob/2a04aa519d262729d54adef3d19d63879f81ea89/bloom_inference/modeling_bloom/modeling_bloom.py#L200-L202 This is pretty advanced stuff! I can explain how it works a bit more if you really need to use pjit. Best of luck! Hope these answers provide some pointers as to how you can fit the XXL model on a v3-8!<|||||>One other thing I forgot! If you're running inference on _batches_ of data, using [`pmap`](https://jax.readthedocs.io/en/latest/jax-101/06-parallelism.html) for data parallelism across TPU devices is by far your best shout. You can do this easily using the example script [run_clm_flax.py](https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_clm_flax.py) with the `--do_eval` flag. 
This example wraps up the model loading, data loading and data parallelisation using pmap into one script, so you can run it using a single command: ``` python run_clm_flax.py \ --output_dir="./eval-out" \ --model_name_or_path="google/mt5-small" \ --dataset_name="oscar" \ --dataset_config_name="unshuffled_deduplicated_no" \ --do_eval \ --per_device_eval_batch_size="64" \ --overwrite_output_dir \ ``` Currently, the evaluation step will only return the eval loss. You can modify it to also return the logits to get the actual predictions as well: https://github.com/huggingface/transformers/blob/9edf37583411f892cea9ae7d98156c85d7c087b1/examples/flax/language-modeling/run_clm_flax.py#L711 If nothing else, you can use the run_clm_flax.py script as an example of how we can pmap to effectively parallelise across TPU devices.<|||||>great! Thank you very much for your suggestion. I will try it next<|||||>Put together a quick codesnippet that isolates `pmap`: https://github.com/sanchit-gandhi/codesnippets/blob/main/pmap_flaxmt5_generate.ipynb This doesn't require any optimiser initialisation so should be much more memory efficient than using the previous suggestion of [run_clm_flax.py](https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_clm_flax.py).<|||||>ok, Does this method also support the XXL model on the TPU V3-8?<|||||>The methodology remains the same for any checkpoint. As to whether the XXL model fits in memory you'll have to experiment for yourself! Definitely worth trying converting the model params to half-precision and running the computations in bf16 for this size model (as done in this code snippet: https://github.com/sanchit-gandhi/codesnippets/blob/main/flaxmt5_inference_half_precision.ipynb)<|||||>ok, I am very grateful for your suggestion, I plan to try and experiment further<|||||>When I load it with this model"ClueAI/ChatYuan-large-v1", the following error will occur. How to solve this problem? ```shell Some weights of the model checkpoint at ClueAI/ChatYuan-large-v1 were not used when initializing FlaxT5ForConditionalGeneration: {('decoder', 'embed_tokens', 'kernel'), ('encoder', 'embed_tokens', 'kernel')} - This IS expected if you are initializing FlaxT5ForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing FlaxT5ForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-16-6177c268ed70>](https://localhost:8080/#) in <module> 1 model_name = "ClueAI/ChatYuan-large-v1" 2 #model, params = FlaxMT5ForConditionalGeneration.from_pretrained(model_name, _do_init=False) ----> 3 model, params = FlaxT5ForConditionalGeneration.from_pretrained(model_name, from_pt=True) 4 5 tokenizer = T5Tokenizer.from_pretrained(model_name) TypeError: cannot unpack non-iterable FlaxT5ForConditionalGeneration object ``` ```python model, params = FlaxT5ForConditionalGeneration.from_pretrained("ClueAI/ChatYuan-large-v1", from_pt=True) ```<|||||>Hey @joytianya! It's not possible to use `from_pt=True` with `_do_init=False`. 
Currently, you need to load PyTorch weights with `_do_init=True`: ```python model = FlaxT5ForConditionalGeneration.from_pretrained("ClueAI/ChatYuan-large-v1", from_pt=True) params = model.params ``` Or directly load Flax weights **if they are saved in the repo**. If you want to load the model instance and weights separately, you can set `_do_init=False` (see https://github.com/huggingface/transformers/pull/16148#issue-1168756524): ```python model, params = FlaxT5ForConditionalGeneration.from_pretrained("ClueAI/ChatYuan-large-v1", _do_init=False) ``` <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> #16148 (comment) while i try, error occur, How to solve this problem? ```python model, params = FlaxT5ForConditionalGeneration.from_pretrained("ClueAI/ChatYuan-large-v1", _do_init=False) ``` ```python OSError: ClueAI/ChatYuan-large-v1 does not appear to have a file named flax_model.msgpack but there is a file for PyTorch weights. Use `from_pt=True` to load this model from those weights. ````<|||||>i try it, and How to configure "max_length", "top_k", "do_sample" and other parameters with this ? https://github.com/sanchit-gandhi/codesnippets/blob/main/pmap_flaxmt5_generate.ipynb <|||||>outputs = jit_generate(input_ids=input_ids, max_new_tokens=512, top_k=30, do_sample=True, temperature=0.7).sequences I found that the generated shape is max_new_tokens , Whether the end character can be reached and terminated , so as to save time What shall I do?<|||||>I found that the results of each run are the same, but do_ Sample=True, how to configure it to generate randomly<|||||>hi, @sanchit-gandhi I look forward to your reply<|||||>Hey @joytianya! Answering your questions sequentially: 1. `_do_init=False` is only supported when we directly load Flax weights. The error message we're getting is telling us that the model only has PyTorch weights available. Let's first load the model in PyTorch on CPU, save it as a Flax model, then re-load in on TPU: ```python import jax from transformers import FlaxMT5ForConditionalGeneration SAVE_DIR = "/path/to/save/dir" # change this to where you want the model to be saved with jax.default_device(jax.devices("cpu")[0]): model = FlaxMT5ForConditionalGeneration.from_pretrained("ClueAI/ChatYuan-large-v1", from_pt=True) model.save_pretrained(SAVE_DIR) ``` Now the next time you load the model, you can do so with `_do_init=False` and the default TPU device: ```python model, params = FlaxT5ForConditionalGeneration.from_pretrained(SAVE_DIR, _do_init=False) ``` 2. Can you try using `static_broadcasted_argnums` and passing the argument indices of the variables you want to control: ```python pmap_generate = jax.pmap(model.generate, "batch", static_broadcasted_argnums =[ <PUT A LIST OF THE ARGNUMS YOU WANT TO PASS>]) ``` See https://jax.readthedocs.io/en/latest/_autosummary/jax.pmap.html for details. 3. > Whether the end character can be reached and terminated , so as to save time The model will stop generating when the EOS token is reached. Make sure you have configured your tokenizer correctly: https://huggingface.co/docs/transformers/model_doc/mt5#transformers.T5Tokenizer 4. 
> I found that the results of each run are the same, but do_ Sample=True, how to configure it to generate randomly Do you have a codesnippet you could share that demonstrates this? Thanks!<|||||>In order to explain the problem 3 and 4 in detail, I wrote this code and after execution. For 4. The result of each generation is exactly the same For 3. Different from max_length, time is very different. Time and max_length are proportional. It doesn’t seem to end early ```python from transformers import T5Tokenizer, FlaxMT5ForConditionalGeneration import jax model = FlaxMT5ForConditionalGeneration.from_pretrained("google/mt5-small", from_pt=True) tokenizer = T5Tokenizer.from_pretrained("google/mt5-small") # vanilla generate -> JIT generate jit_generate = jax.jit(model.generate, static_argnames=["max_length", "top_k", "do_sample"]) def answer(max_length): input_context = ["The dog is", "The cat is"] input_ids = tokenizer(input_context, return_tensors="np").input_ids outputs = jit_generate(input_ids=input_ids, max_length=max_length, top_k=30, do_sample=True).sequences res = tokenizer.batch_decode(outputs, skip_special_tokens=True) print(outputs) print(res) return res answer(20) import time start_time = time.time() for i in range(10): answer(20) print(time.time() - start_time) answer(1024) import time start_time = time.time() for i in range(10): answer(1024) print(time.time() - start_time) ``` ```python from transformers import T5Tokenizer, FlaxMT5ForConditionalGeneration import jax import jax.numpy as jnp model = FlaxMT5ForConditionalGeneration.from_pretrained("ClueAI/ChatYuan-large-v1", from_pt=True, dtype=jnp.bfloat16) model.params = model.to_bf16(model.params) tokenizer = T5Tokenizer.from_pretrained("ClueAI/ChatYuan-large-v1") # copy (replicate) the params across your TPU devices #params = jax_utils.replicate(params) # pmap generate (like jit, but replicated across our JAX devices) jit_generate = jax.jit(model.generate, static_argnames=["max_length", "max_new_tokens", "top_k", "do_sample", "temperature", "eos_token_id"]) def answer(max_length): input_context = ["The dog is", "The cat is"] input_ids = tokenizer(input_context, return_tensors="np").input_ids outputs = jit_generate(input_ids=input_ids, max_length=max_length, top_k=30, do_sample=True).sequences res = tokenizer.batch_decode(outputs, skip_special_tokens=True) print(outputs) print(res) return res answer(256) import time start_time = time.time() for i in range(10): answer(256) print(time.time() - start_time) answer(1024) import time start_time = time.time() for i in range(10): answer(1024) print(time.time() - start_time) ```<|||||>for 2, Is this correct? ```python pmap_generate = jax.pmap(model.generate, "batch", static_broadcasted_argnums = [ 2, 3, 4, 5, 6]) outputs = pmap_generate(input_ids, attention_mask=attention_mask, max_new_tokens=max_new_tokens, top_k=30, do_sample=True, temperature=0.7, params=params).sequences ``` error occur: ```python outputs = pmap_generate(input_ids, attention_mask=attention_mask, max_new_tokens=max_new_tokens, top_k=30, do_sample=True, temperature=0.7, params=params).sequences ValueError: pmapped function has static_broadcasted_argnums=(2, 3, 4, 5, 6) but was called with only 1 positional argument. All static broadcasted arguments must be passed positionally. 
```<|||||>hi, @sanchit-gandhi I look forward to your reply<|||||>Hey @joytianya, If you don't want to change the generation params in `.generate`, you can just fix them like this: ```python from flax.training.common_utils shard def generate(params, batch): outputs = model.generate(batch["input_ids"], attention_mask=batch["attention_mask"], max_new_tokens=128, top_k=30, do_sample=True, temperature=0.7, params=params).sequences # anything that does not depend on `batch` is fixed return outputs p_generate = jax.pmap(generate, "batch") input_context = ["The dog is" for _ in range(8)] # batch size needs to be a multiple of the number of TPU devices batch = tokenizer(input_context, return_tensors="np") batch = shard(batch) # slow - we're compiling outputs = p_generate(batch) # fast! outputs = p_generate(batch) ``` <|||||>> In order to explain the problem 3 and 4 in detail, I wrote this code and after execution. > > For 4. The result of each generation is exactly the same > > For 3. Different from max_length, time is very different. Time and max_length are proportional. It doesn’t seem to end early > > > > ```python > > from transformers import T5Tokenizer, FlaxMT5ForConditionalGeneration > > import jax > > model = FlaxMT5ForConditionalGeneration.from_pretrained("google/mt5-small", from_pt=True) > > tokenizer = T5Tokenizer.from_pretrained("google/mt5-small") > > # vanilla generate -> JIT generate > > jit_generate = jax.jit(model.generate, static_argnames=["max_length", "top_k", "do_sample"]) > > > > > > def answer(max_length): > > input_context = ["The dog is", "The cat is"] > > input_ids = tokenizer(input_context, return_tensors="np").input_ids > > outputs = jit_generate(input_ids=input_ids, max_length=max_length, top_k=30, do_sample=True).sequences > > res = tokenizer.batch_decode(outputs, skip_special_tokens=True) > > > > print(outputs) > > print(res) > > return res > > > > answer(20) > > > > import time > > start_time = time.time() > > for i in range(10): > > answer(20) > > print(time.time() - start_time) > > > > > > answer(1024) > > > > import time > > start_time = time.time() > > for i in range(10): > > answer(1024) > > print(time.time() - start_time) > > ``` > > > > ```python > > from transformers import T5Tokenizer, FlaxMT5ForConditionalGeneration > > import jax > > import jax.numpy as jnp > > model = FlaxMT5ForConditionalGeneration.from_pretrained("ClueAI/ChatYuan-large-v1", from_pt=True, dtype=jnp.bfloat16) > > model.params = model.to_bf16(model.params) > > tokenizer = T5Tokenizer.from_pretrained("ClueAI/ChatYuan-large-v1") > > # copy (replicate) the params across your TPU devices > > #params = jax_utils.replicate(params) > > # pmap generate (like jit, but replicated across our JAX devices) > > jit_generate = jax.jit(model.generate, static_argnames=["max_length", "max_new_tokens", "top_k", "do_sample", "temperature", "eos_token_id"]) > > > > def answer(max_length): > > input_context = ["The dog is", "The cat is"] > > input_ids = tokenizer(input_context, return_tensors="np").input_ids > > outputs = jit_generate(input_ids=input_ids, max_length=max_length, top_k=30, do_sample=True).sequences > > res = tokenizer.batch_decode(outputs, skip_special_tokens=True) > > > > print(outputs) > > print(res) > > return res > > > > answer(256) > > > > import time > > start_time = time.time() > > for i in range(10): > > answer(256) > > print(time.time() - start_time) > > > > > > answer(1024) > > > > import time > > start_time = time.time() > > for i in range(10): > > answer(1024) > > print(time.time() - 
start_time) > > ``` Is this phenomenon correct?<|||||>Hey @joytianya > The result of each generation is exactly the same We can't really rely on the outputs of the model since it's only been pre-trained, not fine-tuned, so it's bound to output gibberish regardless of what we give it (see https://huggingface.co/google/mt5-small for details). You can try using a fine-tuned checkpoint if you want to look at the actual token predictions. > Different from max_length, time is very different. Time and max_length are proportional. It doesn’t seem to end early This is because the model has only been pre-trained (not fine-tuned): the model never hits the end-of-sequence token, it generates random outputs until it hits max length. Therefore, it always generates to max length and never terminates early. So if you increase max length, the model generates more tokens, and so decoding takes longer.<|||||>hey @sanchit-gandhi , 1. I can try using a fine-tuned checkpoint ClueAI/ChatYuan-large-v1, The phenomenon is the same. I used sample sampling. With the same code, when I use GPU, the results of each run are different. But the results on TPU are still the same. 2. Additionally, you can see that the length of the generated sentence is much smaller than the max length of tokens, so it should have already hit the end-of-sequence token. Hope you can give it a try. ```python from transformers import T5Tokenizer, FlaxMT5ForConditionalGeneration import jax import jax.numpy as jnp model = FlaxMT5ForConditionalGeneration.from_pretrained("ClueAI/ChatYuan-large-v1", from_pt=True, dtype=jnp.bfloat16) model.params = model.to_bf16(model.params) tokenizer = T5Tokenizer.from_pretrained("ClueAI/ChatYuan-large-v1") # copy (replicate) the params across your TPU devices #params = jax_utils.replicate(params) # pmap generate (like jit, but replicated across our JAX devices) jit_generate = jax.jit(model.generate, static_argnames=["max_length", "max_new_tokens", "top_k", "do_sample", "temperature", "eos_token_id"]) def answer(max_length): input_context = ["The dog is", "The cat is"] input_ids = tokenizer(input_context, return_tensors="np").input_ids outputs = jit_generate(input_ids=input_ids, max_length=max_length, top_k=30, do_sample=True).sequences res = tokenizer.batch_decode(outputs, skip_special_tokens=True) print(outputs) print(res) return res answer(256) import time start_time = time.time() for i in range(10): answer(256) print(time.time() - start_time) answer(1024) import time start_time = time.time() for i in range(10): answer(1024) print(time.time() - start_time) ```<|||||>Hey @joytianya - if running this on a GPU gives one answer and running it on a TPU another, I'm not really sure this is a transformers based issue but probably a JAX or Flax one. Could you try re-running the code-snippet under the highest JAX matmul precision? We should then get equivalence on CPU/GPU/TPU. See https://github.com/huggingface/transformers/issues/15754#issuecomment-1048163411 for details.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
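For reference, a minimal sketch of the matmul-precision setting suggested above; the flag name is standard JAX configuration, and the `prng_key` remark is an assumption worth checking against the Flax `generate` signature:

```python
import jax

# TPUs default to bfloat16 matmuls; forcing the highest precision removes the usual
# CPU/GPU vs TPU numerical differences mentioned above.
jax.config.update("jax_default_matmul_precision", "float32")

# For sampling, Flax `generate` also accepts an explicit `prng_key`; when it is omitted,
# a fixed default key is reused, which can make repeated sampled runs identical.
rng = jax.random.PRNGKey(42)
# outputs = jit_generate(input_ids=input_ids, do_sample=True, top_k=30, prng_key=rng).sequences
```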
transformers
20,793
closed
Parameters which did not receive grad for rank 5: encoder.block.0.layer.0.TransientGlobalSelfAttention.global_relative_attention_bias.weight
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.31 - Python version: 3.9.13 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Using the official example https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization_no_trainer.py 1. Export `CUDA_LAUNCH_BLOCKING=1` and `TORCH_DISTRIBUTED_DEBUG=INFO` 2. Run `accelerate launch --config_file='./accelerate.yaml' run_summarization_notrainer.py --seed=42 --preprocessing_num_workers=1 --weight_decay='0.001' --output_dir="arxiv_summarization/longt5/5_beam/" --per_device_train_batch_size=1 --per_device_eval_batch_size=1 --dataset_name='ccdv/arxiv-summarization' --num_train_epochs=10 --model_name_or_path='google/long-t5-tglobal-base' --tokenizer_name='google/long-t5-tglobal-base' --num_beams=5 --with_tracking --report_to='wandb' --checkpointing_steps='epoch'` Running the script, I get this error after running over 12 examples: Parameters which did not receive grad for rank 5: encoder.block.0.layer.0.TransientGlobalSelfAttention.global_relative_attention_bias.weight Interestingly, when I set the number of processes to 1 in accelerate.yaml > compute_environment: LOCAL_MACHINE > deepspeed_config: {} > distributed_type: MULTI_GPU > fsdp_config: {} > machine_rank: 0 > main_process_ip: null > main_process_port: null > main_training_function: main > mixed_precision: 'no' > num_machines: 1 > num_processes: 1 > use_cpu: false the script runs normally, but when I set it to 8 I get the error above after running over 12 examples. ### Expected behavior The script should run normally.
12-16-2022 08:41:50
12-16-2022 08:41:50
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@sgugger Hi! I am trying to use LongT5 for a summarization task. I am using this [script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization#with-accelerate) and this [model](https://huggingface.co/google/long-t5-tglobal-base), and I am getting this error: > Traceback (most recent call last): > File "/cephfs/home/arij/Memory-transformer-with-hierarchical-attention_MLM/Summarization/run_summarization_notrainer-Copy1.py", line 947, in <module> > main() > File "/cephfs/home/arij/Summarization/run_summarization_notrainer-Copy1.py", line 821, in main > outputs = model(**batch) > File "/home/arij/anaconda3/envs/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl > return forward_call(*input, **kwargs) > File "/home/arij/anaconda3/envs/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1026, in forward > if torch.is_grad_enabled() and self.reducer._rebuild_buckets(): > RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by > making sure all `forward` function outputs participate in calculating loss. > If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable). > Parameters which did not receive grad for rank 1: encoder.block.0.layer.0.TransientGlobalSelfAttention.global_relative_attention_bias.weight > Parameter indices which did not receive grad for rank 1: 6 I have tried many times to use it, but it does not work. Any hints?
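For anyone hitting the same error with `accelerate`, a minimal sketch of the usual workaround; it silences the DDP check rather than fixing the root cause, so treat it as a band-aid:

```python
from accelerate import Accelerator, DistributedDataParallelKwargs

# Tell DDP to tolerate parameters that do not receive a gradient on every rank.
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])

# model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)
```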
transformers
20,792
closed
Add Mask2Former
# What does this PR do? Adds Mask2Former to transformers. Original repo: https://github.com/facebookresearch/Mask2Former/ Paper: https://arxiv.org/abs/2112.01527 Co-authored with @shivalikasingh95 To Do: - [x] Fix model tests (hidden state shapes, loading the config) - [X] Test model, visualize outputs - [X] Update model cards ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests?
12-16-2022 06:29:29
12-16-2022 06:29:29
_The documentation is not available anymore as the PR was closed or merged._<|||||>Gently pinging @sgugger here for a final review.<|||||>> Thanks for adding this new model! I have a couple of nits but nothing major, this is very clean already! @sgugger Thanks for the review! I have resolved all comments from yesterday's review. Please do let me know if the code needs any further changes/improvements. Would be happy to take them up! <|||||>So it seems there are 2 todo's left: - [x] leverage AutoImageProcessor instead of adding a new one - [x] make sure slow integration tests of Donut and Swin are still passing, possibly using `MaskFormerSwin` as backbone<|||||>> So it seems there are 2 todo's left: > > * [x] leverage AutoImageProcessor instead of adding a new one > * [x] make sure slow integration tests of Donut and Swin are still passing, possibly using `MaskFormerSwin` as backbone Sure I'll connect with @alaradirik and we'll fix these shortly and update you.<|||||>@NielsRogge Just wanted to update that backbone for Mask2Former has been switched to `MaskFormerSwin`. Changes to modeling_swin.py and modeling_donut_swin.py have been reverted so slow integration tests of Donut and Swin are passing now. Conversion of all 30 checkpoints from [Mask2Former model zoo](https://github.com/facebookresearch/Mask2Former/blob/main/MODEL_ZOO.md) using swin backbone corresponding to all 4 datasets and segmentation tasks is done and are available on the Hub. I just need to update the model cards. Will finish that shortly too. <|||||>Thank you! I'm just wondering why the issue was occurring only on Swin-base on one specific dataset. It would definitely be nice to clear that up, does it have to do with the image resolution? For instance for UperNet (at #20648) I was able to perfectly convert all checkpoints that leverage Swin-base by using our `SwinBackbone`. This one was ported from the mmsegmentation library whose Swin implementation is [here](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/backbones/swin.py#L166). So it's a bit strange. Might it be that we were just "lucky" with UperNet and OneFormer?
transformers
20,791
closed
Embed circle packing chart for model summary
This PR embeds an interactive chart of the most popular models by modality so users have a nice high-level visual overview of the 🤗 Transformers modelscape.
12-15-2022 23:28:30
12-15-2022 23:28:30
_The documentation is not available anymore as the PR was closed or merged._<|||||>I think the number of downloads dictates the size, right? Impressive! I like how visual it is. Makes it understandable straight away. I guess a potential improvement would be to be able to read the subcategories without first clicking on a major category, but I don't know how feasible that is. Thanks for working on this! Really cool.<|||||>Thanks for the feedback! I think it might make the visual more difficult to read if we also included the subcategories (modality) in addition to the main category. For example, the decoders bubble is already quite small, and adding more text might make it more cluttered (same for some of the smaller encoder-decoder bubbles).
transformers
20,790
closed
[Pipeline] skip feature extraction test if in `IMAGE_PROCESSOR_MAPPING`
# What does this PR do? Fixes the [following failing test](https://app.circleci.com/pipelines/github/huggingface/transformers/53884/workflows/8df76bfb-b6d2-493e-afdf-257b59672b02/jobs/648580) ## Context: Currently `FeatureExtractionPipelineTests` are skipped for multi-modal models by checking if the model config is in `FEATURE_EXTRACTOR_MAPPING`. The check is done [on this line](https://github.com/huggingface/transformers/blob/1543cee7c8c95ef47f832b1f37625ba2923c4994/tests/pipelines/test_pipelines_feature_extraction.py#L181) Recent vision and multimodal models will deprecate the usage of `xxxFeatureExtractor` in favor of `xxxImageProcessors`. For [Blip](https://github.com/huggingface/transformers/pull/20716), the test fails because `BlipFeatureExtractor` is not implemented at all in favor of `BlipImageProcessor`. ## Why this fix is relevant? Blip seems to be the first multimodal model that relies on `xxxImageProcessor` only. cc @Narsil @amyeroberts @NielsRogge
12-15-2022 22:13:13
12-15-2022 22:13:13
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks I am not sure about how do we want to approach that exactly, but I think that's the plan at some point cc @amyeroberts :D <|||||>> Thanks I am not sure about how do we want to approach that exactly, but I think that's the plan at some point cc @amyeroberts :D I think it's good to disambiguate Audio from Vision (both are currently named `FeatureExtractor` I think). In that regard I'd like to stress to include no-code (or almost none) into `Processor` the general class that encapsulates `Tokenizer`, `FeatureExtractor` and `ImageProcessor` . It's great for demos and quick hacks, but it's much more cumbersome to reason about within a lib, as it's impossible to know what it should be able to do since it's by definition not standard. (It doesn't have any invariant). For instance `Tokenizer.encode(text)` is always going to be a valid call and will return ids (that's an invariant).
transformers
20,789
closed
ImportError while trying to get the OS and software versions
### System Info OS: Ubuntu 22.04; Python version: Python 3.10.6 ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Simply run the command `python3 src/transformers/commands/transformers_cli.py env`. ### Expected behavior I wanted to get the OS and software versions but instead got the error ``` Traceback (most recent call last): File "/home/skywalker/Downloads/transformers/transformers/src/transformers/commands/transformers_cli.py", line 18, in <module> from .add_new_model import AddNewModelCommand ImportError: attempted relative import with no known parent package ```
12-15-2022 18:23:05
12-15-2022 18:23:05
The CLI is not supported this way, you should run `transformers-cli env`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,788
closed
Recompile `apex` in `DeepSpeed` CI image
# What does this PR do? The base image ships with a version of `apex`. We need to recompile it for torch 1.13 though. This should fix some CI failures, but not all - we will check the CI report again in the next run, if this is OK for you @stas00. Otherwise I can run the full suite locally and discuss with you how to fix all of them before merging.
12-15-2022 17:50:17
12-15-2022 17:50:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,787
closed
[S2T, Whisper] Add copied from statements
# What does this PR do? Adds 'copied from MBart' statements to Speech2TextEncoderLayer and Speech2TextDecoderLayer. Since the WhisperEncoderLayer and WhisperDecoderLayer are copied from Speech2Text, these classes are updated with 'copied from MBart' statements to minimise the chain of 'copied from' statements. Previously: * (mBart -> ) Speech2Text -> Whisper Updated: * mBart -> Speech2Text * mBart -> Whisper ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
12-15-2022 17:37:34
12-15-2022 17:37:34
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,786
closed
Stop calling expand_1d on newer TF versions
TensorFlow changed its default `train_step` in version 2.11 to no longer use `data_adapter.expand_1d`, and also deleted that method. Since we copied that code for our train step, this made our `train_step` stop working in 2.11 when the user was using a non-dummy loss! This PR resolves the issue by not calling `expand_1d` for TF versions >= 2.11. Fixes #20750
12-15-2022 17:18:48
12-15-2022 17:18:48
_The documentation is not available anymore as the PR was closed or merged._
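A version-gated sketch of the behaviour described above; the `data_adapter` import location is an assumption and has moved between Keras releases, only the public `unpack_x_y_sample_weight` helper is guaranteed:

```python
import tensorflow as tf
from packaging.version import parse


def unpack_batch(data):
    # expand_1d() was removed from Keras' data adapter in TF 2.11, so it must only be
    # called on older versions.
    if parse(tf.__version__) < parse("2.11"):
        from keras.engine import data_adapter  # internal module; location varies by version

        data = data_adapter.expand_1d(data)
    return tf.keras.utils.unpack_x_y_sample_weight(data)


x, y, sample_weight = unpack_batch((tf.zeros((4, 2)), tf.zeros((4,))))
```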
transformers
20,785
closed
Add test_image_processing_common.py
# What does this PR do? Creates an equivalent `test_feature_extraction_common.py` for image processors: `test_image_processing_common.py` and moves any vision and image processing specific logic to this file. This is necessary for creating `ImageProcessingSavingTestMixin` in order to rename any feature extractor references in the image processor tests. This is left for a [future PR](https://github.com/huggingface/transformers/pull/20768). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
12-15-2022 17:16:09
12-15-2022 17:16:09
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh @sgugger Yes, sorry, I could have been clearer above. The reason for not replacing `FeatureExtractionSavingTestMixin` with `ImageProcessingSavingTestMixin` in the `test_image_processing_xxx.py` files in this PR is that the tests in the original mixin use class attributes like `self.feature_extraction_class`. When updating the mixin, then attributes in the testing class `XxxImageProcessingTest` for each test file `test_image_processing_xxx.py` have to be updated. This results in either 1) updating just the FE references to have the tests running or 2) updating all of the FE references. In the case of 1) it results in the code being mixed between feature extractors and image processors which I found confusing to read (subjective opinion) and for 2) it introduced hundreds of lines of additional diff. I decided to leave the switch to a follow up PR. <|||||>> @ydshieh @sgugger Yes, sorry, I could have been clearer above. > > The reason for not replacing `FeatureExtractionSavingTestMixin` with `ImageProcessingSavingTestMixin` in the `test_image_processing_xxx.py` files in this PR is that the tests in the original mixin use class attributes like `self.feature_extraction_class`. When updating the mixin, then attributes in the testing class `XxxImageProcessingTest` for each test file `test_image_processing_xxx.py` have to be updated. This results in either 1) updating just the FE references to have the tests running or 2) updating all of the FE references. In the case of 1) it results in the code being mixed between feature extractors and image processors which I found confusing to read (subjective opinion) and for 2) it introduced hundreds of lines of additional diff. I decided to leave the switch to a follow up PR. Understand! Thank you for explaining! BTW, it would be super nice to give a link like https://github.com/huggingface/transformers/blob/997cbebf3483d500de8f85cc8834b704f6b410be/tests/models/beit/test_image_processing_beit.py#L110 (I agree it is a bit more time consuming :-) )
transformers
20,784
closed
Move convert_to_rgb to image_transforms module
# What does this PR do? Moves the `convert_to_rgb` function to `image_transforms` so all image processors can easily import it. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
12-15-2022 16:27:08
12-15-2022 16:27:08
_The documentation is not available anymore as the PR was closed or merged._
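After this move, using the helper looks roughly like the sketch below, assuming a PIL input, which is what the CLIP-style image processors pass in:

```python
from PIL import Image

from transformers.image_transforms import convert_to_rgb

image = Image.new("RGBA", (64, 64))  # input may be RGBA, L, P, ...
image = convert_to_rgb(image)        # guaranteed 3-channel RGB from here on
print(image.mode)                    # "RGB"
```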
transformers
20,783
closed
[Pipeline-asr] `batch_size` ignored when using a single file
This is not a very important issue, but when using the `asr` pipeline, if you give a single audio file that you want to chunk and provide both a `batch_size` and `chunk_length_s`, the pipeline still runs sequentially. Reproducing script: ```python from transformers import pipeline from datasets import load_dataset import datasets ds = load_dataset("common_voice", "ja", split="test", streaming=True) ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16_000)) input_speech = next(iter(ds))["audio"]["array"] pipe = pipeline("automatic-speech-recognition","facebook/wav2vec2-base-960h") pipe(input_speech, return_timestamps ="char", chunk_length_s = 30, stride_length_s=[3,3], batch_size = 1024, device = 0) ``` (thanks @Narsil for the hack with `pipe([input_speech], return_timestamps ="char", chunk_length_s = 30, stride_length_s=[3,3], batch_size = 1024, device = 0)`, i.e. wrapping the single input in a list)
12-15-2022 16:07:51
12-15-2022 16:07:51
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,782
closed
Do not Preload a Deep Learning Framework
### Feature request I need some toggle (say an envvar `HF_FRAMEWORK` or a configuration option `transformers.config.framework`) or a runtime check on importing `transformers` in order not to import `tensorflow`, `torch`, and `flax` simultaneously. ### Motivation At the moment the `transformers` package imports all deep learning packages even if they are not used. In other words, if `tensorflow` and `torch` are installed simultaneously then both packages will be imported even though only `torch` is actually used. ### Your contribution I'll see, but I am not sure that it is easy to remove explicit imports of the DL frameworks on `transformers` import.
12-15-2022 16:02:11
12-15-2022 16:02:11
Those environment variables exist, but Transformers also only imports the frameworks as needed; they are named `USE_TF`, `USE_TORCH` and `USE_JAX`.<|||||>Many thanks! Are they described somewhere in the documentation? I didn't manage to find them. Would it be possible to add an `HF_` prefix to be consistent with `HF_HOME`, for example?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
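A minimal sketch of how these variables are meant to be used; they must be set before the first `import transformers`, and the exact accepted values may vary slightly between versions:

```python
import os

os.environ["USE_TF"] = "0"     # do not register the TensorFlow backend
os.environ["USE_JAX"] = "0"    # do not register the Flax/JAX backend
os.environ["USE_TORCH"] = "1"  # keep only PyTorch

import transformers  # noqa: E402 - only the PyTorch backend is considered available now
```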
transformers
20,781
closed
[Tokenizers] Mismatch between fast and slow
When I worked on the implementation of Whisper I realised that two different behaviors appear when you use a `fast` or `slow` tokenizer and have a OOV. Simple snippet : ```python >>> from transformers import GPT2Tokenizer, GPT2TokenizerFast >>> fast = GPT2TokenizerFast.from_pretrained("gpt2") >>> slow = GPT2Tokenizer.from_pretrained("gpt2") >>> # the vocab size is 50257 >>> fast.decode(50258) '' ``` ```python >>> slow.decode(50258) Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/arthur_huggingface_co/transformers/src/transformers/tokenization_utils_base.py", line 3468, in decode return self._decode( File "/home/arthur_huggingface_co/transformers/src/transformers/tokenization_utils.py", line 938, in _decode for token in filtered_tokens: TypeError: 'NoneType' object is not iterable ``` My question I guess is : which one is the expected one? Here is my take : - It should work, but output a warning saying that an OOV was encountered and was ignored. WDYT @sgugger @LysandreJik @Narsil
12-15-2022 14:41:41
12-15-2022 14:41:41
Thanks for reporting. - `tokenizers` CAN do warnings (although I was under the impression there was an effort to reduce warnings). - In my personal opinion, raising an Exception is better when OOV than silently ignoring. That being said, it would be a massive breaking change, so I'm hesitant to "fix" that way.<|||||>Yes, an exception should be raised and it's more of a bug fix than a breaking change IMO. Users will be surprised, but they should be surprised when there is an out-of-vocab index.<|||||>@Narsil @sgugger I can take this up. Will raise an exception in PreTrainedTokenizer when an OOV is encountered.<|||||>(This might break the tokenizer used by Whisper, as all of the timestamp tokens are not `in` the vocabulary, but are still used and need to be decoded as `''`)<|||||>Couldn't we put them in the vocab as `Timestamp <n>` (being the seconds offset)? (Even as special tokens?)<|||||>We can, but we would also have to help the OpenAI team with their tokenizer that is based on our GPT2TokenizerFast 😅<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
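Until the behaviours are aligned, a small user-side guard along these lines (illustrative only, not the patch discussed above) makes the out-of-vocabulary case explicit for both tokenizers:

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")


def safe_decode(tokenizer, token_id: int) -> str:
    # len(tokenizer) counts the base vocabulary plus any added tokens.
    if token_id < 0 or token_id >= len(tokenizer):
        raise ValueError(f"Token id {token_id} is out of vocabulary (size {len(tokenizer)}).")
    return tokenizer.decode(token_id)


print(safe_decode(tokenizer, 50256))  # last valid GPT-2 id
# safe_decode(tokenizer, 50258)       # raises ValueError instead of failing or returning ''
```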
transformers
20,780
closed
Vilt - use image_transforms pad
# What does this PR do? When Vilt was first implemented, image transforms library didn't have `pad` implemented. This PR removes the old pad implementation in `image_processing_vilt.py` and uses the standard library. It also adds some missing `# Copied from ` statements that apply to other module level functionality in `image_processing_detr.py` spotted when comparing `pad` between the models. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
12-15-2022 13:19:47
12-15-2022 13:19:47
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,779
closed
How to reset dataloader or global_step for continued training?
I have fine-tuned an MT5-base model for a machine translation task on corpus `A` and I want to swap the dataset and continue training on corpus `B` (this is not fine-tuning but a form of continued training), forcing the dataloader to start afresh instead of continuing from the point where it stopped previously on corpus `A`. Currently, to continue training, I use `resume_from_checkpoint` to provide the checkpoint of the MT model trained on corpus `A` and provide the path to the new data. Is there an argument or parameter to reset the dataloader? I noticed there is an `ignore_data_skip` parameter but this does not solve the issue.
12-15-2022 13:03:57
12-15-2022 13:03:57
if the continued training stops on `B` due to GPU limit, how do I ensure that when I try to continue training on `B`, it has the right data index to start from? <|||||>Please use the [forums](https://discuss.huggingface.co/) for questions like this, as we keep issues for bugs and feature requests only. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
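One way to get a fresh dataloader and `global_step` on corpus `B` is to load only the model weights from the corpus-`A` checkpoint and start a new `Trainer`, instead of passing `resume_from_checkpoint`. A sketch with placeholder paths and a placeholder `corpus_b_dataset`, assuming the goal is continued training rather than resuming an interrupted run:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, Seq2SeqTrainer, Seq2SeqTrainingArguments

model = AutoModelForSeq2SeqLM.from_pretrained("path/to/checkpoint-A")  # weights only, no trainer state
tokenizer = AutoTokenizer.from_pretrained("path/to/checkpoint-A")

args = Seq2SeqTrainingArguments(output_dir="mt5-corpus-B", num_train_epochs=3)
trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=corpus_b_dataset, tokenizer=tokenizer)
trainer.train()  # no resume_from_checkpoint -> step counter and dataloader start from scratch
```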
transformers
20,778
closed
[Pipeline] fix failing bloom `pipeline` test
# What does this PR do? This PR fixes the `pipeline` test: `tests/pipelines/test_pipelines_text_generation.py::TextGenerationPipelineTests::test_small_model_pt_bloom_accelerate` - Link to failing job: https://github.com/huggingface/transformers/actions/runs/3691174891/jobs/6248989365 - Why is this fix relevant? Before https://github.com/huggingface/transformers/pull/20602 there was an inconsistency between models loaded with `accelerate` (i.e. with a `device_map` set) and without `accelerate` (no `device_map` set). Before the aforementioned PR, if you loaded a model as follows: ``` from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-bloom", device_map="auto") print(model.lm_head.weight.dtype) model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-bloom") print(model.lm_head.weight.dtype) ``` You got: ``` torch.bfloat16 torch.float32 ``` which is inconsistent. Since that PR, to load a model with its native dtype, you need to provide `torch_dtype="auto"`. This PR fixes the failing test by setting `torch.float32` as the expected `dtype`. cc @ydshieh @sgugger
12-15-2022 11:38:14
12-15-2022 11:38:14
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20778). All of your documentation changes will be reflected on that endpoint.
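For completeness, the behaviour after the change looks roughly like this; the bfloat16 value is taken from the discussion above for this particular test checkpoint:

```python
from transformers import AutoModelForCausalLM

# Explicitly ask for the checkpoint's native dtype.
model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-bloom", torch_dtype="auto")
print(model.lm_head.weight.dtype)  # torch.bfloat16 for this checkpoint

# Default loading is now consistently float32, with or without a device_map.
model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-bloom")
print(model.lm_head.weight.dtype)  # torch.float32
```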
transformers
20,777
closed
Install video dependency for pipeline CI
# What does this PR do? PR #20151 added video classification pipeline, which requires video dependency (`decord`). It is not currently installed in the CI image used for Pipeline CI - so we have failure. This PR adds the dependency (same as in CircleCI).
12-15-2022 10:43:43
12-15-2022 10:43:43
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,776
closed
Fixing object detection with layoutlm.
# What does this PR do? Fixes the slow test by making sure we're loading the FeatureExtractor. `LayoutLM` doesn't have a `FeatureExtractor` while `LayoutLMV2` does and this repo uses a combination of both. Putting `LayoutLM` in the MULTI_MODAL config enables the pipeline to load `feature_extractor` regardless of `FEATURE_EXTRACTION_MAPPING`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
12-15-2022 10:37:41
12-15-2022 10:37:41
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20776). All of your documentation changes will be reflected on that endpoint.
transformers
20,775
closed
Add BridgeTower model
# What does this PR do? This PR implements a HuggingFace Transformers version of **BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning** from the paper https://arxiv.org/abs/2206.08657.pdf This paper has been accepted to https://aaai.org/Conferences/AAAI-23/ The model's pre-trained checkpoints and configurations have been released here: https://huggingface.co/BridgeTower under: - https://huggingface.co/BridgeTower/bridgetower-base-itm-mlm - https://huggingface.co/BridgeTower/bridgetower-base The following heads have been implemented: - BridgeTowerForMaskedLM - BridgeTowerForImageAndTextRetrieval ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? @amyeroberts @NielsRogge @ArthurZucker could you please assist with review and feedback. @philschmid
12-15-2022 03:52:04
12-15-2022 03:52:04
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for the model here https://moon-ci-docs.huggingface.co/docs/transformers/pr_20775/en/model_doc/bridgetower show up under Text Models. This needs to be under **Multimodal Models**. Can someone please assist?<|||||>Thanks a lot for your review @younesbelkada We have addressed your comments in our latest commits. We have cleaned up the code based on your feedback! We do plan to upload some more to the hub once this PR is merged successfully. Please do let us know if you have any more suggestions and feedback. Any help on the failing style checks and tests will be really appreciated. > Thanks so much for adding this new model into `transformers` and introducing another multimodal model on the ecosystem! The PR is in a very good shape! Very strong efforts on the integration side, we should be close merging this once the main comments will be addressed. I left a couple of comments, my main comments being code readability / structure comments: > > * Feature extractors are deprecated and should be replaced by Image Processors only. This would require a very minimal change. Check what is done in BLIP for instance: #20716 > * From what I have understood only 2 models are uploaded on the Hub, therefore I think that the model initialization and forward functions of `FusionHead` and `LinkTower` can be much more simplified > * I understand that some layers needs to be freezed. You can wrap the freezing procedure in a class method, and call it only if the module is in a training model (i.e., if `self.training = True` ) > * Please avoid using variable names such as `x`, `x1`, or `x1_` as it makes harder to understand what the variable is meant to be. Consider calling these variables `hidden_states`, or any. Same comments for variables such as `image_embeds` and `image_embeds_`. > * I think that there is no need to assert that the tokenizer in a Roberta tokenizer. Tokenization auto does it magically for you > * Let's wrap all the weights initialization methods in the method `_init_weights` and call `self.post_init()` at the end of the init method for each module that inherits from `BridgeTowerPreTrainedModel`. > * Regarding your question about documentation, you should add it together with CLIP, [here](https://github.com/huggingface/transformers/blob/1543cee7c8c95ef47f832b1f37625ba2923c4994/docs/source/en/_toctree.yml#L498) in the multi modal models section. > Again thank you very much! <|||||>@abhiwand thanks a lot for your PR! Are you also planning to add classes for the downstream tasks (like VQA)?<|||||>> @abhiwand thanks a lot for your PR! Are you also planning to add classes for the downstream tasks (like VQA)? @NielsRogge We are hoping to add VQA/other downstream tasks in the coming months and also release some more models to the model-hub. In this PR however, we won't be doing so. We have addressed most of the review feedback. Could you please help merge this PR if you it looks good to you?<|||||>@amyeroberts @NielsRogge @younesbelkada I think we've handled almost all your comments and simplified and streamlined the code significantly wherever possible. If you'll approve, can you'll please merge this PR. You are welcome to make changes too. Thank you very much for your valuable feedback!<|||||>> @amyeroberts @NielsRogge @younesbelkada I think we've handled almost all your comments and simplified and streamlined the code significantly wherever possible. > > If you'll approve, can you'll please merge this PR. 
You are welcome to make changes too. Thank you very much for your valuable feedback! @amyeroberts @NielsRogge @younesbelkada Could you please help merge this PR :) Thanks! Happy Holidays.<|||||>@NielsRogge Thanks a lot for your review! We have addressed your review feedback as possible. If it looks good to you, could you please help merge this PR. Could you also please merge https://huggingface.co/datasets/huggingface/documentation-images/discussions/28 PR - @abhiwand moved it to the right folder.<|||||>Dear @NielsRogge and @sgugger, Thanks a lot for your review! We have addressed your review feedback as possible. If it looks good to you, could you please help merge this PR. Sincerely,<|||||>> Thanks for your work on adding this new model! There are still a few things to do before we can merge it: > > * the model type should not be used to make tests inside modeling code (see comments below) > * make sure all of the modules defined take the config for all arguments directly extracted from it > * make sure all of the modules defined are prefixed with `BridgeTower` > > I've added comments below. @sgugger Thanks a lot for your review! We have updated our code to reflect your suggestions. If it looks good to you can you please help merge the PR. Thanks again!<|||||>> Just added last comments around the `device` use: there is no need to add a `device` property to some of the modules introduced in this PR, you should rely on the device of other tensors. Thanks @sgugger, we have addressed your feedback in the latest commit!<|||||>> Nice work! > > Left a few comments, mostly nits. Only major comments are about removing `pad` from the image processing file and allowing `eps` to be configurable for the layer norm layers. Otherwise looks good to go for me :) Thanks for your suggestions @amyeroberts and approval. I have addressed your changes in the PR. Can you please help merge the PR?<|||||>@NielsRogge Our PR keeps failing at tests/pipelines/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_return_timestamps_in_preprocess. Would you please help to see if it is because of BridgeTower or because of something else? Thanks a lot<|||||>@abhiwand @tileintel Thanks for address all of the comments! On Monday there were two PRs merged into main which added `test_image_processing_common.py` (#20785) and updated the feature extractor references in the `test_image_processing_xxx.py` files (#20768). Could you update `test_image_processing_bridgetower.py` to reflect these please? <|||||>@amyeroberts We have updated test_image_processing_bridgetower.py as you suggested. Thanks for the suggestion. @NielsRogge @amyeroberts @sgugger We have addressed all of the comments. Thanks a lot for helping us to review and approve this. We are very looking forward to having this PR merged into main soon. <|||||>Thanks again for your contribution!<|||||>@sgugger Thank you for merging this PR. May I ask when BridgeTower model will go to HuggingFace's production and what release is that? Thanks<|||||>The next release will be in a month roughly (given the fast last release was yesterday).<|||||>Thank @sgugger for letting us know.
transformers
20,774
closed
Enable PyTorch/XLA Fully Sharded Data Parallel (FSDP) for a Specific Class of Transformer Models
# What does this PR do? This PR enables the user to make use of the [PyTorch/XLA implementation of FSDP](https://github.com/pytorch/xla/tree/master/torch_xla/distributed/fsdp). Three arguments have been added to `training_args.py` to facilitate this functionality: - `xla_fsdp`: this flag is a string containing the location of a `.json` file which specifies the FSDP arguments the user wants to use when wrapping their model. - `xla_fsdp_nested`: this flag is a bool which determines whether each transformer block is also FSDP wrapped. Only models which expose their transformer blocks through the class attribute `transformer.h` can use this feature. - `xla_fsdp_grad_ckpt`: this flag is a bool which determines whether gradient checkpointing is enabled for nested FSDP wrapped layers. # Design notes and future work 1) For very large model sizes (greater than, say, 128B parameters), users may see host-side OOMs on TPUs during initialization. This can be mitigated by initializing layer weights immediately after construction, wrapping with FSDP, and moving onto the XLA device, as can be seen in [this branch](https://github.com/AlexWertheim/transformers/blob/einsum/src/transformers/models/gpt2/modeling_gpt2.py#L690-L723). We opted to enable FSDP wrapping at the trainer level, since it does not necessitate model-specific changes and does not disrupt the existing architecture for model construction and initialization. 2) Checkpointing support for XLA FSDP is not included as part of this PR. We hope to add it soon via another PR. 3) As indicated above, nested FSDP is only supported for models which expose their transformer blocks in a specific way. This is because naively wrapping every child layer introduces errors. There is [a PR](https://github.com/pytorch/xla/pull/4318) which will introduce auto-wrapping functionality into FSDP, and we expect that this feature will offer a much better way for all model classes to leverage nested wrapping. We also expect that auto-wrapping will enable users to perform nested wrapping multiple layers deep, which has been seen to introduce performance gains. This auto-wrap functionality needs more testing, but we hope to add this feature in a future PR. 4) We have not included testing for XLA FSDP as part of this PR. We would like to add this in a future PR. Thanks to @ronghanghu for his assistance in the preparation of this PR. Among other contributions, the observations that one must copy the model's forward method and replace the optimizer step are his. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). 
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? --> ## Who can review? @sgugger @JackCaoG
12-15-2022 02:17:58
12-15-2022 02:17:58
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20774). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,773
closed
trainer.save_model load error
Reference code: https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/go_emotion_of_transformers_multilabel_text_classification_v2.ipynb ``` ----> 6 save_model = './model/emotion' ----> 7 trainer.save_model(save_model) ----> 8 load_model = AutoModel.from_pretrained(save_model) 4 frames /usr/local/lib/python3.8/dist-packages/transformers/configuration_utils.py in __init__(self, **kwargs) 315 f"{self.id2label}. The number of labels wil be overwritten to {self.num_labels}." 316 ) --> 317 print(id2label) 318 self.id2label = dict((int(key), value) for key, value in self.id2label.items()) 319 # Keys are always strings in JSON so convert ids to int here. AttributeError: 'list' object has no attribute 'items' ``` After fine-tuning an emotion classification model with 28 classes, I trained and saved it through the `Trainer`, but the model fails to load. Can someone help?
12-15-2022 02:14:03
12-15-2022 02:14:03
Please use the [forums](https://discuss.huggingface.co/) to help debug your code. Happy to help here when you have a short reproducer, but otherwise we keep issues for bugs and feature requests only.<|||||>Hi @oosij, this is a very annoying issue in the HF implementation of the `id2label` management. The model will load correctly if `id2label` is provided as a dict when you specify the model before fine-tuning. Providing `id2label` in the form of a list causes no trouble during training / inference until you try to restore the fine-tuned model. Consider something like this:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Build the label mappings as dicts, not lists
# (`tags`, `config`, `log` and `trainer` come from the surrounding training script)
id2label = {i: e for i, e in enumerate(tags)}
label2id = {e: i for i, e in enumerate(tags)}

log.info('Loading pre-trained checkpoint of a model...')
tokenizer = AutoTokenizer.from_pretrained(config.model.checkpoint, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(
    config.model.checkpoint,
    id2label=id2label,
    label2id=label2id,
)

# ... fine-tune the model

trainer.save_model(config.training.save_path)
debug = AutoModelForTokenClassification.from_pretrained(config.training.save_path)  # This will work now
```<|||||>@IINemo Would you like to make a PR adding a sanity check that immediately raises an error if the user tries to provide label2id/id2label that are lists instead of dictionaries?<|||||>@sgugger Ok, give me some time.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,772
closed
OPT model sizes mismatch between code and webpage
### System Info

N/A

### Who can help?

@ArthurZucker, @sgugger and @stevhliu

### Information

- [X] The official example scripts
- [ ] My own modified scripts

### Tasks

- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

I was trying to figure out the architecture of OPT models, specifically the pre-layernorm vs post-layernorm setting. I came across these two lines that specified the setting for OPT of different model sizes.

https://github.com/huggingface/transformers/blob/67acb07e9ef40e6ea08997261e1d14a02530cf8b/src/transformers/models/opt/modeling_opt.py#L322

https://github.com/huggingface/transformers/blob/67acb07e9ef40e6ea08997261e1d14a02530cf8b/src/transformers/models/opt/modeling_opt.py#L346

However it doesn't match with the model sizes available here (if you expand the model list and search `opt-`). I didn't see 1.7B and 175B.

https://huggingface.co/facebook?sort_models=alphabetical#models

Which one is more accurate?

### Expected behavior

N/A
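As a side note, the pre/post-layernorm setting for each released checkpoint can also be read directly from its config rather than inferred from the model-size comment in the code. A minimal sketch (the checkpoint list is illustrative, not exhaustive):

```python
from transformers import OPTConfig

# Illustrative subset of the public facebook/opt-* checkpoints
checkpoints = [
    "facebook/opt-125m",
    "facebook/opt-350m",
    "facebook/opt-1.3b",
    "facebook/opt-2.7b",
]

for name in checkpoints:
    config = OPTConfig.from_pretrained(name)
    # do_layer_norm_before=True means pre-layernorm, False means post-layernorm
    print(name, config.do_layer_norm_before)
```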
12-14-2022 20:59:39
12-14-2022 20:59:39
Hey! Basically the `1.7B` is now the `1.3b`, and the `175B` is private and was not released by the Meta AI team. If you want, you can open a PR to just correct the model size 😉 <|||||>Thanks for the clarification!
transformers
20,771
closed
Install vision for TF pipeline tests
# What does this PR do?

Install vision for TF pipeline tests.
12-14-2022 18:26:23
12-14-2022 18:26:23
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,770
closed
[Trainer] Optimize the use of datasets.IterableDataset in distributed setup
Right now the Trainer uses `IterableDatasetShard` to skip examples on each node and avoid ending up with duplicate data. This is not efficient for vision or audio tasks, since we waste I/O and CPU time reading and decoding files that are not used.

We are considering implementing optimized sharding for distributed training directly in `datasets`. A `datasets.IterableDataset` is already a `torch.utils.data.IterableDataset` that automatically takes care of distributing the necessary input shards to subprocesses on a single node (since `datasets` 2.3.0). The idea would be to also take the rank and world size into account to distribute the input shards. Maybe distributing the `datasets.IterableDataset` across nodes should be requested explicitly by the user. A sketch of the intended usage pattern is included below.

cc @sgugger WDYT? Do you have other ideas in mind to optimize the use of `datasets.IterableDataset` for distributed training in PyTorch?
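For illustration, here is a minimal sketch of the kind of pattern this would enable, using the `split_dataset_by_node` helper mentioned in the comments below (introduced in the linked `datasets` PR); the dataset name and the way rank/world size are obtained are placeholders.

```python
import os

from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

# Placeholder: rank / world size would normally come from the launcher (e.g. torchrun).
rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))

# Streaming dataset: files are read lazily, so skipping whole shards saves I/O and decoding.
ds = load_dataset("imagenet-1k", split="train", streaming=True)  # placeholder dataset

# Each node only iterates over its own subset of the input shards.
ds = split_dataset_by_node(ds, rank=rank, world_size=world_size)

for example in ds:
    ...  # feed into the training loop
```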
12-14-2022 16:42:54
12-14-2022 16:42:54
It seems that `webdataset` also does this when you use `wds.split_by_node` and `wds.split_by_worker`, and is used to train CLIP or diffusion models at scale.<|||||>That would be awesome! I don't think I need more than an API to tell the iterable dataset that it should take care of the data on process i over n. Maybe one thing that might be needed is the total length (when available) in some kind of attribute, since the length of the iterable dataset would then be smaller (and the total length is not going to be just a round multiple of the number of processes).<|||||>Opened a PR here, introducing `datasets.distributed.split_dataset_by_node`: https://github.com/huggingface/datasets/pull/5369, feel free to play with it and share your feedback :)<|||||>That'll be for when I'm back from vacation ;-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Re-ping me in one month GitHub bot ;-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.