repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 22,076 | closed | [Time-Series] fix past_observed_mask type | small fix to make `past_observed_mask` bool type in Informer and vanilla tests
@kashif | 03-10-2023 07:52:52 | 03-10-2023 07:52:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Looked into the failing tests; they don't seem to be related to the change (correct me if I'm wrong) 🙂<|||||>thanks! |
transformers | 22,075 | closed | [Time-Series] time-series patching | ### Model description
"time-series patching" refers to the process of segmentation the series into subseries-level patches which are served as input tokens to the transformer. It's really similar to what's done in ViT, but for time-series. This idea was first propsed in a recent ICLR paper:
[A Time Series is Worth 64 Words: Long-term Forecasting with Transformers](https://arxiv.org/abs/2211.14730)
code: https://github.com/yuqinie98/PatchTST
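For illustration, a minimal sketch of the patching idea (the hyper-parameters below are made up, and this is not the PatchTST implementation):
```python
import torch

# Split a batch of univariate series into overlapping subseries-level patches.
# Each patch can then be linearly projected and fed to the transformer as a token,
# analogous to image patches in ViT.
series = torch.randn(8, 336)                  # (batch, sequence_length)
patch_length, stride = 16, 8                  # illustrative values only
patches = series.unfold(dimension=-1, size=patch_length, step=stride)
print(patches.shape)                          # torch.Size([8, 41, 16]) = (batch, num_patches, patch_length)
```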
@kashif @NielsRogge
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
@yuqinie98
Edit: I think that "new model" is not the best label to this issue, maybe there is a better label for this? | 03-10-2023 07:39:32 | 03-10-2023 07:39:32 | PatchTST is in Gluon, thanks to @kashif . Closing here :) https://github.com/awslabs/gluonts/pull/2748 |
transformers | 22,074 | closed | Fix hint in src/transformers/modeling_utils.py | # What does this PR do?
This type hint is a little strange; should it be `torch.device`?
I want to change it to `torch.device` because the current hint confused a graph tracing tool I was using.
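For illustration, the kind of annotation in question looks roughly like this (a sketch, not the exact code in `modeling_utils.py`):
```python
import torch
from torch import nn

class MyModule(nn.Module):
    @property
    def device(self) -> torch.device:
        # Annotating the return type as torch.device gives static/graph tools a concrete type.
        return next(self.parameters()).device
```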
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-10-2023 07:12:36 | 03-10-2023 07:12:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,073 | closed | Add TensorFlow Wav2Vec2 for sequence classification | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #21778
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-10-2023 06:52:45 | 03-10-2023 06:52:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Kindly ping @sanchit-gandhi and adding @Rocketknight1 for the TensorFlow side.<|||||>Hi @nandwalritik, and sorry for the extremely long delay in catching this! Ordinarily one of the TF maintainers reviews TF pull requests, but this one slipped through the cracks somehow. If you want to file TF PRs in future, you can directly ping me or @gante to make sure that we don't miss it.
This PR actually looks almost perfect, but there are a couple of TF-specific details that are causing some tests to fail. I'll mark them in a code review in just a sec, but they shouldn't take too long to fix. Thanks again for submitting this!<|||||>>
I added changes for the `serving` and `serving_output` methods, but I'm not sure whether they are correct.<|||||>Hi @nandwalritik, I'm seeing the issue when you move it to `build()` - the problem is the weight name, as it usually is in our TensorFlow ports! TF isn't very consistent about the name scope used for weights, and it can differ depending on when the weight is created in the `init`, the `build` or lazily in the `call()`, which makes it tricky because we use the names to match weights between PT and TF models.
I'll see if I can push a solution to your repo, hang on.<|||||>Ok<|||||>Try:
```
with tf.name_scope(self._name_scope()):
self.layer_weights = self.add_weight(
shape=(self.num_layers,), initializer="ones", trainable=True, name="layer_weights"
)
```
in the `__init__`, not the `build()`. I know that contradicts what I said earlier, but it turns out to be a bit different for a base model class than a sublayer.
I also see a couple of other errors - you can see them by clicking the `Details` beside `tests_tf` in the checklist at the bottom of this PR. If you can't figure out what's causing them, ping me over the weekend or on Monday and I'll try to debug them!<|||||>> Try:
>
> ```
> with tf.name_scope(self._name_scope()):
> self.layer_weights = self.add_weight(
> shape=(self.num_layers,), initializer="ones", trainable=True, name="layer_weights"
> )
> ```
>
> in the `__init__`, not the `build()`. I know that contradicts what I said earlier, but it turns out to be a bit different for a base model class than a sublayer.
>
> I also see a couple of other errors - you can see them by clicking the `Details` beside `tests_tf` in the checklist at the bottom of this PR. If you can't figure out what's causing them, ping me over the weekend or on Monday and I'll try to debug them!
Ok, so after adding this change the weights are getting loaded without any warning or error, but the outputs of the PyTorch and TensorFlow models don't match within an `rtol` of `1e-5`.
Although I checked the shapes and absolute sums of the tensors of both models, and they are almost equal:
```
PT model
1,292,768 -> 29877.8750
1,292,256 -> 29711.7109
pooled_output
1,256 -> 38.7491
TF model
hidden_state
1,292,768 -> 29877.879
1,292,256 -> 29711.715
pooled_output
1,256 -> 38.811996
```
What should I try next to satisfy the rtol criterion?<|||||>Hm, those are some fairly large discrepancies! The debugging process we recommend when something like that happens is as follows (a rough comparison sketch is shown after the list):
- Make a test environment and load the PT and TF models with the same weights
- Try to isolate the earliest point where the model outputs diverge. You can use options like `output_hidden_states` to get the model to return all hidden states, not just the final ones.
- Once you find the first point of divergence, try to see if you can dig into the layer where the divergence happened. You can place breakpoints, or extract sublayers and try passing test inputs into them.
- Eventually you will find the single specific place where the divergence creeps in - now you can check what the cause is. Make sure the weights for that operation really do match between the two frameworks, and make sure both frameworks are doing the same thing at that point.
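For instance, a rough sketch of that layer-by-layer comparison for this port (the checkpoint name and tolerance are only illustrative):
```python
import numpy as np
import torch
from transformers import TFWav2Vec2Model, Wav2Vec2Model

ckpt = "facebook/wav2vec2-base-960h"  # any checkpoint whose PT and TF weights you want to compare
pt_model = Wav2Vec2Model.from_pretrained(ckpt)
tf_model = TFWav2Vec2Model.from_pretrained(ckpt, from_pt=True)

dummy = np.random.randn(1, 16000).astype(np.float32)  # ~1 second of fake audio
pt_out = pt_model(torch.tensor(dummy), output_hidden_states=True)
tf_out = tf_model(dummy, output_hidden_states=True)

# Walk through the hidden states and report the first layer where PT and TF diverge.
for i, (pt_h, tf_h) in enumerate(zip(pt_out.hidden_states, tf_out.hidden_states)):
    max_diff = np.abs(pt_h.detach().numpy() - tf_h.numpy()).max()
    print(f"hidden state {i}: max abs diff = {max_diff:.2e}")
    if max_diff > 1e-5:
        print(f"first divergence at hidden state {i}")
        break
```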
As always, if you can't figure it out, let me know! This kind of work can be quite gruelling, but we really appreciate the work you're doing on the model port.<|||||>Hi @Rocketknight1 I added test cases and fixed the feed forward part, but the CI is failing due to `flax`, I think this might not be related to my changes. Please review the PR and let me know if any more changes are required. <|||||>Yep, those flax issues are unrelated, just ignore them. I'll review everything today, but the CI looks good!<|||||>@sanchit-gandhi @Rocketknight1 let me know if any more changes are required or else can you guys get this pr merged.<|||||>Just looked over the last few changes - I'm happy to merge it at this point. Thanks again for putting in the work on this! |
transformers | 22,072 | closed | Enable traced model for text-generation task | @sywangyi
Enable traced model for text-generation task.
I changed beam_search and greedy_search of generation for traced models. If a traced model has been set on the "trace_graph" attribute, then we will use model.trace_graph for the forward pass. I also changed the text-generation example and found that a model optimized by jit trace performs better on the text-generation task. The data, running on an A100, is as below:
model: gptj-6b
beam search: input_tokens=32, output_tokens=32, num_beam=4
data type: bf16
original model's latency: 0.96s
jit trace model's latency: 0.72s | 03-10-2023 06:35:31 | 03-10-2023 06:35:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22072). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger please help review<|||||>@gante Could you have a first look at the changes in generate?<|||||>@gante Hi, Gante. Thanks for your delicate comment, it's reasonable and I agree with it.
Here I have two solutions:
1. For `trace_graph` in the main body of `generate`, we can add a doc to explain `trace_graph` in detail, including what it is, how to implement it, and how it helps accelerate inference. For tensor manipulation, the method of preparing input tensors for `trace_graph` is general for the text-generation task across all kinds of models. It can also be adapted to any task easily with a few changes (this is in progress) instead of being a specific use case. We can put this method in utils in general.
2. As you said, we can redefine `prepare_inputs_for_generation` for both inputs and `model.trace_graph` outputs. However, redefining `model.prepare_inputs_for_generation()` is not a general way, since different model classes have different `prepare_inputs_for_generation()` functions, and it is not convenient to inherit from different model classes every time we change the model type.
I strongly recommend the first way. There are many ways to optimize `model.forward`, if we can support the attribute `trace_graph` in the main body of `generate`, it will be convenient for users to pass their custom models.
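For illustration only (this is not an existing `transformers` API; `trace_graph` is the attribute name proposed in this PR), attaching a traced forward could look roughly like this:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # small stand-in for gptj-6b
tokenizer = AutoTokenizer.from_pretrained(model_id)
# torchscript=True / return_dict=False / use_cache=False keep the outputs trace-friendly tuples
model = AutoModelForCausalLM.from_pretrained(model_id, torchscript=True, return_dict=False, use_cache=False)
model.eval()

example = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    traced = torch.jit.trace(model, example["input_ids"])

# generate() would then look for this attribute and call it instead of model.forward;
# a full integration would also have to handle attention_mask and past key/values.
model.trace_graph = traced
```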
BTW, you set `return_dict=True` in the main body of generate, so it would not work if I set `return_dict=False` in `.from_pretrained`. Could I remove this so the users can decide whether or not to return the dictionary by themselves?
Thanks!
<|||||>@jiqing-feng Thank you for your comment.
To clarify my position further, in an attempt to find a solution that pleases us all: from the `transformers` perspective, our current priority is the ease of use and experimentation. We also welcome performance-enhancing solutions like the one in the PR, but they must fulfill one of three requirements: (i) they are commonly requested by the community; (ii) they require minimal changes to existing functionality; (iii) the benefits of the new technique are very big, like int8 quantization. If we don't adhere to these principles, the codebase will quickly be unusable and hard to maintain, as there are many possible strategies to improve the code.
From my perspective, I haven't seen any request for `torch.jit` support in `.generate()`, and I get tagged in pretty much everything `.generate()`-related. This PR also includes a diff of 50 lines to existing functions in `utils.py` and the benefit is up to 20% speedup. This means that, according to the principles stated above, I'm afraid can't support the changes as they are 🤗
This doesn't mean that my perspective is static on the subject! I've suggested above what can be done to showcase `torch.jit` in the example. That is a way to increase the visibility of the technique, which may increase the community demand for it -- and, if this demand does materialize, I'd be more than happy to include the additional logic in `utils.py`.
I apologize if this is not the answer you'd like to read, but we do have to be picky with the changes we introduce in actively maintained cross-model functionality. I'm also working towards increasing the modularity of `.generate()`, so that use cases like yours can be more easily added!
<|||||>Just my +1 , generation speed improvement, especially with torch 2.0 is something very nice for make the model production ready<|||||>Yes, echo. W/ PyTorch 2.0 introduced, suppose we will see more and more performance benefit out of jit for deployment. |
transformers | 22,071 | closed | Why does the Bloom tokenizer place padding tokens at the head when padding to max_length? | As the title says,
the following snippet:
```python
native_tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m",
use_fast=False)
caption = "a bear in the woods."
tokenized_data = native_tokenizer(
caption,
return_tensors="pt",
padding='max_length',
truncation=True,
max_length=56)
tokens = tokenized_data.input_ids[0]
tokens
```
produces:
```python
tensor([ 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 68, 50507, 361, 368,
165526, 17])
```
It pads the pad_token_id "3" at the head, not the tail.
This is different from other models.
Why does this occur? | 03-10-2023 05:08:42 | 03-10-2023 05:08:42 | That's because BLOOM is a generative model, and when using `generate` padding should go on the left for better results. This is thus the default behavior of its tokenizer.<|||||>> That's because BLOOM is a generative model, and when using `generate` padding should go on the left for better results. This is thus the default behavior of its tokenizer.
Does there exist a built-in, simple API method to change this behavior?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
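For reference, the padding side can be changed on the tokenizer itself via the standard `padding_side` attribute, for example:
```python
from transformers import AutoTokenizer

# either set it at load time...
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m", padding_side="right")
# ...or flip the attribute afterwards
tokenizer.padding_side = "right"

tokens = tokenizer("a bear in the woods.", padding="max_length", max_length=56).input_ids
# the pad token id 3 now appears at the tail instead of the head
```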
transformers | 22,070 | closed | Custom dataset builder for multichannel 'float32' hyperspectral images? | ### Feature request
Is there an easy way to train e.g. `ViTMAE` using hyperspectral images (more than 3 "color" channels), and is there (or could there be) a best practice on how to load all the images with `tifffile` (which would return `np.ndarray` 3D cubes per tiff file) instead of the typical `PIL`?
### Motivation
I wanted to test the [ViTMAE](https://huggingface.co/docs/transformers/model_doc/vit_mae) [1,2] (as it allowed _n_ > 3 channels) quickly for an existing hyperspectral dataset (which had 58 channels instead of the typical 3 for RGB, e.g. `np_array.shape = (58, 48, 48)`, with low spatial resolution) and bumped into several pain points.
Like:
1) `dataset = load_dataset("imagefolder", data_dir=base_dir)` is super simple, but it creates a standard PIL-based dataset, whereas I wanted to use `tifffile` to load my hyperspectral files, which gives me 3D arrays/tensors, instead of having to rely on ["multipage hacks" with PIL](https://stackoverflow.com/questions/18602525/python-pil-for-loop-to-work-with-multi-image-tiff)
2) I tried to create a custom "loading script" based on the [`Food101` script](https://huggingface.co/datasets/food101/blob/main/food101.py), and wrote my own `Cube()` class instead of the standard `Image()` class. That pretty much just replaced `image = PIL.Image.open(path)` with `image = tifffile.imread(path)`
and then my `_generate_examples()` returns this `yield abs_file_path, {"image": tifffile.imread(abs_file_path).astype('uint8'), "label": label}`
which results in this error: `TypeError('Unsupported array dtype float64 for image encoding. Only uint8 is supported for multi-channel arrays.')`, as PIL prefers `uint8` types whereas my data is now `float32`, since it comes from a custom preprocessing script.
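One possible way around this is to skip the `Image()`/PIL path entirely and declare the column as a float array (a sketch assuming the `datasets.Array3D` feature; the shape and label names below are placeholders):
```python
import datasets
import tifffile

features = datasets.Features(
    {
        "image": datasets.Array3D(shape=(58, 48, 48), dtype="float32"),
        "label": datasets.ClassLabel(names=["class_a", "class_b"]),  # placeholder label set
    }
)

def _generate_examples(files, labels):
    # files/labels are whatever the loading script's split generators pass in
    for path, label in zip(files, labels):
        cube = tifffile.imread(path).astype("float32")  # (58, 48, 48) ndarray
        yield path, {"image": cube, "label": label}
```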
**Summary**
So I could not really find a good example of how to define loaders for new types of data (or does everything always go back to PIL?)
**References**
i.e. have more industry standard implementation of these eventually, or similar:
[1] [Ibañez et al. (2022)](https://doi.org/10.1109/TGRS.2022.3217892): "Masked Auto-Encoding Spectral–Spatial Transformer for Hyperspectral Image Classification"
[2] [Xu et al. (2023)](https://arxiv.org/abs/2212.13805): "Swin MAE: Masked Autoencoders for Small Datasets"
### Your contribution
I don't have a working code for training these and was wondering if there is an easy way even. Like probably need to be careful with some of the `Transforms` if they only support 3 color channels | 03-10-2023 03:44:04 | 03-10-2023 03:44:04 | This question might be better suited for the [forums](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,069 | closed | Fix position embeddings for GPT-J and CodeGen | # What does this PR do?
Identical inputs to GPT-J and CodeGen models will currently generate different outputs if they are padded differently (for example in a batch of variable sequence lengths).
This PR reverts the recent change #21869 that removes GPT-J `position_ids`, and then applies similar changes as were done for GPT-J XLA in #17986.
~One copy of the precomputed position embeddings is shared between all of the layers.~
Related issue: #21080
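For context, the position ids that make left-padded inputs consistent can be derived from the attention mask, roughly like this (illustrative values):
```python
import torch

# a batch of 2 sequences, left-padded to length 5
attention_mask = torch.tensor([[0, 0, 1, 1, 1],
                               [1, 1, 1, 1, 1]])
position_ids = attention_mask.cumsum(dim=-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)  # the value on pad positions is irrelevant
print(position_ids)
# tensor([[1, 1, 0, 1, 2],
#         [0, 1, 2, 3, 4]])
# passing these as `position_ids` makes the real tokens see positions 0, 1, 2, ...
```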
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
| 03-10-2023 03:40:31 | 03-10-2023 03:40:31 | I have tested this with my own code/usecase but wanted to check that there is interest in the contribution before also updating any applicable unit tests.
I also wonder whether there should be a universal test applied to all models that just tests the same input with different amounts of padding and makes sure that the output is identical?<|||||>@njhill and yes, the contribution is deeply appreciated! 🙏
Be mindful that this will not result in making the outputs left-padding agnostic. As in all models, the padding is a numerical mask. In FP32, it is almost left-padding agnostic, but in FP16/BF16/INT8 the left-padding may introduce changes :)<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@gante I didn't review this PR, but I see this is related to issue #21080, and therefore to PR #21853 indirectly, which was reverted in #22093 due to some unexpected tests failure (PT/TF, PT/Flax).
So before merging this PR, it's better to verify the cross tests, as well as the slow tests too (always better).
The PR CI for #21853 was green (and also green when merged to `main`), but some tests started to fail in subsequent PRs. It's unclear to us why we didn't catch these in the PR CI though. <|||||>Thanks for the heads up @ydshieh! 🙏
I'll make sure all related slow tests (and the tests that failed after merging #21853 ) are passing before merging.<|||||>Thanks @gante ... I'm kind of new to this but will figure out how to verify/update the tests per your request.
The main problem I've run into though is newly-failing `torch.fx` tracing [tests](https://app.circleci.com/pipelines/github/huggingface/transformers/59686/workflows/0fe3ef82-316a-482d-802d-d245028b2bf6/jobs/730191/parallel-runs/0/steps/0-111):
```
FAILED tests/models/gptj/test_modeling_gptj.py::GPTJModelTest::test_torch_fx - AssertionError: Couldn't trace module: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors
FAILED tests/models/gptj/test_modeling_gptj.py::GPTJModelTest::test_torch_fx_output_loss - AssertionError: Couldn't trace module: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors
```
I've tried some different variations to the logic but always end up with similar kind of errors. I think it may stem from the [index_select](https://github.com/huggingface/transformers/pull/22069/files#diff-61155574bf9c9669ccdfdf7dd508a5979b4e4915cc95f7ff4a63fee05a0e2715R209) operation. Any pointers/ideas would be appreciated!<|||||>Hey @njhill 👋
I've tried to fix the issue you mentioned with no success. It seems like we are between a rock and a hard place -- the changes you made, by design, make `sincos` dependent on the values of `position_ids`. In other words, `sincos` becomes a tensor impossible to predict at compile time with `torch.fx`, i.e. dynamic tensor. Ultimately, no matter how we rewrite the code (AFAIK), we will hit this barrier, causing the test to fail.
@sgugger @fxmarty is there a way we can make `torch.fx` ignore a function? (or do you have suggestions?) The change in this PR makes GPT-J correct in the presence of left-padding, but breaks compatibility with `torch.fx` 🙈
(Pastebin containing the code with modifications, working through the exceptions until I got stuck: https://pastebin.com/T0HpD07C)<|||||>Also cc @michaelbenayoun for torch fx.<|||||>Thanks @gante, it sounds like you followed a similar path to me w.r.t. trying different arrangements of the logic to get around this. I was guessing this couldn't be the only occurrence of this dynamic tensor issue in the library - is dynamic slicing done elsewhere and if so how does it work with `torch.fx`?<|||||>Hi @njhill,
The issue here (from what I could understand from [this](https://app.circleci.com/pipelines/github/huggingface/transformers/59686/workflows/0fe3ef82-316a-482d-802d-d245028b2bf6/jobs/730191/parallel-runs/0/steps/0-111)), seems to be that during tracing we do not have regular tensors but rather symbolic "proxies".
In the following code we are trying to call `__iter__` on `sincos` which is symbolic, we do not know its length (again, not 100% sure but guessing).
```python
sincos = [t.contiguous() for t in sincos]
```
But the previous line is :
```python
sincos = torch.split(sincos, sincos.shape[-1] // 2, dim=-1)
```
Meaning that the list has:
- 2 elements if `sincos.shape[-1]` is an even number
- 3 elements if `sincos.shape[-1]` is an odd number.
So could you try this:
```python
sincos = torch.split(sincos, sincos.shape[-1] // 2, dim=-1)
len_sincos = 2 + torch.remainder(torch.tensor(sincos.shape[-1], 2))
sincos = [sincos[idx].contiguous() for idx in torch.arange(len_sincos)]
```
Tell me if this works!<|||||>Thanks @michaelbenayoun. You are right that this seems to be the fact that a symbolic proxy tensor is introduced somewhere, however I think that this stems from the tensor-based indexing here:
```python
sincos = embed_positions[position_ids]
```
The proxy iterator errors are easy to circumvent but just move the problem until later where (inevitably?) the size of the proxy tensor is used for flow control. I've pushed a couple of small updates to the PR to demonstrate this... you can see the latest error in the tests [here](https://app.circleci.com/pipelines/github/huggingface/transformers/59903/workflows/203a46e5-1bc2-4ab0-9085-4992384db930/jobs/733587). As @gante pointed out above:
> Ultimately, no matter how we rewrite the code (AFAIK), we will hit this barrier, causing the test to fail.
Could we at least make this path conditional such that it isn't followed in the `torch.fx` case, i.e. declare that variable padding is unsupported in that case?<|||||>Hey @njhill -- I think the conditional path is a sensible idea, at least for now (we can always revisit it later). #22161 reports a similar problem on another demanded model, so I would like to merge the fix as soon as possible 🤗
For context, other places in the `transformers` do this sort of conditional paths for `torch.fx`. Check [here](https://github.com/huggingface/transformers/blob/f7329751fe5c43365751951502c00df5a4654359/src/transformers/models/t5/modeling_t5.py#L845) for an example.<|||||>@njhill The HF tracer is supposed to keep track of "concrete" metadata during tracing to allow for that.
In this case, either this does not work with `len`, which is possible (I do not remember tbh), or it means than an op does not support the meta device, hence breaking the concrete metadata accumulation.
Since in this case you are trying to check the rank of the tensor, could you try replacing `len(tensor.shape)` by `tensor.ndim`?<|||||>Thanks @michaelbenayoun .. the `len` problem can be avoided by adding `torch.fx.wrap('len')`, which I'd done in the prior commit but removed in this latest commit since it seemed futile (just moving the error slightly later). So I was instead attempting to bypass the position_ids fix in the `torch.fx` case per [this comment](https://github.com/huggingface/transformers/pull/22069#issuecomment-1469690762) (so far unsuccessfully).
The problem encountered after working around the `len` problem can be seen [here](https://app.circleci.com/pipelines/github/huggingface/transformers/59903/workflows/203a46e5-1bc2-4ab0-9085-4992384db930/jobs/733587):
```
> if len(tensor.shape) == 5:
AssertionError: Couldn't trace module: symbolically traced variables cannot be used as inputs to control flow
```
basically this traced length value is then used in a control flow condition.<|||||>@gante @michaelbenayoun I've got torch.fx to work with the changes now by using `torch.gather` instead of tensor based indexing and adding a couple of new tensor methods to the metadata tracking in `fx.py`.
Also rebased on latest main branch since some other CI tests started to fail I think related to a recently-merged unrelated change.
I will look into the requested additional tests next when I get a chance.<|||||>For our future reference, here's a snippet that shows that left-padding is fixed with these changes:
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.bfloat16).to(0)
tok.pad_token = tok.eos_token
model.generation_config.pad_token_id = model.generation_config.eos_token_id
inputs_1 = tok(["The brown fox"], return_tensors="pt", padding=True).to(0)
out_1 = model(**inputs_1)
out_2 = model(**inputs_1)
position_ids = torch.cumsum(inputs_1.attention_mask, dim=-1) - 1
out_3 = model(**inputs_1, position_ids=position_ids + 8)
inputs_2 = tok(["The brown fox"], return_tensors="pt", padding="max_length", max_length=10).to(0)
out_4 = model(**inputs_2)
position_ids = torch.cumsum(inputs_2.attention_mask, dim=-1) - 1
position_ids.masked_fill_(inputs_2.attention_mask == 0, 1)
out_5 = model(**inputs_2, position_ids=position_ids)
# calls with the same inputs get the same logits
print(torch.max(torch.abs(out_1.logits[:, -1, :] - out_2.logits[:, -1, :]))) # tensor(0., device='cuda:0', grad_fn=<MaxBackward1>)
# changing the position_ids changes the logits
print(torch.max(torch.abs(out_1.logits[:, -1, :] - out_3.logits[:, -1, :]))) # tensor(0.0625, device='cuda:0', grad_fn=<MaxBackward1>)
# padding and not passing position ids -> incorrect position ids -> output differences
print(torch.max(torch.abs(out_1.logits[:, -1, :] - out_4.logits[:, -1, :]))) # tensor(0.0625, device='cuda:0', grad_fn=<MaxBackward1>)
# left-padding has a much smaller impact (NOTE: setting e.g. `max_length=20` will cause the next diff to be non-zero.
# Numerical masking is not perfect :) )
print(torch.max(torch.abs(out_1.logits[:, -1, :] - out_5.logits[:, -1, :]))) # tensor(0., device='cuda:0', grad_fn=<MaxBackward1>)
```<|||||>The failing CI was fixed in [this merged PR](https://github.com/huggingface/transformers/pull/22298), merging.<|||||>@njhill fantastic work with the `torch.fx`, I really appreciated your effort 🤗 <|||||>Thanks @gante, glad I was able to contribute. Thank you for your fast responses and for all the great work you and team do.<|||||>This PR isn't backward compatible. It breaks with pytorch-1.8:
```
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/gptj/modeling_gptj.py", line 63, in <module>
E @torch.fx.wrap
E AttributeError: module 'torch' has no attribute 'fx'
```
not sure if you want to revert this or have an idea how to overcome this quickly. <|||||>> This PR isn't backward compatible. It breaks with pytorch-1.8:
>
> ```
> E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/gptj/modeling_gptj.py", line 63, in <module>
> E @torch.fx.wrap
> E AttributeError: module 'torch' has no attribute 'fx'
> ```
>
> not sure if you want to revert this or have an idea how to overcome this quickly.
@stas00
FYI, see #22291, although that PR and this PR is not directly related from the beginning when they are opened.<|||||>ok, the deepspeed CI is running pt-1.8 - how do we solve that then?<|||||>> ok, the deepspeed CI is running pt-1.8 - how do we solve that then?
I just saw
https://github.com/microsoft/DeepSpeed/pull/3082
opened 2 hours ago. I am not sure what will go, but I will try to follow tomorrow morning.<|||||>oh, ok, I guess everything is fine then. thank you for the heads up, @ydshieh <|||||>it still fails with pt-1.9.1
1. you need `import torch.fx` (thanks @mrwyattii)
2. it then fails with:
```
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/gptj/modeling_gptj.py", line 61, in create_sinusoidal_positions
E return torch.concat((torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)), dim=1)
E AttributeError: module 'torch' has no attribute 'concat'
```<|||||>Oops, I guess we should use `torch.cat()` instead<|||||>and it fails w/o `import torch.fx`
```
E File "/mnt/nvme0/code/huggingface/transformers-master/examples/pytorch/language-modeling/run_clm.py", line 412, in main
E model = AutoModelForCausalLM.from_pretrained(
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 470, in from_pretrained
E model_class = _get_model_class(config, cls._model_mapping)
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 360, in _get_model_class
E supported_models = model_mapping[type(config)]
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 602, in __getitem__
E return self._load_attr_from_module(model_type, model_name)
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 616, in _load_attr_from_module
E return getattribute_from_module(self._modules[module_name], attr)
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/auto/auto_factory.py", line 561, in getattribute_from_module
E if hasattr(module, attr):
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/utils/import_utils.py", line 1109, in __getattr__
E module = self._get_module(self._class_to_module[name])
E File "/mnt/nvme0/code/huggingface/transformers-master/src/transformers/utils/import_utils.py", line 1121, in _get_module
E raise RuntimeError(
E RuntimeError: Failed to import transformers.models.gptj.modeling_gptj because of the following error (look up to see its traceback):
E module 'torch' has no attribute 'fx'
```
so 2 fixes at least. thank you!<|||||>I confirm that it works with `torch.cat`
perhaps use `torch.concat` but add an alias:
```
# bc for pt<1.10
if not hasattr(torch, "concat"):
torch.concat = torch.cat
```
stashed somewhere in utils?
<|||||>`import torch.fx` is a must - even with pt-1.10 it won't work w/o it.<|||||>@njhill, are you on top of fixing this?
This is a bit urgent since Deepspeed CI uses our bleed edge to test deepspeed bleed edge on live CI. and currently their CI breaks because of this breakage.<|||||>@stas00 apologies I am AFK right now but could do it in a few hours. Feel free to do in the meantime if you like!
I don’t see any downside to just using `torch.cat` since it’s already an alias.
Where is it that we need to add the extra `torch.fx` import?<|||||>sure, I will fire off a PR - thank you for letting me know your preferences, @njhill <|||||>the fix is here https://github.com/huggingface/transformers/pull/22325<|||||>the fix has been merged. |
transformers | 22,068 | closed | Fix small typo in flan-ul2.mdx | # What does this PR do?
Saw a small typo while reading. Not sure why the last line shows as changed; I used GitHub's web UI and just modified "Resources".
Thanks for making this library!
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @sgugger, @stevhliu and @MKhalusova
| 03-10-2023 02:15:09 | 03-10-2023 02:15:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Force-merging this without the tests as I'm 99.9% sure everything will be fine, but for future PR, there is an issue with your CircleCI permissions, the tests won't run.
You will need to refresh your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? |
transformers | 22,067 | closed | Fixed docstring formatting | # What does this PR do?
Fixes the formatting in docstring for whisper model.
Fixes #22052
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Issue #22052
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sgugger, @stevhliu and @MKhalusova
| 03-10-2023 02:00:56 | 03-10-2023 02:00:56 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22067). All of your documentation changes will be reflected on that endpoint. |
transformers | 22,066 | closed | Add Canine Model Config to AutoModelForCausalLM | ### Feature request
Kindly add a class such as https://github.com/huggingface/transformers/blob/a9bd5df16a46356463f2712dd8f6c109fa83d6f9/src/transformers/models/bert/modeling_bert.py#L1161
for the [Canine Model](https://huggingface.co/docs/transformers/model_doc/canine).
Basically, in the list of models available for CausalLM provided [here](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM), the [canine](https://huggingface.co/docs/transformers/model_doc/canine) model isn't listed. Kindly add it.
### Motivation
Currently unable to experiment with CanineConfig LM decoder using [this](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig.example) api.
Snippet of code used:
```
from transformers import ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel, CanineConfig
# taken from https://huggingface.co/docs/transformers/v4.26.1/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig.example
config_encoder = ViTConfig()
config_decoder = CanineConfig()
config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
model = VisionEncoderDecoderModel(config=config)
```
### Your contribution
Not yet, currently. | 03-09-2023 22:48:30 | 03-09-2023 22:48:30 | Hi,
CANINE doesn't support causal attention. It can only be used as an encoder.<|||||>Thanks @NielsRogge for pointing that out. Is there then, any pre-trained language model similar to that of canine that processes the tokens at unicode character level. i.e. the tokenizer basically does
```
tokens = [ord(c) for c in string]
```<|||||>You can leverage the decoder of [ByT5](https://huggingface.co/docs/transformers/model_doc/byt5), which is a byte-based model.<|||||>@NielsRogge I think ByT5 while it does have the tokenization the way I wanted, it still cannot be used by the VisualEncoderDecoder API of hugging face - using the snippet like shown below:
```
from transformers import ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel, ByT5Config # this does not exist
# taken from https://huggingface.co/docs/transformers/v4.26.1/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderConfig.example
config_encoder = ViTConfig()
config_decoder = ByT5Config() # this is what is desired.
config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
model = **VisionEncoderDecoderModel(config=config)**
```
Trying something like the following:
```
from transformers import VisionEncoderDecoderModel
ved = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
"google/vit-base-patch16-224-in21k", 'google/byt5-small'
)
```
Throws up the following `ValueError`
```
ValueError: Unrecognized configuration class <class 'transformers.models.t5.configuration_t5.T5Config'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, CodeGenConfig, CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, GitConfig, GPT2Config, GPT2Config, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, MarianConfig, MBartConfig, MegatronBertConfig, MvpConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig.
```
Do any of the models listed above have byte-level or character-level tokenization, so that they can be used with the VisionEncoderDecoderModel API provided by 🤗?
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @khadiravana-belagavi this is because T5/ByT5 is an encoder-decoder model. You would only need the decoder to combine it with a vision encoder. The vision encoder-decoder framework doesn't work out-of-the-box with T5/ByT5 at the moment as this would require us to define a new class that includes only the decoder + a language modeling head on top.
Hence I'd recommend defining this class yourself and then provide it as decoder argument when instantiating a `VisionEncoderDecoderModel` class. The class could roughly look like this:
```
from torch import nn

from transformers.models.t5.modeling_t5 import T5PreTrainedModel, T5Stack

class T5DecoderOnlyForCausalLM(T5PreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        # config.is_decoder should be True so that T5Stack builds a decoder stack
        self.shared = nn.Embedding(config.vocab_size, config.d_model)
        self.decoder = T5Stack(config, self.shared)
        self.lm_head = nn.Linear(config.d_model, config.vocab_size, bias=False)
```
Then you can instantiate the model as follows:
```
from transformers import VisionEncoderDecoderModel, ViTModel
encoder = ViTModel.from_pretrained("google/vit-base-patch16-224")
decoder = T5DecoderOnlyForCausalLM.from_pretrained("t5-base")
model = VisionEncoderDecoderModel(encoder=encoder, decoder=decoder)
```
One would also need to check whether the weights of the decoder are properly instantiated, the draft above probably won't load the weights correctly.<|||||>Thanks @NielsRogge for the detailed response. However, I think this issue is still relevant, as although Canine is not a CausalLM, Bert is not as well. And the class `BertLMHeadModel` adds the necessary components for finetuning on CLM task. Or is there anything specific to Canine - because canine is also a pretrained on a similar MLM task.<|||||>@khadiravana-belagavi BERT can be adapted to be used as decoder (by simply using a causal attention mask rather than a bidirectional one). CANINE on the other hand cannot simply be adapted to work as decoder since it uses a different architecture composed of 3 Transformers. |
transformers | 22,065 | closed | Fix imports of TF MobileViT | # What does this PR do?
Small cleanup in the main init. | 03-09-2023 22:07:58 | 03-09-2023 22:07:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>It's only the type hints that are wrong, not the actual init.<|||||>> It's only the type hints that are wrong, not the actual init.
Yeah, just figure it out after I posted. Deleted it but you are too fast to answer<|||||>Failures are related to Hub being down, so no blocker to merge. |
transformers | 22,064 | closed | BLIP2 hangs after loading shards, no errors | ### System Info
python: 3.9.13
torch: 1.12.0+cu113
transformers: 4.27.0.dev0
Note: I'm on an HPC, running everything through SLURM. I'm not privy to what kind of CPU I'm using.
CPU: Unknown
GPU: NVIDIA A100
GPU memory: 40GB
### Who can help?
@ArthurZucker @younesbelkada @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The original version of this code was borrowed from the zero-shot inference article on Medium, then expanded for a larger set of images.
* Inference on a single image with BLIP2 runs fine.
* Inferencing a single folder of 84 images also runs fine.
* My actual data set is a folder of about 2k folders with 20-84 images in each, and here's where the problems are happening. I have them separated into folders by company to make it easier to label them all; I'm using this output to feed into Stable Diffusion later.
```
> train2
> > company1
> > > image1.png
> > > image2.png
> > company2
> > > image1.png
> > > image2.png
> > > etc
```
What's happening is that the checkpoint shards for the model will load, and then hang, forever, on this line:
```
Loading checkpoint shards: 100%|██████████| 2/2 [00:25<00:00, 12.90s/it]
```
I'm not training or fine-tuning, just trying to run normal inference. It won't error out, either, nor will I get some kind of OOM error from SLURM. It just stays forever. Running `allocations` (which tracks how many hours I've used for jobs sent via SLURM to the HPC) also isn't incrementing time for these jobs at all, which makes me think there's some error I can't see. (Though if I check `squeue`, the time on the job itself is still ticking up, but that time isn't getting applied to my overall time limit somehow.)
I can't tell if this is because of some secret OOM error, because I'm working with about 7GB of image files. I attempted batch inference a few weeks ago, but it wasn't working at the time.
The single image version of the BLIP2 inference code *is* working correctly, though, and typically finishes before I can even `tail -f` the log file. I have both pieces of code below for reference.
Code that's not working first, inferencing a folder full of folders full of images:
```py
from transformers import AutoProcessor, Blip2ForConditionalGeneration
import torch
from PIL import Image
import glob
import json
print("finished imports")
# big list of brand names, printing keys to make sure anything works before the shards make everything hang
folder = ".../inputs/"
brands = {}
with open(folder + "full_brands.json") as jsonf:
brands = json.load(jsonf)
keys = sorted(brands.keys())
print(keys[0:10])
cachedir = ".../hfcache"
processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b", cache_dir=cachedir)
print("processor good")
try:
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, cache_dir=cachedir)
# neither the below error, nor the else statement will ever print. we hang here.
except Exception as err:
print(err)
else:
print("blip ready")
print("model loaded")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
retval = []
# I haven't reached this section of the code in about a day, but it's here for reference in case this is what's making things hang
for slug in keys:
print(slug)
bname = brands[slug]["name"]
image_files = glob.glob(folder + "/train2/" + slug + "/*.png")
images = []
for x in range(len(image_files)):
try:
images.append(Image.open(image_files[x]).convert("RGBA"))
except:
print("image non-functional")
for i in range(len(images)):
print(".", end="")
image = images[i]
prompt = "an image of " + bname + " with"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=20)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
desc = prompt + " " + generated_text
retval.append({"file_name": image_files[i], "text": desc})
print(desc)
with open(folder + "blip_output.json", "w") as jsonf:
json.dump(retval, jsonf, indent=2)
```
Code that is working second, inference on a single image:
```py
import requests
from PIL import Image
from transformers import AutoProcessor, Blip2ForConditionalGeneration
import torch
path = ".../newyorker.jpg"
image = Image.open(path).convert('RGBA')
print(image)
cachedir = ".../hfcache"
processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b", cache_dir=cachedir)
print("processor loaded")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, cache_dir=cachedir)
print("model loaded")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
print("cuda invoked")
inputs = processor(image, return_tensors="pt").to(device, torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=20)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(generated_text)
```
### Expected behavior
I'd think BLIP/transformers would either error out or continue to the rest of the code. I wish I knew what was going on.
As another point of reference, the top chunk of code *was* working yesterday on torch 1.10.0 and transformers 4.26.1, but between then and now, something about torch got updated such that torch 1.10.0 wasn't working with the A100 GPUs. (I was getting the "no binary exists for this device" error.) When I had to move up to torch 1.12.0, `Blip2ForConditionalGeneration` no longer existed, so I had to bump up to transformers 4.27.0.dev0, and here we are now.
But the smaller code *is* still working. So I don't know what the impact of all those images is on the file itself, but since the code never reaches the point where it *could* load the images, I don't understand how this is happening. | 03-09-2023 20:47:50 | 03-09-2023 20:47:50 | hi @thely
Thanks for the issue, it might be indeed a CPU related issue but this is hard to tell , I'd give a try by loading a model with `low_cpu_mem_usage=True`:
```python
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, cache_dir=cachedir, low_cpu_mem_usage=True)
```
I would also give it a try with `accelerate` + `8-bit` since it enables loading the model with less memory requirements:
First:
```bash
pip install accelerate bitsandbytes
```
Then:
```python
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto", load_in_8bit=True)
```<|||||>@younesbelkada I don't know what about your comment helped me, but it helped me realize the problem wasn't in transformers, or BLIP2.
In the initial run of this code on torch 1.10.0, *something* about the config (Pillow? torch? python?) was printing lines regularly as the code progressed. After the change to torch 1.12.0, which changed both the active Python version from 3.8.x to 3.9.x and the Pillow version from 8.x to 9.x, I wasn't shown any print statements until *all* the activity had completed – image loading, running through BLIP, output, etc. So I guess it wasn't hanging, I just didn't get to know that anything was happening until the very end. Not sure if it's something about Python 3.9.x scheduling print statements differently, but I'm leaving this here in case it helps someone else.
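For anyone hitting the same thing: the usual workaround is to force unbuffered output, e.g. by running the script with `python -u`, setting `PYTHONUNBUFFERED=1` in the SLURM job script, or flushing explicitly:
```python
# force each print to reach the SLURM log immediately instead of sitting in the buffer
print("model loaded", flush=True)
```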
For my sanity, I fixed it by running through 100 folders at a time.<|||||>Awesome! Thank you for the update @thely ! |
transformers | 22,063 | closed | Add a new script to check model testers' config | # What does this PR do?
Add a new check to verify that model testers give a tiny config.
The objective is to ensure no test will run with large configuration values (such as in `BridgeTowerModelTest(er)`), except the integration or slow tests.
The check is not added to PR/daily CI workflow: we don't want to add more burden to contributors.
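A rough sketch (for illustration only - this is not the actual script added by the PR) of the kind of check described, with an illustrative threshold and attribute list:
```python
MAX_VALUE = 100  # illustrative threshold, not necessarily the one used in the real script
ATTRS_TO_CHECK = ["hidden_size", "max_position_embeddings", "num_hidden_layers"]

def check_tester_config(config, tester_name):
    for attr in ATTRS_TO_CHECK:
        value = getattr(config, attr, None)
        if isinstance(value, int) and value > MAX_VALUE:
            print(f"{tester_name}: config[{attr!r}] = {value} which is too large for testing!")
```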
### The effect
<img width="720" alt="Screenshot 2023-03-09 212533" src="https://user-images.githubusercontent.com/2521628/224147644-62fb5f9c-60e1-4801-b257-08d9679cb06b.png">
| 03-09-2023 19:31:00 | 03-09-2023 19:31:00 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Request a review from @amyeroberts too - to celebrate her new core maintainer role 🔥 🚀
<|||||>> I think that's way too specific
I agree that `100` is way too specific.
> and will require lots of exceptions
Not really a lot of exceptions. So far, there are `245` errors given by this new check.
- `max_position_embeddings`: 78 places --> this is better to be changed to smaller values
- `hidden_size` or `xxx_dim`: 41 places --> also better to be changed
- `xxx_size` (other than `hidden_size`): 39 places --> I guess better to change
- `xxx_token_id`: 19 places --> this could be skipped (they look weird, but it doesn't really matter)
- **we can skip other places for now**
> It would be more helpful to have something in a PR adding a new model that would extract the time the tests took (as reported in the artifacts) and put it somewhere clearly visible, for instance the comment regarding the documentation.
The `test_torch` job of PR #20775 (Add BridgeTower model) took `37` minutes, while it took about `33` minutes on nightly run one day before. So yes, this might be a good way to look. However:
- even if we show the timing, **without the timing of the previous (full) run**, no one really knows if we should look into the timing/speed issue, and we/contributor just would not pay attention
- trying to grab (automatically) **the timing of the previous (full) run** seems to me not a super easy task
- we will have to identify PRs that are new model addition PRs (use diff tool?) + we will have to show it in PR comments
- This, plus the necessity to grab the previous running time, seems to me to require much more work than simply fixing the things I mentioned in the first part above
So let me continue a bit and see what would happen!
<|||||>I also don't think this check is super valuable as it will add another burden on the contributors for something that really only affects us (we're the ones to handle CI, memory, timeouts, etc.).
I think aiming for tiny tiny models is great, but it's not the end of the world if a few slip through the cracks which we have to correct after.<|||||>> it will add another burden on the contributors
Yeah, that's a super valid point. I don't feel strongly about it, so more than happy to close the PR.
But I'll still give you some rough numbers:
> if a few slip through the cracks which we have to correct after.
currently there are `245` places this check identified, a few examples
```bash
config["max_speech_positions"] = 4000 which is too large for testing!
config["num_block_records"] = 13353718 which is too large for testing!
config["max_2d_position_embeddings"] = 1024 which is too large for testing!
```
Without the check, more such cases will accumulate (but most of them are not as extreme as `BridgeTowerModelTest`). Also, maybe we will only have to deal with them after a long, long period of time.
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Well, although we are not going to add a check on PR CI, I think it might be handy if we add the script - just in case we need to perform housekeeping. This new script is not complex, but it's always better to have it already there when we need it.
I also changed the check to verify just a few major attributes for now.
@sgugger @LysandreJik Let me know if you are happy with this addition (without it being added to CI workflows). |
transformers | 22,062 | closed | Add a progress bar for the total download of shards | # What does this PR do?
This PR adds a feature requested in #22047 and fixes a small bug I encountered while testing it.
**The feature**: a new progress bar is added when loading a sharded checkpoint that gives the overall progress.
**The bug**: when passing along `force_download=True`, the files were not downloaded if cached, because an early return in `cached_file` returned the cached file.
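For illustration only (this is not the code added by the PR), the overall bar is conceptually an outer loop over the resolved shard filenames, with `huggingface_hub`'s per-file bar nested inside:
```python
import time
from tqdm.auto import tqdm

# hypothetical shard list; the real loading code resolves these names from the index file
shard_files = [f"pytorch_model-{i:05d}-of-00072.bin" for i in range(1, 73)]

for shard in tqdm(shard_files, desc="Downloading shards", unit="shard"):
    time.sleep(0.01)  # stand-in for the per-shard download (the cached_file call)
```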
Fixes #22047 | 03-09-2023 19:29:51 | 03-09-2023 19:29:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'm not observing any changes with this branch:
I tried:
```
python -c 'import sys; from transformers import AutoModel; AutoModel.from_pretrained(sys.argv[1], revision="sharded")' t5-11b
```
after deleting the cache and not getting a new overall progress bar.
Am I doing something wrong or have some wrong dependencies?<|||||>Woops, wrong check for the file being cached. Can you try again?<|||||>It works great! Thank you, Sylvain!
As you can see it moves the outer progress bar with the inner bar, so it doesn't matter how many shards there are as I was concerned with 72 bloom shards.


|
transformers | 22,061 | closed | Assistance Exporting git-large to ONNX | Hello! I am looking to export an image captioning Hugging Face model to ONNX (specifically I was playing with the [git-large](https://huggingface.co/microsoft/git-large) model but if anyone knows of one that might be easier to deal with in terms of exporting that is great too)
I'm trying to follow [these](https://huggingface.co/docs/transformers/serialization#exporting-a-model-for-an-unsupported-architecture) instructions for exporting an unsupported architecture, and I am a bit stuck on figuring out what base class to inherit from and how to define the custom ONNX Configuration since I'm not sure what examples to look at (the model card says this is a transformer decoder model, but it looks like it has both encoding and decoding, so I am a bit confused)
I also found [this](https://github.com/huggingface/notebooks/blob/main/examples/onnx-export.ipynb) notebook but I am again not sure if it would work with this sort of model.
Any comments, advice, or suggestions would be so helpful -- I am feeling a bit stuck with how to proceed in deploying this model in the school capstone project I'm working on. In a worst-case scenario, can I use `from_pretrained` in my application? | 03-09-2023 18:08:43 | 03-09-2023 18:08:43 | Questions around conversion to ONNX should go in the [optimum repo](https://github.com/huggingface/optimum) as this is where the feature is actually implemented :-)<|||||>> Questions around conversion to ONNX should go in the [optimum repo](https://github.com/huggingface/optimum) as this is where the feature is actually implemented :-)
Thank you!! I will post this there. |
transformers | 22,060 | closed | Skip 3 tests for `WhisperEncoderModelTest` | # What does this PR do?
Skip 3 tests for `WhisperEncoderModelTest` | 03-09-2023 17:20:57 | 03-09-2023 17:20:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,059 | closed | Adam Weight Decay Rate does not hear the opinion of tf.stop_gradient | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.19.0-32-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.9.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help?
@gante @Rocketknight1 @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I use a gpt2 variant.
`model = TFGPT2LMHeadModel.from_pretrained("PlanTL-GOB-ES/gpt2-large-bne", from_pt=True)`
where I freeze all the transformer weights except wte
`for i in range(36): model.transformer.h[i].trainable=False`
and then monkey patch the wte layer to stop propagation of the gradient for only some tokens. For instance:
```
import tensorflow as tf
import transformers
from transformers.tf_utils import shape_list

def call(self, inputs: tf.Tensor, mode: str = "embedding") -> tf.Tensor:
    w = tf.stop_gradient(self.maskEmbeddings * self.weight) + (1 - self.maskEmbeddings) * self.weight
    if mode == "embedding":
        return tf.gather(w, inputs)
    elif mode == "linear":
        first_dims = shape_list(inputs)[:-1]
        x = tf.reshape(inputs, [-1, self.hidden_size])
        logits = tf.matmul(x, w, transpose_b=True)
        return tf.reshape(logits, first_dims + [self.vocab_size])
    else:
        raise ValueError(f"mode {mode} is not valid.")

transformers.modeling_tf_utils.TFSharedEmbeddings.call = call
```
key line here is `tf.stop_gradient(self.maskEmbeddings * self.weight)`
Now I provide some mask matrix model.transformer.wte.maskEmbeddings = maskEmbeddings and run the training as usual with AdamWeightDecay optimizer. I check the changes with a checksum:
```
def printChecksum(model):
    frozenWeights = model.transformer.wte.maskEmbeddings * model.transformer.wte.weight
    check = tf.reduce_sum(frozenWeights, axis=0)
    print(check)
printChecksum(model)
```
When weight_decay_rate is different from 0.0, the optimizer applies decay to those weights.
### Expected behavior
I would expect the weights that have opted out of gradient updating via the tf.stop_gradient to remain unaltered even if weight_decay_rate is not zero.
On the other hand, it is true that the weight decay rate is not part of the gradient, so it should be stopped by other means. Surely the right path is to implement support for soft prompt training at a higher level. | 03-09-2023 16:44:25 | 03-09-2023 16:44:25 | This is a standard behaviour of all TF optimizers - `tf.stop_gradient()` stops gradient "flowing" back through that operation, but the optimizer is not aware of this control step, and so it simply sees weights with a gradient of 0 and applies weight decay to them as normal (in fact, it probably also applies updates to them from the residual momentum in the optimizer).
If you want to exclude weights from being updated entirely, you can set `layer.trainable=False`, which you're already doing. However, it sounds like you want to selectively mask different weights in each step, and you definitely can't set properties like `layer.trainable` in the `call()` function.
In those circumstances, probably the best solution is to override `train_step()` instead and apply the optimizer step to only unmasked weights at each stage (note that this might prevent XLA compilation!) Alternatively, you could just use a standard optimizer that doesn't have weight decay.
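A very rough, untested sketch of that idea (illustrative only - `wte_mask` is a hypothetical attribute marking the frozen rows of the embedding matrix):
```python
import tensorflow as tf

class MaskedUpdateModel(tf.keras.Model):
    """Toy subclass showing the idea; `self.wte_mask` is assumed to be a tensor with
    1.0 on the embedding rows that must stay frozen (hypothetical attribute)."""

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred)
        grads = tape.gradient(loss, self.trainable_variables)
        masked_grads = []
        for var, grad in zip(self.trainable_variables, grads):
            if grad is not None and "wte" in var.name:
                # zero the gradient rows of the frozen tokens
                grad = tf.convert_to_tensor(grad) * (1.0 - self.wte_mask)
            masked_grads.append(grad)
        # with AdamWeightDecay you would also want exclude_from_weight_decay=["wte"],
        # otherwise decoupled weight decay still touches the masked rows
        self.optimizer.apply_gradients(zip(masked_grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```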
Soft prompt tuning is definitely something we could investigate adding as a feature, though! What kind of interface do you think would work for it?<|||||>I see. Was not sure if it was a bug, just as you say, one needs to know what the standard behaviour is.
Should I open a feature request for soft prompt? It would definitely be an interesting addition to the toolset, and there is a lot of tripwires that one can trigger if made in an ad-hoc way. This was not the only one :-)<|||||>cc @gante to that one - I believe he's been looking at soft prompting in generation!<|||||>Hey @arivero 👋 If I got it right, you would be interested in soft-prompting as in passing post-embedding values to the model (and not train on a few masked tokens).
Our models support an `inputs_embeds` input, which is mutually exclusive with `input_ids` and is meant as the post-embedding counterpart of `input_ids`. Would using this input instead solve your problem? I believe you would be able to handle all masking operations outside the model :) |
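(A minimal sketch of that idea, assuming a GPT-2 checkpoint - the learnable soft prompt is prepended to the token embeddings entirely outside the model:)
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFGPT2LMHeadModel

model = TFGPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

input_ids = tokenizer("Hello world", return_tensors="tf").input_ids
token_embeds = model.transformer.wte(input_ids)  # (1, seq_len, hidden)

# trainable soft prompt of 10 "virtual tokens", handled outside the model
soft_prompt = tf.Variable(tf.random.normal((1, 10, token_embeds.shape[-1]), stddev=0.02))
inputs_embeds = tf.concat([soft_prompt, token_embeds], axis=1)

outputs = model(inputs_embeds=inputs_embeds)
print(outputs.logits.shape)
```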
transformers | 22,058 | closed | Update tiny model creation script | # What does this PR do?
Main changes:
- **Better error message**, including allowing to save the traceback to the report files
- Add `UNCONVERTIBLE_MODEL_ARCHITECTURES`, so for those models that could not be converted to tiny versions, the code will skip them (and we have cleaner reports)
- Add a method `build_tiny_model_summary`, which will produce the entries we might want to add to `tests/utils/tiny_model_summary.json` (for pipeline testing purposes)
- (well, we might want to remove this file in the future) | 03-09-2023 16:40:36 | 03-09-2023 16:40:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,057 | closed | rm $ symbol from code block from contributing.md | Removed the $ symbol from the code block to make copy-pasting easier.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sgugger | 03-09-2023 15:17:52 | 03-09-2023 15:17:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,056 | closed | Difference in the architecture of openai whisper and huggingface whisper | I was comparing the whisper-medium which I got from OpenAI directly (from https://github.com/openai/whisper.git) with the HuggingFace whisper-medium.
In the decoder part of the model, both have 24 decoder blocks, but there is a difference in the block architecture between OpenAI's and HuggingFace's.
As represented below, in the OpenAI whisper decoder there is a GELU activation function between the two linear layers (in the mlp block), but in the HuggingFace whisper decoder there is no GELU activation function between the fc1 and fc2 blocks.

So given this difference, does the HuggingFace whisper model work the same way as OpenAI's?
Or is there actually a difference between these two whisper-medium models?
@sanchit-gandhi
| 03-09-2023 15:09:24 | 03-09-2023 15:09:24 | Hey @hannan72!
We register the activation function here (`activation_fn` in the state dict):
https://github.com/huggingface/transformers/blob/fdf84096565b8d2e15de35ac0cd86818c4b12adb/src/transformers/models/whisper/modeling_whisper.py#L468
And then apply it directly after the first feedforward layer (`fc1`):
https://github.com/huggingface/transformers/blob/fdf84096565b8d2e15de35ac0cd86818c4b12adb/src/transformers/models/whisper/modeling_whisper.py#L556
If you check the config for Whisper, you'll see that this activation function defaults to GELU:
https://github.com/huggingface/transformers/blob/fdf84096565b8d2e15de35ac0cd86818c4b12adb/src/transformers/models/whisper/configuration_whisper.py#L106
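As a quick sanity check (this snippet is just for illustration, it is not taken from either codebase), registering the GELU inside a sequential MLP block and applying a standalone GELU between `fc1` and `fc2` give identical results:
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
fc1, fc2 = nn.Linear(4, 8), nn.Linear(8, 4)
x = torch.randn(2, 4)

mlp = nn.Sequential(fc1, nn.GELU(), fc2)      # activation registered inside the block
out_standalone = fc2(nn.GELU()(fc1(x)))       # activation applied standalone between fc1 and fc2

print(torch.allclose(mlp(x), out_standalone))  # True
```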
So the two are entirely equivalent 👍 OpenAI just register the GELU in a sequential block, we register it standalone. But both apply it in the same place.<|||||>Thank you a lot @sanchit-gandhi for your clear answer! |
transformers | 22,055 | closed | pt-to-tf model architecture override | This PR adds an extra arg to the `pt-to-tf` conversion script. We've seen a few uploaded models where the `config.json` doesn't specify the model class and the script autodetects the wrong one, which means some weights are not converted. This argument lets you override the autodetection and specify a model class to use for the conversion. | 03-09-2023 14:20:43 | 03-09-2023 14:20:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,054 | closed | Show the number of `huggingface_hub` warnings in CI report | # What does this PR do?
Show the number of `huggingface_hub` warnings in CI report, as discussed in #22051
Will be shown like below
<img width="488" alt="Screenshot 2023-03-09 134910" src="https://user-images.githubusercontent.com/2521628/224050100-90c1477a-c33f-485e-85e1-ec648cbb4f91.png">
| 03-09-2023 14:07:13 | 03-09-2023 14:07:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,053 | closed | WhisperTimeStampLogitsProcessor error while using Whisper pipelines. Was WhisperTimeStampLogitsProcessor used? | ### System Info
Hello,
When I tried this notebook, https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor?usp=sharing#scrollTo=Ca4YYdtATxzo, I encountered this error: `There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?` I hit it especially with audio longer than 30 seconds; for audio shorter than 30 seconds it returns timestamps fine.
How can I fix it?
Specs:
`transformers==4.27.0.dev0`
```
from transformers import pipeline
MODEL_NAME = "openai/whisper-large-v2"
pipe = pipeline(
task="automatic-speech-recognition",
model=MODEL_NAME,
device='cuda:0',
generate_kwargs = {"language":"<|tr|>","task": "transcribe"})
results = pipe(speech_file, return_timestamps=True, chunk_length_s=30, stride_length_s=[6,0], batch_size=32, generate_kwargs = {"language":"<|tr|>","task": "transcribe"})
```
### Who can help?
@ArthurZucker @sanchit-gandhi @Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
MODEL_NAME = "openai/whisper-large-v2"
pipe = pipeline(
task="automatic-speech-recognition",
model=MODEL_NAME,
device='cuda:0',
generate_kwargs = {"language":"<|tr|>","task": "transcribe"})
results = pipe(speech_file, return_timestamps=True, chunk_length_s=30, stride_length_s=[6,0], batch_size=32, generate_kwargs = {"language":"<|tr|>","task": "transcribe"})
```
### Expected behavior
```
results = {'text':'Some Turkish results.',
'chunks':[
{'text': ' Some Turkish results.',
'timestamp': (0.0,4.4)},
{'text': ' Some Turkish results.',
'timestamp': (4.4,28.32)},
{'text': ' Some Turkish results.',
'timestamp': (28.32,45.6)}]
}
``` | 03-09-2023 10:51:31 | 03-09-2023 10:51:31 | cc @Narsil as this might follow the latest update of the `return_timestamps`<|||||>Do you have the faulty sample too ? I cannot reproduce with a dummy file ?
@ArthurZucker it does look like the last token is indeed not a timestamp, but it could be linked to batching possibly ?<|||||>I'm using this audio https://github.com/frankiedrake/demo/blob/master/whisper_test.wav to test with your script. <|||||>You can use this full script for testing. I uploaded an English sound to GitHub. By using this, you can try it too.
```
from six.moves.urllib.request import urlopen
import io
import numpy as np
import soundfile as sf
from transformers import pipeline
sound_link = "https://github.com/melihogutcen/sound_data/blob/main/accidents_resampled.wav?raw=true"
data, sr = sf.read(io.BytesIO(urlopen(sound_link).read()))
sound_arr_first_ch1 = np.asarray(data, dtype=np.float64)
audio_in_memory_ch1 = {"raw": sound_arr_first_ch1,
"sampling_rate": 16000}
MODEL_NAME = "openai/whisper-large-v2"
pipe = pipeline(
task="automatic-speech-recognition",
model=MODEL_NAME,
device='cuda:0')
results_pipe_ch1 = pipe(audio_in_memory_ch1, return_timestamps=True, chunk_length_s=30,
stride_length_s=[6, 0], batch_size=32,
generate_kwargs = {"language":"<|en|>",
"task": "transcribe"})
print(results_pipe_ch1["text"])
print(results_pipe_ch1)
```
Error as below.
```
warnings.warn(
Traceback (most recent call last):
File "/SpeechToText/whisper_trials.py", line 21, in <module>
results_pipe_ch1 = pipe(audio_in_memory_ch1, return_timestamps=True, chunk_length_s=30,
File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 272, in __call__
return super().__call__(inputs, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1101, in __call__
return next(
File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 125, in __next__
processed = self.infer(item, **self.params)
File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 527, in postprocess
text, optional = self.tokenizer._decode_asr(
File "/opt/conda/lib/python3.10/site-packages/transformers/models/whisper/tokenization_whisper_fast.py", line 480, in _decode_asr
return _decode_asr(
File "/opt/conda/lib/python3.10/site-packages/transformers/models/whisper/tokenization_whisper.py", line 881, in _decode_asr
raise ValueError(
ValueError: There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?
```<|||||>Thanks, I have been able to reproduce, definitely linked to batching, as the thing works with `batch_size=1`.
Working on a fix.<|||||>Ok, the issue is that the model uses `50256` for padding, or silence.
@ArthurZucker should we make this a special token ? (This would mean it would be ignored in the state machine, which is OK since this token is `''`.)
The other solution would be to decode the `previous_tokens` before failing and checking that the decoding is the nil string, but that seems like a workaround for the fact that token 50256 is special and means silence (or pad I guess)<|||||>This is the issue: https://huggingface.co/openai/whisper-large-v2/blob/main/generation_config.json#L124
@melihogutcen A fix is coming.
<|||||>Proposed changes:
https://huggingface.co/openai/whisper-base/discussions/12
https://huggingface.co/openai/whisper-large/discussions/29
https://huggingface.co/openai/whisper-medium/discussions/12
https://huggingface.co/openai/whisper-large-v2/discussions/30
https://huggingface.co/openai/whisper-small/discussions/19
https://huggingface.co/openai/whisper-tiny/discussions/9<|||||>I fixed my problem by updating `generation_config.json`. Thanks!<|||||>Oops! I have tried different sounds with the new config. And rarely, I got this error again on some sounds.
```
Traceback (most recent call last):
File "/SpeechToText/whisper_trials.py", line 63, in <module>
results_pipe_ch1 = pipe(resampled16k_data_ch1, return_timestamps=True, chunk_length_s=30,
File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 272, in __call__
return super().__call__(inputs, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1101, in __call__
return next(
File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 125, in __next__
processed = self.infer(item, **self.params)
File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 527, in postprocess
text, optional = self.tokenizer._decode_asr(
File "/opt/conda/lib/python3.10/site-packages/transformers/models/whisper/tokenization_whisper_fast.py", line 480, in _decode_asr
return _decode_asr(
File "/opt/conda/lib/python3.10/site-packages/transformers/models/whisper/tokenization_whisper.py", line 881, in _decode_asr
raise ValueError(
ValueError: There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?
```<|||||>Thanks, any potential to see the files ?
Or if you could print `previous_tokens` just before this error that would be nice.
This error occurs when the state machine still has some dangling tokens and no timestamp token in the end, meaning we have no ending timestamp. This shouldn't happen given how WhisperTimestampLogitsProcessor is supposed to work. The previous error was that it would use a padding_token_id which wasn't a special_token so it would be considered as text (which it isn't)<|||||>Sorry, I couldn't share these files due to privacy, but I can send the `previous_tokens`. I added print function here. https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/tokenization_whisper.py#:~:text=current_tokens%20%3D%20%5B%5D-,if%20previous_tokens%3A,-if%20return_timestamps%3A
Is it correct?
```
Previous tokens: [[16729, 44999, 39196, 259, 13]]
There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?
```<|||||>I suspect the logits processor @Narsil but this is strange that it didn't come up before<|||||>@melihogutcen This is Turkish, on `whisper-large-v2` correct ? I'll try to run a batch on some dataset to try and trigger it elsewhere. Still using the same script as above correct ?
We need to reproduce to understand what's going on. It could be the WhisperLogitsProcessor, but also a bug somewhere else.<|||||>Yes, it is Turkish and I used `whisper-large-v2`. I used the same script as above, I just used the "<|tr|>" language and I changed `generation_config.json` as you said.<|||||>Could it be possible that this is because all the batches processed are silence? I have seen that the error occurs when the audio has a section that is mainly silence (I tested with 10 min of silence). With the original whisper what I get is hallucination and repeated words.
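(Illustrative only - one quick way to check that hypothesis, re-using the `pipe` object from the script earlier in the thread:)
```python
import numpy as np

silence = {"raw": np.zeros(16000 * 60, dtype=np.float64), "sampling_rate": 16000}
out = pipe(silence, return_timestamps=True, chunk_length_s=30, stride_length_s=[6, 0], batch_size=32)
print(out)
```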
2023-03-15 15:06:11 Error occurred while processing File1.wav. Exception: There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?
Traceback (most recent call last):
File "/home/user/basictest.py", line 64, in transcribe_audio
out = pipeline(audio)
File "/home/user/anaconda3/lib/python3.9/site-packages/speechbox/diarize.py", line 120, in __call__
asr_out = self.asr_pipeline(
File "/home/user/anaconda3/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 272, in __call__
return super().__call__(inputs, **kwargs)
File "/home/user/anaconda3/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1101, in __call__
return next(
File "/home/user/anaconda3/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 125, in __next__
processed = self.infer(item, **self.params)
File "/home/user/anaconda3/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 527, in postprocess
text, optional = self.tokenizer._decode_asr(
File "/home/user/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/tokenization_whisper_fast.py", line 480, in _decode_asr
return _decode_asr(
File "/home/user/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/tokenization_whisper.py", line 881, in _decode_asr
raise ValueError(
ValueError: There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?<|||||>@alextomana, did you try comparing the `generation_config` as mentioned above?
About the silence or what not, not really sure<|||||>Seeing the same with a fine-tuned model.
```python
import requests
import transformers
from transformers import GenerationConfig
pipe = transformers.pipeline(
"automatic-speech-recognition",
model="vasista22/whisper-hindi-large-v2",
device="cuda:0",
)
pipe.model.generation_config = GenerationConfig.from_pretrained("openai/whisper-large-v2")
audio = requests.get(
"https://storage.googleapis.com/dara-c1b52.appspot.com/daras_ai/media/e00ba954-c980-11ed-8700-8e93953183bb/6.ogg"
).content
forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(task="transcribe", language="hindi")
pipe(
audio,
return_timestamps=True,
generate_kwargs=dict(
forced_decoder_ids=forced_decoder_ids,
),
chunk_length_s=30,
stride_length_s=[6, 0],
batch_size=32,
)
```
```console
/root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/generation/utils.py:1288: UserWarning: Using `max_length`'s default (448) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
warnings.warn(
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ <stdin>:1 in <module> │
│ │
│ /root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/pipelines/automatic_spee │
│ ch_recognition.py:272 in __call__ │
│ │
│ 269 │ │ │ │ │ │ "there", "timestamps": (1.0, 1.5)}]`. The original full text can │
│ 270 │ │ │ │ │ │ `"".join(chunk["text"] for chunk in output["chunks"])`. │
│ 271 │ │ """ │
│ ❱ 272 │ │ return super().__call__(inputs, **kwargs) │
│ 273 │ │
│ 274 │ def _sanitize_parameters( │
│ 275 │ │ self, │
│ │
│ /root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/pipelines/base.py:1101 │
│ in __call__ │
│ │
│ 1098 │ │ elif is_iterable: │
│ 1099 │ │ │ return self.iterate(inputs, preprocess_params, forward_params, postprocess_p │
│ 1100 │ │ elif self.framework == "pt" and isinstance(self, ChunkPipeline): │
│ ❱ 1101 │ │ │ return next( │
│ 1102 │ │ │ │ iter( │
│ 1103 │ │ │ │ │ self.get_iterator( │
│ 1104 │ │ │ │ │ │ [inputs], num_workers, batch_size, preprocess_params, forward_pa │
│ │
│ /root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py:12 │
│ 5 in __next__ │
│ │
│ 122 │ │ │
│ 123 │ │ # We're out of items within a batch │
│ 124 │ │ item = next(self.iterator) │
│ ❱ 125 │ │ processed = self.infer(item, **self.params) │
│ 126 │ │ # We now have a batch of "inferred things". │
│ 127 │ │ if self.loader_batch_size is not None: │
│ 128 │ │ │ # Try to infer the size of the batch │
│ │
│ /root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/pipelines/automatic_spee │
│ ch_recognition.py:527 in postprocess │
│ │
│ 524 │ │ │ │ │ stride_right /= sampling_rate │
│ 525 │ │ │ │ │ output["stride"] = chunk_len, stride_left, stride_right │
│ 526 │ │ │ │
│ ❱ 527 │ │ │ text, optional = self.tokenizer._decode_asr( │
│ 528 │ │ │ │ model_outputs, │
│ 529 │ │ │ │ return_timestamps=return_timestamps, │
│ 530 │ │ │ │ return_language=return_language, │
│ │
│ /root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/models/whisper/tokenizat │
│ ion_whisper_fast.py:480 in _decode_asr │
│ │
│ 477 │ │ return forced_decoder_ids │
│ 478 │ │
│ 479 │ def _decode_asr(self, model_outputs, *, return_timestamps, return_language, time_pre │
│ ❱ 480 │ │ return _decode_asr( │
│ 481 │ │ │ self, │
│ 482 │ │ │ model_outputs, │
│ 483 │ │ │ return_timestamps=return_timestamps, │
│ │
│ /root/.pyenv/versions/3.10.10/lib/python3.10/site-packages/transformers/models/whisper/tokenizat │
│ ion_whisper.py:881 in _decode_asr │
│ │
│ 878 │ │ if return_timestamps: │
│ 879 │ │ │ # Last token should always be timestamps, so there shouldn't be │
│ 880 │ │ │ # leftover │
│ ❱ 881 │ │ │ raise ValueError( │
│ 882 │ │ │ │ "There was an error while processing timestamps, we haven't found a time │
│ 883 │ │ │ │ " WhisperTimeStampLogitsProcessor used?" │
│ 884 │ │ │ ) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?
```<|||||>Running into the same issue:
```
import torch
import gdown
from transformers import pipeline, AutomaticSpeechRecognitionPipeline, Pipeline, GenerationConfig, \
WhisperTokenizer, WhisperModel, WhisperConfig, WhisperForConditionalGeneration, WhisperTokenizerFast, \
WhisperProcessor
url = 'https://drive.google.com/uc?id=1IcnHiL5gdGs8zr-NwuSQm_hsAZugz4mq'
audio_path = 'audio.wav'
gdown.download(url, audio_path, quiet=False)
model_name = "openai/whisper-small"
task = 'transcribe'
language = 'spanish'
predict_timestamps = True
chunk_length = 30
max_length = 100
batch_size = 1
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
# -----------------------------------------------------------------------
config = WhisperConfig.from_pretrained(model_name)
model = WhisperForConditionalGeneration.from_pretrained(model_name, config=config)
tokenizer = WhisperTokenizer.from_pretrained(model_name)
# tokenizer.set_prefix_tokens(language=language, task=task, predict_timestamps=predict_timestamps)
processor = WhisperProcessor.from_pretrained(model_name)
pipe = pipeline(
task='automatic-speech-recognition',
model=model,
chunk_length_s=chunk_length,
batch_size=batch_size,
tokenizer=tokenizer,
feature_extractor=processor.feature_extractor,
device=device
)
forced_decoder_ids = tokenizer.get_decoder_prompt_ids(language=language, task=task, no_timestamps=not predict_timestamps)
print(forced_decoder_ids)
generate_kwargs = {'max_length': max_length, "forced_decoder_ids": forced_decoder_ids}
print('audio_path: ', audio_path)
result = pipe(audio_path, return_timestamps=predict_timestamps, generate_kwargs=generate_kwargs)
print(result)
```
with error
```
Traceback (most recent call last):
File "/home/spanagiotidi/notebook_dir/whisper_tests/test6.py", line 47, in <module>
print(result)
File "/home/spanagiotidi/anaconda3/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 272, in __call__
return super().__call__(inputs, **kwargs)
File "/home/spanagiotidi/anaconda3/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1101, in __call__
return next(
File "/home/spanagiotidi/anaconda3/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 125, in __next__
processed = self.infer(item, **self.params)
File "/home/spanagiotidi/anaconda3/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 527, in postprocess
text, optional = self.tokenizer._decode_asr(
File "/home/spanagiotidi/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/tokenization_whisper.py", line 708, in _decode_asr
return _decode_asr(
File "/home/spanagiotidi/anaconda3/lib/python3.9/site-packages/transformers/models/whisper/tokenization_whisper.py", line 881, in _decode_asr
raise ValueError(
ValueError: There was an error while processing timestamps, we haven't found a timestamp as last token. Was WhisperTimeStampLogitsProcessor used?
```
<|||||>cc @Narsil maybe an edge case that was not handle (and that was previously ignored) let's be more permissive on the last timestamps + will check with the provided example the reason why we are not getting a last timestamps.
Might be something relating to the length of the `forced_decoder_ids` that can affect the `WhisperTImestampsLogitProcessor`. Something to lookout for<|||||>@devxpy I have reproduced with your example. It seems this model never outputs timestamps.
I am guessing it was finetuned without timestamps and so the error is kind of normal.
However it lead me to reduce the hard error to a soft error. The results are still nonsensical (check out the test).
I spent some time trying to find a better fix by fixing the logits processor itself, but to no avail. There's just no way to fix models that refuse to output timestamp tokens. To be noted is that whisper models are never even forced to output increasing timestamp tokens, so there's already a lot of room there. Soft error is better.
<|||||>https://github.com/huggingface/transformers/pull/22475/files<|||||>I received this error when transcribing audio with `openai/whisper-large-v2`. For me, the cause was 10 seconds of silence at the end of the file. Maybe this can be added as a potential solution to the error/warning, or maybe this can be detected and silently ignored.<|||||>Thanks for this comment! @narsil, I think it makes sense<|||||>@Narsil @devxpy @ArthurZucker I also did finetuning without timestamps, and now I have an issue where timestamps are not appearing. Is there a good way to finetune and include timestamps? Do I need to add 1500 special tokens for each timestamp in the tokenizer? I made sure that the tokenizer doesn't have a timestamps. #20225
<|||||>Hey! For finetuning with timestamps, you should either use the latest tokenizer (which by default should add 1500 special tokens, not more) or use the previous one, which also supported them, but not for encoding. Pinging @sanchit-gandhi as he has been working on distil whisper, might have a training script to add timestamps. Also this kind of question would be better for the [forum](https://discuss.huggingface.co/)<|||||>Hey @upskyy - in my experience, fine-tuning with LoRA / QLoRA is a fantastic way to prevent this 'catastrophic forgetting' effect where Whisper forgets how to predict timestamps after fine-tuning. For this, you can check-out the following repo: https://github.com/Vaibhavs10/fast-whisper-finetuning
And @ArthurZucker - cool that the latest tokenizer has the 1500 special tokens already added! This should make our lives a lot easier for encoding with timestamps, since the tokenizer is now able to map the timestamp strings to tokens.
All we really need to do then is have a small amount of data in our train set that has timestamps in the Whisper format, e.g.
```
"<|0.00|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all and<|6.24|><|6.24|> can discover in it but little of rocky Ithaca.<|9.44|>"
```
Generally, you only need between 1-5% of your data to be timestamped to ensure you retain Whisper's timestamp prediction abilities. The easiest way of getting this data is to use the pre-trained Whisper model to re-annotate 1% of your training data with timestamps. You can then merge this data into your full training corpus to train on both non-timestamped (99%) and timestamped (1%) data.
What we then want to do is enable/disable timestamps when we encode the labels, depending on whether the labels have timestamps or not:
```python
def prepare_dataset(batch):
# load and resample audio data from 48 to 16kHz
audio = batch["audio"]
# compute log-Mel input features from input audio array
batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
# set tokenizer prefix tokens depending on whether we have timestamps or not
predict_timestamps = batch["predict_timestamps"] # boolean that tells us whether our labels have timestamps or not (add this column to your dataset to indicate)
tokenizer.set_prefix_tokens(language=language, task="transcribe", predict_timestamps= predict_timestamps)
# encode target text to label ids
batch["labels"] = tokenizer(batch["sentence"]).input_ids
return batch
```<|||||>@ArthurZucker @sanchit-gandhi Thank you so much for the detailed explanation. I'm trying to download a new tokenizer, but it seems like it was updated 5 months ago. Can I get it like this? [[link]](https://huggingface.co/openai/whisper-medium/tree/main)
What is the latest tokenizer you are talking about?
Currently, my tokenizer is splitting one by one like this.
```python
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
tokens = processor.tokenizer("<|0.00|>Hello!<|2.34|>").input_ids
print(tokens)
# [50258, 50363, 27, 91, 15, 13, 628, 91, 29, 15947, 0, 27, 91, 17, 13, 12249, 91, 29, 50257]
text = processor.decode([27, 91, 15, 13, 628, 91, 29])
print(text)
# <|0.00|>
```<|||||>@ArthurZucker could you give @upskyy a hand with downloading the latest version of the tokenizer please! 🙌<|||||>Mmmm I don’t think we updated the checkpoints but rather the code. Will check and open PRs! <|||||>@ArthurZucker
Aren't you talking about this part?
https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/tokenization_whisper.py#L495-L512
Thanks if you can point me to a PR or code location to refer to.
<|||||>Hey actually here is the linked issue, #20225 tokens have not been added, will open a PR <|||||>@ArthurZucker okay, thank you!<|||||>I would really recommend using `AddedTokens` like this:
```python
from transformers import AddedToken, WhisperTokenizerFast
timestamps = [AddedToken("<|%.2f|>" % (i * 0.02), lstrip=False, rstrip=False) for i in range(1500 + 1)]
fast = WhisperTokenizerFast.from_pretrained(model_path)
fast.add_tokens(timestamps)
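# (Illustrative follow-up, not part of the original comment.) After adding the tokens,
# each timestamp marker should encode to a single added-token id, e.g.:
print(fast("<|0.00|>Hello!<|2.34|>").input_ids)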
```
<|||||>Oh, great! Thanks for letting me know. |
transformers | 22,052 | closed | Add a newline here in the docstring | There should be a newline to separate prev_key_values from inputs_embeds.
https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/whisper/modeling_whisper.py#L827 | 03-09-2023 10:39:27 | 03-09-2023 10:39:27 | Seems like it indeed! Do you want to suggest a PR with the change?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,051 | closed | Remove set_access_token usage + fail tests if FutureWarning | `set_access_token` is deprecated and will be removed in `huggingface_hub>=0.14`.
This PR removes it from the tests (it was not used in `transformers` source code itself). In the future, use `set_git_credential` if needed. It is a git-credential-agnostic helper, i.e. you can store your git token in `git-credential-cache`, `git-credential-store`, `osxkeychain`, etc. The legacy `set_access_token` was only able to set in `git-credential-store` no matter the user preference.
(for context, I found out about this while working on https://github.com/huggingface/huggingface_hub/pull/1381)
---
In addition to this, I have added
```
filterwarnings =
error::FutureWarning:huggingface_hub*
```
to the `setup.cfg` config file to fail on future warnings from `huggingface_hub`. In `hfh`'s CI we trigger on FutureWarning from any package but it's less robust (any package update leads can lead to a failure). No obligation to keep it like that (I can remove it if you prefer) but I think it's a good idea in order to track future FutureWarnings. | 03-09-2023 09:33:20 | 03-09-2023 09:33:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,050 | closed | run_speech_recognition_seq2seq to fine-tune whisper tiny model stop in dataset map | ### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.4.0-144-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sanchit-gandhi @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
According to the official doc in [sequence-to-sequence](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#sequence-to-sequence), I run the same cmd (using the tiny model), and it always stops at step 7:
preprocess train dataset (num_proc=16): 0%| | 0/6540 [00:00<?, ? examples/s]
but if I verify it in Python according to [fine-tune-whisper](https://huggingface.co/blog/fine-tune-whisper), it works well:
preprocess train dataset (num_proc=16): 7%|███████████▌ | 197/2894 [00:11<02:19, 19.34 examples/s]
### Expected behavior
The training should finish with the official example | 03-09-2023 09:13:29 | 03-09-2023 09:13:29 | Hey @xyx361100238 - could you try with a lower value of `preprocessing_num_workers`? Maybe first by reducing it by a factor of 2:
```diff
- --preprocessing_num_workers="16" \
+ --preprocessing_num_workers="8" \
```
This is usually the cause of datasets map hanging.<|||||>THX @sanchit-gandhi , I tried preprocessing num 4 / 8 / 16 but the datasets map still hangs.
This time I reinstalled my huggingface-transformers, and it works! I can finish the training using the official examples.
<|||||>Hey @xyx361100238! That's super strange since the `transformers` library shouldn't interact with `datasets`'s map method 🧐 glad you found a fix by reinstalling `transformers`, it might have bumped a package version that is a `datasets` dependency that unblocked map for you. Closing as complete!<|||||>@sanchit-gandhi , it still does not work today, hanging in map
`python run_speech_recognition_seq2seq.py --model_name_or_path="openai/whisper-tiny" --dataset_name="mozilla-foundation/common_voice_11_0" --dataset_config_name="zh-CN" --language="chinese" --train_split_name="train+validation" --eval_split_name="test" --max_steps="5000" --output_dir="./whisper-small-zh" --per_device_train_batch_size="16" --gradient_accumulation_steps="2" --per_device_eval_batch_size="16" --logging_steps="25" --learning_rate="1e-5" --warmup_steps="500" --evaluation_strategy="steps" --eval_steps="1000" --save_strategy="steps" --save_steps="1000" --generation_max_length="225" --preprocessing_num_workers="16" --length_column_name="input_length" --max_duration_in_seconds="30" --text_column_name="sentence" --freeze_feature_encoder="False" --gradient_checkpointing --group_by_length --fp16 --overwrite_output_dir --do_train --do_eval --predict_with_generate
`
`preprocess train dataset (num_proc=16): 0%| | 0/39637 [00:00<?, ? examples/s] `
<|||||>Hey @xyx361100238, I think this is a `datasets` library issue that would be more apt there: https://github.com/huggingface/datasets
You can create a dummy reproducible codesnippet for this issue with something like:
```python
from datasets import Audio, load_dataset
raw_dataset = load_dataset("mozilla-foundation/common_voice_11_0", "zh-CN")
raw_dataset = raw_dataset.cast_column("audio", Audio(sampling_rate=16000))
def preprocess_dataset(batch):
    audio = batch["audio"]
    return batch
raw_dataset = raw_dataset.map(preprocess_dataset, num_proc=16)
```
Feel free to check if that hangs -> you can add the minimum amount of code that reproduces your issue and then post the codesnippet on the datasets repo. |
transformers | 22,049 | closed | Tracing mismatch during conversion of Whisper model to ONNX using torch.onnx.export | I'm trying to convert the Whisper model to ONNX. When exporting the encoder of the Whisper model to ONNX using torch.onnx.export:
```
mel = torch.zeros((1, 80, 3000))
encoder = model.get_encoder().to('cpu')
audio_features = encoder(mel)
torch.onnx.export(
encoder,
mel,
"whisper_encoder.onnx",
input_names=["mel"],
output_names=["output_features"]
)
```
It raises a TracerWarning as follows:
```
/usr/local/lib/python3.8/dist-packages/transformers/models/whisper/modeling_whisper.py:207: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
/usr/local/lib/python3.8/dist-packages/transformers/models/whisper/modeling_whisper.py:246: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
```
Afterwards, the onnx file is generated but the resulting model at runtime (using Optimum) is slow (about 50% slower than the PyTorch run)! I guess that the slowness of the onnx model is due to the TracerWarning.
Any Idea?
I'm using transformers == 4.26.0, optimum==1.6.1, onnx==1.10.0 and torch==1.12.0+cu116. | 03-09-2023 08:05:41 | 03-09-2023 08:05:41 | Hi @hannan72! I recommend that you use Optimum for exporting Whisper to the ONNX format (it will basically be a wrapper around `torch.onnx.export` but it is tested and Whisper is supported). You can find more information in the doc: https://huggingface.co/docs/optimum/exporters/onnx/overview
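(For instance - a rough sketch, not from the original reply, and argument names may differ across Optimum versions - loading through `ORTModelForSpeechSeq2Seq` triggers the ONNX export under the hood:)
```python
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq

# from_transformers=True asks Optimum to export the PyTorch checkpoint to ONNX on the fly
# (newer Optimum releases use export=True instead)
model = ORTModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", from_transformers=True)
```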
If you encounter any issue, feel free to open an issue in the Optimum repo.<|||||>> Hi @hannan72! I recommend that you use Optimum for exporting Whisper to the ONNX format (it will basically be a wrapper around `torch.onnx.export` but it is tested and Whisper is supported). You can find more information in the doc: https://huggingface.co/docs/optimum/exporters/onnx/overview If you encounter any issue, feel free to open an issue in the Optimum repo.
I have used Optimum but I get such a warning, and the resulting ONNX model deployed with Optimum ORT is about 50% slower than the PyTorch model deployment<|||||>Yes I see you opened this issue in Optimum: https://github.com/huggingface/optimum/issues/827
I think the best is to wait for @fxmarty to take a look at it.
Regarding these warnings, I don't think they are the reason why it is slow. They just mean that the expression in the if statements will not be evaluated at runtime, so the model may fail with different batch sizes.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,048 | closed | Cannot import name 'deepspeed_reinit' from 'transformers.deepspeed' | ### System Info
Bug found when following the optimization steps provided in **https://huggingface.co/blog/optimum-inference.**
It seems like transformers/deepspeed.py does not contain the method '**deepspeed_reinit**' so it's not possible to import it when loading ORTModel objects.
Thanks in advance for your incredible work. @stas00, @pacman100
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Install optimum[onnxruntime]==1.2.0
Run: from optimum.onnxruntime import ORTModelForQuestionAnswering or import optimum.onnxruntime
### Expected behavior
The package should import the ORTModels without any issue, enabling the optimization of the ONNX models using DeepSpeed | 03-09-2023 08:02:40 | 03-09-2023 08:02:40 | Hi @rubenCrayon!
`deepspeed_reinit` was removed a few versions ago, so you should use a more recent version of Optimum. This may require you to change your script a bit; in that case I recommend that you open an issue in Optimum: https://github.com/huggingface/optimum/issues<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,047 | closed | add progress bar to the sharded model download status | ### Feature request
`from_pretrained('bigscience/bloom')` is taking forever the first time until it's cached (~350GB) - I thought that perhaps with 72 shards it'd be awesome to have an overall progress bar (in addition to each shard's download progress bar) to know where things stand and how many hours the coffee break should last.
Thank you!
| 03-09-2023 07:39:52 | 03-09-2023 07:39:52 | Nested tqdm bars in the console are unreadable though, so not sure how to fix that.<|||||>I didn't know that. I remember it definitely worked in the past. Unless you're referring to something I'm not aware of when you say it's unreadable. Is it because the outside tqdm line will be so far from line 72 that it won't be seen as it'd scroll past visible area?
If that's the case then perhaps this would work:
1. switch to show and erase individual shard download progress as soon as it completed
2. only keep the overall progress bar
so that there are only 2 lines dedicated to the progress updates.
But if none of this is doable nicely, perhaps at least using `desc` to number each shard's tqdm as in "43/72" would at least give some indication. Though it won't really help that much since one can't tell how much time it took to download the previous x entries.
<|||||>Note that the description should already contain the name of the file, which ends with 0043-of-0072 normally.
I can have a look at what adding an overall progress bar would look like, but I don't have any control on the per-file progress bar, as it's issued by huggingface_hub. I could deactivate it entirely (so solution 2.) but I don't have control over it's closing.<|||||>> Note that the description should already contain the name of the file, which ends with 0043-of-0072 normally.
not for me:

> I can have a look at what adding an overall progress bar would look like, but I don't have any control on the per-file progress bar, as it's issued by huggingface_hub. I could deactivate it entirely (so solution 2.) but I don't have control over it's closing.
Understood! Thank you for looking, Sylvain!<|||||>Oh maybe you have an older version of huggingface_hub?<|||||>oh, I didn't know - you're correct - updating to 0.13 did add the filenames - that's much better. Thank you for that, Sylvain.
<|||||>Can you try the PR mentioned above? I got confused and nested progress bars do appear nicely in the console. It's in notebooks that the result is messy. |
transformers | 22,046 | closed | Can't install tf2 on M1 Chip by default | # What does this PR do?
Trying to
```
pip install 'transformers[tf-cpu]'
```
will give you a confusing error like below
```
Collecting sentencepiece==0.1.91
Using cached sentencepiece-0.1.91.tar.gz (500 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [5 lines of output]
Package sentencepiece was not found in the pkg-config search path.
Perhaps you should add the directory containing `sentencepiece.pc'
to the PKG_CONFIG_PATH environment variable
No package 'sentencepiece' found
Failed to find sentencepiece pkgconfig
[end of output]
```
The answer is to install `cmake` and `pkg-config` based on the reply here:
https://github.com/google/sentencepiece/issues/378#issuecomment-969896519
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, @stevhliu and @MKhalusova | 03-09-2023 05:20:12 | 03-09-2023 05:20:12 | This actually got worse... if you open a new Laptop, you also have to do the following
```
brew install cmake
brew install pkg-config
brew install sentencepiece
pip install sentencepiece
```
and then I had to also install Rust next...
```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env"
```
After that, it finally works
```
pip install 'transformers[tf-cpu]'
```
Not sure how thorough we want to be in the docs of getting people fully up to speed vs. making certain assumptions. The `sentencepiece` part was fairly brutal to work through<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,045 | closed | Docs Improvement - In ZSH, not using ' ' around pip install fails, fix it | # What does this PR do?
Running
```
pip install transformers[torch]
```
in the default ZSH terminal will fail with the error `zsh: no matches found: transformers[torch]`
The solution is to wrap the installation path in ' ' like
```
pip install 'transformers[torch]'
```
Relevant StackOverflow: https://stackoverflow.com/questions/30539798/zsh-no-matches-found-requestssecurity
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, @stevhliu and @MKhalusova | 03-09-2023 04:45:47 | 03-09-2023 04:45:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,044 | closed | [deepspeed] offload + non-cpuadam optimizer exception doc | part 2 of https://github.com/huggingface/transformers/pull/22043, but we can't merge it until `deepspeed==0.8.3` is released.
This PR documents the new feature and up's the min deepspeed version.
**XXX: DO NOT MERGE UNTIL `deepspeed==0.8.3` is released.**
I'm keeping it as a DRAFT so that I don't mistakenly merge it to soon. But we can pre-approve.
cc: @jeffra | 03-09-2023 03:06:06 | 03-09-2023 03:06:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,043 | closed | [deepspeed] offload + non-cpuadam optimizer exception | adapting to https://github.com/microsoft/DeepSpeed/pull/2971 - as our deepspeed tests will fail without that new flag when that deepspeed PR will get merged.
Will add the new config `zero_force_ds_cpu_optimizer` to the integration docs and require `deepspeed>=0.8.3`, but can't do it here w/o breaking DS' CI. Will do it here post new release https://github.com/huggingface/transformers/pull/22044
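For reference, a minimal sketch of how that flag could be used in a DeepSpeed config dict passed via `TrainingArguments(deepspeed=...)` (this is an assumption about usage, not copied from the final docs):
```python
# zero_force_ds_cpu_optimizer sits at the top level of the DeepSpeed config;
# setting it to False allows a non-DeepSpeed optimizer together with CPU offload.
ds_config = {
    "zero_force_ds_cpu_optimizer": False,
    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {"device": "cpu"},
    },
}
```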
@jeffra | 03-09-2023 01:08:57 | 03-09-2023 01:08:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,042 | closed | testing tokengt | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-09-2023 00:13:15 | 03-09-2023 00:13:15 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22042). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,041 | closed | Tokengt branch | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-08-2023 22:46:34 | 03-08-2023 22:46:34 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22041). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,040 | closed | Return analysis for hyperparameter_search with Ray backend | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #22037
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
Issue #22037
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@sgugger | 03-08-2023 20:56:45 | 03-08-2023 20:56:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,039 | closed | Mark all `BridgeTower` tests slow for now | # What does this PR do?
Mark all `BridgeTower` tests slow for now. | 03-08-2023 20:32:37 | 03-08-2023 20:32:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,038 | closed | Pytorch MBart Model - Trace on CPU and run inference on GPU. | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.157-139.675.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.9.15
- Huggingface_hub version: 0.13.0
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Load MBart model and trace it on CPU with `torch.jit.trace()`
```python
import torch
from transformers import MBartForConditionalGeneration, MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
example_english_phrase = "UN Chief Says There Is No Military Solution in Syria"
expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
inputs = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors="pt")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50", torchscript=True)
traced_model = torch.jit.trace(model, [inputs.input_ids, inputs.attention_mask])
torch.jit.save(traced_model, "mbart-traced.pt")
```
2. Load traced model and place it on GPU using `torch.jit.load()`
```python
loaded_model_gpu = torch.jit.load("mbart-traced.pt", map_location=torch.device('cuda'))
```
3. Run inference on GPU
```python
loaded_model_gpu(inputs.input_ids.to('cuda'), inputs.attention_mask.to('cuda'))
```
The following error is raised while running inference:
```python
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/__torch__/transformers/models/mbart/modeling_mbart/___torch_mangle_1394.py", line 15, in forward
lm_head = self.lm_head
model = self.model
_0 = (model).forward(input_ids, attention_mask, )
~~~~~~~~~~~~~~ <--- HERE
_1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _13, _14, _15, _16, _17, _18, _19, _20, _21, _22, _23, _24, _25, _26, _27, _28, _29, _30, _31, _32, _33, _34, _35, _36, _37, _38, _39, _40, _41, _42, _43, _44, _45, _46, _47, _48, _49, _50, = _0
_51 = torch.add((lm_head).forward(_1, ), final_logits_bias)
File "code/__torch__/transformers/models/mbart/modeling_mbart/___torch_mangle_1392.py", line 31, in forward
_7 = torch.slice(prev_output_tokens0, 0, 0, 9223372036854775807)
_8 = torch.fill_(torch.select(_7, 1, 0), decoder_start_tokens)
_9 = (encoder).forward(embed_tokens, weight, input_ids, attention_mask, )
~~~~~~~~~~~~~~~~ <--- HERE
_10 = (decoder).forward(weight, prev_output_tokens0, attention_mask, _9, )
_11, _12, _13, _14, _15, _16, _17, _18, _19, _20, _21, _22, _23, _24, _25, _26, _27, _28, _29, _30, _31, _32, _33, _34, _35, _36, _37, _38, _39, _40, _41, _42, _43, _44, _45, _46, _47, _48, _49, _50, _51, _52, _53, _54, _55, _56, _57, _58, _59, = _10
File "code/__torch__/transformers/models/mbart/modeling_mbart/___torch_mangle_1181.py", line 47, in forward
_13 = (argument_1).forward(weight, input, )
inputs_embeds = torch.mul(_13, CONSTANTS.c1)
_14 = (embed_positions).forward(input_ids, )
~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
input0 = torch.add(inputs_embeds, _14)
_15 = (layernorm_embedding).forward(input0, )
File "code/__torch__/transformers/models/mbart/modeling_mbart/___torch_mangle_1045.py", line 17, in forward
positions = torch.expand(_2, [_0, -1])
input = torch.add(positions, CONSTANTS.c3)
return torch.embedding(weight, input)
~~~~~~~~~~~~~~~ <--- HERE
Traceback of TorchScript, original code (most recent call last):
...
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)
```
Also, by running `dump_to_str` I am able to see that device is set to `cpu` within `MBartLearnedPositionalEmbedding`:
```python
>>> loaded_model_gpu._c.dump_to_str(True, False, False)
module __torch__.transformers.models.mbart.modeling_mbart.___torch_mangle_4565.MBartLearnedPositionalEmbedding {
parameters {
weight = ...
}
attributes {
weight = ...
training = False
_is_full_backward_hook = None
}
methods {
method forward {
graph(%self.1 : __torch__.transformers.models.mbart.modeling_mbart.___torch_mangle_4565.MBartLearnedPositionalEmbedding,
%input_ids.1 : Tensor):
%34 : Tensor = prim::Constant[value={2}]() # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:133:0
%25 : bool = prim::Constant[value=0]() # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:129:0
%52 : Device = prim::Constant[value="cpu"]()
%22 : NoneType = prim::Constant() # :0:0
%16 : Tensor = prim::Constant[value={0}]() # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:130:0
%5 : int = prim::Constant[value=0]() # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:128:0
%12 : int = prim::Constant[value=1]() # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:128:0
%21 : int = prim::Constant[value=4]() # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:129:0
%29 : int = prim::Constant[value=-1]() # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:129:0
%weight.1 : Tensor = prim::GetAttr[name="weight"](%self.1)
%6 : int = aten::size(%input_ids.1, %5) # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:128:0
%bsz.1 : Tensor = prim::NumToTensor(%6) # :0:0
%10 : int = aten::Int(%bsz.1) # :0:0
%13 : int = aten::size(%input_ids.1, %12) # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:128:0
%seq_len.1 : Tensor = prim::NumToTensor(%13) # :0:0
%18 : Tensor = aten::add(%seq_len.1, %16, %12) # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:130:0
%19 : Scalar = aten::ScalarImplicit(%18) # :0:0
%26 : Tensor = aten::arange(%5, %19, %21, %22, %52, %25) # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:129:0
%30 : int[] = prim::ListConstruct(%10, %29)
%positions.1 : Tensor = aten::expand(%26, %30, %25) # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:129:0
%input.1 : Tensor = aten::add(%positions.1, %34, %12) # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py:133:0
%42 : Tensor = aten::embedding(%weight.1, %input.1, %29, %25, %25) # /home/gnovack/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/nn/functional.py:2210:0
return (%42)
}
}
submodules {
}
}
```
### Expected behavior
I expected to be able to run inference successfully on GPU.
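A minimal workaround sketch (assuming a GPU is available at trace time and tracing on the target device is acceptable) is to move the model and the example inputs to the GPU before calling `torch.jit.trace`, so the device constants recorded in the graph are CUDA rather than CPU:
```python
import torch
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
inputs = tokenizer("UN Chief Says There Is No Military Solution in Syria", return_tensors="pt")

device = torch.device("cuda")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50", torchscript=True).to(device).eval()
traced_model = torch.jit.trace(model, [inputs.input_ids.to(device), inputs.attention_mask.to(device)])
torch.jit.save(traced_model, "mbart-traced-gpu.pt")
```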
I have come across some similar issues related to other types of models:
- https://github.com/huggingface/transformers/issues/5664
- https://github.com/pytorch/pytorch/issues/50971
And some PRs to address some similar issues:
- https://github.com/huggingface/transformers/pull/11252
- https://github.com/huggingface/transformers/pull/12290 | 03-08-2023 19:53:58 | 03-08-2023 19:53:58 | cc @ArthurZucker and @younesbelkada <|||||>EDIT: in order to actually solve this, we would need a lot of potential usage.
The reason is that after fixing the positional ids with a `registered buffer` we need to modify the causal attention mask which also has to be a buffer otherwise it does not work.
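To make that concrete, a rough, hypothetical sketch of a buffer-based positional embedding (illustration only, not the actual refactor, and it ignores `past_key_values_length`):
```python
import torch
from torch import nn


class PositionalEmbeddingWithBuffer(nn.Embedding):
    """Keeps position ids in a registered buffer so torch.jit.trace records a
    tensor that travels with the module (and with map_location on load) instead
    of a hard-coded CPU device constant."""

    def __init__(self, num_positions: int, embedding_dim: int, offset: int = 2):
        super().__init__(num_positions + offset, embedding_dim)
        self.offset = offset
        self.register_buffer("position_ids", torch.arange(num_positions), persistent=False)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        bsz, seq_len = input_ids.shape[:2]
        positions = self.position_ids[:seq_len].expand(bsz, -1)
        return super().forward(positions + self.offset)
```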
This is a lot of refactoring on a lot of models (even if we just fix this one, it is still a bit too much): we would have to implement the same logic as in GPT2 and GPTNeo. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,037 | closed | Trainer hyperparameter_search only returns the best trial config | ### Feature request
Allow `hyperparameter_search` method of `Trainer` to return the entire `ExperimentAnalysis` object instead of a single `best_run`.
### Motivation
The `hyperparameter_search` method of the `Trainer` currently only returns the best configuration `best_run`, instead of the more comprehensive `ExperimentAnalysis` object `analysis`. However, I believe that `analysis` would be more valuable than just the single best run configuration since it offers additional attributes and methods that can provide more useful information about tuning (see [doc](https://docs.ray.io/en/releases-1.11.0/tune/api_docs/analysis.html#analysis-tune-analysis)). Therefore, I suggest modifying the `hyperparameter_search` method to return the `ExperimentAnalysis` object so that users can do more analysis.
```python
analysis = ray.tune.run(
dynamic_modules_import_trainable,
config=trainer.hp_space(None),
num_samples=n_trials,
**kwargs,
)
best_trial = analysis.get_best_trial(metric="objective", mode=direction[:3], scope=trainer.args.ray_scope)
best_run = BestRun(best_trial.trial_id, best_trial.last_result["objective"], best_trial.config)
if _tb_writer is not None:
trainer.add_callback(_tb_writer)
return best_run
```
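If the full `analysis` object were returned instead, a caller could do things like the following (a sketch against the Ray Tune `ExperimentAnalysis` API linked above; the return type shown is the proposed one, not the current `BestRun`):
```python
# hypothetical: assumes hyperparameter_search is changed to return the ExperimentAnalysis
analysis = trainer.hyperparameter_search(backend="ray", n_trials=8, direction="maximize")
df = analysis.dataframe()  # one row per trial, with config and reported metrics
best = analysis.get_best_trial(metric="objective", mode="max")
print(best.config, best.last_result["objective"])
```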
### Your contribution
I can submit a PR for this feature. | 03-08-2023 19:27:46 | 03-08-2023 19:27:46 | We'd be happy to look at a PR!<|||||>@sgugger I have a concern regarding the different backends we use (ray, optuna, sigopt, wandb) and their varying return objects. I wonder if we should consider modifying all backends to return a more comprehensive object, such as the `analysis` object used in ray, to ensure consistency across all the backends.
While I am familiar with the ray tune backend, I am unsure about how to proceed with the other backends. I checked the code briefly to find the object that acts as `analysis` for ray:
1. [study](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations.py#L196) for optuna
2. [entire list of experiments](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations.py#L433) for sigopt
3. [dictionary](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations.py#L433) for wandb (need to modify the dictionary to record results of all experiments instead of the best one.)
Let me know if I understand it correctly.<|||||>I think it's okay if the object is backend specific.<|||||>PR #22040 submitted. |
transformers | 22,036 | closed | [21737][T5]: Fix gradient checkpoint bug | <!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Part of https://github.com/huggingface/transformers/issues/21737
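For context (illustrative, not taken from the PR diff), gradient checkpointing is toggled like this, which is roughly the code path the fix touches:
```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.gradient_checkpointing_enable()  # recompute activations in the backward pass to save memory
```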
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 03-08-2023 19:01:44 | 03-08-2023 19:01:44 | cc @gante <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you @gante! |
transformers | 22,035 | closed | Avoid `text_config_dict` and `vision_config_dict` being saved for CLIP-like models | # What does this PR do?
**Avoid `text_config_dict` and `vision_config_dict` being saved for CLIP-like models.** So less confusion.
Currently, configuration classes for CLIP-like models will save both `text_config` and `text_config_dict`, if `text_config_dict` is provided (as `kwargs`). Similarly, for `vision_config` and `vision_config_dict`. Many configuration files on the Hub have all these keys, and they look really confusing, see for example [openai/clip-vit-base-patch16](https://huggingface.co/openai/clip-vit-base-patch16/blob/main/config.json) or [laion/CLIP-ViT-H-14-laion2B-s32B-b79K](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/blob/main/config.json#L115)
This issue dates back to before PR #19954. That PR tried to avoid the usage of `text_config_dict` and `vision_config_dict` (while keeping backward compatibility), but it didn't prevent `text_config_dict` and `vision_config_dict` from being saved.
This PR:
- avoid `text_config_dict` and `vision_config_dict` being saved,
- make sure all the values provided in `text_config_dict` and `vision_config_dict` will be used to update `text_config` and `vision_config` (so backward compatibility), and only `text_config` and `vision_config` are saved
- **therefore, we can load an existing configuration, save it and upload it again --> make it clean + less confusing, and not break anything** (see the sketch below)
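A minimal sketch of that round-trip (the repo id is only an example):
```python
from transformers import CLIPConfig

config = CLIPConfig.from_pretrained("openai/clip-vit-base-patch16")  # config on the Hub may still contain *_config_dict keys
config.save_pretrained("clip-cleaned")  # with this change, the saved config.json only keeps text_config / vision_config
```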
I will apply the same change to other CLIP-like models if the idea/approach is accepted. | 03-08-2023 16:28:21 | 03-08-2023 16:28:21 | > Thanks! This may need to be copied over some CLIP-like models that also have some backward-compatibility code with the config dicts.
Sure, in the plan already. Quoted in the PR description
> I will apply the same change to other CLIP-like models if the idea/approach is accepted.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,034 | closed | Bug fix: token classification pipeline while passing offset_mapping | # What does this PR do?
Bug fix: add check so `AttributeError` isn't preventing using slow tokenizers with `offset_mapping`
On token-classification pipelines it threw an error (AttributeError None) if using a slow tokenizer & passing `offset_mapping`.
It is intended to work so (if you want) you can calculate offsets yourself while using a slow(or custom) tokenizer. otherwise "start"&"end" values returned from the pipeline are `None`
For example 'google/canine-c' (pretend it is finetuned)
```python
from transformers import pipeline
token_classifier = pipeline(
"token-classification", model='google/canine-c',
aggregation_strategy="simple", ignore_labels=[],
)
offset_mapping=[(0,0)]+[(i,i+1) for i,t in enumerate(text)]+[(0,0)] # canine is an easy enough tokenizer to calculate offsets ourselves accounting for [cls],[sep].
ents=token_classifier(text)
print(ents) # without offset_mapping "start"&"end" is None
ents=token_classifier(text, offset_mapping=offset_mapping)
print(ents) # should return entities with "start"&"end" index values.
```
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
- pipelines: @Narsil
| 03-08-2023 16:22:59 | 03-08-2023 16:22:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>the current failing tests look unrelated to me
<|||||>LGTM @sgugger
I'm confused by the failure, I'm guessing it's CircleCI running the wrong runner, but I don't remember the fix. |
transformers | 22,033 | closed | Edit the docstring of `image_processing_donut` to match code | # What does this PR do?
It changes the list of arguments in the docstring of the class `DonutImageProcessor`, as the current docstring does not match the list of parameters in the code.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @sgugger
Donut: @amyeroberts @alaradirik
| 03-08-2023 15:58:26 | 03-08-2023 15:58:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@vermouthmjl If running `make style` didn't resolve the `check_code_quality` checks, make sure the most recent formatting libraries are installed using `pip install -e .[quality]` |
transformers | 22,032 | closed | handle numpy inputs in whole word mask data collator | # What does this PR do?
Adds support to `DataCollatorForWholeWordMask` to work on numpy arrays as inputs. I added tests for all variants (np, pt, tf), but only tf had the bug which is now fixed.
Fixes https://github.com/huggingface/transformers/issues/22009
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@gante
| 03-08-2023 15:45:46 | 03-08-2023 15:45:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@Rocketknight1, can you have a look too plz? You have been working with these :) Then we can tag Sylvain, after you approve too |
transformers | 22,031 | closed | Add tokenize_kwargs parameter definition in the FeatureExtractionPipeline | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #21971
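For context, the parameter being documented can be used like this (the model name is only an example):
```python
from transformers import pipeline

extractor = pipeline("feature-extraction", model="distilbert-base-uncased")
features = extractor(
    "This is a very long input that should be truncated",
    tokenize_kwargs={"truncation": True, "max_length": 16},
)
```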
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
@Narsil
| 03-08-2023 15:31:19 | 03-08-2023 15:31:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Looks great to me.
To fix quality you can do `pip install -e .[quality] && make fixup`
@sgugger for final review.<|||||>The quality is failing due to the branch being too old, not something in this PR. Merging.
Thanks for your contribution! |
transformers | 22,030 | closed | Thomas/llama | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 03-08-2023 15:27:07 | 03-08-2023 15:27:07 | Hi! Thank you for opening a PR. I see that you've forked my branch. I think it's perfectly fine if you want to keep on working on my branch for your experiments. However I think the more promising branch is this one https://github.com/huggingface/transformers/pull/21955 (as in the most likely to be merged). I'm closing this PR as this has low probability of being merged on `main`. |
transformers | 22,028 | closed | Fix test for torchneuroncore in Trainer | # What does this PR do?
The test was always passing since the function is not None... | 03-08-2023 13:57:17 | 03-08-2023 13:57:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,027 | closed | KeyError: 'eval_metric-name' in trainer.py, line 2339 | ### System Info
Latest `transformers` version from the `main` branch, running on Ubuntu
```python
File "code-cli.py", line 347, in <module>
main(sys.argv[1:])
File "code-cli.py", line 281, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "[..]/python3.10/site-packages/transformers/trainer.py", line 1631, in train
return inner_training_loop(
File "[..]/python3.10/site-packages/transformers/trainer.py", line 1975, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "[..]/python3.10/site-packages/transformers/trainer.py", line 2236, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "[..]/python3.10/site-packages/transformers/trainer.py", line 2339, in _save_checkpoint
metric_value = metrics[metric_to_check]
KeyError: 'eval_metric-name'
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Seq2seq training on NQ
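The `eval_metric-name` key in the traceback above suggests `metric_for_best_model` may have been left as the literal placeholder `metric-name`; it has to match a key produced by `compute_metrics` (hypothetical illustration, since the actual script isn't shown):
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    load_best_model_at_end=True,
    # must match a key returned by compute_metrics; the Trainer prefixes it with "eval_"
    metric_for_best_model="exact_match",
)
```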
### Expected behavior
```python
File "code-cli.py", line 347, in <module>
main(sys.argv[1:])
File "code-cli.py", line 281, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "[..]/python3.10/site-packages/transformers/trainer.py", line 1631, in train
return inner_training_loop(
File "[..]/python3.10/site-packages/transformers/trainer.py", line 1975, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "[..]/python3.10/site-packages/transformers/trainer.py", line 2236, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "[..]/python3.10/site-packages/transformers/trainer.py", line 2339, in _save_checkpoint
metric_value = metrics[metric_to_check]
KeyError: 'eval_metric-name'
``` | 03-08-2023 13:40:52 | 03-08-2023 13:40:52 | Without seeing the code you run and how it was launched, there is very little we can do to help you.<|||||>@sgugger no worries it's just to be able to reference to an issue when/if I submit a pull request |
transformers | 22,026 | closed | [`bnb`] Fix bnb error message | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/22018
This PR introduces a clearer error message to users who wants to explore how to dispatch a model between CPU & GPU when loading a model in 8bit
cc @sgugger
| 03-08-2023 13:27:39 | 03-08-2023 13:27:39 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,025 | closed | Update ALIGN docs | # What does this PR do?
Improves ALIGN docs, fixes typos.
## Before submitting
- [X ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
| 03-08-2023 12:57:28 | 03-08-2023 12:57:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,024 | closed | [WIP]`NLLB-MoE` Adds the moe model | # What does this PR do?
Fixes #21300
To-Dos:
- [x] Conversion script and original weights available [here](https://huggingface.co/ArthurZ/fairseq-nllb-moe)
- [x] Converted checkpoints and configuration file available:
- [moe-128](https://huggingface.co/ArthurZ/nllb-moe-128) experts
- [x] Make the common tests go green
- [x] Implement top-2 gating mechanism
- [x] Add integration tests for:
- [x] the routers
- [x] the logits
- [x] the generation using greedy search
- [x] Cleanup the PR | 03-08-2023 12:11:57 | 03-08-2023 12:11:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,023 | closed | Update `AudioClassificationPipelineTests::test_small_model_pt` for PT 2.0.0 | # What does this PR do?
(Not tiny, but not too large either.) Different values with different torch/cuda versions. | 03-08-2023 10:52:04 | 03-08-2023 10:52:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,022 | closed | VideoMAE doctest - use valid dummy pixel values | # What does this PR do?
This PR updates the raw input video frames to the expected data type.
The pixel values passed into the image processor in the tests took values sampled from a standard normal distribution. For an image (or frame) this represents pixels which have been rescaled between [0 - 1] and normalized i.e. one which has already been passed to the image processor.
After merging #21969, resizing the image throws an error as this image cannot be converted to a PIL.Image.Image without possibly unexpected behaviour from numpy and overflow issues.
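For reference, a sketch of the kind of valid dummy input the updated doctest needs: raw, unnormalized frames in the 0-255 range rather than standard-normal samples.
```python
import numpy as np

# 16 dummy frames of shape (num_channels, height, width), as raw 0-255 pixel values
video = list(np.random.randint(0, 256, (16, 3, 224, 224), dtype=np.uint8))
```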
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 03-08-2023 10:40:36 | 03-08-2023 10:40:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,021 | closed | [DO NOT MERGE] Test v0.13.0.rc0 | DO NOT MERGE.
Only to test the CI with huggingface_hub 0.13 release. | 03-08-2023 09:00:16 | 03-08-2023 09:00:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>CI is green. I'm closing this :) |
transformers | 22,020 | closed | Add missing optional argument summary_proj_to_labels to XLNetConfig | # What does this PR do?
`XLNetConfig` (`src\transformers\models\xlnet\configuration_xlnet.py`) lists an argument `summary_proj_to_labels` as optional and with a default value of `True`. However, this is not actually included in the arguments and is not set anywhere. Initializing an XLNet model thus results in no such parameter existing. Also fixes a very minor typo (`boo` -> `bool`).
For reference: the same argument is also listed in `XLMConfig` (`src\transformers\models\xlm\configuration_xlm.py`), and there it is actually used.
From personal experience, this argument is used in `SequenceSummary` (`src\transformers\modeling_utils.py`). When using default arguments, there is an inconsistency between the models where one would have `hidden_size` -> `hidden_size` layers while the other would have `hidden_size` -> `num_labels` layers.
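For reference, the relevant branch in `SequenceSummary` looks roughly like this (paraphrased, not a verbatim copy of the library code):
```python
from torch import nn

# paraphrase of the projection setup in SequenceSummary.__init__
# (the real code also checks config.summary_use_proj first)
def build_summary_projection(config):
    if getattr(config, "summary_proj_to_labels", False) and config.num_labels > 0:
        num_classes = config.num_labels   # hidden_size -> num_labels
    else:
        num_classes = config.hidden_size  # hidden_size -> hidden_size
    return nn.Linear(config.hidden_size, num_classes)
```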
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc: @sgugger
Still a draft right now. Need to make changes to the tests to adapt to this. Please let me know if this is intended functionality though. | 03-08-2023 08:31:36 | 03-08-2023 08:31:36 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_22020). All of your documentation changes will be reflected on that endpoint.<|||||>Yes. I wanted to get some clarity on that before working on fixing the tests. From a logical perspective, I would believe that the argument should exist since at the end of the day it decides only one thing - "Whether the projection outputs should have config.num_labels or config.hidden_size classes.". As it stands, it will only use `config.hidden_size` classes since the argument doesn't currently exist. Do you know someone who might be able to clarify this?
I believe the tests are failing since this change causes the output of the summary layer to be different. Setting `summary_proj_to_labels=False` causes all tests to pass locally. I'll look into the tests sometime this weekend assuming that we want to include this change.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,019 | closed | fixes the gradient checkpointing of whisper | fixes the gradient checkpointing of whisper
@gante
#21737 | 03-08-2023 08:18:54 | 03-08-2023 08:18:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,018 | closed | PretrainedModel.from_pretrained does not work with load_in_8bit=True, llm_int8_enable_fp32_cpu_offload=True and device_map='auto' | ### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.8
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@Nas
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
quantization_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_enable_fp32_cpu_offload=True)
AutoModelForCausalLM.from_pretrained(path, device_map='auto', quantization_config=quantization_config)
```
If the model does not fit into VRAM, it reports:
```
Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
the quantized model. If you have set a value for `max_memory` you should increase that. To have
an idea of the modules that are set on the CPU or RAM you can print model.hf_device_map.
```
auto-created device map example:
```
{'model.decoder.embed_tokens': 0, 'model.decoder.layers.0': 0, 'model.decoder.layers.1': 0, 'model.decoder.layers.2': 0, 'model.decoder.layers.3': 0, 'model.decoder.layers.4': 0, 'model.decoder.layers.5': 0, 'model.decoder.layers.6': 0, 'model.decoder.layers.7': 0, 'model.decoder.layers.8': 0, 'model.decoder.layers.9': 0, 'model.decoder.layers.10': 0, 'model.decoder.layers.11': 0, 'model.decoder.layers.12': 0, 'model.decoder.layers.13': 0, 'model.decoder.layers.14': 0, 'model.decoder.layers.15': 0, 'model.decoder.layers.16': 0, 'model.decoder.layers.17': 0, 'model.decoder.layers.18': 0, 'model.decoder.layers.19': 0, 'model.decoder.layers.20': 0, 'model.decoder.layers.21': 0, 'model.decoder.layers.22': 0, 'model.decoder.layers.23': 0, 'model.decoder.layers.24': 0, 'model.decoder.layers.25': 'cpu', 'model.decoder.layers.26': 'cpu', 'model.decoder.layers.27': 'cpu', 'model.decoder.layers.28': 'cpu', 'model.decoder.layers.29': 'cpu', 'model.decoder.layers.30': 'cpu', 'model.decoder.layers.31': 'cpu', 'model.decoder.layers.32': 'cpu', 'model.decoder.layers.33': 'cpu', 'model.decoder.layers.34': 'cpu', 'model.decoder.layers.35': 'cpu', 'model.decoder.layers.36': 'cpu', 'model.decoder.layers.37': 'cpu', 'model.decoder.layers.38': 'cpu', 'model.decoder.layers.39': 'cpu', 'model.decoder.norm': 'cpu', 'lm_head': 'cpu'}
```
### Expected behavior
It should auto-create the device_map, quantize what's in VRAM to int8, and keep what's on CPU/RAM as float32.
In fact, if the `device_map` is passed manually it runs correctly. The problem is that `PretrainedModel.from_pretrained` expands `device='auto'` to the actual mapping after populating `modules_to_not_convert`, so modules automatically offloaded to RAM are missing from the list. If I edit modeling_utils.py and expand `device='auto'` before `replace_8bit_linear` it works correctly. | 03-08-2023 08:16:45 | 03-08-2023 08:16:45 | All the weights offloaded to the CPU won't be in int8 though, so the model is not loaded in 8 bits as requested. This is why we throw an error and choose not to support this use case (cc @younesbelkada ).<|||||>Yes, as explained by @sgugger we don't support `device_map=auto` + `llm_int8_enable_fp32_cpu_offload`; you need to pass a custom device map as explained in https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu
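For illustration, here is a minimal sketch of the kind of custom device map the linked docs describe, reusing the module names from the auto-generated map shown above (`path` is the model path from the snippet above; the exact GPU/CPU split is an assumption and depends on your hardware):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Hypothetical split: first 25 decoder layers in int8 on GPU 0,
# the rest (plus the final norm and lm_head) offloaded to CPU in fp32.
device_map = {
    "model.decoder.embed_tokens": 0,
    **{f"model.decoder.layers.{i}": 0 for i in range(25)},
    **{f"model.decoder.layers.{i}": "cpu" for i in range(25, 40)},
    "model.decoder.norm": "cpu",
    "lm_head": "cpu",
}

quantization_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_enable_fp32_cpu_offload=True)
model = AutoModelForCausalLM.from_pretrained(
    path, device_map=device_map, quantization_config=quantization_config
)
```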
The main motivation behind that is that we want to avoid unexpected behavior for users that are new to this feature and prefer to support this only for advanced use cases where users know exactly what they are doing.
I agree though the warning message is slightly misleading and we can phrase it differently <|||||>@sgsdxzy
As #22026 has been merged and it closed this issue, feel free to re-open it if you think that there is still something that needs to be fixed
Thanks! |
transformers | 22,017 | closed | Weight mismatch when using deepspeed zero-stage 3 and pretrained codegen model | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-4.15.0-189-generic-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: True
### Who can help?
@stas @ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. My code for loading the model.
```python
from transformers import AutoModelForCausalLM, AutoConfig
from transformers.models.codegen.modeling_codegen import CodeGenMLP
import argparse
import torch
import time, datetime
import deepspeed
from deepspeed.accelerator import get_accelerator
from torch.utils.data import Dataset
from transformers.activations import ClippedGELUActivation, LinearActivation
from lion_pytorch import Lion
SEQ_LEN = 300
VOCAB_SIZE = 10000
DATA_SIZE = 100
class FakeDataset(Dataset):
def __init__(self, length, seq_len, vocab_size):
self.length = length
self.seq_len = seq_len
self.vocab_size = vocab_size
def __len__(self):
return self.length
def __getitem__(self, index):
input_ids = torch.randint(0, self.vocab_size, (self.seq_len, ))
attention_mask = torch.ones_like(input_ids)
return input_ids, attention_mask
def train():
with deepspeed.zero.Init():
model = AutoModelForCausalLM.from_pretrained(
"Salesforce/codegen-350M-mono",
ignore_mismatched_sizes=True # if False, it would run in error
)
optimizer = Lion(model.parameters(), lr=1e-4, weight_decay=1e-2)
print(f"[{datetime.datetime.today()}] Loading dataset.")
dataset = FakeDataset(DATA_SIZE, SEQ_LEN, VOCAB_SIZE)
print(f"[{datetime.datetime.today()}] Initializing DeepSpeed Engine.")
model_engine, optimizer, trainloader, _ = deepspeed.initialize(
args=args,
model=model,
optimizer=optimizer,
model_parameters=model.parameters(),
training_data=dataset)
model.train()
for i, data in enumerate(trainloader):
model_engine.zero_grad()
optimizer.zero_grad()
input_ids, attn_mask = data[0].cuda(), data[1].cuda()
output = model_engine(input_ids=input_ids,
attention_mask=attn_mask,
labels=input_ids)
model_engine.backward(output['loss'])
model_engine.step()
# 2 pytorch allocator cache flushes since last step. this happens when
# there is high memory pressure and is detrimental to performance. if
# this is happening frequently consider adjusting settings to reduce
# memory consumption. If you are unable to make the cache flushes go
# away consider adding get_accelerator().empty_cache() calls in your
# training loop to ensure that all ranks flush their caches at the
# same time
get_accelerator().empty_cache()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=-1)
parser = deepspeed.add_config_arguments(parser)
args = parser.parse_args()
train()
```
2. Deepspeed config
```json
{
"gradient_accumulation_steps": 1,
"train_micro_batch_size_per_gpu": 1,
"steps_per_print": 1,
"wall_clock_breakdown": true,
"fp16": {
"enabled": true,
"auto_cast": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "Adam",
"params": {
"lr": 0.001,
"betas": [
0.8,
0.999
],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"zero_allow_untested_optimizer": true,
"zero_optimization": {
"stage": 3,
"contiguous_gradients": true,
"overlap_comm": true,
"reduce_scatter": true,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
}
}
}
```
4. Bash script to run the training
```bash
deepspeed --include localhost:0,1,2,3,4,5,6,7 train.py --deepspeed_config 350m.json
```
5. Relevant output snippets. It shows the weird behaviour wherein the model isn't being properly initialized with the pretrained weights.
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/39761308/223641149-1f25d27f-2069-43c3-bddc-0d6cad143be5.png">
### Expected behavior
Model being properly initialized with the pretrained weights when using DeepSpeed ZERO Stage-3. It seems that the model parameters are randomly initialized so far. | 03-08-2023 07:08:56 | 03-08-2023 07:08:56 | Hey, there is something wrong indeed :
> ignore_mismatched_sizes=True # if False, it would run in error
The error which you run into should indicate how to fix the problem (most probably a malformed configuration file)<|||||>Thanks for your quick reply.
Actually, I followed the error message to set it to True. When I set 'ignore_mismatched_sizes' to False, it prints the following:
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/39761308/223704955-699b9db5-906c-4d1a-af62-61f93189d968.png">
<|||||>Ah sorry, you were right in ignoring the mismatches! Yes, there is a special argument to initialise your model using DeepSpeed in transformers, but it does not support DeepSpeed stage 3:
```
* `low_cpu_mem_usage` algorithm:
This is an experimental function that loads the model using ~1x model size CPU memory
Here is how it works:
1. save which state_dict keys we have
2. drop state_dict before the model is created, since the latter takes 1x model size CPU memory
3. after the model has been instantiated switch to the meta device all params/buffers that
are going to be replaced from the loaded state_dict
4. load state_dict 2nd time
5. replace the params/buffers from the state_dict
Currently, it can't handle deepspeed ZeRO stage 3 and ignores loading errors
```
The documentation mentions this.
So this is expected, but @stas00 is the DeepSpeed boss so pinging him for help; this is more a feature request than a bug IMO<|||||>For Non HF-Trainer integration please see:
https://huggingface.co/docs/transformers/main/main_classes/deepspeed#nontrainer-deepspeed-integration
`zero.Init` is already done for you inside the modeling code - you just need to set `dschf = HfDeepSpeedConfig(args.deepspeed_config)` and keep it alive before you call `from_pretrained` - that's it.
I fixed your program to work:
```
from transformers import AutoModelForCausalLM, AutoConfig
from transformers.models.codegen.modeling_codegen import CodeGenMLP
import argparse
import torch
import time, datetime
import deepspeed
from deepspeed.accelerator import get_accelerator
from torch.utils.data import Dataset
from transformers.activations import ClippedGELUActivation, LinearActivation
from lion_pytorch import Lion
SEQ_LEN = 300
VOCAB_SIZE = 10000
DATA_SIZE = 100
class FakeDataset(Dataset):
def __init__(self, length, seq_len, vocab_size):
self.length = length
self.seq_len = seq_len
self.vocab_size = vocab_size
def __len__(self):
return self.length
def __getitem__(self, index):
input_ids = torch.randint(0, self.vocab_size, (self.seq_len, ))
attention_mask = torch.ones_like(input_ids)
return input_ids, attention_mask
def train(args):
from transformers.deepspeed import HfDeepSpeedConfig
dschf = HfDeepSpeedConfig(args.deepspeed_config) # keep this object alive
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")
optimizer = Lion(model.parameters(), lr=1e-4, weight_decay=1e-2)
print(f"[{datetime.datetime.today()}] Loading dataset.")
dataset = FakeDataset(DATA_SIZE, SEQ_LEN, VOCAB_SIZE)
print(f"[{datetime.datetime.today()}] Initializing DeepSpeed Engine.")
model_engine, optimizer, trainloader, _ = deepspeed.initialize(
args=args,
model=model,
optimizer=optimizer,
model_parameters=model.parameters(),
training_data=dataset)
model.train()
for i, data in enumerate(trainloader):
model_engine.zero_grad()
optimizer.zero_grad()
input_ids, attn_mask = data[0].cuda(), data[1].cuda()
output = model_engine(input_ids=input_ids,
attention_mask=attn_mask,
labels=input_ids)
model_engine.backward(output['loss'])
model_engine.step()
# 2 pytorch allocator cache flushes since last step. this happens when
# there is high memory pressure and is detrimental to performance. if
# this is happening frequently consider adjusting settings to reduce
# memory consumption. If you are unable to make the cache flushes go
# away consider adding get_accelerator().empty_cache() calls in your
# training loop to ensure that all ranks flush their caches at the
# same time
get_accelerator().empty_cache()
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=-1)
parser.add_argument('--deepspeed_config', type=str)
args = parser.parse_args()
train(args)
```<|||||>BTW, when you use deepspeed offload w/ LION it will be slow.
You want deepspeed's Adam instead or turn off offload. You shouldn't need it with 8 gpus and this small model. Unless you were just using it for a repro case, still 8 gpus is a lot of sharding.
The Deepspeed team are working on flagging this incompatibility here https://github.com/microsoft/DeepSpeed/pull/2971
Make sure to enable gradient checkpointing - which will save you a ton of gpu memory at a small cost of slowdown. (unrelated to deepspeed)<|||||>Thanks very much. The problem has been solved. |
transformers | 22,016 | closed | `clean_up_tokenization` too many false positives | ### System Info
The method `PreTrainedTokenizerBase.clean_up_tokenization` attempts to fix some quote marks, but breaks quite a lot of the time.
I'm testing various tokenization techniques searching for the holy grail of `original == decode(encode(original))`
Looping through docs in OpenWebText, here's some of the results:

The fix is pretty easy: instead of doing `text.replace(" 's", "'s")`, do `re.sub(r" 's\b", "'s", text)`.
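For illustration, a minimal sketch of the word-boundary-aware variant suggested here (not the current implementation):
```python
import re

def clean_up(text: str) -> str:
    # Only merge a contraction when it ends at a word boundary, so
    # "asking why 'my people' wanted" is left untouched.
    for suffix in ("'s", "'m", "'ve", "'re", "n't"):
        text = re.sub(rf" {suffix}\b", suffix, text)
    return text

print(clean_up("asking why 'my people' wanted"))     # unchanged
print(clean_up("he said it 's fine and I 'm done"))  # -> "he said it's fine and I'm done"
```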
I note that this has already been logged, and the AUTO CLOSED here: https://github.com/huggingface/transformers/issues/6164
Please let me know if you would like to hear my thoughts about auto closing bugs :)
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For any tokenizer `tok`, note the output of:
```py
tok.decode(tok("asking why 'my people' wanted").input_ids)
```
### Expected behavior
Output should be "asking why 'my people' wanted", not "asking why'my people' wanted" | 03-08-2023 04:22:46 | 03-08-2023 04:22:46 | Hey! Thanks for pointing this out! I do agree with you on this one, I am also guessing that if you have `he said that 'revenge' was over` the same problem will occur.
Currently this is what is being used :
```python
out_string = (
out_string.replace(" .", ".")
.replace(" ?", "?")
.replace(" !", "!")
.replace(" ,", ",")
.replace(" ' ", "'")
.replace(" n't", "n't")
.replace(" 'm", "'m")
.replace(" 's", "'s")
.replace(" 've", "'ve")
.replace(" 're", "'re")
)
```
So a lot of patterns are going to get swallowed up by this.
cc @Narsil is it not too breaking to switch to the `re` pattern? The same thing happens in `wordpiece.rs`<|||||>> holy grail of original == decode(encode(original))
The Bloom tokenizer achieves this if you're looking for it, with the exception that there's a very old default: https://github.com/huggingface/transformers/pull/20846
@ArthurZucker
I feel really bad about making changes to such old things. It's been in use for so long I don't feel it's a bug anymore but a feature. Allowing users to disengage from the cleanup (and maybe make it a default for newly created tokenizers) is OK, but modifying existing behavior, I don't feel good about (in theory I like it, but I'm fairly confident it will blow up as soon as released, and if it blows up a little bit later, then we'll be in a worse position even since you have 2 different behavior unable to find a good compromise).
My take is that the replace is bad, but the cleanup itself is bad and should just be not used anymore (and for BC we should just modify future behavior, not the current one).
<|||||>Yes this method seems like a good candidate for the great deprecation, and we can see if we want to officially support something better.<|||||>I appreciate the reluctance to 'move fast and break things' - nice to see :)
As a user finding his way around the Hugging Face packages, it did strike me as odd that there was extra magic in the `transformers` tokenizer that wasn't in the underlying `tokenizers` tokenizer. It certainly makes troubleshooting difficult, so my humble vote would go toward deprecating.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,014 | closed | Support `padding_side` in `Blip2Processor` | ### Feature request
Support different `padding_side` for `Blip2Processor`.
### Motivation
When I use BLIP2 LLM, I found that the padding style is different between decoder-only model (opt-2.7b for example) and encoder-decoder model (flan-t5-xl for example). So I assume that the paddings are different from `Salesforce/blip2-opt-2.7b` and `Salesforce/blip2-flan-t5-xl`. But actually I got the same padding results, the default is `padding_side`=right.
Code example:
```
prompt = ["hello world", "Question: how many cats are there? Answer:"]
processor_1 = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
processor_2 = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
inputs_1 = processor_1(text=prompt, return_tensors="pt", padding=True)
inputs_2 = processor_2(text=prompt, return_tensors="pt", padding=True)
```
Output (the same for inputs_1 and inputs_2):
```
{'input_ids': tensor([[ 2, 42891, 232, 1, 1, 1, 1, 1, 1, 1,
1],
[ 2, 45641, 35, 141, 171, 10017, 32, 89, 116, 31652,
35]]), 'attention_mask': tensor([[1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
```
I believe that right padding for flan-t5 will give the wrong outputs when calling `generate`, correct me if I am wrong: (transformers/generation/utils.py)
<img width="940" alt="image" src="https://user-images.githubusercontent.com/16580382/223613908-3ca57575-5536-4296-a449-e49ad3b4fa90.png">
Expected outputs (when setting `padding_side`=left):
```
{'input_ids': tensor([[ 1, 1, 1, 1, 1, 1, 1,
1, 2, 42891, 232],
[ 2, 45641, 35, 141, 171, 10017, 32, 89, 116, 31652,
35]]), 'attention_mask': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
```
### Your contribution
I found that this `padding_side` feature exists in `AutoTokenizer`; is it possible to move this feature into `Blip2Processor`? | 03-08-2023 03:35:19 | 03-08-2023 03:35:19 | Hi,
You can achieve that by simply updating the `padding_side` attribute of the processor's tokenizer:
```
processor.tokenizer.padding_side = "left"
```
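For example, applied to the snippet from the issue description (a sketch; only the padding side changes):
```python
from transformers import Blip2Processor

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
processor.tokenizer.padding_side = "left"

prompt = ["hello world", "Question: how many cats are there? Answer:"]
inputs = processor(text=prompt, return_tensors="pt", padding=True)
# input_ids / attention_mask are now left-padded: the padding tokens and the
# zeros in the attention mask appear on the left of the shorter sequence.
```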
Note that Blip2Processor is just a wrapper around both the image processor and the tokenizer.<|||||>Thanks a lot!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,013 | closed | Dataset load problem when using own data to run run_mim.py | ### System Info
transformers 4.27.0
python 3.8.16
Ubuntu 20.04
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I directly used the script run_mim.py (examples/pytorch/image-pretraining) to fine-tune the ViT model on my own data, and I got the error FileNotFoundError: Unable to find 'my dataset absolute path' at /. However, when I run the script run_image_classification_no_trainer.py (examples/pytorch/image-classification) to fine-tune the ViT model on the same data with the same path, everything is ok.
### Expected behavior
I compared the implementations of run_mim.py and run_image_classification_no_trainer.py. It seems that the former has some problems when setting train_dir.
In run_image_classification_no_trainer.py, the implementation is data_files["train"] = os.path.join(args.train_dir, "**") seeing https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification_no_trainer.py#L266
In run_mim.py, the implementation is data_files["train"] = self.train_dir seeing https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-pretraining/run_mim.py#L109
The latter misses "**" in the file path used for loading the dataset.
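For reference, a sketch of the kind of local change described above, mirroring the classification script (the argument names are assumed to follow run_mim.py's data arguments):
```python
import os

def build_data_files(train_dir=None, validation_dir=None):
    # Glob every file under the directory, as run_image_classification_no_trainer.py
    # does, instead of passing the bare directory path to load_dataset.
    data_files = {}
    if train_dir is not None:
        data_files["train"] = os.path.join(train_dir, "**")
    if validation_dir is not None:
        data_files["validation"] = os.path.join(validation_dir, "**")
    return data_files
```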
I changed the code in my local file, and the script run_mim.py runs well. | 03-08-2023 03:33:07 | 03-08-2023 03:33:07 | Would you like to open a PR with your fix?<|||||>> Would you like to open a PR with your fix?
I am not sure whether my solution is correct, I hope your organization could check it. If the solution is ok, it is my honor to open a new PR.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,012 | closed | update: bertology paper | # What does this PR do?
Add additional reference papers for the documentation of BERTology.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-08-2023 02:36:42 | 03-08-2023 02:36:42 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,011 | closed | Blip2ForConditionalGeneration.from_pretrained is limited by 100% CPU usability (on one single core) | ### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.19.0-31-generic-x86_64-with-glibc2.36
- Python version: 3.10.6
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 2.0.0.dev20230209+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run this code on a computer with stron GPU and strong CPU:
```
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
import torch
device = "cuda"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto")
with torch.device("cuda"):
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto")
for i in range(1, 923):
raw_image = Image.open('UIDimgs/' + str(i) + '.jpg').convert('RGB')
inputs = processor(raw_image, return_tensors="pt").to(device, torch.float16)
out = model.generate(**inputs, max_length=64, min_length=20)
print(i,': ',processor.decode(out[0], skip_special_tokens=True))
```
### Expected behavior
Hello!
When running the above code, the utilization of my RTX 4090 is only around 30%. My CPU usage is pinned at 100% (of a single core) the whole time. Unfortunately, Python here only uses one single core of my AMD 5900X (12+12 cores).
Can anyone see an error in my code? How can I bring the code to use more than only one single CPU core? | 03-08-2023 01:55:36 | 03-08-2023 01:55:36 | cc @younesbelkada <|||||>Hello @Marcophono2
Thanks for the issue, can you try:
```python
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
import torch
device = "cuda"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto")
print(model.hf_device_map)
for i in range(1, 923):
raw_image = Image.open('UIDimgs/' + str(i) + '.jpg').convert('RGB')
inputs = processor(raw_image, return_tensors="pt").to(device, torch.float16)
out = model.generate(**inputs, max_length=64, min_length=20)
print(i,': ',processor.decode(out[0], skip_special_tokens=True))
```
And let me know what you get for `print(model.hf_device_map)`?<|||||>Thank you, @younesbelkada !The result I get is
`{'': 0}`<|||||>This is a bit strange @Marcophono2 ,
`{'': 0}` indicates that the entire model is on the GPU device. Can you confirm with us the GPU VRAM of your gpu?
Also I would replace:
```python
with torch.device("cuda"):
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto")
```
With:
```python
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto")
```
Also make sure to use the latest `accelerate` and `bitsandbytes` versions:
```bash
pip install --upgrade accelerate bitsandbytes
```<|||||>Yes, that is correct, @younesbelkada , the entire model is in the VRAM (RTX 4090). There is not much space left but it's matching. ;)
Before I tried without
`with torch.device("cuda"):`
I updated accelerate from 0.16 to 0.17 (bitsandbytes was up to date) but no difference. Meanwhile I am not sure anymore if this 100% cpu usage is really a "limit". When I analyse how the load is split up then I can see that sometimes 2 cores are working. One with 40%, the other with 61% (as an example). Then it would be just an accident. But what would then be the bottleneck that my GPU usability is never > 32%?<|||||>It seems that the model loading in 8 bit is the reason for the 100% cpu (one core/thread) limitation. I replaced the code now with
`model3 = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16).to("cuda")`
and the cpu can use up to 200% which the gpu usage is at 60%. Still not perfect but double performance. But I do not want to use the 2.7 model. :-) I want to use the blip2-flan-t5-xxl model which is too large for my VRAM as long as I do not use the 8 bit version. Has anyone an idea how I can activate also the other cpu cores when using 8 bit?<|||||>Sorry @ArthurZucker , but as you seem to be very near at the core, may be you have an idea for this issue I posted last week, too?<|||||>Hey, I think setting `devic_map = "auto"` should help balancing the load when using the `flan-t5-xxl` model to both CPU and GPU. This should allow you to run on both. You need `accelerate` library for this to work! Would that fix your issue? <|||||>Nope, @ArthurZucker . I already have device_map = "auto" included in my code. Or do you mean to implement it anywhere else too? Also accelerate is installed.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,010 | closed | Can't import Blip2Processor | ### System Info
I was trying to follow [this tutorial](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2Model.forward.example) and ran into the following issue:
```
ImportError: cannot import name 'Blip2Processor' from 'transformers' (/usr/local/lib/python3.8/dist-packages/transformers/__init__.py)
```
version: '4.26.1'
@sgugger @ArthurZucker @amyeroberts (there is no PIC for multimodal models so tagging both PICs)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Just run the following example: https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2Model.forward.example
### Expected behavior
Should import the preprocessor | 03-08-2023 00:17:08 | 03-08-2023 00:17:08 | You need to install transformers from source to get access to BLIP-2:
```
pip install git+https://github.com/huggingface/transformers
```<|||||>@sgugger thank you! Just one minor feedback since you are the docs PIC, if a note modal or just a highlighted note could be added informing if a module hasn't been added to the stable release would be helpful. (If it is already there, I might have missed it, apologies.)<|||||>Note that it's not in the [stable documentation](https://huggingface.co/docs/transformers/index) (which is what is viewed by default) only the [main documentation](https://huggingface.co/docs/transformers/main/en/index). Did you get on the page via a search engine maybe and did not realize you were not on the documentation of the latest release?<|||||>> Note that it's not in the [stable documentation](https://huggingface.co/docs/transformers/index) (which is what is viewed by default) only the [main documentation](https://huggingface.co/docs/transformers/main/en/index). Did you get on the page via a search engine maybe and did not realize you were not on the documentation of the latest release?
I am not the author but this is exactly what happened to me - I did not see at all there's a dropdown for versions so I assumed BLIP2 is just available. |
transformers | 22,009 | closed | DataCollatorForWholeWordMask does not handle numpy inputs when return_tensors="tf" | ### System Info
- `transformers` version: 4.26.1
- Platform: macOS-13.2.1-x86_64-i386-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@gante @Rocketknight1
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import numpy as np
from transformers import AutoTokenizer, DataCollatorForWholeWordMask
features = [{"input_ids": np.array(list(range(10)))}, {"input_ids": np.array(list(range(10)))}]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
data_collator = DataCollatorForWholeWordMask(tokenizer, return_tensors="tf")
batch = data_collator(features)
```
```
InvalidArgumentError Traceback (most recent call last)
Cell In[1], line 9
6 tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
7 data_collator = DataCollatorForWholeWordMask(tokenizer, return_tensors="tf")
----> 9 batch = data_collator(features)
File ~/venv/lib/python3.9/site-packages/transformers/data/data_collator.py:43, in DataCollatorMixin.__call__(self, features, return_tensors)
41 return_tensors = self.return_tensors
42 if return_tensors == "tf":
---> 43 return self.tf_call(features)
44 elif return_tensors == "pt":
45 return self.torch_call(features)
File ~/venv/lib/python3.9/site-packages/transformers/data/data_collator.py:912, in DataCollatorForWholeWordMask.tf_call(self, examples)
910 mask_labels.append(self._whole_word_mask(ref_tokens))
911 batch_mask = _tf_collate_batch(mask_labels, self.tokenizer, pad_to_multiple_of=self.pad_to_multiple_of)
--> 912 inputs, labels = self.tf_mask_tokens(batch_input, batch_mask)
913 return {"input_ids": inputs, "labels": labels}
File ~/venv/lib/python3.9/site-packages/transformers/data/data_collator.py:1067, in DataCollatorForWholeWordMask.tf_mask_tokens(self, inputs, mask_labels)
1065 indices_random = self.tf_bernoulli(input_shape, 0.1) & masked_indices & ~indices_replaced
1066 random_words = tf.random.uniform(input_shape, maxval=len(self.tokenizer), dtype=tf.int64)
-> 1067 inputs = tf.where(indices_random, random_words, inputs)
1069 # The rest of the time (10% of the time) we keep the masked input tokens unchanged
1070 return inputs, labels
File ~/venv/lib/python3.9/site-packages/tensorflow/python/util/traceback_utils.py:153, in filter_traceback.<locals>.error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb
File ~/venv/lib/python3.9/site-packages/tensorflow/python/framework/ops.py:7215, in raise_from_not_ok_status(e, name)
7213 def raise_from_not_ok_status(e, name):
7214 e.message += (" name: " + name if name is not None else "")
-> 7215 raise core._status_to_exception(e) from None
InvalidArgumentError: cannot compute SelectV2 as input #2(zero-based) was expected to be a int64 tensor but is a int32 tensor [Op:SelectV2]
```
### Expected behavior
No exception.
This is a pretty simple bug. Seems we just need to cast the inputs to tf.int64 [here](https://github.com/huggingface/transformers/blob/b338414e614a30af5f940269484ef15bf716d078/src/transformers/data/data_collator.py#L910) which we do in `DataCollatorForLanguageModeling` but not `DataCollatorForWholeWordMask`
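To make the dtype clash concrete, here is a small standalone sketch of the failing `tf.where` call and the cast that resolves it:
```python
import tensorflow as tf

mask = tf.constant([True, False, True])
random_words = tf.random.uniform([3], maxval=10, dtype=tf.int64)
inputs_int32 = tf.constant([1, 2, 3], dtype=tf.int32)

# tf.where(mask, random_words, inputs_int32)  # InvalidArgumentError: int64 vs int32

inputs_int64 = tf.cast(inputs_int32, tf.int64)
print(tf.where(mask, random_words, inputs_int64))  # fine once both branches are int64
```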
This is necessary to use the data collator with https://github.com/huggingface/datasets `datasets.Dataset.to_tf_dataset` since it implicitly formats data as `numpy` causing it to come into the data collator as int32 | 03-08-2023 00:16:46 | 03-08-2023 00:16:46 | Hey @dwyatte 👋
At a first glance, a missing `tf.cast` seems to be indeed the problem. Would you be interested in opening a PR with the fix? 🤗 <|||||>@gante sure thing, here you go: https://github.com/huggingface/transformers/pull/22032. Requested you for review |
transformers | 22,008 | closed | DataCollatorForSpanPreTraining | ### Feature request
It seems that there's already a script existing for [T5 pretraining](https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_t5_mlm_flax.py) that has a DataCollator, but only available in Flax. Would we be able to add a Data Collator for the Span pretraining task that's implemented in the T5 papers?
### Motivation
Currently it's not super easy to run T5 pretraining in Pytorch with Transformers
### Your contribution
I can help with the PR! | 03-07-2023 23:02:42 | 03-07-2023 23:02:42 | Transformers is a library of models, not data collators. You can adapt the code of this data collator to PyTorch in your code, but we won't have it in the main library (the same way the Flax one is just in an example script).<|||||>No worries, I thought maybe it would fit nicely within the same class as [DataCollatorForLanguageModeling](https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/data/data_collator.py#L609) but totally understand wanting to keep the scope contained!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,007 | closed | Fix case when using --gradient_accumulation_steps with DDP disabled. | # What does this PR do?
When --gradient_accumulation_steps option is used with DDP disabled, HF has a call to ```model.no_sync``` which doesn't exist. This PR is to fix the issue of ```model.no_sync```
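For context, one generic way to guard such a call is sketched below; this is only an illustration of the failure mode, not necessarily the approach taken in this PR:
```python
import contextlib

def no_sync_context(model):
    # DistributedDataParallel exposes no_sync(); a plain (non-DDP) model does not,
    # so fall back to a no-op context manager in that case.
    if hasattr(model, "no_sync"):
        return model.no_sync()
    return contextlib.nullcontext()
```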
Fixes # (issue)
https://github.com/aws-neuron/aws-neuron-sdk/issues/635
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 03-07-2023 21:30:10 | 03-07-2023 21:30:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> I think it would be easier to just adapt [this property](https://github.com/huggingface/transformers/blob/dfe9a3197364c7f0e2169d7c16c357c9c1311cb9/src/transformers/training_args.py#L1802) to add the torch_neuroncore_available there.
@sgugger I have made the required changes to this PR itself. Please take a look. TIA |
transformers | 22,006 | closed | Update tiny model creation script and some others files | # What does this PR do?
The original goal is to update tiny model creation script, so we can create tiny models for some (newly added) model classes. It turns out some files are needed to be updated too. See my own review comments.
Note: This PR doesn't imply we are able to create tiny models for all involved model classes in this PR. Some model classes require more work to be done (`speecht5, tvlt` for example), but let me do it in separate PR(s). | 03-07-2023 19:53:57 | 03-07-2023 19:53:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 22,005 | closed | Bug in t5x to PyTorch weights conversion script | ### System Info
transformers version: 4.26.1
Platform: Ubuntu 20.04.5 LTS (Focal Fossa)
Python version: 3.8
Huggingface_hub version: 0.12.1
PyTorch version (GPU?): 1.13.1 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): 0.6.6 (GPU)
Jax version: 0.4.5
JaxLib version: 0.4.4
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This is the official example for script `transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py`
1. `gsutil -m cp -r gs://t5-data/pretrained_models/t5x/t5_1_1_small $HOME/`
2. `python3 convert_t5x_checkpoint_to_pytorch.py --t5x_checkpoint_path=$HOME/t5_1_1_small --config_file=config.json --pytorch_dump_path=$HOME/t5_1_1_small_pt`
Where `config.json` is a config for `t5-small `(https://huggingface.co/t5-small/blob/main/config.json)
When running this, I get an error:
> Traceback (most recent call last):
File "/root/transformers/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 231, in <module>
convert_t5x_checkpoint_to_pytorch(
File "/root/transformers/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 200, in convert_t5x_checkpoint_to_pytorch
load_t5x_weights_in_t5(model, config, t5x_checkpoint_path, is_encoder_only)
File "/root/transformers/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 181, in load_t5x_weights_in_t5
state_dict = make_state_dict(converted, is_encoder_only)
File "/root/transformers/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 160, in make_state_dict
state_dict = collections.OrderedDict([(k, torch.from_numpy(v.copy())) for (k, v) in converted_params.items()])
File "/root/transformers/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py", line 160, in <listcomp>
state_dict = collections.OrderedDict([(k, torch.from_numpy(v.copy())) for (k, v) in converted_params.items()])
TypeError: expected np.ndarray (got Array)
This can be fixed easily by importing numpy and changing line 160 to:
`state_dict = collections.OrderedDict([(k, torch.from_numpy(np.array(v.copy()))) for (k, v) in converted_params.items()])`
### Expected behavior
After converting `v` to `np.array(v)`, the script executes fine and returns
> All model checkpoint weights were used when initializing T5ForConditionalGeneration.
>All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at /root/t5_1_1_small_pt.
If your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training.
loading configuration file /root/t5_1_1_small_pt/generation_config.json
>Generate config GenerationConfig {
"_from_model_config": true,
"decoder_start_token_id": 0,
"eos_token_id": 1,
"pad_token_id": 0,
"transformers_version": "4.26.1"
}
>Done | 03-07-2023 19:13:14 | 03-07-2023 19:13:14 | cc @ArthurZucker and @younesbelkada <|||||>hello @rinapch
Thanks for the issue,
we used the same script to convert `flan-ul2` and did not run into any issue. Can you share with us the `t5x` version you used?<|||||>This is related to an update of jax and jax.numpy. `torch.FloatTensor(weights["token_embedder"]["embedding"])` does not work anymore as it was reported. Will have a look at the broader impact this has on our codebase. Thanks for reporting!<|||||>Hey @younesbelkada! As far as I know, t5x does not really release versions (their `version.py` still states "0.0.0" - https://github.com/google-research/t5x/blob/main/t5x/version.py). I used a clone of their repo to build the t5x module, and I cloned it on Monday, so the code is up to date <|||||>hi @rinapch
can you try:
```bash
pip install git+https://github.com/google-research/t5x@45c1a9d02321afeadb43f496de83c52421f52d66
```
this is the version of `t5x` that worked fine on my setup<|||||>Repeated the steps with this version and I get the following error:
> File "convert_t5x_checkpoint_to_pytorch.py", line 36, in <module>
from t5x import checkpoints
File "/root/.cache/pypoetry/virtualenvs/chatbot-JrwxGvoq-py3.8/lib/python3.8/site-packages/t5x/__init__.py", line 17, in <module>
import t5x.adafactor
File "/root/.cache/pypoetry/virtualenvs/chatbot-JrwxGvoq-py3.8/lib/python3.8/site-packages/t5x/adafactor.py", line 63, in <module>
from t5x import utils
File "/root/.cache/pypoetry/virtualenvs/chatbot-JrwxGvoq-py3.8/lib/python3.8/site-packages/t5x/utils.py", line 46, in <module>
from t5x import checkpoints
File "/root/.cache/pypoetry/virtualenvs/chatbot-JrwxGvoq-py3.8/lib/python3.8/site-packages/t5x/checkpoints.py", line 160, in <module>
orbax.checkpoint.utils.register_ts_spec_for_serialization()
AttributeError: module 'orbax.checkpoint.utils' has no attribute 'register_ts_spec_for_serialization'<|||||>@rinapch
Can you try with: `orbax @ git+https://github.com/google/orbax@4ca7a3b61081e91323c89cf09f8c1a53c06cccda` ?
```bash
pip install git+https://github.com/google/orbax@4ca7a3b61081e91323c89cf09f8c1a53c06cccda
```<|||||>This worked, yep!<|||||>Awesome, feel free to close the issue, so the fix was to:
```bash
pip install git+https://github.com/google-research/t5x@45c1a9d02321afeadb43f496de83c52421f52d66
pip install git+https://github.com/google/orbax@4ca7a3b61081e91323c89cf09f8c1a53c06cccda
``` |
transformers | 22,004 | closed | Clip embeddings text/vision missmatch for the model 'laion/CLIP-ViT-H-14-laion2B-s32B-b79K' | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
- Python version: 3.10.9
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I use the text and vision models of the same CLIP model and got embeddings with different dimensionalities, which is at odds with the idea behind CLIP models.
Here is the source code to reproduce (outputs as comments)
```
CACHE_PATH = "../models_cache"
from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer, CLIPVisionModel, AutoProcessor, CLIPConfig
clip_model = CLIPTextModel.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K", cache_dir=CACHE_PATH)
clip_model = clip_model.to("cuda")
tokenizer = CLIPTokenizer.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K", cache_dir=CACHE_PATH)
clip_v_model = CLIPVisionModel.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K", cache_dir=CACHE_PATH)
clip_v_model = clip_v_model.to("cuda")
v_preprocess = AutoProcessor.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K", cache_dir=CACHE_PATH)
....
image = Image.open(img_filename)
prompt = "some prompt"
with torch.no_grad():
inputs = tokenizer([prompt], padding=True, return_tensors="pt").to(gpu_device)
outputs = clip_model(**inputs)
print(outputs.pooler_output.shape) # torch.Size([1, 1024])
print(outputs.last_hidden_state.shape) # torch.Size([1, 12, 1024])
inputs = v_preprocess(images=image, return_tensors="pt").to(gpu_device)
image_features = clip_v_model(**inputs)
print(image_features.pooler_output.shape) # torch.Size([1, 1280]) !!!!
print(image_features.last_hidden_state.shape) # torch.Size([1, 257, 1280])
```
### Expected behavior
I expect CLIPVisionModel to produce embedding with the shape [1, 1024]. | 03-07-2023 19:08:51 | 03-07-2023 19:08:51 | cc @amyeroberts <|||||>Hi, this particular model (laion/CLIP-ViT-H-14-laion2B-s32B-b79K) uses a different `hidden_size` for the text and vision encoders (1024 and 1280 respectively), but they get projected to the same dimensionality using a linear projection layer (for this model, the `projection_dim` is 1024 as seen in the [config](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/blob/main/config.json#L177)). It's recommended to use the [get_text_features](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPModel.get_text_features) and [get_image_features](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPModel.get_image_features) methods of `CLIPModel` to get embeddings which have the same dimensionality. The pooler output is pre-projection.<|||||>@NielsRogge Thank you for the explanation |
transformers | 22,003 | open | Add X-Decoder Model | ### Model description
X-Decoder is a generalized decoding pipeline that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder is the first work that provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks.
The model exhibits strong transferability to a wide range of downstream tasks in both zero-shot and fine-tuning settings, achieving state-of-the-art open-vocabulary segmentation and referring segmentation on 10 settings of 7 datasets and should be a valuable addition to transformers library
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Paper: https://arxiv.org/pdf/2212.11270.pdf
Code: https://github.com/microsoft/X-Decoder
Weights: https://huggingface.co/spaces/xdecoder/Demo/blob/main/xdecoder_focalt_last.pt
Author: @eltociear
Cc: @NielsRogge @alaradirik | 03-07-2023 16:42:40 | 03-07-2023 16:42:40 | Hi @ChanBong, thanks for opening the issue!
You can expect to see an X-Decoder PR in the next two weeks :)<|||||>Hi @alaradirik, can we please collaborate in adding this model?<|||||>Hi @atharvakavitkar, the PR is almost done but won't include the _referring image editing_ task, which requires integration with Stable Diffusion inpainting. Perhaps you could create a tutorial or demo for this task?<|||||>Hi @alaradirik, thank you for reaching out to me. I must admit that I have not yet added a model to HuggingFace. But I really want to learn how to do it. Would creating this tutorial be the right step? Or should I search for a simpler model to implement from scratch?
transformers | 22,002 | closed | Unable to create a keras model with a pretrained TFBertModel using inputs_embeds as inputs | ### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@Rocketknight1 @gant
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to create a BERT layer inside a Keras Model to fine-tune it.
Colab link of the code sample: [Colab link](https://colab.research.google.com/drive/1rirX6R_hG3VfkyxbiD7-x5AbAEUe7cNK?usp=sharing)
This is the code snippet:
```
import tensorflow as tf
from keras.models import Model
from keras.layers import Input, Dense, Dropout
from transformers import TFBertModel
inputs_embeds = Input(shape=(3,5,))
encoder = TFBertModel.from_pretrained("bert-base-uncased")
embedding = encoder(inputs_embeds=inputs_embeds)
x = Dense(32, activation="relu")(embedding)
x = Dropout(0.1)(x)
outputs = Dense(7, activation="linear")(x)
model = Model(inputs=inputs_embeds, outputs=outputs)
```
This is the error I get:
```
Some layers from the model checkpoint at bert-base-uncased were not used when initializing TFBertModel: ['mlm___cls', 'nsp___cls']
- This IS expected if you are initializing TFBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the layers of TFBertModel were initialized from the model checkpoint at bert-base-uncased.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertModel for predictions without further training.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-26-57294296cc4b>](https://localhost:8080/#) in <module>
7
8 encoder = TFBertModel.from_pretrained("bert-base-uncased")
----> 9 embedding = encoder(inputs_embeds=inputs_embeds)
10 x = Dense(32, activation="relu")(embedding)
11 x = Dropout(0.1)(x)
1 frames
[/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py](https://localhost:8080/#) in error_handler(*args, **kwargs)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
[/usr/local/lib/python3.8/dist-packages/keras/utils/layer_utils.py](https://localhost:8080/#) in split_out_first_arg(self, args, kwargs)
807 inputs = kwargs.pop(self._arg_names[0])
808 else:
--> 809 raise ValueError(
810 "The first argument to `Layer.call` must always be passed."
811 )
ValueError: The first argument to `Layer.call` must always be passed.
``` | 03-07-2023 15:54:43 | 03-07-2023 15:54:43 | Hi @Giorgia3, this is unfortunately part of Keras that we can't work around! What's happening is that our layers can take multiple arguments, but Keras insists that the first argument is always present.
The first argument to a `TFBertModel` is `input_ids`, which is a sequence of integer tokens. However, you can also pass pre-embedded float `input_embeds`, which is what you're doing in your example. The error arises because you have not passed `input_ids`.
What you're trying to do is totally reasonable if you're already embedding your inputs in some other way! But if you want to make it work while respecting the "first argument must be passed" rule, you should instead pass a dict of inputs to the first argument of the Model, and this will be unpacked and passed to the corresponding arguments. All TF models in `transformers` will understand this input and unpack it correctly. So the line
`embedding = encoder(inputs_embeds=inputs_embeds) `
would be replaced by
`embedding = encoder({"inputs_embeds": inputs_embeds}) `
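For illustration, a fuller sketch of that pattern (the embedding dimension must match the model's hidden size, 768 for `bert-base-uncased`, and the head layers below mirror the original snippet):
```python
import tensorflow as tf
from transformers import TFBertModel

inputs_embeds = tf.keras.Input(shape=(3, 768), dtype=tf.float32)

encoder = TFBertModel.from_pretrained("bert-base-uncased")
bert_outputs = encoder({"inputs_embeds": inputs_embeds})

# .last_hidden_state (equivalently bert_outputs[0]) is the sequence output.
x = tf.keras.layers.Dense(32, activation="relu")(bert_outputs.last_hidden_state)
x = tf.keras.layers.Dropout(0.1)(x)
outputs = tf.keras.layers.Dense(7, activation="linear")(x)

model = tf.keras.Model(inputs=inputs_embeds, outputs=outputs)
```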
Remember that you should only use `inputs_embeds` if your inputs are already embeddings with the right dimension, though! If you just want to pass integer tokens, which is much more common, use the first argument `input_ids`.<|||||>Thank you very much, it worked!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,001 | closed | Is there a data leakage in causal masking? | ### System Info
Following [this tutorial](https://huggingface.co/course/chapter7/6?fw=pt) on training a causal language model from scratch. I found the [source code](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/gpt2/modeling_gpt2.py#L666) for the model they use (GPT2). On line 195 we define “causal_mask”. I tried commenting out this line and defining a new “causal_mask” with the same shape but either all True or all False entries (instead of the triangle masking). Though, the model still learned in both cases to generate natural language. This is unexpected as if all the inputs are masked all the time the model should not learn to generate coherent text. Am I missing something or is there data leakage?

I don't know if the following is relevant to the issue, but I also find that on line 822 we have "attention_mask" which from the comments suppose to mask out as well:
>
> # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
> # masked positions, this operation will create a tensor which is 0.0 for
> # positions we want to attend and the dtype's smallest value for masked positions.
> # Since we are adding it to the raw scores before the softmax, this is
> # effectively the same as removing these entirely.
But I find that if I print
`print('attention_mask', torch.min(attention_mask))`
the result is always -0.0. So I assume this is not actually masking anything for some reason?
### Who can help?
@ArthurZucker, @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. Comment line 195 which defines "causal_mask".
2. Instead, define causal_mask as:
```
causal_mask = torch.rand(1,1,context_length,context_length)
causal_mask = causal_mask > 110.0 # all False
# or
causal_mask = causal_mask > 0.0 # all True
causal_mask = causal_mask.to(device)
```
3. Run script
Note that I'm using a small dataset with 10 short paragraphs.
### Expected behavior
Masking all the inputs all the time should not allow the model to learn to generate natural language. Instead, the model should generate random text. | 03-07-2023 15:43:58 | 03-07-2023 15:43:58 | Hey!
1. The `attention_mask` is essentially a padding mask, which tells the model where the pad tokens are.
2. The `causal_mask` defined with :
```python
# if only "normal" attention layer implements causal mask
query_length, key_length = query.size(-2), key.size(-2)
causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length]
mask_value = torch.finfo(attn_weights.dtype).min
# Need to be a tensor, otherwise we get error: `RuntimeError: expected scalar type float but found double`.
# Need to be on the same device, otherwise `RuntimeError: ..., x and y to be on the same device`
mask_value = torch.full([], mask_value, dtype=attn_weights.dtype).to(attn_weights.device)
attn_weights = torch.where(causal_mask, attn_weights.to(attn_weights.dtype), mask_value)
```
line 195 as you mention is the actual causal mask that is used in the SelfAttention, right before the softmax.
When we create this attention mask, we make sure that the values we want to mask are set to `mask_value = torch.finfo(attn_weights.dtype).min`, which is a very large *negative* number. What you are using is a causal mask with values 0 and 1, which will not affect the attention scores.
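A tiny standalone sketch of why the very negative value matters (the numbers are illustrative):
```python
import torch

scores = torch.tensor([[1.0, 2.0, 3.0]])
keep = torch.tensor([[True, True, False]])  # we want to hide the last position

mask_value = torch.full([], torch.finfo(scores.dtype).min, dtype=scores.dtype)

# masking with the dtype's minimum -> the hidden position gets ~0 probability after softmax
print(torch.softmax(torch.where(keep, scores, mask_value), dim=-1))  # ~[0.27, 0.73, 0.00]

# "masking" by only zeroing the score still leaves it with real probability mass
print(torch.softmax(scores * keep, dim=-1))  # ~[0.24, 0.66, 0.09]
```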
If you are using a pretrained model, it's normal that this does not affect it. If you are training from scratch, it is also normal that training appears to work, but at inference time the model will perform worse than a properly trained model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 22,000 | closed | Expanding static features when embedding - bug | ### System Info
Python 3.9, Pycharm
### Who can help?
@sgugger @ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Here is the training script we used:
```py
import torch
import pandas as pd
from torch.utils.data import Dataset, DataLoader
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerModel, TimeSeriesTransformerForPrediction
from Preprocess import create_dataset
class TimeSeriesDataset(Dataset):
def __init__(self, subjects_dict):
self.subjects_dict = subjects_dict
self.subjects = list(subjects_dict.keys())
def __len__(self):
return len(self.subjects)
def __getitem__(self, idx):
subject = self.subjects[idx]
subject_dict = self.subjects_dict[subject]
# df_numpy = df.to_numpy()
# inputs = torch.tensor(df[['past_values', 'future_values']].values, dtype=torch.float32)
# inputs = torch.tensor()
return subject_dict
# Instantiating the dataset
directory = 'D:\Final Project\TASK_PCC_PFC\TEMP'
subjects_dict = create_dataset(directory)
dataset = TimeSeriesDataset(subjects_dict)
# Creating the dataloader
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
# Instantiating the TimeSeriesTransformerForPrediction
# model = TimeSeriesTransformerForPrediction
embedding_dimension = [349]
cardinality = [15]#[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] #[15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15, 15]#
# Initializing a default Time Series Transformer configuration
configuration = TimeSeriesTransformerConfig(prediction_length = 327, lags_sequence = [0, 0, 0], embedding_dimension = embedding_dimension,
num_static_categorical_features = 1, encoder_attention_heads = 2, decoder_attention_heads = 2, cardinality =cardinality )
# Randomly initializing a model (with random weights) from the configuration
model = TimeSeriesTransformerModel(configuration)
# Accessing the model configuration
configuration = model.config
# we don't know if passing the data as a dataframe instead of a tensor would work
# currently model.train() is throwing an error; maybe we need to use a GPU? TODO
# Setting the model to training mode
model.train()
# Defining the loss function and optimizer
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Training loop
for epoch in range(100):
for batch in dataloader:
# Forward pass
outputs = model(
past_values=batch["past_values"],
past_time_features=batch["past_time_features"],
past_observed_mask=None,
static_categorical_features=batch['static_categorical_features'],
static_real_features=batch['static_real_features'],
future_values=batch["future_values"],
future_time_features=batch["future_time_features"],
)
loss = loss_fn(outputs, batch)
# Backward pass and optimization
optimizer.zero_grad()
loss.backward()
optimizer.step()
# Printing the training loss
if (epoch + 1) % 10 == 0:
print(f"Epoch [{epoch + 1}/100], Loss: {loss.item()}")
```
Dataset:
HPC voxel dataset
### Expected behavior
Hi,
We are trying to train TimeSeriesTransformer for forecasting using fMRI voxel data. The shape of the data is: (batch size, rows of datapoints, columns of features)
We encountered an issue in the embedding phase.
This is from the source code:
```
# embeddings
embedded_cat = self.embedder(static_categorical_features)
# static features
log_scale = scale.log() if self.config.input_size == 1 else scale.squeeze(1).log()
static_feat = torch.cat((embedded_cat, static_real_features, log_scale), dim=1)
expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)
```
This is the error:
```
Traceback (most recent call last):
File "D:\Final Project\fMRI_Ariel_Lital\train.py", line 61, in <module>
outputs = model(
File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 1626, in forward
transformer_inputs, scale, static_feat = self.create_network_inputs(
File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 1536, in create_network_inputs
expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)
RuntimeError: expand(torch.DoubleTensor{[32, 1, 329, 349]}, size=[-1, 654, -1]): the number of sizes provided (3) must be greater or equal to the number of dimensions in the tensor (4)
Process finished with exit code 1
```
To our understanding there is a contradiction in this code.
**embedded_cat** has 3 dimensions: (batch_size, rows, columns)
**log_scale** has 3 dimensions: (batch_size, 1, columns)
In order to use 'torch.cat', '**static_real_features**' must have the shape: [batch_size, n, columns]
This means that after concatenation of these 3 variables, '**static_feat**' will have 3 dimensions.
Then, when **unsqueezing** it will have 4 and then '**expand**' won't work.
How can we solve this?
Many thanks!
| 03-07-2023 15:27:51 | 03-07-2023 15:27:51 | cc @kashif<|||||>thanks @LtlSh for the report.
So `embedding_dimension` is the size of the resulting vector for the given categorical covariate, and `cardinality` is the number of unique categories. So if you have only 15 different categories, it probably does not make sense to map them to a 349-dimensional vector. Also, you can set the lags to `[1]`.
Finally, note that the categorical feature is `static`, meaning it has no temporal component, and thus a single feature would have shape `[B, 1]`.
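To make the shapes concrete, a rough sketch (all hyper-parameter values below are illustrative, so double-check them against the docs for your data):
```python
import torch
from transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerModel

config = TimeSeriesTransformerConfig(
    prediction_length=24,
    context_length=48,
    lags_sequence=[1],
    num_time_features=1,
    num_static_categorical_features=1,
    cardinality=[15],           # 15 distinct categories
    embedding_dimension=[2],    # each category id mapped to a 2-dim vector
    num_static_real_features=1,
)
model = TimeSeriesTransformerModel(config)

batch_size = 32
past_length = config.context_length + max(config.lags_sequence)  # 49
outputs = model(
    past_values=torch.randn(batch_size, past_length),
    past_time_features=torch.randn(batch_size, past_length, 1),
    past_observed_mask=torch.ones(batch_size, past_length),
    static_categorical_features=torch.randint(0, 15, (batch_size, 1)),  # [B, 1], no time axis
    static_real_features=torch.randn(batch_size, 1),                    # [B, 1]
    future_values=torch.randn(batch_size, config.prediction_length),
    future_time_features=torch.randn(batch_size, config.prediction_length, 1),
)
```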
let me know if that makes sense?
<|||||>Thank you for your answer! @kashif
We tried your suggestion, but we are still getting the same error:
```
C:\Users\Cognition\anaconda3\envs\ArielLital\python.exe "D:\Final Project\fMRI_Ariel_Lital\train.py"
Traceback (most recent call last):
File "D:\Final Project\fMRI_Ariel_Lital\train.py", line 62, in <module>
outputs = model(
File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 1626, in forward
transformer_inputs, scale, static_feat = self.create_network_inputs(
File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 1535, in create_network_inputs
static_feat = torch.cat((embedded_cat, static_real_features, log_scale), dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 3 but got size 349 for tensor number 1 in the list.
Process finished with exit code 1
```
In addition, we couldn't understand from your answer why there isn't a contradiction
(I'm referring to this part of our previous comment:
**embedded_cat** has 3 dimensions: (batch_size, rows, columns)
**log_scale** has 3 dimensions: (batch_size, 1, columns)
In order to use 'torch.cat', '**static_real_features**' must have the shape: [batch_size, n, columns]
This means that after concatenation of these 3 variables, '**static_feat**' will have 3 dimensions.
Then, when unsqueezing it will have 4 and then 'expand' won't work.)
Many thanks!!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,999 | closed | Move `is_pipeline_test_to_skip` to specific model test classes | # What does this PR do?
As promised!
So far, it's incomplete - just for you to check this way is OK. If so, I will move all of them around.
It's normal to have some test failures at this moment. | 03-07-2023 15:07:33 | 03-07-2023 15:07:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> I don't get the use case for `is_pipeline_test_to_skip`, we have the `pipeline_model_mapping`, why not just remove the model from there?
The reasons are:
- the mapping `pipeline_model_mapping` is (IMO) what should be tested in **theory**, i.e. which model classes of a specific model type should be tested for which pipeline tasks
- **it's not the place to control what to skip or not**
- if we do skip tests by using this mapping, we lose the important information of `why this test is skipped`. We will only see some model class is not in the mapping, but not `skip this test as ...[whatever more precise reason]`
- define this mapping as what should be tested in theory allows:
- to generate the mapping in a systematic way **(less error prone)**
- to define a repo check (so **less chance to miss a pipeline test**)
- But **most importantly**, the skip conditions sometimes go to the level of what tokenizer/process classes are used. It's not just about the model class
  - For example, a model class might be tested with a fast tokenizer, but not a slow tokenizer (due to some issue)
I have talked to him offline when you were off. But yeah, let's make it official :-)<|||||>I think this would add another layer of unneeded complexity; do you think this will touch a lot of the models? In any case if you're both ok with this, I'm fine with merging it, but let's aim for as simple a change as possible that makes contributing a new pipeline/model as simple as possible. <|||||>@LysandreJik
This would almost **never** affect the **model** contribution experience:
- when a model is added, the tiny model creation **would not be run by the contributor** (at least for the moment, as it's way too complex)
- no tiny model checkpoint on the Hub for the newly added model -> no pipeline test will be (actually) run for that model
- no pipeline test being run -> no need to skip any test by adding/changing `is_pipeline_test_to_skip`.
==> It's on us (if not me) to create and upload the tiny models. Once that's done, if anything is not working, **it's on us to skip those tests**.
(The complexity is absorbed by the hidden process of tiny model creation that is not run by the contributor)
Regarding **adding a new pipeline**: if an existing tiny model works (i.e. it could run other existing pipeline tasks), the chance that it works for the new task (if it is suitable for that task) is high. **So the chance of changing existing `is_pipeline_test_to_skip` is low**.<|||||>Note that since it's a test modeling file and contributors add new models by copying those, those `is_pipeline_test_to_skip` will be copied over new models automatically. So it will be part of the contributor experience (though probably unnoticed) and we will get lots of those without really noticing (since the PRs that add new models are very big). This can be alleviated in some way if the add-new-model-like command takes special care to remove the `is_pipeline_test_to_skip` from the new test file, but this is again more complexity.<|||||>Thank you @sgugger , very nice point! Let me play with `add-new-model-like command` and see how the current `pipeline_model_mapping` and `is_pipeline_test_to_skip` will be treated by this command.
<|||||>I tried it, both `pipeline_model_mapping` and `is_pipeline_test_to_skip` will be copied.
If `pipeline_model_mapping` is used to also control which tests should be skipped or not, it's also dangerous for this attribute to be copied (especially automatically) to other model test files: we are very likely to miss more and more tests that should be tested (a test that fails for an existing model has a chance to work on a new, similar model - and should be tested at least once to determine if we need to skip it).
Also as mentioned earlier:
- manually editing `pipeline_model_mapping` has more disadvantages than benefits.
- having `pipeline_model_mapping` edited by a contributor won't actually make the pipeline tests run - we still need to create and upload the tiny models to `hf-internal-testing`
**I am going to make changes to `add_new_model_like` to not copy these 2 attributes**. It makes the script a bit more complex, but it won't bother the users - as long as we all agree and know that these 2 attributes for pipeline testing are not for contributors to add/change (at least not before we can have a much easier and safer process to create/upload tiny models).
Is this OK for you @sgugger ?<|||||>That plan works for me!<|||||>As discussed offline, I changed the approach to use string only.
Would still like @sgugger to elaborate a bit more:
> Using ast would be a first for the repo and would make contributing harder
Do you mean for the contributors (either external or internal?) who want (or might need) to modify `src/transformers/commands/add_new_model_like.py`? If so, I agree better not to use `ast` here. If you mean the usage only, I don't think using `ast` is a real problem - if they don't need to look the internals.
I would also like to mention, for automatically adding `pipeline_model_mapping` to a test file (from the auto mapping, prepared in `XXXPipelineTest` classes, we will need more access to the test files. And string approach would make it more complex (well `ast` is complex, but at least it also avoid a lot of things). Furthermore, if we want to add a new repo check on `pipeline_model_mapping`, the same consideration applies.
So let's have a talk later - at least for the above 2 scripts that I might have to implement.
(well, after a 2nd thought, I understand using `ast` might bring new burden to the reviewers.)<|||||>@sgugger Need your feedback :-) for
https://github.com/huggingface/transformers/pull/21999#discussion_r1132834136
|
transformers | 21,998 | closed | audio_utils improvements | # What does this PR do?
Recently the `audio_utils.py` file was added to Transformers to provide shared functions for audio processing such as STFT. This PR aims to clean up the code and make the API more robust.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 03-07-2023 14:27:19 | 03-07-2023 14:27:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I cleaned up `hertz_to_mel` and `mel_to_hertz` a bit:
- more consistent doc comments
- both support single float inputs as well as numpy arrays
- simplified the formulas so it's not literally the same as the librosa code but also doesn't do pointless calculations
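For reference, the underlying conversion these helpers implement is the standard HTK mel scale; a standalone sketch of just the formula (not the library code, which handles a few more details):
```python
import numpy as np

def hz_to_mel_htk(freq):
    # HTK mel scale: works for scalars and numpy arrays alike
    return 2595.0 * np.log10(1.0 + np.asarray(freq, dtype=float) / 700.0)

def mel_to_hz_htk(mels):
    return 700.0 * (10.0 ** (np.asarray(mels, dtype=float) / 2595.0) - 1.0)

print(hz_to_mel_htk(1000.0))                          # ~1000.0
print(mel_to_hz_htk(hz_to_mel_htk([440.0, 8000.0])))  # round-trips back to [440., 8000.]
```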
Since I think this implementation was based on librosa, we should also give them credit.<|||||>I rewrote `power_to_db` and added `amplitude_to_db`. They still work like the librosa versions but with argument names that make more sense to me.<|||||>Changed `get_mel_filter_banks` into `mel_filter_bank`. Mostly renamed arguments and variables and cleaned up the doc comments, so that the naming is more in line with the rest of Transformers, e.g. `num_frequency_bins` instead of `nb_frequency_bins`.
<|||||>Pushed significant changes to the `stft` code.
- Removed `fram_wave`; this is really an implementation detail that should happen inside the STFT.
- The new `stft` gives the same results as librosa and torchaudio for the same options. It's 25% faster than the previous implementation, mostly due to using `rfft` instead of `fft` (since the input is always real-only, not complex).
- librosa is still faster since they use a bunch of tricks under the hood to avoid memory copies etc; we can slowly work towards matching this speed (not super important to do this immediately since the new `stft` is already faster than what we had before)
- No batching yet.
I will be replacing the other hand-rolled STFTs with this soon (also in this PR).
None of the changes I made are set in stone — feel free to discuss things like the argument names, the shapes of the returned tensors, and so on.
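For anyone following along, a toy illustration of the framing + `rfft` idea (a standalone sketch, not the actual implementation):
```python
import numpy as np

def toy_stft(waveform, window, hop_length):
    # frame the signal, apply the window, then take a real FFT of each frame
    frame_length = len(window)
    num_frames = 1 + (len(waveform) - frame_length) // hop_length
    frames = np.stack(
        [waveform[i * hop_length : i * hop_length + frame_length] * window for i in range(num_frames)]
    )
    # rfft: the input is real, so only frame_length // 2 + 1 frequency bins are needed
    return np.fft.rfft(frames, axis=-1)

spec = toy_stft(np.random.randn(16000), np.hanning(400), hop_length=160)
print(spec.shape)  # (98, 201)
```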
<|||||>Replaced the hand-rolled STFT in the different models with the one from `audio_utils`:
- CLAP
- M-CTC-T
- SpeechT5
- TVLT
- Whisper
Did not do `audio_spectrogram_transformer` and `speech_to_text`. These use `ta_kaldi.fbank`, which is simple enough and faster than `audio_utils`. If we want to get rid of torchaudio completely, we could also replace these.
<|||||>@sanchit-gandhi @ArthurZucker I think this is ready for review now. Feel free to look at this with a critical eye!
The STFT code is currently written for ease of understanding and flexibility, not speed, although it does outperform the previous methods we were using.
<|||||>@sanchit-gandhi @ArthurZucker Are you OK with the PR in its current state? Then I can ask a core maintainer for a final review.<|||||>Took a second look through and the changes LGTM @hollance!<|||||>If everyone's happy with it, feel free to merge (I don't have rights).
|
transformers | 21,997 | closed | Stop requiring Torch for our TF examples! | This PR overrides a property in `TFTrainingArguments` to ensure that our TF examples don't accidentally depend on `torch` | 03-07-2023 13:58:44 | 03-07-2023 13:58:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,996 | closed | [Whisper] Remove embed_tokens from encoder docstring | # What does this PR do?
`embed_tokens` is not an arg for the `WhisperEncoder`. It looks like it was copied from BART (where we do use it) and left in by mistake!
https://github.com/huggingface/transformers/blob/9402788b34fbc6581ae9d7d9d68612a96d9aa111/src/transformers/models/bart/modeling_bart.py#L708
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 03-07-2023 13:46:14 | 03-07-2023 13:46:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Rebased and pushed two dummy commits to re-trigger the CI after GH SSO (no new changes to the PR in the last two commits https://github.com/huggingface/transformers/pull/21996/commits/ef692e28e650d305ba2947c21b572b447d0eb01f and https://github.com/huggingface/transformers/pull/21996/commits/ecfddec7b2ff4958e3fe3259d09ed5e681cf8fe1)<|||||>@sanchit-gandhi
To trigger the CI, you can do something like `git commit --allow-empty -m "Empty commit to trigger CI"`
(in the future when you need it)<|||||>Thanks for the tip @ydshieh! Looks like the CI is red on main due to a `500 Server Error` with the HF Hub, see https://github.com/huggingface/transformers/actions/runs/4392426382/jobs/7692183097.<|||||>I re-ran that CI job and it is green now :-)
The failed test in the job `test_tf` is irrelevant to this PR I believe.<|||||>Amazing, thanks @ydshieh! 🙌 |
transformers | 21,995 | closed | TypeError: 'NoneType' object is not subscriptable in modeling_utils.py | ### System Info
Using free tier Google Colab, it gives the following output of `transformers-cli env`:
2023-03-07 12:26:45.314129: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-03-07 12:26:45.314255: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-03-07 12:26:45.314280: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
WARNING:tensorflow:From /usr/local/lib/python3.8/dist-packages/transformers/commands/env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2023-03-07 12:26:49.528826: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.26.1
- Platform: Linux-5.10.147+-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Using the library [Detoxify](https://github.com/unitaryai/detoxify) raises an error in transformers code if using transformers version >= 4.25.1, but works well with version 4.24 and below.
The error is: `TypeError: 'NoneType' object is not subscriptable` in [file `modeling_utils` at line 2718](https://github.com/huggingface/transformers/blob/820c46a707ddd033975bc3b0549eea200e64c7da/src/transformers/modeling_utils.py#L2718). This line (and its block of code) has been added with [PR#20321](https://github.com/huggingface/transformers/pull/20321) merged in version 4.25.1
https://github.com/huggingface/transformers/blob/820c46a707ddd033975bc3b0549eea200e64c7da/src/transformers/modeling_utils.py#L2718-L2736
Looking at the code, it seems to me that the variable `resolved_archive_file` can take the value `None`, hence raising this error.
The full error stacktrace is:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-72e665f021e6> in <module>
----> 1 toxicity_model = Detoxify('multilingual', device='cuda')
2 # results = toxicity_model.predict()
4 frames
/usr/local/lib/python3.8/dist-packages/detoxify/detoxify.py in __init__(self, model_type, checkpoint, device, huggingface_config_path)
101 def __init__(self, model_type="original", checkpoint=PRETRAINED_MODEL, device="cpu", huggingface_config_path=None):
102 super().__init__()
--> 103 self.model, self.tokenizer, self.class_names = load_checkpoint(
104 model_type=model_type,
105 checkpoint=checkpoint,
/usr/local/lib/python3.8/dist-packages/detoxify/detoxify.py in load_checkpoint(model_type, checkpoint, device, huggingface_config_path)
54 }
55 class_names = [change_names.get(cl, cl) for cl in class_names]
---> 56 model, tokenizer = get_model_and_tokenizer(
57 **loaded["config"]["arch"]["args"],
58 state_dict=loaded["state_dict"],
/usr/local/lib/python3.8/dist-packages/detoxify/detoxify.py in get_model_and_tokenizer(model_type, model_name, tokenizer_name, num_classes, state_dict, huggingface_config_path)
18 ):
19 model_class = getattr(transformers, model_name)
---> 20 model = model_class.from_pretrained(
21 pretrained_model_name_or_path=None,
22 config=huggingface_config_path or model_type,
/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
2476 offload_index,
2477 error_msgs,
-> 2478 ) = cls._load_pretrained_model(
2479 model,
2480 state_dict,
/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py in _load_pretrained_model(cls, model, state_dict, loaded_keys, resolved_archive_file, pretrained_model_name_or_path, ignore_mismatched_sizes, sharded_metadata, _fast_init, low_cpu_mem_usage, device_map, offload_folder, offload_state_dict, dtype, load_in_8bit, keep_in_fp32_modules)
2716 return mismatched_keys
2717
-> 2718 folder = os.path.sep.join(resolved_archive_file[0].split(os.path.sep)[:-1])
2719 if device_map is not None and is_safetensors:
2720 param_device_map = expand_device_map(device_map, original_loaded_keys)
TypeError: 'NoneType' object is not subscriptable
```
PS: [link to the related issue](https://github.com/unitaryai/detoxify/issues/75) in the library Detoxify
### Expected behavior
Put a condition on `resolved_archive_file` to handle the case when its value is `None`.
However, if its value SHOULDN'T be `None`, then add a validity check earlier in the code, with more explicit details.
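Something along these lines is what I mean (just a sketch; the actual fix in the library may look different and would also need to handle the downstream uses of `folder`):
```python
if resolved_archive_file is not None:
    folder = os.path.sep.join(resolved_archive_file[0].split(os.path.sep)[:-1])
else:
    folder = None
```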
Let me know if I can help on this. | 03-07-2023 12:40:43 | 03-07-2023 12:40:43 | I think this has been fixed by #21542. Could you try on the main branch of Transformers and see if you still have the bug?<|||||>Oh yes perfect! I'll wait for the next release to update then.
thank you<|||||>Next release should be this week or beginning of next, as an FYI :-) |
transformers | 21,994 | closed | chinese testdata were transcribed as english | When adding the following code to an ASR server, I send Chinese audio data but get an English result. I don't know how to set the language, and trying to use "forced_decoder_ids" to set the language failed.
```
transcriber = pipeline(task="automatic-speech-recognition", model="openai/whisper-large", device=1)
#transcriber.model.config.forced_decoder_ids = (transcriber.tokenizer.get_decoder_prompt_ids(language="zh", task="transcribe"))
transcriber.model.config.forced_decoder_ids = (transcriber.tokenizer.get_decoder_prompt_ids(language="zh", task="transcribe"))
result = transcriber(audio_bytes, chunk_length_s=30)
print(result)
```
my transformers version is 4.26.1 | 03-07-2023 12:19:54 | 03-07-2023 12:19:54 | cc @ArthurZucker and @sanchit-gandhi though this question would be more appropriate for the [forums](https://discuss.huggingface.co/).<|||||>Hey, this is related to the update of the `generate()` function. The issue is that you are not modifying the `model.generation_config`. If you want to set the language in a proper manner, the following will work:
```python
transcriber = pipeline(task="automatic-speech-recognition", model="openai/whisper-large", device=1)
transcriber.model.generation_config.forced_decoder_ids = transcriber.processor.get_decoder_prompt_ids(language="zh", task="transcribe")
result = transcriber(audio_bytes, chunk_length_s=30)
print(result)
```
We updated the generation config which by defaults should automatically detect the language, but is set to `translate` and not transcribe.
cc @sanchit-gandhi for visibility, this was introduced by #20388<|||||>Resolved in https://github.com/huggingface/transformers/pull/21965 - Whisper now respects the `config.forced_decoder_ids` if the language is not set in the args / `generation_config`
The most up-to-date way of passing the language is to use the args if possible:
```python
result = transcriber(audio_bytes, chunk_length_s=30, generate_kwargs={"language":"zh"})
```<|||||>> Resolve in #21965 - Whisper now respects the `config.forced_decoder_ids` if the language is not set in the args / `generation_config`
>
> The most up-to-date way of passing the language is to use the args if possible:
>
> ```python
> result = transcriber(audio_bytes, chunk_length_s=30, generate_kwargs={"language":"zh"})
> ```
@sanchit-gandhi upgrade transformers to version 4.27.1 and try it again, but get follow error:
```
f"Unsupported language: {self.language}. Language should be one of:"
File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1177, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'WhisperForConditionalGeneration' object has no attribute 'language'
```<|||||>@sgugger another thing is that I using the pipeline get a Translation result not a *Transcription result.
how to specify Transcription tasks and language with the pipline.
```
from transformers import pipeline
transcriber = pipeline(task="automatic-speech-recognition", model="openai/whisper-small")
transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
```<|||||>I am sorry but I can't reproduce your errors. The following [notebook](https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor#scrollTo=vqXoVLesTUE6) has examples of setting the task and the language on whisper small and both work. Did you run `pip install --upgrade transformers`?
Here is my output (so expected behaviour)
<img width="1364" alt="image" src="https://user-images.githubusercontent.com/48595927/226351228-1d3f3b54-98b7-4688-a57d-6e661e8425b3.png">
<|||||>> I am sorry but I can't reproduce your errors. The following [notebook](https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor#scrollTo=vqXoVLesTUE6) has examples of setting the task and the language on whisper small and both work. Did you run `pip install --upgrade transformers`? Here is my output (so expected behaviour) <img alt="image" width="1364" src="https://user-images.githubusercontent.com/48595927/226351228-1d3f3b54-98b7-4688-a57d-6e661e8425b3.png">
@ArthurZucker yes, I have run pip install --upgrade transformers and i follow https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor#scrollTo=vqXoVLesTUE6 , I still get a error:
```
self._validate_model_kwargs(model_kwargs.copy())
File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/generation/utils.py", line 1090, in _validate_model_kwargs
raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['task', 'language'] (note: typos in the generate arguments will also show up in this list)
```
<|||||>@ArthurZucker when transformers 4.26.1 was the latest version I tried it and it failed. Now that I have updated to 4.27.2, it works.<|||||>@ArthurZucker how do I modify the parameter "condition_on_previous_text"? This parameter is provided by whisper and it's important for me.
```
File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/models/whisper/modeling_whisper.py", line 1606, in generate
return super().generate(
File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/generation/utils.py", line 1213, in generate
self._validate_model_kwargs(model_kwargs.copy())
File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/generation/utils.py", line 1105, in _validate_model_kwargs
raise ValueError(
ValueError: The following `model_kwargs` are not used by the model: ['condition_on_previous_text'] (note: typos in the generate arguments will also show up in this list)
```<|||||>This is not yet available in the HuggingFace implementation. The PR is currently ongoing, see here #21491 <|||||>@ArthurZucker actually, I still have some problem as above,(transformers 4.27.2) when I use transformer pipeline and whisper pipeline recognize a wave file ,all is normal, I use micphone to record Chinese wav bytes and send bytes to transformers pipeline sever , it is not normal I get above abnormal result (not Chinese recognition result, for example "You." and other english words), but I use micphone to record Chinese wav bytes and send bytes to whisper pipeline sever , it is normal, so I'm confused.<|||||>Can you show me exactly how you are `sending to transformers pipeline server` so that I can check how you are calling the model? <|||||>> Can you show me exactly how you are `sending to transformers pipeline server` so that I can check how you are calling the model?
@ArthurZucker my transformers pipeline server code is as follows, and the received bytes come from a web client recording voice through the browser microphone:
```
def forward(model, audio_bytes):
#print(len(audio_bytes))
text = model(audio_bytes, chunk_length_s=30, generate_kwargs = {"task":"transcribe", "language":"<|zh|>"})['text']
return text
async def recognize(websocket, path):
global model
global args
global loop
global pool
global vad
#global seg_model
rec = None
phrase_list = None
sample_rate = args.sample_rate
client_ip = websocket.remote_address
last_message = ""
audio_bytes = b''
bytesdata = b''
wavdir = "./audiodata"
uid = str(uuid.uuid1())
filename = str(client_ip[0])+"_"+uid
filepath = os.path.join(wavdir, filename+".wav")
wfile = open(filepath,"wb+")
phrase_timeout = 4
max_timeout = 20
audio_format = "wav"
channel = 1
samplewidth = 16
logging.info('Connection from %s', websocket.remote_address);
while True:
message = await websocket.recv()
if isinstance(message, str):
if message == '{"eof":1}':
if len(audio_bytes):
if audio_format != "wav":
audio_bytes = bytes2wav(audio_bytes, audio_format, sample_rate, channel, samplewidth)
else:
pass
response = await loop.run_in_executor(pool, forward, model, audio_bytes)
response = format_result(response)
print("last"+response)
await websocket.send(response)
else:
await websocket.send("")
break
elif "samplerate" in message and "format" in message:
try:
json_str = json.loads(message)
sample_rate = json_str["samplerate"]
audio_format = json_str["format"]
samplewidth = json_str["samplewidth"]
await websocket.send("")
except:
await websocket.send("wrong format")
else:
await websocket.send("")
else:
audio_bytes += message
#audiotime = audio_length(audio_bytes, audio_format, sample_rate, channel, samplewidth)
audiotime = len(audio_bytes) / 2 / int(sample_rate)
#print(audiotime)
if audiotime > max_timeout :
if audio_format != "wav":
audio_bytes = bytes2wav(audio_bytes, audio_format, sample_rate, channel, samplewidth)
else:
pass
response = await loop.run_in_executor(pool, forward, model, audio_bytes)
response = format_result(response)
print("first"+response)
audio_bytes = b''
await websocket.send(response)
else:
await websocket.send("")
def start():
global model
global args
global loop
global pool
global vad
logging.basicConfig(level=logging.INFO)
args = type('', (), {})()
args.interface = os.environ.get('SERVER_INTERFACE', '0.0.0.0')
args.port = int(os.environ.get('SERVER_PORT', 40000))
args.model_path = os.environ.get('MODEL_PATH', 'model')
#args.seg_model_path = os.environ.get('VOSK_MODEL_PATH', 'seg_model')
args.sample_rate = float(os.environ.get('SAMPLE_RATE', 16000))
if len(sys.argv) > 1:
args.model_path = sys.argv[1]
#args.seg_model_path = sys.argv[2]
model = whisper.load_model(args.model_path,device="cpu")
```
<|||||>I have confirmed that ffmpeg_read function(read audio bytes)has some problem and I replace it with whiper provided function, all is normal(both wafile and mic stream)<|||||>Okay sorry if I don't understand completely, I don't see the `forward2` being called or passed anywhere right? <|||||>> Okay sorry if I don't understand completely, I don't see the `forward2` being called or passed anywhere right?
update it, forward2 should be forward.<|||||>Ok, 2 things we need to check:
1. When calling the pipeline, could you check that `pipeline.model.generation_config.forced_decoder_ids` is properly updated with the `language` and the `task`?
2. Can you also print the `language` that should be outputed by the generation process (`decode_asr` called in the pipeline for whisper should output the language that is detected by the model, which could help us understand if the decoding process went well)<|||||>> Ok, 2 things we need to check:
>
> 1. When calling the pipeline, could you check that `pipeline.model.generation_config.forced_decoder_ids` is properly updates with the `language` and the `task`?
yes, set it as follows:
model = pipeline(task="automatic-speech-recognition", model="openai/whisper-medium",device="cpu")
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="zh", task="transcribe")
> 2. Can you also print the `language` that should be outputed by the generation process (`decode_asr` called in the pipeline for whisper should output the language that is detected by the model, which could help us understand if the decoding process went well)
Sorry, actually when I use the transformers pipeline I hit the problem, and when I use the official whisper pipeline, all is ok.
<|||||>Okay! After re-reading your issue, I think you said
> I have confirmed that ffmpeg_read function(read audio bytes)has some problem and I replace it with whiper provided function, all is normal(both wafile and mic stream)
So this means we should probably update our `ffmpeg_read` function. Is that right? <|||||>> Okay! After re-reading your issue, I think you said
>
> > I have confirmed that ffmpeg_read function(read audio bytes)has some problem and I replace it with whiper provided function, all is normal(both wafile and mic stream)
>
> So this means we should probably update our `ffmpeg_read` function. Is that right?
yes, transformers' `ffmpeg_read` leads to my problem.
<|||||>now we can use the parameters of "fp16" and "condition_on_previous_text"?<|||||>`fp16`, `load_in_8_bits` and the jax models if want faster inference yes. Conditioning on previous text, the update on that feature is here #21491 !<|||||>how to use "fp16, load_in_8_bits", has sample codes?<|||||>For load in 8 bits you need `accelerate` and `bits-and-bytes`:
```python
from transformers import WhisperForConditionalGeneration
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small", load_in_8bit=True)
```
for `fp16`:
```python
import torch
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small", torch_dtype = torch.float16)
```<|||||>I try it with `model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small", torch_dtype = torch.float16)`
get errors: `RuntimeError: Input type (torch.FloatTensor) and weight type (torch.HalfTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor`
<|||||>The input should also be halved (the audio)<|||||>Note that `load_in_8bit` will give you a nice memory saving (~30%) but will run slower than fp16. This is likely due to the bitsandbytes 8bit matmul algorithm which isn't super optimised for "small" tensors, but rather is designed more for super large LMs. |
transformers | 21,993 | closed | add 1 to cur_len to make up the new beam length | # What does this PR do?
cur_len is 1 token shorter comparing to the length of the sequence whose best_sum_logprobs is the numerator.
Fixes # (issue)
add 1 to cur_len
## Who can review?
@LysandreJik
| 03-07-2023 12:10:31 | 03-07-2023 12:10:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @gante<|||||>@gante Thx for the great advice. You are suggesting a better coding style. |
transformers | 21,992 | closed | Update `notification_service.py` | # What does this PR do?
While working on the CI report with PyTorch `2.0.0`, the first run failed to send the report due to the following problem:
- The original code checked each line in `summary_short.txt` by `if re.search("FAILED", line):`
- We expect such occurrences to have a 1-1 correspondence with entries in the file `failures_line.txt`
- However, some lines in `summary_short.txt` might contain ` FAILED` without being what we are looking for. For example, some tests in `tests/extended/test_trainer_ext.py` use `execute_subprocess_async`, and we get lines like
```
/transformers/examples/pytorch/translation/run_translation.py FAILED
```
- In such cases, `stacktraces.pop(0)` raises an error at some point, as there is no element left to pop (`stacktraces` is obtained from `failures_line.txt`)
- **This PR avoids this situation by checking with `if line.startswith("FAILED "):` which should give the desired 1-1 correspondence.**
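A minimal illustration of the difference (the first entry below is just a made-up example of a pytest summary line):
```python
import re

lines = [
    "FAILED tests/models/xxx/test_modeling_xxx.py::XxxModelTest::test_forward - AssertionError",
    "/transformers/examples/pytorch/translation/run_translation.py FAILED",
]

print([bool(re.search("FAILED", line)) for line in lines])  # [True, True]  -> the 2nd line is a false positive
print([line.startswith("FAILED ") for line in lines])       # [True, False] -> only real summary failure lines
```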
| 03-07-2023 12:03:48 | 03-07-2023 12:03:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,991 | closed | Skip `test_multi_gpu_data_parallel_forward` for some model tests | # What does this PR do?
This test fails for some models in CI with torch 2.0, leaving CUDA in a bad state, which then causes many other tests to fail.
The only workaround I could find online is to use other GPUs, like `P100` or `V100`.
Let's skip it for now for a few model tests. It's likely it will work again in a future PyTorch release.
| 03-07-2023 10:15:52 | 03-07-2023 10:15:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,990 | closed | Add Image Completion Transformer (ICT) | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 03-07-2023 09:41:02 | 03-07-2023 09:41:02 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21990). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,989 | closed | RuntimeError: "LayerNormKernelImpl" not implemented for 'Half' | ### System Info
I'm feeding flags of low memory and half precision data type to `AutoModelForCausalLM.from_pretrained('bigscience\bloomz7b1')` and I'm receiving the error above.
I'm not sure if this is a bug, is it like those flags are only meant to be passed for specific models for which half precision is implemented? If so, how can one tell in a graceful way?
Those low memory flags seem to work like a dream with other models like `EleutherAI/gpt-j-6B`.
Thanks
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
as above.
### Expected behavior
model loaded in half precision. | 03-07-2023 04:42:46 | 03-07-2023 04:42:46 | You need to execute a model loaded in half precision on a GPU, the operations are not implemented in half on the CPU.<|||||>@sgugger Then how come that this example works on cpu?
```
from transformers import GPTJForCausalLM
import torch
model = GPTJForCausalLM.from_pretrained(
"EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16, low_cpu_mem_usage=True
)
```
<|||||>What code are you using exactly to get the error?
```
import torch
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained('bigscience/bloomz-7b1', torch_dtype=torch.float16)
```
works perfectly fine.<|||||>@sgugger
Yes, it **loads up perfectly fine** but if you proceed to build the pipeline and generate text, you get the `Half` implementation error.
I just tried your code again
```
import torch
from transformers import AutoModelForCausalLM, pipeline
model = AutoModelForCausalLM.from_pretrained('bigscience/bloomz-7b1', torch_dtype=torch.float16, low_cpu_mem_usage=True)
g = pipeline(task='text-generation', model=model, tokenizer='bigscience/bloomz-7b1')
g("Hi, ")
```
I got this traceback:
```
In [1]:
...: import torch
...: from transformers import AutoModelForCausalLM, pipeline
...: model = AutoModelForCausalLM.from_pretrained('bigscience/bloomz-7b1', torch_dtype=torch.float16, low_cpu_mem_usage=True)
...: g = pipeline(task='text-generation', model=model, tokenizer='bigscience/bloomz-7b1')
...: g("Hi, ")
...:
C:\Users\aalsaf01\venvs\nlp\lib\site-packages\transformers\generation\utils.py:1273: UserWarning: Neither `max_length` nor `max_new_tokens` has been set, `max
_length` will default to 20 (`generation_config.max_length`). Controlling `max_length` via the config is deprecated and `max_length` will be removed from the
config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
warnings.warn(
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module> │
│ │
│ C:\Users\aalsaf01\venvs\nlp\lib\site-packages\transformers\pipelines\text_generation.py:210 in │
│ __call__ │
│ │
│ 207 │ │ │ - **generated_token_ids** (`torch.Tensor` or `tf.Tensor`, present when `retu │
│ 208 │ │ │ ids of the generated text. │
│ 209 │ │ """ │
│ ❱ 210 │ │ return super().__call__(text_inputs, **kwargs) │
│ 211 │ │
│ 212 │ def preprocess(self, prompt_text, prefix="", handle_long_generation=None, **generate │
│ 213 │ │ inputs = self.tokenizer( │
│ │
│ C:\Users\aalsaf01\venvs\nlp\lib\site-packages\transformers\pipelines\base.py:1084 in __call__ │
│ │
│ 1081 │ │ │ │ ) │
│ 1082 │ │ │ ) │
│ 1083 │ │ else: │
│ ❱ 1084 │ │ │ return self.run_single(inputs, preprocess_params, forward_params, postproces │
│ 1085 │ │
│ 1086 │ def run_multi(self, inputs, preprocess_params, forward_params, postprocess_params): │
│ 1087 │ │ return [self.run_single(item, preprocess_params, forward_params, postprocess_par │
│ │
│ C:\Users\aalsaf01\venvs\nlp\lib\site-packages\transformers\pipelines\base.py:1091 in run_single │
│ │
│ 1088 │ │
│ 1089 │ def run_single(self, inputs, preprocess_params, forward_params, postprocess_params): │
│ 1090 │ │ model_inputs = self.preprocess(inputs, **preprocess_params) │
│ ❱ 1091 │ │ model_outputs = self.forward(model_inputs, **forward_params) │
│ 1092 │ │ outputs = self.postprocess(model_outputs, **postprocess_params) │
│ 1093 │ │ return outputs │
│ 1094 │
│ │
│ C:\Users\aalsaf01\venvs\nlp\lib\site-packages\transformers\pipelines\base.py:992 in forward │
│ │
│ 989 │ │ │ │ inference_context = self.get_inference_context() │
│ 990 │ │ │ │ with inference_context(): │
│ 991 │ │ │ │ │ model_inputs = self._ensure_tensor_on_device(model_inputs, device=se │
│ ❱ 992 │ │ │ │ │ model_outputs = self._forward(model_inputs, **forward_params) │
│ 993 │ │ │ │ │ model_outputs = self._ensure_tensor_on_device(model_outputs, device= │
│ 994 │ │ │ else: │
│ 995 │ │ │ │ raise ValueError(f"Framework {self.framework} is not supported") │
│ │
│ C:\Users\aalsaf01\venvs\nlp\lib\site-packages\transformers\pipelines\text_generation.py:252 in │
│ _forward │
│ │
│ 249 │ │ │ in_b = input_ids.shape[0] │
│ 250 │ │ prompt_text = model_inputs.pop("prompt_text") │
│ 251 │ │ # BS x SL │
│ ❱ 252 │ │ generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=att │
│ 253 │ │ out_b = generated_sequence.shape[0] │
│ 254 │ │ if self.framework == "pt": │
│ 255 │ │ │ generated_sequence = generated_sequence.reshape(in_b, out_b // in_b, *genera │
│ │
│ C:\Users\aalsaf01\venvs\nlp\lib\site-packages\torch\autograd\grad_mode.py:27 in decorate_context │
│ │
│ 24 │ │ @functools.wraps(func) │
│ 25 │ │ def decorate_context(*args, **kwargs): │
│ 26 │ │ │ with self.clone(): │
│ ❱ 27 │ │ │ │ return func(*args, **kwargs) │
│ 28 │ │ return cast(F, decorate_context) │
│ 29 │ │
│ 30 │ def _wrap_generator(self, func): │
│ │
│ C:\Users\aalsaf01\venvs\nlp\lib\site-packages\transformers\generation\utils.py:1391 in generate │
│ │
│ 1388 │ │ │ │ ) │
│ 1389 │ │ │ │
│ 1390 │ │ │ # 11. run greedy search │
│ ❱ 1391 │ │ │ return self.greedy_search( │
│ 1392 │ │ │ │ input_ids, │
│ 1393 │ │ │ │ logits_processor=logits_processor, │
│ 1394 │ │ │ │ stopping_criteria=stopping_criteria, │
│ │
│ C:\Users\aalsaf01\venvs\nlp\lib\site-packages\transformers\generation\utils.py:2179 in │
│ greedy_search │
│ │
│ 2176 │ │ │ model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) │
│ 2177 │ │ │ │
│ 2178 │ │ │ # forward pass to get next token │
│ ❱ 2179 │ │ │ outputs = self( │
│ 2180 │ │ │ │ **model_inputs, │
│ 2181 │ │ │ │ return_dict=True, │
│ 2182 │ │ │ │ output_attentions=output_attentions, │
│ │
│ C:\Users\aalsaf01\venvs\nlp\lib\site-packages\torch\nn\modules\module.py:1194 in _call_impl │
│ │
│ 1191 │ │ # this function, and just call forward. │
│ 1192 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │
│ 1193 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1194 │ │ │ return forward_call(*input, **kwargs) │
│ 1195 │ │ # Do not call functions when jit is used │
│ 1196 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1197 │ │ if self._backward_hooks or _global_backward_hooks: │
│ │
│ C:\Users\aalsaf01\venvs\nlp\lib\site-packages\transformers\models\bloom\modeling_bloom.py:900 in │
│ forward │
│ │
│ 897 │ │ │
│ 898 │ │ return_dict = return_dict if return_dict is not None else self.config.use_return │
│ 899 │ │ │
│ ❱ 900 │ │ transformer_outputs = self.transformer( │
│ 901 │ │ │ input_ids, │
│ 902 │ │ │ past_key_values=past_key_values, │
│ 903 │ │ │ attention_mask=attention_mask, │
│ │
│ C:\Users\aalsaf01\venvs\nlp\lib\site-packages\torch\nn\modules\module.py:1194 in _call_impl │
│ │
│ 1191 │ │ # this function, and just call forward. │
│ 1192 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │
│ 1193 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1194 │ │ │ return forward_call(*input, **kwargs) │
│ 1195 │ │ # Do not call functions when jit is used │
│ 1196 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1197 │ │ if self._backward_hooks or _global_backward_hooks: │
│ │
│ C:\Users\aalsaf01\venvs\nlp\lib\site-packages\transformers\models\bloom\modeling_bloom.py:729 in │
│ forward │
│ │
│ 726 │ │ if inputs_embeds is None: │
│ 727 │ │ │ inputs_embeds = self.word_embeddings(input_ids) │
│ 728 │ │ │
│ ❱ 729 │ │ hidden_states = self.word_embeddings_layernorm(inputs_embeds) │
│ 730 │ │ │
│ 731 │ │ presents = () if use_cache else None │
│ 732 │ │ all_self_attentions = () if output_attentions else None │
│ │
│ C:\Users\aalsaf01\venvs\nlp\lib\site-packages\torch\nn\modules\module.py:1194 in _call_impl │
│ │
│ 1191 │ │ # this function, and just call forward. │
│ 1192 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │
│ 1193 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1194 │ │ │ return forward_call(*input, **kwargs) │
│ 1195 │ │ # Do not call functions when jit is used │
│ 1196 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1197 │ │ if self._backward_hooks or _global_backward_hooks: │
│ │
│ C:\Users\aalsaf01\venvs\nlp\lib\site-packages\torch\nn\modules\normalization.py:190 in forward │
│ │
│ 187 │ │ │ init.zeros_(self.bias) │
│ 188 │ │
│ 189 │ def forward(self, input: Tensor) -> Tensor: │
│ ❱ 190 │ │ return F.layer_norm( │
│ 191 │ │ │ input, self.normalized_shape, self.weight, self.bias, self.eps) │
│ 192 │ │
│ 193 │ def extra_repr(self) -> str: │
│ │
│ C:\Users\aalsaf01\venvs\nlp\lib\site-packages\torch\nn\functional.py:2515 in layer_norm │
│ │
│ 2512 │ │ return handle_torch_function( │
│ 2513 │ │ │ layer_norm, (input, weight, bias), input, normalized_shape, weight=weight, b │
│ 2514 │ │ ) │
│ ❱ 2515 │ return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.c │
│ 2516 │
│ 2517 │
│ 2518 def group_norm( │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
```
By the way, it also complained about the `accelerate` library not being installed, saying that it's crucial for `low_cpu` and half precision. Then after installation, the loading works fine, but text generation still fails.
So the question would be: why does it still work with GPT-J, as per the official example in the Hugging Face docs?
<|||||>I also hit this fault when trying the transformers blog post at https://mp.weixin.qq.com/s/k8rE9GrF97E-0TKJhih9kw.<|||||>As I said before, you need to **run** your model on the GPU as the operations are not all implemented on the CPU in float16. On CPU you can only run models in float32.<|||||>Okay, thanks for explaining that. I think an update to the docs would be appropriate.
https://huggingface.co/docs/transformers/model_doc/gptj
One could indicate that the low-precision example working on CPU is just a coincidence, as those operations happen to be implemented for CPU. In general, this requires an accelerator device.
I'm not sure if PyTorch has CPU implementations on their agenda.<|||||>Thanks for pointing this example out! It indeed needs a GPU to work. cc @stevhliu or @MKhalusova if you want to fix it (it's the example just before GPTJConfig on the page linked above that loads the model in float16).<|||||>The Tesla P40 does not support half precision...
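For completeness, a minimal sketch of the GPU placement that resolves the error above (the prompt, generation length, and device index are illustrative; a CUDA-capable GPU is assumed):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloomz-7b1", torch_dtype=torch.float16, low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-7b1")
# device=0 moves the model to the first GPU, where the half-precision kernels exist.
g = pipeline(task="text-generation", model=model, tokenizer=tokenizer, device=0)
print(g("Hi, ", max_new_tokens=20))
```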
transformers | 21,988 | closed | Fix broken link | null | 03-07-2023 04:05:07 | 03-07-2023 04:05:07 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21988). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 21,987 | closed | Long inputs to Flan-T5/UL2 text generation with load_in_8bit=True outputs <pad> tokens repeatedly | ### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.4.228-141.415.amzn2int.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When input texts are short, the generated texts look good.
But when input texts are long e.g., the following, then it produces <pad> tokens.
Input
```
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
model = T5ForConditionalGeneration.from_pretrained(
    "google/flan-ul2",   # same with "google/flan-t5-xxl"
    device_map="auto",   # or a custom device_map
    load_in_8bit=True,
)
input_text = """Q: Answer the following yes/no question by reasoning step-by-step. Could a dandelion suffer from hepatitis?
A: Hepatitis only affects organisms with livers. Dandelions don’t have a liver. The answer is yes.
Q: Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?
A: """
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids, max_length=100)
print(tokenizer.decode(outputs[0]))
```
Output:
```
<pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
```
### Expected behavior
This is the result when loaded with `load_in_8bit=False`
```
<pad> A Haiku is a Japanese poetry form that uses a 5-7-5 syllable structure. A typical tweet is limited to 140 characters. The answer is no.</s>
```
| 03-07-2023 02:41:54 | 03-07-2023 02:41:54 | Thanks a lot for the issue @akkikiki !
What is the hardware you are using + bnb version?<|||||>Thanks a lot for the reply!
The hardware is 8 V100 (16GB) GPUs and the bnb version is 0.37.0.<|||||>I think sadly there is indeed an issue with V100 right now as stated by @TimDettmers here: https://github.com/huggingface/transformers/pull/21955#issuecomment-1455235281
It should be fixed somehow soon, also as stated in this comment, more universal methods (that cover most of GPU hardware) should be published soon!<|||||>Thanks @younesbelkada!
Interesting, so some smart workaround for GPUs without hardware-level support on int8.
FYI, I actually played around with `BitsAndBytesConfig`, and seems like `quantization_config = BitsAndBytesConfig(llm_int8_threshold=5.0)` resolved the issue.
Output result with `quantization_config = BitsAndBytesConfig(llm_int8_threshold=5.0)`:
```
<pad> A Haiku is a Japanese poetry form that uses a 5-7-5 syllable structure. A typical tweet is limited to 140 characters. The answer is no.</s>
```
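For reference, a minimal sketch of how that config gets wired into loading (the checkpoint name and `device_map` are illustrative; 5.0 is simply the value that worked here, not a recommended default):
```python
from transformers import BitsAndBytesConfig, T5ForConditionalGeneration

# llm_int8_threshold controls which activation magnitudes are treated as outliers
# and routed to the fp16 path; 5.0 is the empirical value from this thread.
quantization_config = BitsAndBytesConfig(llm_int8_threshold=5.0)

model = T5ForConditionalGeneration.from_pretrained(
    "google/flan-ul2",
    device_map="auto",
    load_in_8bit=True,
    quantization_config=quantization_config,
)
```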
Will just close this thread for now. Thanks again for the heads up on V100 issue!<|||||>This is great! Thanks for the advice! Would you mind posting it in #21955 so that people can be aware of this hack 🙏 ?<|||||>> This is great! Thanks for the advice! Would you mind posting it in #21955 so that people can be aware of this hack 🙏 ?
Will do!<|||||>Thanks a lot @akkikiki ! Much appreciated!
transformers | 21,986 | closed | save_pretrained crashes when torch_dtype is passed | If you have the following code
```
import torch
from transformers import pipeline

p = pipeline(..., torch_dtype=torch.bfloat16)  # any task/model; the dtype is what triggers the bug
p.save_pretrained("some_directory")            # "some_directory" is a placeholder path
```
you get a crash because `torch.bfloat16` is not json serializable in the `tokenizer.save_pretrained()` method.
This PR fixes this.
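For anyone hitting this before the fix is released, a possible interim workaround (a sketch only — it assumes the dtype ends up in the tokenizer's `init_kwargs`, which is what gets written to `tokenizer_config.json`, and the save path is a placeholder):
```python
# Hypothetical workaround: drop the non-JSON-serializable entry before saving.
p.tokenizer.init_kwargs.pop("torch_dtype", None)
p.save_pretrained("saved_pipeline")
```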
@Narsil
@ArthurZucker
| 03-06-2023 23:49:20 | 03-06-2023 23:49:20 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21986). All of your documentation changes will be reflected on that endpoint.<|||||>My understanding that the following code should work
```
from transformers import pipeline
p = pipeline("some/model", torch_dtype=torch.bfloat16)
```
From what I can see, the tokenizer kwargs basically start with a copy of the model_kwargs?
https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L869<|||||>Indeed, this means that the problem is bigger! Popping the argument is good if we want to keep the pipeline that way, I think we agreed to properly handle the extra tokenizer kwargs @Narsil if you want to take care of it, it will fix this! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@ArthurZucker @Narsil What do you want to do here. I think this PR does the right thing at the moment because extra arguments to the pipeline are passed around to the tokenizer and then the tokenizer crashes on save_pretrained.
I think merging this is strictly better than the current state. Sure the code could be improved internally. Conceptually there should be something that decides which kwargs to the pipeline should be passed to each of the components in the pipeline.<|||||>Hey! Thanks for reporting the bug, I merged something that seemed more aligned with what we want: torch dtype should just not be passed to the tokenizer, so poping it outside |
transformers | 21,985 | closed | Ranking a pre-defined list of output candidates for LMs | ### Feature request
Can we support a feature where, given an input and a list of output candidates for an LM (say a fine-tuned T5 or GPT-2), we can get a score for each output candidate and then return the top K of them?
For example, I fine-tuned a seq2seq model (e.g., BART or T5) for summarization, and now given an input doc and a list of candidate summaries, I want to know which ones are the best. I do not want the fine-tuned model to decode and generate its summaries as we normally do with them.
I believe we can do this by computing the token-by-token log-likelihood (with or without length normalization), and then return the top K from the candidate list.
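A rough sketch of what this could look like (this is not an existing `transformers` API — the checkpoint, the helper name, and the length-normalization flag are illustrative):
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small").eval()

def rank_candidates(source, candidates, length_normalize=False, top_k=3):
    enc = tokenizer(source, return_tensors="pt")
    scores = []
    for cand in candidates:
        labels = tokenizer(cand, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(**enc, labels=labels).logits                    # (1, tgt_len, vocab)
        log_probs = torch.log_softmax(logits, dim=-1)
        token_ll = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)  # log p of each target token
        score = token_ll.sum().item()
        if length_normalize:
            score /= labels.shape[1]
        scores.append(score)
    order = sorted(range(len(candidates)), key=scores.__getitem__, reverse=True)
    return [(candidates[i], scores[i]) for i in order[:top_k]]
```
For the summarization case this would be called as `rank_candidates("summarize: " + doc, candidate_summaries)`.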
### Motivation
I was reading the paper of T0, and it mentioned that they used such a method to evaluate tasks with multiple choices:
The paper describes it like this on Page 6 from https://arxiv.org/pdf/2110.08207.pdf:
> For tasks that involve choosing the correct completion from several options (e.g. multiple choice
question answering), we follow Brown et al. (2020) and use **rank classification** to evaluate our
model: we compute the log-likelihood of each of the target options under the fine-tuned model and
select the option with the highest log-likelihood as the prediction. For simplicity, we do not apply
length normalization to the log-likelihoods of the target options.
### Your contribution
I found their code, which may be helpful, but I feel it is a bit hard to use directly.
https://github.com/bigscience-workshop/t-zero/blob/25c0761427f3894a8ec5a062a075b96037fb1492/t0/model.py#L67 | 03-06-2023 22:04:13 | 03-06-2023 22:04:13 | Hey @gante , I was wondering you may will also have great suggestions on this. Thanks a lot in advance! :D. <|||||>Hey @yuchenlin 👋 We usually don't add those sorts of tools to `transformers`, but I'd be happy to guide you.
First of all, I'd suggest to check [the documentation and the examples for this function](https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/text_generation#transformers.GenerationMixin.compute_transition_scores). Then, by invoking it for several models and/or inputs, you will be able to re-rank candidate options according to your needs :)
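A compact, self-contained sketch of that flow (checkpoint, prompt, and sampling settings are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The best summary of the article is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    num_return_sequences=4,
    max_new_tokens=20,
    return_dict_in_generate=True,
    output_scores=True,
)
transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, normalize_logits=True
)
# Sum of per-token log-probabilities; a higher total means the sampled sequence
# is more likely under the model, which is the signal used for re-ranking.
sequence_scores = transition_scores.sum(dim=1)
```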
If you find the end results satisfying, I'd suggest opening a Spaces so it can be shared with the world 🌍 <|||||>Hi @gante thank you so much for the help! I was reading this before but it seems that this method can only output the transition scores for tokens that are considered by the model.generate process. However, the more general case for this application is that we have some external output candidates that may not be generated by "model.generate()".
And sure I will put a solution to a Spaces if I manage to do this and I believe this will help many others! :D <|||||>I see. We have in our plans to build a function that returns the score for any candidate sequence, but have other competing priorities :) Feel free to have a go at it, if you're interested!<|||||>Hi @gante , thanks for letting me know.
I managed to develop a solution here: https://github.com/yuchenlin/rank_outputs/blob/main/rank_outputs/main.py
Will try to wrap it up as a more general tool.
Thanks!
|
transformers | 21,984 | closed | Update Jukebox tests | # What does this PR do?
Update Jukebox tests for PyTorch 2.0.
The reason is the same as in #21975: tiny diff in scores, but sampling can give different results. | 03-06-2023 20:06:55 | 03-06-2023 20:06:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>test failure irrelevant to this PR. |
transformers | 21,983 | closed | Remove unneeded casts to bool | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR removes some conversions to `torch.bool` which are not needed anymore following #21384.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 03-06-2023 18:44:31 | 03-06-2023 18:44:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,982 | closed | docs: New terms and updates to glossary | Updates to the glossary as proposed in #21801 .
All terms suggested in the issue were added, except for Encoder/Decoder. Acronyms were also added for terms in which it made sense (e.g. NLP)
Note that this PR replaces ***autoencoding models*** with ***encoder models*** (and provides a more detailed definition) as well as merging ***autoregressive models*** and ***causal language modeling*** with ***decoder models***.
This is my first draft and would likely benefit from additional edits/revision. Any suggestions in terms of updates or areas to expand further on are welcome 🙂
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #21801
## Before submitting
- [x ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
| 03-06-2023 18:00:42 | 03-06-2023 18:00:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the detailed edits, they are helpful! 🙂 I will refresh permissions and also add a link to the pipeline for inference doc tomorrow <|||||>Added my update, and I followed the instructions to refresh permissions. I don't seem to have permissions to manually restart the CircleCI Pipeline, so not sure whether the steps I took were reflected in my last commit (I did my updates before refreshing permissions). <|||||>You can push an empty commit with `git commit -m "Trigger CI" --allow-empty" and then push to your branch.<|||||>Thanks for the review! I'll start going over the changes / implementing your suggestions over the weekend <|||||>> Thanks for your PR. I don't think we should remove entries from the glossary. Linking to other entries is better. Could you also add an entry for "self-supervised learning" since most pretraining of Transformer models use that technique?
I will add back in the old entries and add links between them, and yeah that's a great idea!<|||||>Awesome! I committed the suggestion, looks to have merged successfully🙂 Thanks again for the help with the edits/reviews! (also feel free to ping me if you see any further edits/additions to be done)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21982). All of your documentation changes will be reflected on that endpoint. |
transformers | 21,981 | open | Function infer_channel_dimension_format has a bug | ### System Info
Hi, I am working on my own DataLoader.
So my input to class transformers.SegformerImageProcessor has shape (6, 512, 512).
So channel first.
As I realized, the problem is **image.shape[first_dim]**: for me, first_dim is 0,
but your if construction says the channel dimension should always be 1 or 3,
which does not make sense for 3-dim images, because above you assigned first_dim, last_dim = 0, 2.
@amyeroberts
not sure I tagged right person, sorry
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
def infer_channel_dimension_format(image: np.ndarray) -> ChannelDimension:
if image.ndim == 3:
first_dim, last_dim = 0, 2
elif image.ndim == 4:
first_dim, last_dim = 1, 3
else:
raise ValueError(f"Unsupported number of image dimensions: {image.ndim}")
if image.shape[first_dim] in (1, 3):
return ChannelDimension.FIRST
elif image.shape[last_dim] in (1, 3):
return ChannelDimension.LAST
raise ValueError("Unable to infer channel dimension format")
```
any image with a shape (ch, w, h)
infer_channel_dimension_format(image)
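A minimal trigger for the report, reusing the function quoted above (the dtype and spatial size are arbitrary):
```python
import numpy as np

image = np.zeros((6, 512, 512), dtype=np.float32)  # channels-first, 6 channels
# shape[0] == 6 and shape[2] == 512 are both outside (1, 3), so the final
# "Unable to infer channel dimension format" ValueError is raised.
infer_channel_dimension_format(image)
```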
### Expected behavior
I expected the function to return ChannelDimension.FIRST for an image with shape (CH, W, H) | 03-06-2023 17:40:54 | 03-06-2023 17:40:54 | Hi @aleksmirosh - you've tagged the right person :) Thanks for raising this issue.
The functions in the image transforms library currently only support grayscale (1 channel) and RGB (3 channel) images. What format are the images you're trying to use? <|||||>@amyeroberts
Thank you for your quick response, and sorry for not completely describing the issue.
So I use 6 channels.
I used version 4.24.0 before and tried to update it to the latest.
Do you plan to update to any channel number in the future?
<|||||>There aren't any immediate plans to accept an arbitrary number of channels. I agree it would be useful and will add it to the potential extensions of the library.
For the 6 channel images - are these concatenations of 2, 3-channel images or a single image with 6 channels? If the former, the simplest way to get this working quickly would be to pass each of 3 RBG images to the image processor and then concatenate. However, this will likely be quite inefficient. <|||||>Hello! Just wanted to add that I've come across this issue as well, but using the CLIPProcessor which uses https://github.com/huggingface/transformers/blob/2f320661f364557c821c285729dab3881e977363/src/transformers/image_transforms.py#L304
Basically I pass in images that always have the dimension (H,W,C), but occasionally I'll get images that are (1,1,3) or (3,*,3), but in both cases the first dimension is inferred as the channel dimension, which is not what I intended. The (3,1,3) case will not error, but silently proceed, but the (1,1,3) case errors for me b/c the mean is of length 3 but the inferred num_channels of the image is 1<|||||>> For the 6 channel images - are these concatenations of 2, 3-channel images or a single image with 6 channels? If the former, the simplest way to get this working quickly would be to pass each of 3 RBG images to the image processor and then concatenate. However, this will likely be quite inefficient.
this is 6 channels single image, but I will try to use concatenation. thank you for the advice.
Also if using the parameter 'data_format=None' could it help?
@terrykong maybe data_format=None could help with your case?<|||||>Thanks for the suggestion @aleksmirosh. Unfortunately, that kwarg only affects the output format: https://github.com/huggingface/transformers/blob/2f320661f364557c821c285729dab3881e977363/src/transformers/image_transforms.py#L322-L323
The input format is not configurable and is inferred: https://github.com/huggingface/transformers/blob/2f320661f364557c821c285729dab3881e977363/src/transformers/image_transforms.py#L340
It would be nice if the input format could be specified.<|||||>Hi @aleksmirosh , `data_format` specifies the desired output format for the images. I'm working on a PR to add the option to specify for the input format @terrykong |
transformers | 21,980 | closed | Fix gradient checkpointing bug in ESM | This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante | 03-06-2023 17:21:52 | 03-06-2023 17:21:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,979 | closed | Fix gradient checkpointing bug in Codegen | This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante | 03-06-2023 17:18:23 | 03-06-2023 17:18:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,978 | closed | Fix gradient checkpointing bug in BlipText | This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante | 03-06-2023 17:15:01 | 03-06-2023 17:15:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,977 | closed | Fix gradient checkpointing bug in Blenderbot Small | This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante | 03-06-2023 17:10:02 | 03-06-2023 17:10:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,976 | closed | Fix gradient checkpointing bug in BigBird Pegasus | This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante | 03-06-2023 17:05:08 | 03-06-2023 17:05:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 21,975 | closed | Update expected values for `test_xglm_sample` | # What does this PR do?
Although the `probs` values differ by only `7.59e-07`, `torch.multinomial(probs, num_samples=1)` still gives different `next_tokens`:
- 3967 (`sun`) in `torch 1.13.1`
- 4565 (`water`) in `torch 2.0`
Currently I keep both values, but we can remove the one for `torch 1.13` soon. | 03-06-2023 16:48:50 | 03-06-2023 16:48:50 | _The documentation is not available anymore as the PR was closed or merged._ |
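A sketch of how the test side can tolerate both environments while torch 1.13 is still supported (the constant and helper names are illustrative):
```python
# Either sampled id is accepted until the torch 1.13 value is dropped.
EXPECTED_NEXT_TOKEN_IDS = {3967, 4565}  # "sun" (torch 1.13.1) / "water" (torch 2.0)

def check_sampled_token(token_id: int) -> None:
    assert token_id in EXPECTED_NEXT_TOKEN_IDS
```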