repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 19,967 | closed | Transformer Model | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-30-2022 08:32:29 | 10-30-2022 08:32:29 | |
transformers | 19,966 | closed | standardize `DistilBert` class names | # What does this PR do?
This PR aims to standardize the module names of `DistilBert`. For example, the `DistilBert` layers were previously named `TransformerBlock` instead of following the current `xxxLayer` convention. This PR addresses this by renaming some core modules of `DistilBertModel`.
This way, this model, which is heavily used on the Hub, can easily benefit from the `BetterTransformer` speedup via the `optimum` library, as shown below:
```python
import torch
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer
model = AutoModel.from_pretrained("distilbert-base-uncased").eval()
model = BetterTransformer.transform(model)
input_ids = torch.LongTensor([[1, 1, 1, 1, 1]])
with torch.no_grad():
out = model(input_ids)
```
https://github.com/huggingface/optimum/pull/423
I am not sure, but I don't think this is a breaking change, since none of the key names of the modules are changed and this only touches modules that are not present in the automapping. I have also run a quick test and made sure I am getting the same results as the model card:
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
unmasker("Hello I'm a [MASK] model.")
>>> [{'score': 0.05292877182364464, 'token': 2535, 'token_str': 'role', 'sequence': "hello i'm a role model."}, {'score': 0.039685774594545364, 'token': 4827, 'token_str': 'fashion', 'sequence': "hello i'm a fashion model."}, {'score': 0.03474348038434982, 'token': 2449, 'token_str': 'business', 'sequence': "hello i'm a business model."}, {'score': 0.034622881561517715, 'token': 2944, 'token_str': 'model', 'sequence': "hello i'm a model model."}, {'score': 0.01814521849155426, 'token': 11643, 'token_str': 'modeling', 'sequence': "hello i'm a modeling model."}]
```
cc @sgugger @ydshieh
PS: I am unsure about the CI tests that are failing; they seem to pass on my local laptop, and the error does not give a proper traceback 🤔
```
[gw0] linux -- Python 3.7.12 /home/circleci/.pyenv/versions/3.7.12/bin/python
worker 'gw0' crashed while running 'tests/models/distilbert/test_modeling_distilbert.py::DistilBertModelTest::test_load_with_mismatched_shapes'
=========== xdist: worker gw0 crashed and worker restarting disabled ===========
```
I am seeing the same failure message for CI tests in https://github.com/huggingface/transformers/pull/19946 and https://github.com/huggingface/transformers/pull/19975 so maybe it's unrelated to this PR but I am not sure | 10-30-2022 08:30:23 | 10-30-2022 08:30:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hmmm, I don't feel comfortable doing such a change. I understand it's supposed to be non-breaking, but we have had usage in the past of such classes, and the renaming here seems purely cosmetic.
I understand that it would ease your conversion in `optimum`, but I'm pretty sure you'll need to adapt the implementation to other models that do not respect the format enforced here. How much effort would be needed from the `optimum` side to support the current DistilBERT layer class names?
Thanks<|||||>Thanks a lot!
I see, I was also unsure about adding these changes. No problem, I will try to find a workaround that should work without this modification. |
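One possible direction for such a workaround, sketched below purely as an illustration rather than the actual `optimum` implementation, is to locate the DistilBERT blocks by their class instead of by a naming convention (this assumes the class is still named `TransformerBlock` in `modeling_distilbert.py`):
```python
# Illustrative sketch only, not the optimum/BetterTransformer code path.
# It finds DistilBERT's layers by class instead of relying on an xxxLayer name.
from transformers import AutoModel
from transformers.models.distilbert.modeling_distilbert import TransformerBlock

model = AutoModel.from_pretrained("distilbert-base-uncased").eval()

# Collect (name, module) pairs whose class is the DistilBERT block, regardless
# of what the class happens to be called.
blocks = [
    (name, module)
    for name, module in model.named_modules()
    if isinstance(module, TransformerBlock)
]
print(f"Found {len(blocks)} transformer blocks to convert")
```
Matching on the class object keeps a conversion routine independent of any future renaming in `transformers`.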
transformers | 19,965 | closed | Cannot load TensorFlow model from PyTorch weights split to multiple files | ### System Info
- `transformers` version: 4.24.0.dev0
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cu117
- Tensorflow version (GPU?): 2.9.2
### Who can help?
@LysandreJik @patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```bash
$ git clone https://github.com/stancld/transformers.git -b tf_longt5
$ cd transformers
$ pip install -e .
$ python
```
```python
>>> from transformers import TFLongT5ForConditionalGeneration
>>> m = TFLongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-xl", from_pt=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/transformers/src/transformers/modeling_tf_utils.py", line 2613, in from_pretrained
raise EnvironmentError(
OSError: google/long-t5-tglobal-xl does not appear to have a file named tf_model.h5 or pytorch_model.bin.
>>> m = TFLongT5ForConditionalGeneration.from_pretrained(MODEL_NAME, from_flax=True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/transformers/src/transformers/modeling_tf_utils.py", line 2613, in from_pretrained
raise EnvironmentError(
OSError: google/long-t5-tglobal-xl does not appear to have a file named tf_model.h5 or pytorch_model.bin.
```
### Expected behavior
Being able to load a TensorFlow model from a PyTorch checkpoint when it is split into multiple files due to its large size. | 10-30-2022 08:16:28 | 10-30-2022 08:16:28 | No this is not supported yet, we'll work on adding support for this later on :-)<|||||>@sgugger Great to hear! 🔝 Feel free then to close this issue if redundant, thanks! :] <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Haven't forgotten, I plan to look into this in December :-) |
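Until sharded `from_pt` loading is supported, one interim workaround (an illustrative sketch, not taken from this thread, and assuming the `TFLongT5ForConditionalGeneration` class from the branch installed in the reproduction steps is available) is to consolidate the PyTorch shards locally and convert from that single file:
```python
# Illustrative workaround: load the sharded PyTorch checkpoint with the PyTorch
# class (sharded loading is supported there), re-save it as a single file, then
# convert to TensorFlow from the local copy. Requires enough RAM/disk for the
# full model.
from transformers import LongT5ForConditionalGeneration, TFLongT5ForConditionalGeneration

pt_model = LongT5ForConditionalGeneration.from_pretrained("google/long-t5-tglobal-xl")
pt_model.save_pretrained("long-t5-tglobal-xl-local", max_shard_size="100GB")  # one shard

tf_model = TFLongT5ForConditionalGeneration.from_pretrained("long-t5-tglobal-xl-local", from_pt=True)
tf_model.save_pretrained("long-t5-tglobal-xl-tf")
```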
transformers | 19,964 | closed | Removed mt5 dependency on t5 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19303. Removes the dependency of mt5 on t5.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-30-2022 05:41:52 | 10-30-2022 05:41:52 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19964). All of your documentation changes will be reflected on that endpoint.<|||||>Creating this PR after a long time because of a tensorflow AVX problem on my system due to which I couldn't run any tests. Currently working on my friend's laptop and making changes:)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,963 | closed | Generate: contrastive search with full optional outputs | # What does this PR do?
This PR further massages PT's `contrastive_search` in advance of the conversion to TF. It does the following modifications:
1. Pipes additional outputs that were missing (e.g. when `output_attentions` is `True`)
2. Rewrites part of the input replication to share logic with beam search -- replicating the input for `top_k` candidates is the same as replicating the input for `num_beams`
3. Removes additional redundant/unused operations
4. Because we now have all outputs (see 1), adds the standard suite of tests for a generation method
5. Moves integration tests to the corresponding model folder
All tests passing locally (`RUN_SLOW=1 py.test tests/* -k contrastive -vv`)
After this PR, we can start with the TF conversion. | 10-29-2022 16:14:31 | 10-29-2022 16:14:31 | _The documentation is not available anymore as the PR was closed or merged._ |
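For reference, a minimal way to exercise contrastive search together with the optional outputs piped in this PR looks roughly like the sketch below (`gpt2` is only an example checkpoint):
```python
# Illustrative sketch: penalty_alpha + top_k triggers contrastive search, and
# return_dict_in_generate exposes the extra optional outputs.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("DeepMind Company is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    penalty_alpha=0.6,
    top_k=4,
    max_new_tokens=32,
    output_attentions=True,
    output_hidden_states=True,
    return_dict_in_generate=True,
)
print(tokenizer.decode(outputs.sequences[0], skip_special_tokens=True))
```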
transformers | 19,962 | closed | Add Onnx Config for PoolFormer | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (https://github.com/huggingface/transformers/issues/16308)
Add changes to make PoolFormer models available for Onnx conversion.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ChainYo
| 10-29-2022 13:16:42 | 10-29-2022 13:16:42 | I have run
```RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k "poolformer"```

<|||||>Conversion output

<|||||>@ChainYo @lewtun Any suggestions here?
Thanks and Regards.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19962). All of your documentation changes will be reflected on that endpoint.<|||||>Any updates on this PR?<|||||>@ChainYo Any updates here?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @lewtun and @michaelbenayoun, we should merge this before the Optimum change.
Could you help on this?<|||||>Hi @ChainYo,
The change is already here, I think the PR can be merged once the conflicts are resolved.
Also @BakingBrains could you add it to Optimum as well? It should not require much effort. If not, I can make it myself.<|||||>@michaelbenayoun Sure, I will add it to Optimum.
Thank you<|||||>I tried to resolve the conflicts, but I think I messed up.<|||||>I just opened a new pull request for the same change with the conflicts resolved; can you please check, @michaelbenayoun?
https://github.com/huggingface/transformers/pull/20868
Thank you |
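For context, the ONNX config added for an image model such as PoolFormer typically looks roughly like the following sketch (illustrative only; the class name, axis names and tolerance in the merged PR may differ):
```python
# Illustrative sketch of an OnnxConfig for an image model.
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class PoolFormerOnnxConfigSketch(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # Dynamic axes for the pixel_values input tensor.
        return OrderedDict(
            [("pixel_values", {0: "batch", 1: "num_channels", 2: "height", 3: "width"})]
        )

    @property
    def atol_for_validation(self) -> float:
        # A slightly looser tolerance than the default for output validation.
        return 2e-3
```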
transformers | 19,961 | closed | Update README.md | [](https://workerb.linearb.io/v2/badge/collaboration-page?magicLinkId=Ds1ztbl)
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-29-2022 12:26:57 | 10-29-2022 12:26:57 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19961). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,960 | closed | `return_dict` not working in `modeling_t5.py`: I set `return_dict=True` but a tuple is returned | ### System Info
- `transformers` version: 4.22.1
- Platform: Linux-5.13.0-48-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patrickvonplaten Many thanks!
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am using the code from Facebook Research's [FiD](https://github.com/facebookresearch/FiD), and I try to run this code:
```
for i, batch in enumerate(dataloader):
(idx, labels, _, context_ids, context_mask) = batch
outputs = model(
input_ids=context_ids.cuda(),
attention_mask=context_mask.cuda(),
labels=labels.cuda(),
return_dict=True,
head_mask=head_mask,
decoder_head_mask=decoder_head_mask
)
```
And it reports an error:
```
File "/home/user/anaconda3/envs/uw/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 1695, in forward
encoder_last_hidden_state=encoder_outputs.last_hidden_state,
AttributeError: 'tuple' object has no attribute 'last_hidden_state'
```
So I went to this line to inspect the T5 encoder output:
https://github.com/huggingface/transformers/blob/v4.23.1/src/transformers/models/t5/modeling_t5.py#L1609
So I use this code
```
encoder_outputs = self.encoder(
input_ids=input_ids,
attention_mask=attention_mask,
inputs_embeds=inputs_embeds,
head_mask=head_mask,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
print(type(encoder_outputs),"@@@",return_dict)
```
### Expected behavior
It prints `<class 'tuple'> @@@ True`, so even though I set `return_dict=True`, a tuple is returned. | 10-29-2022 08:22:46 | 10-29-2022 08:22:46 | Hi @CaffreyR 👋 At a first glance at our code base, I don't see how that bug can arise 🤔 Can you share a script or a notebook where the issue can be reproduced?<|||||>Hi @gante, yes of course! Many thanks! The code is here https://github.com/CaffreyR/FiD with a little revision of https://github.com/facebookresearch/FiD. We can see our problem is here: https://github.com/CaffreyR/FiD/blob/main/train_reader.py#L63.
The transformers version of this code is different from my experiment. (This is the script that is easiest for you to reproduce.) Please follow the steps in the `readme` at https://github.com/facebookresearch/FiD#download-data to prepare the data (it is a bit large), and try to run
```
python train_reader.py \
--use_checkpoint \
--train_data open_domain_data/NQ/train.json \ # after we preparing the data
--eval_data open_domain_data/NQ/dev.json\ # after we preparing the data
--model_size base \
--per_gpu_batch_size 1 \
--n_context 100 \
--name my_experiment \
--checkpoint_dir checkpoint \
```
This dataset is `NaturalQuestions`; it is a little tricky to get the data prepared, so I am very grateful for your help! :)
Thank you very much!
<|||||>Hey @CaffreyR -- with a long script it's hard to pinpoint the issue :) We need a short reproducible script, otherwise we will not prioritize this issue.<|||||>Hi @gante, it is very interesting: when I try to use this code, it runs successfully. The batch is the same as in FiD; only the model is different. The original Facebook code inherits from and nests the T5 model.
```
import torch
import transformers
model = transformers.T5ForConditionalGeneration.from_pretrained('t5-base')
# model = src.model.FiDT5(t5.config)
# model.load_t5(t5.state_dict())
context_ids=torch.tensor([[[ 822, 10, 3, 9, 538, 213, 1442, 9481, 1936, 10687,
999, 2233, 10, 1862, 12197, 16, 1547, 2625, 10, 1862,
12197, 16, 1547, 37, 1862, 12197, 16, 1547, 2401, 7,
12, 3, 9, 1059, 116, 2557, 11402, 47, 12069, 139,
46, 2913, 358, 788, 12, 8, 9284, 13, 941, 2254,
11, 748, 224, 38, 8, 169, 13, 306, 6339, 53,
1196, 41, 15761, 553, 61, 7299, 6, 3, 29676, 6,
21455, 2465, 6, 6256, 9440, 7, 6, 11, 20617, 277,
5, 100, 47, 294, 13, 8, 2186, 1862, 9481, 14310,
16781, 57, 13615, 7254, 40, 402, 122, 6, 84, 11531,
26, 10687, 585, 11, 748, 12, 993, 10687, 7596, 16,
8, 2421, 296, 5, 37, 1862, 12197, 441, 1547, 3,
28916, 16, 8, 778, 8754, 7, 24, 2237, 12, 46,
993, 16, 542, 8273, 999, 6, 902, 16, 27864, 6,
3504, 21247, 6, 11, 31251, 22660, 5, 1, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]])
labels=torch.tensor([[1547, 1]])
context_mask=torch.tensor([[[ True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, False, False,
False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False,
False, False, False, False, False, False, False, False, False, False]]])
# print(context_ids)
# print(labels)
# print(context_mask)
n_layers, n_heads = 12, 12
head_importance = torch.zeros(n_layers, n_heads).to('cpu')
attn_entropy = torch.zeros(n_layers, n_heads).to('cpu')
head_mask = torch.ones(n_layers, n_heads).to('cpu')
head_mask.requires_grad_(requires_grad=True)
decoder_head_mask = torch.ones(n_layers, n_heads).to('cpu')
decoder_head_mask.requires_grad_(requires_grad=True)
if context_ids != None:
# inputs might have already be resized in the generate method
# if context_ids.dim() == 3:
# self.encoder.n_passages = context_ids.size(1)
context_ids = context_ids.view(context_ids.size(0), -1)
if context_mask != None:
context_mask = context_mask.view(context_mask.size(0), -1)
outputs = model.forward(
input_ids=context_ids,
attention_mask=context_mask,
labels=labels,
return_dict=True,
head_mask=head_mask,
decoder_head_mask=decoder_head_mask
)
# outputs = model(
# input_ids=context_ids.cuda(),
# attention_mask=context_mask.cuda(),
# labels=labels.cuda(),
# return_dict=True,
# head_mask=head_mask.cuda(),
# decoder_head_mask=decoder_head_mask.cuda()
# )
print(outputs)
```
It might be a problem with the inheritance; I don't know, it is just different when I try to simplify the code. :(
```
def forward(self, input_ids=None, attention_mask=None, **kwargs):
if input_ids != None:
# inputs might have already be resized in the generate method
if input_ids.dim() == 3:
self.encoder.n_passages = input_ids.size(1)
input_ids = input_ids.view(input_ids.size(0), -1)
if attention_mask != None:
attention_mask = attention_mask.view(attention_mask.size(0), -1)
return super().forward(
input_ids=input_ids,
attention_mask=attention_mask,
**kwargs
)
```
<|||||>@CaffreyR then it's almost surely an upstream problem -- I noticed it uses `transformers==3.0.2`, which may explain the issue you're seeing :)
While I can't provide support in these situations (the problem is not present in `transformers`), my advice would be to open an issue in FID and/or to try to monkey-patch their problematic model code.<|||||>OK then, I will give it a try ! Thanks!!! |
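A monkey-patch along these lines could serve as a stopgap; it is a rough sketch that assumes the wrapped FiD encoder returns a plain tuple, and it is not code from either repository:
```python
# Rough sketch: wrap the custom encoder's forward so it always returns a
# BaseModelOutput, which is what T5ForConditionalGeneration expects when
# return_dict=True. `model` is the FiD model from the snippets above.
from transformers.modeling_outputs import BaseModelOutput

_original_encoder_forward = model.encoder.forward

def _forward_returning_dict(*args, **kwargs):
    encoder_outputs = _original_encoder_forward(*args, **kwargs)
    if isinstance(encoder_outputs, tuple):
        encoder_outputs = BaseModelOutput(last_hidden_state=encoder_outputs[0])
    return encoder_outputs

model.encoder.forward = _forward_returning_dict
```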
transformers | 19,959 | closed | Training using accelerate and deepspeed with ZeRO results in model weights mismatch | ### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am currently trying to use deepspeed to finetune an AutoModelForCausalLM model (facebook/opt-1.3b) on a multi-GPU instance with ZeRO optimization, using the unmodified `run_clm_no_trainer.py` script from [this blog post on HF](https://huggingface.co/blog/pytorch-fsdp). The model trains correctly, but an error occurs when loading the model using the code snippet below.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b", use_fast=True)
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
model.load_state_dict(torch.load("./opt-1.3b-wikitext/pytorch_model.bin"))
```
It results in an error with the following message below.
```
RuntimeError: Error(s) in loading state_dict for OPTForCausalLM:
size mismatch for model.decoder.embed_tokens.weight: copying a param with shape torch.Size([50265, 2048])
from checkpoint, the shape in current model is torch.Size([50272, 2048]).
size mismatch for lm_head.weight: copying a param with shape torch.Size([50265, 2048]) from checkpoint, the
shape in current model is torch.Size([50272, 2048]).
```
Which is very confusing, since the model does not raise any errors about loading weights during training, even over multiple epochs. I have tried using a different optimizer as well as disabling mixed and half precision, but the error still persists. I am unsure if this is a bug or if I have something misconfigured; any help would be greatly appreciated.
My ds_config :
```python
{'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'gradient_accumulation_steps': 1, 'zero_optimization': {'stage': 2, 'offload_optimizer': {'device': 'none'}, 'offload_param': {'device': 'none'}, 'stage3_gather_16bit_weights_on_model_save': False}, 'steps_per_print': 'inf', 'fp16': {'enabled': True, 'auto_cast': True}}
```
My training command:
```
accelerate launch run_clm_no_trainer.py \
--model_name_or_path facebook/opt-1.3b \
--dataset_name wikitext \
--num_train_epochs 6 \
--block_size 128 \
--output_dir ./opt-1.3b-wikitext
```
### Expected behavior
Models trained using accelerate should be loadable using `model.load_state_dict`. | 10-28-2022 21:46:48 | 10-28-2022 21:46:48 | Thanks for the report!
Cc @ArthurZucker
The OPT tokenizer does not have the same length (50265) as the model embeddings (50272), which causes problems with all our language modeling fine-tuning scripts where there is an automatic resize of the model embeddings to the tokenizer length.
I'm guessing this is to get to a multiple of 8, but there should be fake tokens in the tokenizer to accommodate that maybe.
@JohnnyRacer If you remove the line [here](https://github.com/huggingface/transformers/blob/c87ae86a8f9f56ae193461fa3db6dc20f80eabe4/examples/pytorch/language-modeling/run_clm_no_trainer.py#L381) in the example you're using, you won't have any problem.<|||||>I see. We should probably have all tokenizers and models share the same embedding size? Seems like people often ask this question and it's a bit confusing + it could be good for zero-shot learning if we have extra fake tokens.
WDYT? <|||||>In those cases, I think we add fake tokens to the tokenizer. cc @LysandreJik to make sure I'm not saying something wrong.
**Edit:** Actually talked to him and we can fix the example instead. Will make a PR later today.<|||||>Should be now fixed by the above PR! |
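For anyone hitting this before the example fix lands, one way to load an already fine-tuned checkpoint is to resize the freshly initialized embeddings to the tokenizer length before loading the weights (an illustrative sketch, not part of the official fix):
```python
# Illustrative workaround: the fine-tuned checkpoint was saved with embeddings
# resized to len(tokenizer) == 50265, so resize the base model before loading it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b", use_fast=True)
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
model.resize_token_embeddings(len(tokenizer))

state_dict = torch.load("./opt-1.3b-wikitext/pytorch_model.bin", map_location="cpu")
model.load_state_dict(state_dict)
```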
transformers | 19,958 | closed | Using GPT2 tokenizer with DataCollatorForLanguageModeling | ### System Info
- `transformers` version: 4.21.3
- Platform: Linux-5.10.0-18-amd64-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@SaulLu @patil-suraj @patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, DataCollatorForLanguageModeling
model = AutoModelForCausalLM.from_pretrained("nferruz/ProtGPT2")
tokenizer = AutoTokenizer.from_pretrained("nferruz/ProtGPT2")
tokenizer.pad_token = tokenizer.eos_token
data_collator = DataCollatorForLanguageModeling(tokenizer = tokenizer, mlm=False)
list_of_seqs = ['GLWSKIKEVGKEAAKAAAKAAGKAALGAVSEAV', 'DGVKLCDVPSGTWSGHCGSSSKCSQQCKDREHFAYGGACHYQFPSVKCFCKRQC']
tokens = tokenizer(list_of_seqs, padding = True, return_tensors='pt', return_special_tokens_mask=True)
collated = data_collator(tokens)
```
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/zachmartinez/miniconda3/lib/python3.9/site-packages/transformers/data/data_collator.py", line 42, in __call__
return self.torch_call(features)
File "/home/zachmartinez/miniconda3/lib/python3.9/site-packages/transformers/data/data_collator.py", line 732, in torch_call
"input_ids": _torch_collate_batch(examples, self.tokenizer, pad_to_multiple_of=self.pad_to_multiple_of)
File "/home/zachmartinez/miniconda3/lib/python3.9/site-packages/transformers/data/data_collator.py", line 404, in _torch_collate_batch
length_of_first = examples[0].size(0)
AttributeError: 'tokenizers.Encoding' object has no attribute 'size'
```
### Expected behavior
I would expect that the collator would accept the tokenizer output and perform the collating. Any help would be appreciated. | 10-28-2022 21:04:27 | 10-28-2022 21:04:27 | The data collator will expect a list of samples from a torch Dataset, but you are passing a single dictionary to it (the output of the tokenizer is one dictionary, with the keys being the arguments expected by the models like `input_ids`, and the values being tensors).
You can actually call `model(**tokens)` directly; you don't need the data collator in this example.<|||||>Thank you very much for your quick response! I'm simply trying to fine-tune the model with CLM, so I figured I needed to use the data collator. |
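For the fine-tuning case the collator still works; it just needs a list of per-example features rather than one batched dictionary. A small sketch building on the snippet above:
```python
# Illustrative sketch: tokenize each sequence separately (no padding, no tensors)
# and hand the resulting list to the collator, which pads the batch and builds
# the shifted CLM labels (mlm=False).
features = [tokenizer(seq, return_special_tokens_mask=True) for seq in list_of_seqs]
batch = data_collator(features)
print(batch["input_ids"].shape, batch["labels"].shape)
```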
transformers | 19,957 | closed | KeyError: 'eval_loss' | I am trying to build a Question Answering pipeline with the Hugging Face framework but am facing a `KeyError: 'eval_loss'` error. My goal is to train, save the best model at the end, and evaluate the validation set with the loaded model. My trainer configuration looks like this:
args = TrainingArguments(f'model_training',
evaluation_strategy="epoch",
label_names = ["start_positions", "end_positions"],
logging_steps = 1,
learning_rate=2e-5,
num_train_epochs=epochs,
save_total_limit = 2,
load_best_model_at_end=True,
save_strategy="epoch",
logging_strategy="epoch",
report_to="none",
weight_decay=0.01,
fp16=True,
push_to_hub=False)
While training, I get this error:
Traceback (most recent call last):
File "qa_pipe.py", line 286, in <module>
pipe.training(train_d, val_d, epochs = 2)
File "qa_pipe.py", line 263, in training
self.trainer.train()
File "/home/admin/qa/lib/python3.7/site-packages/transformers/trainer.py", line 1505, in train
ignore_keys_for_eval=ignore_keys_for_eval,
File "/home/admin/qa/lib/python3.7/site-packages/transformers/trainer.py", line 1838, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/admin/qa/lib/python3.7/site-packages/transformers/trainer.py", line 2090, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/home/admin/qa/lib/python3.7/site-packages/transformers/trainer.py", line 2193, in _save_checkpoint
metric_value = metrics[metric_to_check]
KeyError: 'eval_loss'
The minimal working example is provided on [colab][1]
How to avoid this error and save the best model at last?
[1]: https://colab.research.google.com/drive/1JNHK8CnMHTm6VMukvDFJq8nvaHhBkxgM?usp=sharing
### Who can help?
@LysandreJik
@sgugger
@patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1JNHK8CnMHTm6VMukvDFJq8nvaHhBkxgM?usp=sharing
### Expected behavior
It should run without the error. | 10-28-2022 19:18:21 | 10-28-2022 19:18:21 | It doesn't look like you are providing any labels to your model (your functions preparing the dataset do not generate any `start_positions` and `end_positions`). More generally, please use the [forums](https://discuss.huggingface.co/) for any help to debug your code, as we keep the issues for bugs and feature requests only.<|||||>```
from transformers import AutoModelForQuestionAnswering
from transformers import TrainingArguments
from transformers import Trainer
from tqdm.auto import tqdm
from transformers import AutoTokenizer
import numpy as np
import collections
import evaluate
metric = evaluate.load("squad")
from datasets import load_dataset
from transformers import AutoTokenizer
from datasets import load_dataset
def sapl(data, n, split):
data_sampled = data[split].shuffle(seed=42).select(range(n))
return data_sampled
from datasets import load_dataset
raw_datasets = load_dataset('squad')
raw_train = sapl(raw_datasets, 100, 'train') # 100 samples
raw_test = sapl(raw_datasets, 100, 'validation') # 100 samples
n_best = 20
max_answer_length = 30
predicted_answers = []
class QA_pipeline(object):
def __init__(self, model_name,
device = 'cuda',
max_length = 512,
stride = 128):
self.model_name = model_name
self.device = device
self.tokenizer = AutoTokenizer.from_pretrained(self.model_name)
self.model = AutoModelForQuestionAnswering.from_pretrained(
self.model_name).to(self.device)
self.max_length = max_length
self.stride = stride
def _tokenization_train2(self, examples):
questions = [q.strip() for q in examples["question"]]
inputs = tokenizer(
questions,
examples["context"],
max_length=self.max_length,
truncation="only_second",
stride=self.stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
return inputs
def _tokenization_train(self, examples):
questions = [q.strip() for q in examples["question"]]
inputs = self.tokenizer(
questions,
examples["context"],
max_length=self.max_length,
truncation="only_second",
stride=self.stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
offset_mapping = inputs.pop("offset_mapping")
sample_map = inputs.pop("overflow_to_sample_mapping")
answers = examples["answers"]
start_positions = []
end_positions = []
for i, offset in enumerate(offset_mapping):
sample_idx = sample_map[i]
answer = answers[sample_idx]
start_char = answer["answer_start"][0]
end_char = answer["answer_start"][0] + len(answer["text"][0])
sequence_ids = inputs.sequence_ids(i)
# Find the start and end of the context
idx = 0
while sequence_ids[idx] != 1:
idx += 1
context_start = idx
while sequence_ids[idx] == 1:
idx += 1
context_end = idx - 1
# If the answer is not fully inside the context, label is (0, 0)
if offset[context_start][0] > start_char or offset[context_end][1] < end_char:
start_positions.append(0)
end_positions.append(0)
else:
# Otherwise it's the start and end token positions
idx = context_start
while idx <= context_end and offset[idx][0] <= start_char:
idx += 1
start_positions.append(idx - 1)
idx = context_end
while idx >= context_start and offset[idx][1] >= end_char:
idx -= 1
end_positions.append(idx + 1)
inputs["start_positions"] = start_positions
inputs["end_positions"] = end_positions
return inputs
def _tokenization_validation(self, examples):
questions = [q.strip() for q in examples["question"]]
inputs = self.tokenizer(
questions,
examples["context"],
max_length=self.max_length,
truncation="only_second",
stride=self.stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
sample_map = inputs.pop("overflow_to_sample_mapping")
example_ids = []
for i in range(len(inputs["input_ids"])):
sample_idx = sample_map[i]
example_ids.append(examples["id"][sample_idx])
sequence_ids = inputs.sequence_ids(i)
offset = inputs["offset_mapping"][i]
inputs["offset_mapping"][i] = [
o if sequence_ids[k] == 1 else None for k, o in enumerate(offset)
]
inputs["example_id"] = example_ids
return inputs
def get_train_dataset(self, train_dataset):
train_dataset = train_dataset.map(self._tokenization_train,
batched=True,
remove_columns=train_dataset.column_names,)
print(len(train_dataset), len(train_dataset))
return train_dataset
def get_val_dataset(self, val_dataset):
validation_dataset = val_dataset.map(
self._tokenization_validation,
batched=True,
remove_columns=val_dataset.column_names,)
print(len(val_dataset), len(validation_dataset))
return validation_dataset
def compute_metrics_eval(self, eval_pred):
print("it is working")
print(eval_pred)
def compute_metrics(self, start_logits, end_logits, features, examples):
example_to_features = collections.defaultdict(list)
for idx, feature in enumerate(features):
example_to_features[feature["example_id"]].append(idx)
predicted_answers = []
for example in tqdm(examples):
example_id = example["id"]
context = example["context"]
answers = []
# Loop through all features associated with that example
for feature_index in example_to_features[example_id]:
start_logit = start_logits[feature_index]
end_logit = end_logits[feature_index]
offsets = features[feature_index]["offset_mapping"]
start_indexes = np.argsort(start_logit)[-1 : -n_best - 1 : -1].tolist()
end_indexes = np.argsort(end_logit)[-1 : -n_best - 1 : -1].tolist()
for start_index in start_indexes:
for end_index in end_indexes:
# Skip answers that are not fully in the context
if offsets[start_index] is None or offsets[end_index] is None:
continue
# Skip answers with a length that is either < 0 or > max_answer_length
if (
end_index < start_index
or end_index - start_index + 1 > max_answer_length
):
continue
answer = {
"text": context[offsets[start_index][0] : offsets[end_index][1]],
"logit_score": start_logit[start_index] + end_logit[end_index],
}
answers.append(answer)
# Select the answer with the best score
if len(answers) > 0:
best_answer = max(answers, key=lambda x: x["logit_score"])
predicted_answers.append(
{"id": example_id, "prediction_text": best_answer["text"]}
)
else:
predicted_answers.append({"id": example_id, "prediction_text": ""})
theoretical_answers = [{"id": ex["id"], "answers": ex["answers"]} for ex in examples]
return metric.compute(predictions=predicted_answers, references=theoretical_answers)
def training(self, train_dataset, val_dataset, epochs = 2):
self.args = TrainingArguments(f'{self.model_name}_training',
logging_steps = 1,
learning_rate=2e-5,
num_train_epochs=epochs,
save_total_limit = 2,
save_strategy = "epoch",
load_best_model_at_end=True,
evaluation_strategy = "epoch", #To calculate metrics per epoch
logging_strategy="epoch", #Extra: to log training data stats for loss
weight_decay=0.01,
fp16=True,
push_to_hub=False)
self.trainer = Trainer(model = self.model,
args = self.args,
compute_metrics=self.compute_metrics_eval,
train_dataset = train_dataset,
eval_dataset = val_dataset,
tokenizer = self.tokenizer,)
self.trainer.train()
self.trainer.save_model()
def validation(self, val_dataset, raw_val_dataset):
self.trainer = Trainer(model=self.model)
self.trainer.model = self.model.cuda()
predictions, _, _ = self.trainer.predict(val_dataset)
start_logits, end_logits = predictions
output = self.compute_metrics(start_logits, end_logits, val_dataset, raw_val_dataset)
return output
```
```
pipe = QA_pipeline("emilyalsentzer/Bio_ClinicalBERT", device = 'cuda:0')
train_d = pipe.get_train_dataset(raw_train)
val_d = pipe.get_val_dataset(raw_test)
print(pipe.validation(val_d, raw_test))
pipe.training(train_d, val_d, epochs = 5)
```
That is my code, while `load_best_model_at_end=True` it's giving error <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@monk1337 It is actually very helpful just to know that this was initiated by using `load_best_model_at_end=True`! I was struggling with the same error. Using `load_best_model_at_end=False` solved it for me.
However, this shouldn't be necessary. @sgugger this is a bug. |
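If keeping `load_best_model_at_end=True` is desired, the missing piece is a metric the `Trainer` can actually find: point `metric_for_best_model` at a key that `compute_metrics` really returns. An illustrative sketch with assumed metric names:
```python
# Illustrative sketch: compute_metrics must return a dict, and metric_for_best_model
# must match one of its keys (the Trainer prefixes it with "eval_" automatically).
from transformers import TrainingArguments

args = TrainingArguments(
    "model_training",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    logging_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1",  # assumes compute_metrics returns {"f1": ..., "exact_match": ...}
    greater_is_better=True,
)
```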
transformers | 19,956 | closed | Add TF image classification example script | # What does this PR do?
Adds the TF equivalent for the PyTorch image classification example script.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 10-28-2022 18:22:34 | 10-28-2022 18:22:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Still no good, I scanned the test fetcher and found a potential bug. COuld you add
```py
elif f.startswith("examples/tensorflow"):
test_files_to_run.append("examples/tensorflow/test_tensorflow_examples.py")
```
at the line 562 of `utils/test_fetcher.py` (in-between PyTorch and Flax)? I think that's what causing the issue of the tests not running.<|||||>> It looks like your branch is a bit old and does not contain some fixes made to make sure the example tests run when an example is modified (you can see the test examples are not running here 😅 ). Could you try a rebase on main?
@sgugger I've rebased from upstream main and force pushed again. If I run `git log --oneline` I can see these changes are applied on top of the tip of main.
```
270bfb056 (HEAD -> add-examples-tf-image-classification, origin/add-examples-tf-image-classification) Add tests
a2256258b Fix up
1a1594cb8 Update requirements
b6a2f1ef9 TF image classification script
9ccea7acb (upstream/main, main) Fix some doctests after PR 15775 (#20036)
a639ea9e8 Add **kwargs (#20037)
ec6878f6c Now supporting pathlike in pipelines too. (#20030)
aa39967b2 reorganize glossary (#20010)
305e8718b Show installed libraries and their versions in CI jobs (#20026)
```
The examples still aren't running and I can't see in the diff where I could be overriding this 😅 I'm sure there's something I'm overlooking. I'll keep digging but let me know if there's something else I should be doing. Sorry to bother.<|||||>@sgugger - sorry late on the comment as well. I'll add your suggestion! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @amyeroberts, is this PR still going ahead? It looked almost ready!<|||||>@Rocketknight1 Yes - sorry, this fell down my priority list for a bit. Code is all ready to go - I was trying to find models that make the tests run quickly c.f. [this comment](https://github.com/huggingface/transformers/pull/19956#discussion_r1012991408). <|||||>@amyeroberts Ah, that makes sense! It's totally okay to upload your own super-mini model and use that - it doesn't really matter if the accuracy is bad, the test will just let us detect if the outputs from this model class change suddenly<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
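A tiny random checkpoint for the example test can be produced in a few lines; the sketch below uses placeholder config values and a placeholder repo name rather than an existing Hub model:
```python
# Illustrative sketch: build a deliberately tiny ViT classifier so the TF example
# test runs in seconds, then push it to a testing repo (placeholder id below).
import tensorflow as tf
from transformers import ViTConfig, TFViTForImageClassification

tiny_config = ViTConfig(
    image_size=30,
    patch_size=15,
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
    num_labels=3,
)
tiny_model = TFViTForImageClassification(tiny_config)
_ = tiny_model(tf.zeros((1, 3, 30, 30)))  # build the weights before saving
tiny_model.push_to_hub("tiny-random-vit-for-tests")  # placeholder repo id
```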
transformers | 19,955 | closed | changes for mbart_causal_lm | # What does this PR do?
This PR add FlaxMBartForCausalLM which was previously part of #19831, as discussed in [here](https://github.com/huggingface/transformers/issues/19897#issuecomment-1294648919), I am raising a separate PR for it.
Reason I want to add this model is, that it is a prerequisite to Donut model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sanchit-gandhi | 10-28-2022 17:28:15 | 10-28-2022 17:28:15 | @sanchit-gandhi please have a look when you have some time to spare; for some reason the tests did not run. Can you check and please re-trigger the test pipeline?<|||||>Might need to fix style according to https://huggingface.co/docs/transformers/pr_checks#code-and-documentation-style<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,954 | closed | clean up vision/text config dict arguments | # What does this PR do?
Remove `vision_config_dict` and `text_config_dict`: just use `vision_config` and `text_config`.
- Make code base cleaner
- Avoid surprising behavior (see the comment)
| 10-28-2022 15:51:57 | 10-28-2022 15:51:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Without this PR, we get somewhat surprising/confusing results:
```python
from transformers import CLIPConfig, CLIPModel
config = CLIPConfig.from_pretrained("openai/clip-vit-base-patch16")
print(config.vision_config.patch_size)
print(config.vision_config_dict["patch_size"])
config.vision_config.patch_size = 32
config.save_pretrained("v2")
config_v2 = CLIPConfig.from_pretrained("v2")
# This is not `32` which is unexpected!
# In fact, it is `vision_config_dict` is being used during loading to set `vision_config`
print(config_v2.vision_config.patch_size)
# This is 32 - unexpected!
print(config_v2.vision_config_dict["patch_size"])
config.vision_config_dict["patch_size"] = 32
config.save_pretrained("v3")
config_v3 = CLIPConfig.from_pretrained("v3")
# This is 32 - unexpected!
print(config_v3.vision_config.patch_size)
# This is 32 - OK
print(config_v3.vision_config_dict["patch_size"])
```<|||||>@sgugger If you are happy with the current change, I will apply the changes to some other models, and the testing files.
So far it is good even if I don't change `to_dict`. It has already
```python
output["text_config"] = self.text_config.to_dict()
output["vision_config"] = self.vision_config.to_dict()
```<|||||>Awesome that you are working on fixing this!
Encountered the same issue with a new model I'm working on called CLIPSeg.
Also, could we update `GroupViT` as well? This is also a CLIP-like model. |
transformers | 19,953 | closed | Upload dummy models | # What does this PR do?
Upload dummy models | 10-28-2022 15:42:00 | 10-28-2022 15:42:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,952 | closed | Adding EDSR model | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19631
| 10-28-2022 15:38:01 | 10-28-2022 15:38:01 | I will add the other components based on [this](https://huggingface.co/docs/transformers/add_new_model) page. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>cc @alaradirik and @NielsRogge <|||||>Sorry for the delay.
Can some one help me on putting the model file into the organisation space?
Thanks<|||||>> Sorry for the delay. Can some one help me on putting the model file into the organisation space? Thanks
Hi @venkat-natchi, thanks for working on this!
I can help you with that but I saw that there is no conversion script yet. The conversion script (e.g. convert_original_XXX.py) loads the pre-trained original model and the randomly initialized HF model with the corresponding configuration, and replaces each parameter of the HF model with the corresponding learned parameter of the original model. We also have a convenient `push_to_hub()` method that can be added to the conversion script to create a repo on the hub and push the converted / pre-trained HF model and files. See an example conversion script over [here.](https://github.com/huggingface/transformers/blob/main/src/transformers/models/dpt/convert_dpt_to_pytorch.py)
cc @sgugger @NielsRogge <|||||>@venkat-natchi I guess you also need to rebase your branch on main as TensorFlow new release broke a lot of things so tests won't pass unless you do this.
<|||||>Thanks guys.!!
Started with a convert_script and rebased with main branch. <|||||>There is multiprocessing [here](https://github.com/sanghyun-son/EDSR-PyTorch/blob/master/src/dataloader.py) for data loading. I need some help in disengaging it and implement a simple processing step.
<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19952). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@alaradirik and @NielsRogge Friendly ping here.<|||||>Hi @venkat-natchi, would it be possible to rebase your branch on the main branch of transformers?
This way, the CI becomes greener, and allows us to review the PR in depth.<|||||>Sure, will do. Thanks<|||||>Hello, can I work on this issue? Although I'm new to open-source contributions, I've worked on super-resolution models in the past and I was wondering why HuggingFace did not have these. I am familiar with PyTorch.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Hello, can I work on this issue? Although I'm new to open-source contributions, I've worked on super-resolution models in the past and I was wondering why HuggingFace did not have these. I am familiar with PyTorch.
Hi @asrimanth, perhaps you could collaborate with @venkat-natchi on this PR if they are okay with it? Super resolution is definitely a task we would like to add to transformers and this would be a great first addition :)<|||||>Hello @alaradirik, Sure! I am interested. How do I get started?<|||||>> Hello @alaradirik, Sure! I am interested. How do I get started?
@venkat-natchi can add you as a contributor to their forked transformers repo and you two could collaborate on this branch if they are okay with it. @venkat-natchi would you prefer to work on the PR on your own or hand it over to @asrimanth instead?
In any case, you can refer to the [guidelines](https://huggingface.co/docs/transformers/add_new_model) to get started with adding a model. I'd recommend first checking you can run the original repo without any issues though. Here are some summarized points that might help:
- Each model, including different checkpoints of the same model, has its own repo on the Hub (see [DETR-ResNet-50 repo](https://huggingface.co/facebook/detr-resnet-50) as an example). This is basically a git repo that stores the checkpoint-specific configuration, preprocessing configuration and the model weights.
- The code (this PR) added to transformers acts as a boilerplate to load different checkpoints - EDSR trained on different datasets or with different resolution or larger / smaller architecture.
- configuration_edsr.py should contain all the hyperparameters, the input image size and architectural details (e.g. number of hidden layers) to initialize the model (a minimal sketch is shown right after this list).
- image_processing_edsr.py should contain the ImageProcessor class that takes in the raw input image and preprocesses it to the format expected as input to the model (resizing to a fixed input size, normalization, cropping, etc.)
- modeling_edsr.py should contain the model definition.
- The conversion script:
- Loads the pretrained original model and randomly initializes the HF implementation with the corresponding configuration
- Copies the pretrained parameters (weights and biases) of the original model to the corresponding parameters of the randomly initialized HF model (the conversion step)
- Forward propagates an arbitrary input through both the original model and converted HF model and checks if the outputs match
- Uploads the converted HF model to the hub
- Each model and image processor class is tested with scripts under `tests/models/<MODEL_NAME>/`; you can refer to other test files to see what tests to add.
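As referenced above, a minimal sketch of the configuration class could look like the following; the attribute names here are illustrative placeholders rather than the final API:
```python
from transformers import PretrainedConfig


class EdsrConfig(PretrainedConfig):
    model_type = "edsr"

    def __init__(
        self,
        num_res_blocks=16,     # 32 for the larger EDSR variants
        num_feature_maps=64,   # 256 for the larger EDSR variants
        upscale_factor=2,      # 2, 3 or 4 depending on the checkpoint
        rgb_range=255,
        num_channels=3,
        **kwargs,
    ):
        super().__init__(**kwargs)
        self.num_res_blocks = num_res_blocks
        self.num_feature_maps = num_feature_maps
        self.upscale_factor = upscale_factor
        self.rgb_range = rgb_range
        self.num_channels = num_channels
```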
Once you are done, you would need to run the following commands to check the PR passes all CI tests:
```
make style
make quality
make repo-consistency
RUN_SLOW=TRUE pytest tests/models/edsr/test_modeling_edsr.py
RUN_SLOW=TRUE pytest tests/models/edsr/test_image_processor_edsr.py
```
We can do an in-depth review once the PR passes most tests or the configuration, preprocessing and modeling is mostly complete.
Hope this helps!<|||||>Sure, I will add you as collaborator. <|||||>Sorry for the delay.
@asrimanth
I added you as a collaborator. <|||||>You can find the working version of the original repository here
https://github.com/venkat-natchi/EDSR-PyTorch/blob/master/src/trial.py
<|||||>Hello @alaradirik and the HuggingFace Team, I seem to run into an error where the ```EDSR_PRETRAINED_MODEL_ARCHIVE_LIST``` is empty and this appears to be causing some problems. From my understanding, I have to upload the model weights to a URL and mention it in that list. The existing pre-trained models are as follows:
```
url = {
"r16f64x2": "https://cv.snu.ac.kr/research/EDSR/models/edsr_baseline_x2-1bc95232.pt",
"r16f64x3": "https://cv.snu.ac.kr/research/EDSR/models/edsr_baseline_x3-abf2a44e.pt",
"r16f64x4": "https://cv.snu.ac.kr/research/EDSR/models/edsr_baseline_x4-6b446fab.pt",
"r32f256x2": "https://cv.snu.ac.kr/research/EDSR/models/edsr_x2-0edfb8a3.pt",
"r32f256x3": "https://cv.snu.ac.kr/research/EDSR/models/edsr_x3-ea3ef2c6.pt",
"r32f256x4": "https://cv.snu.ac.kr/research/EDSR/models/edsr_x4-4f62e9ef.pt",
}
```
Should I upload these weights into the hub? If so, should I upload these to my profile? Is there a way to load these weights from the URL like torch.hub.load? Please let me know.<|||||>> Should I upload these weights into the hub? If so, should I upload these to my profile? Is there a way to load these weights from the URL like torch.hub.load? Please let me know.
Hi @asrimanth , that's correct, the `EDSR_PRETRAINED_MODEL_ARCHIVE_LIST` contains the links to the uploaded checkpoints' configuration files on the Hugging Face Hub, see an example repo over [here](https://huggingface.co/kakaobrain/align-base). Note that each link / repo contains the converted model, **not** the original model weights. So you should first complete the configuration, preprocessing, modeling and conversion scripts, and then convert and upload each checkpoint released by the authors.
Repos on the hub are placed under the organization that wrote the paper (Seoul National University in this case). We can ask them to create an organization on the hub but we will place the repos under the huggingface organization until they do so.
Since model conversion is the last step, you can fill in the list with the repo paths you intend to create. For example:
```
EDSR_PRETRAINED_MODEL_ARCHIVE_LIST = {
"huggingface/edsr-base-x2": "https://huggingface.co/huggingface/edsr-base-x2/resolve/main/config.json",
"huggingface/edsr-base-x3": "https://huggingface.co/huggingface/edsr-base-x3/resolve/main/config.json",
"huggingface/edsr-base-x4": "https://huggingface.co/huggingface/edsr-base-x4/resolve/main/config.json",
"huggingface/edsr-x2": "https://huggingface.co/huggingface/edsr-x2/resolve/main/config.json",
...
}
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,951 | closed | The script exits when calling "pretrainmodel.save_pretrained" because of OOM, but the previous training phase went well | ### System Info
NVIDIA A100 Tensor Core GPUs (80G);
python 3.8.13;
torch 1.10.0+cu113;
transformers 4.20.1;
### Who can help?
@sgugger
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I fine-tune the OPT model with Transformers' Trainer() (DeepSpeed, ZeRO-2), the training process itself runs without problems; however, the script fails to load the final best model at the end. This is really confusing. I checked the function "pretrainmodel.save_pretrained"; it seems to do some operations like "del state_dict" to save memory. It is also confusing that our GPU memory (80G) is enough to finish the training process, yet the script fails to load the final best model into memory. Here is the corresponding traceback.
### Traceback
```
[INFO|trainer.py:1834] 2022-09-07 21:46:26,324 >> Loading best model from /ssdwork/results/results_mc6k_6.7b/20220907-1655/checkpoint-68 (score: 0.77099609375).
Traceback (most recent call last):
File "finetune.py", line 497, in <module>
main()
File "finetune.py", line 460, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py", line 1409, in train
return inner_training_loop(
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py", line 1771, in _inner_training_loop
self._load_best_model()
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py", line 1867, in _load_best_model
load_result = load_sharded_checkpoint(model, self.state.best_model_checkpoint, strict=strict_load)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/modeling_utils.py", line 445, in load_sharded_checkpoint
state_dict = torch.load(os.path.join(folder, shard_file))
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 857, in persistent_load
load_tensor(data_type, size, key, _maybe_decode_ascii(location))
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 846, in load_tensor
loaded_storages[key] = restore_location(storage, location)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 175, in default_restore_location
result = fn(storage, location)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 157, in _cuda_deserialize
return obj.cuda(device)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/_utils.py", line 79, in _cuda
return new_type(self.size()).copy_(self, non_blocking)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/cuda/__init__.py", line 606, in _lazy_new
return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA out of memory. Tried to allocate 12.40 GiB (GPU 0; 79.17 GiB total capacity; 0 bytes already allocated; 3.48 GiB free; 0 bytes reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Traceback (most recent call last):
File "finetune.py", line 497, in <module>
main()
File "finetune.py", line 460, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py", line 1409, in train
return inner_training_loop(
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py", line 1771, in _inner_training_loop
self._load_best_model()
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/trainer.py", line 1867, in _load_best_model
load_result = load_sharded_checkpoint(model, self.state.best_model_checkpoint, strict=strict_load)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/modeling_utils.py", line 445, in load_sharded_checkpoint
state_dict = torch.load(os.path.join(folder, shard_file))
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 857, in persistent_load
load_tensor(data_type, size, key, _maybe_decode_ascii(location))
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 846, in load_tensor
loaded_storages[key] = restore_location(storage, location)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 175, in default_restore_location
result = fn(storage, location)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 157, in _cuda_deserialize
return obj.cuda(device)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/_utils.py", line 79, in _cuda
return new_type(self.size()).copy_(self, non_blocking)
File "/home/anaconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/cuda/__init__.py", line 606, in _lazy_new
return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA out of memory. Tried to allocate 12.40 GiB (GPU 0; 79.17 GiB total capacity; 0 bytes already allocated; 3.38 GiB free; 0 bytes reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
[2022-09-07 21:46:38,873] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 859055
[2022-09-07 21:46:38,873] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 859056
[2022-09-07 21:46:38,874] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 859057
[2022-09-07 21:46:38,874] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 859058
[2022-09-07 21:46:38,874] [ERROR] [launch.py:184:sigkill_handler] ['/home/anaconda3/envs/tk-instruct/bin/python', '-u', 'finetune.py', '--local_rank=3', '--deepspeed', '/home/opt/ds_config.json', '--model_name_or_path', 'facebook/opt-6.7b', '--train_file', '/home/opt/data/train_mc6k.csv', '--validation_file', '/home/opt/data/valid_mc_300.csv', '--do_train', '--do_eval', '--fp16', '--output_dir', '/ssdwork/opt/results/results_mc6k_6.7b/20220907-1655/', '--num_train_epochs', '1000', '--per_device_train_batch_size', '4', '--evaluation_strategy', 'epoch', '--save_strategy', 'epoch', '--load_best_model_at_end', '--metric_for_best_model', 'eval_loss', '--greater_is_better', 'False', '--gradient_accumulation_steps', '32', '--use_fast_tokenizer', 'False', '--learning_rate', '1e-05', '--warmup_steps', '10', '--save_total_limit', '1', '--overwrite_cache', '--block_size', '2048'] exits with return code = 1
/home/anaconda3/envs/tk-instruct/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d
```
### Expected behavior
It should be noted that the whole process works when fine-tuning models with 2.7B parameters and below, but it exits for models with 6.7B parameters and above. I think it may not simply be due to OOM, since our single GPU memory is already 80G and we mainly call the Trainer() to finish the process (based on the DeepSpeed ZeRO-2 optimizer). I wonder if I have ignored some significant parameters like "--save_on_each_node".
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>
I have the same problem as you. How did you solve it? <|||||>After using DeepSpeed for large models (xl, xxl), the parameters are stored in shards, so the way the best model parameters are loaded changes, and DeepSpeed is not used for that loading step. |
transformers | 19,950 | closed | Fix ONNX tests for ONNX Runtime v1.13.1 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes the slow tests for ONNX in `test_onnx.py`, which were failing due to `input_qType` being renamed to `activation_qType` in `onnxruntime` v1.13.1.
With this fix, the following passes:
```
RUN_SLOW=1 pytest tests/onnx/test_onnx.py
```
| 10-28-2022 15:06:20 | 10-28-2022 15:06:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Does the new name work with the minimum version of ONNX pinned?
Are you referring to the version we list in `setup.py` or somewhere else? In `setup.py` we have `onnxruntime>=1.4.0` so users with a pre-existing installation of ONNX Runtime would have to upgrade to `v1.13.1`.
An alternative is to have an if/else statement that checks the `onnxruntime` version to guarantee backwards compatibility - I'll implement that instead :)<|||||>> An alternative is to have an if/else statement that checks the onnxruntime version to guarantee backwards compatibility - I'll implement that instead :)
Yes please!<|||||>Backwards compatibility added in https://github.com/huggingface/transformers/pull/19950/commits/01ccbea13800e6912c1e34cbece4311cd2b6b420 :) |
transformers | 19,949 | closed | Feature extraction pipeline increasing memory use | ### System Info
UBUNTU 22.04
### Who can help?
@Narsil
### Reproduction
```python
import os
import argparse
import hickle as hkl
import numpy as np
import pandas as pd
from sklearn import model_selection
from transformers import pipeline, AutoTokenizer, AutoModel
import torch
from torch.utils.data import Dataset
from tqdm import tqdm


def is_file_path(path):
    if os.path.isfile(path):
        return path
    else:
        raise argparse.ArgumentTypeError(f"{path} is not a valid file path")


def is_dir_path(path):
    if os.path.isdir(path):
        return path
    else:
        raise argparse.ArgumentTypeError(f"{path} is not a valid dir path")


parser = argparse.ArgumentParser()
parser.add_argument("-d", "--dataframe", help="Path of the dataframe", type=is_file_path, dest="df_path")
parser.add_argument("-ed", "--embedding_dir", help="Path of embeddings stored", type=is_dir_path, dest="embedding_dir")
parser.add_argument("-V", "--version", help="show program version", action="store_true", dest="version")
args = parser.parse_args()

MODEL_NAME = "anferico/bert-for-patents"
pipe = None
test_size = 1000
input_text_column = "text"
label_column = "SUBFIELD_ID"


class ListDataset(Dataset):
    def __init__(self, original_list):
        self.original_list = original_list

    def __len__(self):
        return len(self.original_list)

    def __getitem__(self, i):
        return self.original_list[i]


def get_pipeline(device_index):
    global pipe
    if pipe is None:
        print(f"Creating pipeline for device: {device_index}")
        tokenizer = AutoTokenizer.from_pretrained(
            MODEL_NAME, do_lower_case=True, model_max_length=512, truncation=True, padding=True, pad_to_max_length=True
        )
        device = torch.device(f"cuda:{device_index}" if torch.cuda.is_available() else "cpu")
        model = AutoModel.from_pretrained(MODEL_NAME).to(device)
        local_pipe = pipeline(
            "feature-extraction",
            model=model,
            tokenizer=tokenizer,
            max_length=512,
            truncation=True,
            padding=True,
            pad_to_max_length=True,
            device=device,
            framework="pt",
            batch_size=16,
        )
        pipe = local_pipe
    return pipe


def extract_embeddings(df, col, mode="cls"):
    dataset = ListDataset(df[col].tolist())
    if mode == "cls":
        pipe = get_pipeline(0)
        embeddings = []
        for embedding in tqdm(pipe(dataset, max_length=512, truncation=True, num_workers=8), total=len(dataset)):
            embeddings.append(np.squeeze(embedding)[0])
        return embeddings


def main():
    df = pd.read_csv(os.path.join(args.df_path), sep='\t', header=0)
    train_df, test_df = model_selection.train_test_split(df, shuffle=True, stratify=df[label_column],
                                                         train_size=len(df) - test_size, random_state=50)
    print("Extracting train embeddings...")
    x_train_bert = extract_embeddings(train_df, input_text_column)
    print("Extracting test embeddings...")
    x_test_bert = extract_embeddings(test_df, input_text_column)


if __name__ == "__main__":
    main()
```
### Expected behavior
I am trying to extract text embeddings from a text column of around 65k entries. I need to store the embeddings in memory to train the downstream sklearn classifier (no online, incremental mode). The memory use of the above feature extraction gets insane (around 128 GB); may I ask why this is the case? (I am just storing one NumPy array of 1024 float numbers per entry, so it should be much, much smaller.) | 10-28-2022 15:02:26 | 10-28-2022 15:02:26 | Hi @quancore ,
Is this line normal ?
```
dataset = ListDataset(df[col].tolist())
```
This will load the entire dataset in memory, not sure if 65k entries are big or not but they can add up fast if they are sizeable documents.
` df = pd.read_csv(os.path.join(args.df_path), sep='\t', header=0)`
As far as I understand you're loading the entire file here (so larger than just the col entries you're looking for)
Finally you're setting `max_length=512`, which means your largest embedding is `512 * 1024` x `65 000`, that's roughly `30GB`.
In `transformers` there's no reduction of the embedding, which is sometimes done by `sentence-transformers` (either looking at the embedding of the first token, averaging, maxing or other reduction mechanisms).
Could this part be missing ?
`pad_to_max_length=True,` means all the tensors will be 512 x 1024.
Also I'm not sure, but I think the output of the pipeline by default is a raw list, meaning it will take up more space than its `numpy` equivalent. You could try using the new `return_tensors=True` parameter to receive the embedding directly in tensor format.
Tell me if any of this helps solve your use case !
<|||||>HI @Narsil , thank you for the answer.
```
Is this line normal ?
dataset = ListDataset(df[col].tolist())
This will load the entire dataset in memory, not sure if 65k entries are big or not but they can add up fast if they are sizeable documents.
df = pd.read_csv(os.path.join(args.df_path), sep='\t', header=0)
```
The dataset is not that big, and the ListDataset instance example actually comes from one of the answers on the forum: https://discuss.huggingface.co/t/progress-bar-for-hf-pipelines/20498/2
That's why I am loading the data frame and converting it to a list for use. Do you have a better suggestion that still shows the progress?
```
Finally you're setting max_length=512 which means your larges embedding is 512 * 1024 x 65 000 that's roughly 30Go .
In transformers there's no reduction of the embedding which is done sometimes by sentence-transformers (either looking at the embedding of the first token, averaging, maxing or other reduction mecanisms)
Could this part be missing?
```
As you have calculated, it should be around 30GB or so, but it is reaching up to 128 GB, which is insane. Right now, I am only interested in the CLS token, so if I set the max length to 1, will it give me only the CLS token?
```
Also I'm not sure, but I think the output of the pipeline by default is a raw list meaning it will take up more space than it's numpy equivalent. You could try using the the new return_tensors=True parameter to receive directly the embedding in tensor format.
```
I will try this.<|||||>> The dataset is not that big and ListDataset instance example actually comes from one of the answers on the forum: https://discuss.huggingface.co/t/progress-bar-for-hf-pipelines/20498/2
If the dataset is not that big it will work fine, but since you already have the data you don't need to do `.tolist()`, I would do something like
```python
class MyDataset:
    def __init__(self, panda_frame, col):
        self.panda_frame = panda_frame
        self.col = col

    def __len__(self):
        return len(self.panda_frame[self.col])

    def __getitem__(self, i):
        # Positional access so shuffled/split frames with non-contiguous indices still work
        return self.panda_frame[self.col].iloc[i]
```
I haven't tested it, but with this gist you could get away without copying anything.
> As you have calculated, it should be around 30GB or around, but it is reaching up to 128 GB which is insane. Right now, I am only interested in the CLS token, so if I set the max token to 1, will it give me only the CLS token?
Don't use `pad_to_max_length` imo. This is unnecessary in a lot of cases. What you want to do is to run the model on the actual full sentence, but keep around only the embedding for the first token (which should be token[0], but I don't know the model you are using, it may not exist depending on how it's setup.)
```
embeddings.append(embedding[0]) # Embedding should be `seq_len, hidden_dim` so `embedding[0] should be `hidden_dim`.
```
Also, since you are in a pipeline, you could write the results to disk (into a dataset, a different file for each embedding, or something like that). Then you could run the entire pipeline on very little memory; that's basically the whole point of pipelines, to try to limit aggressively the memory necessary.
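A rough sketch of that idea, reusing the `MyDataset` gist above and your existing `df` / `pipe` objects (the output directory and the `[CLS]`-only reduction are just one possible choice):
```python
import numpy as np
from pathlib import Path

out_dir = Path("embeddings")  # hypothetical output directory
out_dir.mkdir(exist_ok=True)

dataset = MyDataset(df, "text")
for i, embedding in enumerate(pipe(dataset, truncation=True, max_length=512)):
    # embedding is (1, seq_len, hidden_dim); keep only the first ([CLS]) vector and flush it to disk
    np.save(out_dir / f"{i}.npy", np.asarray(embedding[0][0]))
```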
For 30GB vs 128GB, it's hard to answer exactly without checking, but constants do pile up fast in Python. Anything referencing another variable will keep the whole data alive, for instance.
The pipeline itself shouldn't keep anything around but `lists` are much more wasteful than tensors memory wise, also at those levels of memory there's probably a big chunk of fragmentation of memory which might increase the overall usage. Garbage collection might not be able to keep up with the amount of data you generate etc, etc..
If you could reproduce an issue on a smaller scale example using something like `busybox time ...` to showcase that we're using too much memory for a given task (can't really reproduce your example right now) that'd be lovely.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,948 | closed | Transformers logging setLevel method seems not to work | ### System Info
- `transformers` version: 4.21.3
- Platform: Linux-5.15.0-33-generic-x86_64-with-glibc2.35
- Python version: 3.10.4
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@LysandreJik @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am subclassing the `Trainer` object because I want to inject some custom features in the `_inner_training_loop` method. What I noticed is that once I do that, the `logger.info` printouts, which when using the standard `Trainer` are printed to the console, are no longer printed. Even if I try to explicitly force the logger level, this seems not to work. To reproduce the behaviour, run the script below
```python3
from transformers.utils import logging
logger = logging.get_logger(__name__)
logger.setLevel("INFO")
logger.info("Hello World")
```
### Expected behavior
I would expect `Hello World` to be printed to the console, but it is not.
Why is this the case? How can I set it such that it also prints out INFO level logs? | 10-28-2022 14:03:01 | 10-28-2022 14:03:01 | I'll try<|||||>I am a beginner and unfortunately could not find an example with your issue. I know the logging library and probably could help, but I need to reproduce the example<|||||>Any thoughts @LysandreJik @sgugger ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @AndreaSottana, sorry to have missed this.
The `logging` module for `transformers` acts on the code within `transformers` itself (this is how the `logging` library works), not on user code.
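As a side note, the supported way to change the verbosity of `transformers`' own loggers is the dedicated helper:
```python
import transformers

transformers.logging.set_verbosity_info()  # only affects loggers defined inside the transformers library
```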
However, by doing the following line:
```py
logger = logging.get_logger(__name__)
```
The logger created will not depend on `transformers` but on the module you're currently running. It will, therefore, not be impacted by the methods which affect `transformers`' logging module.
This is not the cleanest workaround, but you could get what you want by specifying that this logger instance should behave as a `transformers` module with something like the following:
```py
from transformers.utils import logging
logger = logging.get_logger('transformers.custom.' + __name__)
logger.setLevel("INFO")
logger.info("Hello World")
```
This will trick it into understanding that `logger` is the logger for a module that lives within `transformers.custom`. This should print out
```
Hello World
```
just fine. |
transformers | 19,947 | closed | unexpected OOM error when creating pipeline | ### System Info
```
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2',device=0)
```
When i run these code, an unexpected error occured.
```
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 47.46 GiB total capacity; 148.00 MiB already allocated; 22.31 MiB free; 148.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
The GPU has enough space:
```
Fri Oct 28 05:53:34 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.141.03 Driver Version: 470.141.03 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro RTX 8000 Off | 00000000:1A:00.0 Off | Off |
| 33% 46C P2 74W / 260W | 2504MiB / 48601MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 15308 C python 833MiB |
| 0 N/A N/A 16307 C python 833MiB |
| 0 N/A N/A 17441 C python 833MiB |
+-----------------------------------------------------------------------------+
```
It is interesting that with 47.46 GiB total capacity, 148.00 MiB already allocated, 22.31 MiB free, and 148.00 MiB reserved in total by PyTorch, I cannot allocate 20 MiB.
I use Slurm to submit the job; I do not know whether that has an effect.
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2',device=0)
# upload job by slurm
### Expected behavior
Successfully use the pipeline, which is quite a cool thing. | 10-28-2022 13:00:47 | 10-28-2022 13:00:47 | Sorry, there were in fact some errors on the server.<|||||>Ok, if you could share what happened it could help potential readers. But glad you fixed it ! |
transformers | 19,946 | closed | gradient checkpointing for GPT-NeoX | # What does this PR do?
Add gradient checkpointing to GPT-NeoX model, in the style of GPT-J | 10-28-2022 12:49:43 | 10-28-2022 12:49:43 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The current failing tests look like infra issues.<|||||>Yes, looks like a flaky failure! |
transformers | 19,945 | closed | Add Japanese translated README | # What does this PR do?
Add Japanese README
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-28-2022 10:02:59 | 10-28-2022 10:02:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for adding this new translation! @younesbelkada do you want to give it a quick proofreading?
Also @eltociear to make sure the Japanese README stays up to date with the other ones, could you fill the [fololowing dict](https://github.com/huggingface/transformers/blob/main/utils/check_copies.py#L39) with the proper prompts/templates? Thanks!<|||||>@sgugger Thanks!
Added a fix to the this file.<|||||>Thanks! Can you also just quickly run `make fix-copies` to make the CI happy?<|||||>@younesbelkada さん初めまして!
Thank you for contacting!
I have added an explanation to the top of the README.
Also, thanks for checking and suggesting a fix!<|||||>@eltociear It looks like the start prompt you added is not present in the Japanese README. Could you make sure it's correct?<|||||>Thanks again for your contribution!<|||||>@sgugger THANKS too! |
transformers | 19,944 | closed | Problem of computing entropy in `run_bertology.py` | ### System Info
- `transformers` version: 4.20.1
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.13.0.dev20220709 (False)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger @thomwolf @stas00 @patrickvonplaten @LysandreJik Many thanks!
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In [`run_bertology.py`](https://github.com/huggingface/transformers/blob/main/examples/research_projects/bertology/run_bertology.py#L108), the `entropy` function is used to calculate the entropy of the attention matrix. But if the matrix has negative elements, `plogp = p * torch.log(p)` will be `nan`.
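For reference, a guarded variant that avoids the `nan` could look like this (just a sketch, not the script's actual implementation):
```python
import torch


def entropy(p, eps=1e-12):
    """Entropy of a (batched) distribution, clamping values so log() never sees values <= 0."""
    p = p.clamp(min=eps)
    plogp = p * torch.log(p)
    return -plogp.sum(dim=-1)
```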
### Expected behavior
`plogp = p * torch.log(p)` will be `nan` | 10-28-2022 09:48:55 | 10-28-2022 09:48:55 | Note that this is a research project, so the example is provided as is and not really maintained :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,943 | closed | NllbTokenizer/NllbTokenizerFast inserts language code incorrectly when tokenizing target text | ### System Info
- `transformers` version: 4.23.1
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.8.10
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.10.1+cu111 (True)
- Tensorflow version (GPU?): 2.7.3 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@LysandreJik @sgugger @SaulLu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
According to the [documentation](https://huggingface.co/docs/transformers/model_doc/nllb#transformers.NllbTokenizer) for `NllbTokenizer`,
> The tokenization method is `<tokens> <eos> <language code>` for source language documents, and `<language code> <tokens> <eos>` for target language documents.
When you tokenize target text, it incorrectly inserts the language code at the end of the sentence instead of the beginning.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
tokenizer.tgt_lang = "eng_Latn"
article = "UN Chief says there is no military solution in Syria"
tokens = tokenizer(text_target=article).tokens()
```
`tokens` has the value:
```
['▁UN', '▁Chief', '▁says', '▁there', '▁is', '▁no', '▁military', '▁solution', '▁in', '▁Syria', '</s>', 'eng_Latn']
```
### Expected behavior
`tokens` should have the value:
```
['eng_Latn', '▁UN', '▁Chief', '▁says', '▁there', '▁is', '▁no', '▁military', '▁solution', '▁in', '▁Syria', '</s>']
``` | 10-28-2022 08:26:40 | 10-28-2022 08:26:40 | After further research, I believe that it is intended that `PretrainedConfig.decoder_start_token_id` should be used to insert the lang code at the beginning of the sentence during fine tuning of NLLB and that the `NllbTokenizer` class is working as intended. If that is true, then the documentation for `NllbTokenizer` should be corrected, and the `run_translation.py` script should be fixed to properly set `decoder_start_token_id` when the `NllTokenizer` is being used (similar to mBART).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@ddaspit where did you get the information "after further research" ?
When I read the mBART paper, indeed, the lang token is suffixed after the source sequence and the target lang token is prefixed.
When reading the NLLB paper, it says the lang tokens are both prefixed to SRC and TGT (page 48).
What I don't understand is exactly which BOS/EOS tokens are used and where.
Usually we don't put any BOS/EOS in the source and we put BOS/EOS at the beginning/end of the target sequence (at training).
Did you get different info?
You are correct @vince62s, the paper clearly states that, contrary to other models, the `src_lang` token is placed at the beginning of the input sequence. When adding `NLLB-200` to the library, I checked that the outputs are a tiny bit different if you change this behavior. The fix is to change the prefix token and suffix token attributes of the tokenizer class. Will open a PR after checking that this will not affect the current setup.
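As a rough illustration of that fix (these are internal attributes rather than a public API, so treat this purely as a sketch):
```python
from transformers import NllbTokenizer

tok = NllbTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", src_lang="eng_Latn")
tok.prefix_tokens = [tok.convert_tokens_to_ids("eng_Latn")]  # language code first
tok.suffix_tokens = [tok.eos_token_id]                       # eos last
ids = tok("UN Chief says there is no military solution in Syria").input_ids
print(tok.convert_ids_to_tokens(ids))
# ['eng_Latn', '▁UN', ..., '</s>']
```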
The `BOS` token is never used in fairseq, while the `EOS` is used as the `BOS` token. Indeed the `decoder_input_ids` that are passed to the model are `[eos, tgt_lang, ...]` (when generating) and `[tgt_lang, ..., eos]` when training. <|||||>One slight correction: [eos, tgt_lang, ..., eos] on target side at training.
Since I implemented NLLB-200 support in OpenNMT-py, I can confirm that prefixing instead of suffixing improves BLEU scores slightly. https://forum.opennmt.net/t/nllb-200-with-opennmt-py-the-good-the-bad-and-the-ugly/5151
<|||||>The "research" I was referring to was entirely about how to properly use HuggingFace Transformers to prefix the target sentences with the lang code during decoding. I was not aware that source sentences were supposed to be prefixed with the lang code. We use NLLB pretty heavily, so I would be very happy to see this fixed.<|||||>Hi everyone!
I was also concerned with the behavior of the NLLB tokenizer at HF, so, even before discovering this issue, I made two of my own "experiments" to verify that the behavior of the tokenizer should be fixed.
1. I computed BLEU for one high-resource language pair (English-French) and one low-resource (Yoruba->Bashkir) from FLORES-200 with `[..., src_lang, eos]` and `[src_lang, ..., eos]` templates. For both directions, a significant proportion of translations (20% and 66%) is different depending on the tokenization method. For eng-fra, the new tokenization leads to a small loss in BLEU (-0.09), whereas for yor-bak, there is a larger gain (+0.21). While thorough research would require investigating more translation directions, these results already hint that `[src_lang, ..., eos]` tokenization may be beneficial.
2. I tweaked the Fairseq code for inferencing the original NLLB implementation so that it prints full tokenized source and translation during inference. The output confirms that the original implementation uses `[src_lang, ..., eos]` as source and `[tgt_lang, ..., eos]` as translation output (which implies using `[eos, tgt_lang, ...]` as a translation decoder input during training, because, as stated in the comments above, Fairseq uses `eos` instead of `bos` for translation).
The code and outputs for both experiments can be found [in my Colab notebook](https://colab.research.google.com/drive/1Zl-a9sbuC0YgRBFUHByTKiKy9GqlDd7u?usp=sharing).
These experiments confirm that #22313 is implementing exactly the right tokenization method; great thanks for that! <|||||>Awesome, thanks for fixing this issue. |
transformers | 19,942 | closed | 'FlaubertTokenizer' object has no attribute 'do_lowercase' | ### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: not essential
- Using distributed or parallel set-up in script?: no
### Who can help?
@SaulLu
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_uncased")
tokenizer.tokenize("bonjour")
AttributeError: 'FlaubertTokenizer' object has no attribute 'do_lowercase'
```
This was not the case back in the days of transformers-2
I can fix it by saying
```python
tokenizer.do_lowercase = True
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_uncased")
tokenizer.tokenize("bonjour")
AttributeError: 'FlaubertTokenizer' object has no attribute 'do_lowercase'
```
### Expected behavior
```
['bonjour</w>']
```
| 10-28-2022 07:53:35 | 10-28-2022 07:53:35 | Thanks for reporting! I believe this has been fixed on the main branch. While waiting for the next release, could you do a source install?<|||||>Yes, when installing from main
```
pip install git+https://github.com/huggingface/transformers.git
```
things work again as expected |
transformers | 19,941 | closed | HFArgumentParser using a mix of json file and command line? | Is there a way to make HFArgumentParser to load first from a json/dict and then update any command line arguments on top of it? e.g., if i want to keep a custom defaults file for TrainingArguments but also update some arguments from command line. TIA | 10-28-2022 07:27:09 | 10-28-2022 07:27:09 | Please use the [forums](https://discuss.huggingface.co/) to ask questions like this as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,940 | closed | Fixing failure when labels have different lengths | Pop out the labels when padding the signal and add them back afterward, to avoid the failure caused by labels of different lengths.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-28-2022 05:35:24 | 10-28-2022 05:35:24 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19940). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for your PR. I don't see what is failing with the current code, could you provide a reproducible example of a bug?<|||||>This is a minimum example:
```python
import numpy as np
import datasets
from transformers import AutoTokenizer, BertForTokenClassification,DataCollatorForTokenClassification,Trainer,TrainingArguments
def tokenize_function(examples):
input = tokenizer(examples["tokens"], is_split_into_words=True, truncation=True)
return input
raw_datasets = datasets.load_dataset('conllpp')
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
n_label = len(raw_datasets['train'].features['ner_tags'].feature.names)
model = BertForTokenClassification.from_pretrained('bert-base-uncased',num_labels=n_label)
tokenized_datasets = raw_datasets.map(tokenize_function,
batched=True)
tokenized_datasets = tokenized_datasets.rename_column("ner_tags", "labels")
tokenized_datasets.set_format("torch")
#This will make the collector to use the torch_call function and lead to failure when the label has different length.
data_collator = DataCollatorForTokenClassification(tokenizer)
training_args = TrainingArguments(output_dir = "bert_finetune",
evaluation_strategy = "epoch",
save_strategy="epoch",
learning_rate=1e-5,
num_train_epochs=1,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
weight_decay=0.01,
push_to_hub = False)
trainer = Trainer(model,
training_args,
train_dataset=tokenized_datasets['train'],
eval_dataset=tokenized_datasets['validation'],
data_collator=data_collator,
tokenizer=tokenizer)
trainer.train()
```
The error:
Traceback (most recent call last):
File "collector_issue.py", line 40, in <module>
trainer.train()
File "/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py", line 1500, in train
return inner_training_loop(
File "/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/trainer.py", line 1716, in _inner_training_loop
for step, inputs in enumerate(epoch_iterator):
File "/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
data = self._next_data()
File "/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 671, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 61, in fetch
return self.collate_fn(data)
File "/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/data/data_collator.py", line 42, in __call__
return self.torch_call(features)
File "/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/data/data_collator.py", line 306, in torch_call
batch = self.tokenizer.pad(
File "/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2981, in pad
return BatchEncoding(batch_outputs, tensor_type=return_tensors)
File "/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 206, in __init__
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
File "/home/haotiant/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 732, in convert_to_tensors
raise ValueError(
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
Which is caused by torch_call function in transformers/data/data_collator.py
Specifically:
https://github.com/huggingface/transformers/blob/2e35bac4e73558d334ea5bbf96a1116f7c0d7fb3/src/transformers/data/data_collator.py#L307-L313
Notice the comment "# Conversion to tensors will fail if we have labels as they are not of the same length yet."
So I just fixed this bug by popping out the labels first and then feeding them back in, and the code afterward takes care of the padding.<|||||>You have not aligned your TAGS with the tokens in this example, so it won't work anyway.<|||||>The error is due to the nested label list which has different lengths. I am not sure what you mean by aligning TAGS with tokens; I suppose you mean padding the labels so they have the same length, but wasn't the collator supposed to do the auto-padding given a nested list of labels?<|||||>The collator will do the padding of the labels to add as many pad tokens as in the inputs. But the tokenizer splits a word in multiple subwords, and you haven't done anything for that in your labels. So you still end up with labels being of different sizes.<|||||>I think the label padding is after the tokenizer padding:
https://github.com/huggingface/transformers/blob/2e35bac4e73558d334ea5bbf96a1116f7c0d7fb3/src/transformers/data/data_collator.py#L320-325
And the padding is dynamically added for every batch, so I think there will be no problem?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
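For reference, the usual fix for the situation discussed above is to expand the word-level tags to the subword tokens before training, so the collator only has to pad. A sketch reusing the names from the example in this thread (`-100` marks positions the loss ignores):
```python
def tokenize_and_align_labels(examples):
    tokenized = tokenizer(examples["tokens"], is_split_into_words=True, truncation=True)
    all_labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)  # maps each subword back to its source word
        all_labels.append([-100 if w is None else tags[w] for w in word_ids])
    tokenized["labels"] = all_labels
    return tokenized

tokenized_datasets = raw_datasets.map(
    tokenize_and_align_labels,
    batched=True,
    remove_columns=raw_datasets["train"].column_names,  # keep only model inputs and labels
)
```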
transformers | 19,939 | closed | Errors when using "torch_dtype='auto'" in "AutoModelForCausalLM.from_pretrained()" to load model | ### System Info
python 3.8.13;
torch 1.10.0+cu113;
transformers 4.20.1;
### Who can help?
@stas00,@sgugger,@patil-suraj, @patrickvonplaten, @LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import (
AutoModelForCausalLM,
AutoConfig
)
from transformers.modeling_utils import PreTrainedModel
path = "./opt-6.7b-ori"
config = AutoConfig.from_pretrained('facebook/opt-6.7b')
model = AutoModelForCausalLM.from_pretrained('facebook/opt-6.7b',torch_dtype='auto',config=config,cache_dir='/ssdwork/cache/')
pretrainmodel = PreTrainedModel(config=config)
pretrainmodel.model = model
pretrainmodel.save_pretrained(save_directory=path, is_main_process=True , state_dict=None)
```
### BUG
```
Traceback (most recent call last):
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 308, in _check_seekable
f.seek(f.tell())
AttributeError: 'list' object has no attribute 'seek'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/modeling_utils.py", line 461, in load_state_dict
return torch.load(checkpoint_file, map_location="cpu")
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 594, in load
with _open_file_like(f, 'rb') as opened_file:
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 235, in _open_file_like
return _open_buffer_reader(name_or_buffer)
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 220, in __init__
_check_seekable(buffer)
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 311, in _check_seekable
raise_err_msg(["seek", "tell"], e)
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/torch/serialization.py", line 304, in raise_err_msg
raise type(e)(msg)
AttributeError: 'list' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "re_generate_best_model_from_shard.py", line 114, in <module>
model = AutoModelForCausalLM.from_pretrained(folder_name, torch_dtype='auto', cache_dir='/ssdwork/cache/')
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2148, in from_pretrained
one_state_dict = load_state_dict(resolved_archive_file)
File "/ssdwork/miniconda3/envs/tk-instruct/lib/python3.8/site-packages/transformers/modeling_utils.py", line 464, in load_state_dict
with open(checkpoint_file) as f:
TypeError: expected str, bytes or os.PathLike object, not list
```
<img width="702" alt="image" src="https://user-images.githubusercontent.com/83019888/198872310-03848039-fef4-47e4-826b-2d1224c38eca.png">
### Expected behavior
The original model weights are in fp16. As we know, loading a model with 'from_pretrained()' will by default cast the dtype from fp16 to fp32, so I added torch_dtype='auto' as the official guideline suggests, but it turns out to raise an error. However, if we use torch_dtype=torch.float16, we get the desired result. | 10-28-2022 04:23:56 | 10-28-2022 04:23:56 | I believe this has been fixed in more recent versions of Transformers (can't be entirely sure since your code sample and traceback are not properly formatted between three backticks, so very hard to read).
Could you try to upgrade to the latest version?<|||||>> I believe this has been fixed in more recent versions of Transformers (can't be entirely sure since your code sample and traceback are not properly formatted between three backticks, so very hard to read). Could you try to upgrade to the latest version?
alright, I will try to upgeade the version of Transformers.<|||||>> I believe this has been fixed in more recent versions of Transformers (can't be entirely sure since your code sample and traceback are not properly formatted between three backticks, so very hard to read). Could you try to upgrade to the latest version?
Hello, I've updated the verson of transformer, and there is still the bug. I've update the comment with a screen shot of bug for read.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
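For reference, a sketch of the workaround the reporter mentions at the end of the thread — load in fp16 explicitly and save the loaded model directly, rather than wrapping it in a bare `PreTrainedModel` (the output path is made up):
```python
import torch

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16)
model.save_pretrained("./opt-6.7b-fp16")  # weights stay in fp16 on disk
```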
transformers | 19,938 | closed | Incorrect Document Content in BlenderBot Tokenizer | `The BlenderBot tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not.` However, the examples in BlenderBot Tokenizer (`BlenderbotTokenizer`) are the same:
https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/blenderbot/tokenization_blenderbot.py#L105
The same issue also occurs in `BlenderbotTokenizerFast`:
https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/blenderbot/tokenization_blenderbot_fast.py#L64 | 10-28-2022 01:59:38 | 10-28-2022 01:59:38 | Might be of interest to @ArthurZucker <|||||>Okay, this is simply because the model used, `BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B")` has the attribute `add_prefix_space` set to `True` by default. If you set it to `False` we have the expected different output.
Let me open a PR to fix this. |
transformers | 19,937 | closed | Onnx CLIP Model outputs "Shape mismatch" warining on inference | ### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Convert clip to onnx `python -m transformers.onnx -m openai/clip-vit-base-patch32 onnx/`
2. Try to inference
```
import transformers
import onnxruntime
import numpy as np
from PIL import Image
import torch
model_path = "onnx/model.onnx"
example_image = "swatch01.png"
session = onnxruntime.InferenceSession(model_path)
processor = transformers.CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
labels = [
"a close up of a person's eye photo",
"a person's arm with a bunch of lipstick swatches on it",
# (snip)
]
img = Image.open(example_image)
inputs = processor(text=candidate_labels, images=img,
return_tensors="np", padding=True)
ort_inputs = {
"input_ids": inputs["input_ids"],
"attention_mask": inputs["attention_mask"],
"pixel_values": inputs["pixel_values"]
}
ort_outputs = session.run(None, ort_inputs)
```
3. It work but some warinings:
```
2022-10-27 09:23:18.299824270 [W:onnxruntime:, execution_frame.cc:594 AllocateMLValueTensorPreAllocateBuffer] Shape mismatch attempting to re-use buffer. {42,512} != {1,512}. Validate usage of dim_value (values should be > 0) and dim_param (all values with the same string should equate to the same size) in shapes in the model.
2022-10-27 09:23:18.300383069 [W:onnxruntime:, execution_frame.cc:594 AllocateMLValueTensorPreAllocateBuffer] Shape mismatch attempting to re-use buffer. {42,1} != {1,1}. Validate usage of dim_value (values should be > 0) and dim_param (all values with the same string should equate to the same size) in shapes in the model.
```
(`42` is the number of labels)
### Expected behavior
The model work find, but the waring outputs every inference.
| 10-28-2022 01:08:01 | 10-28-2022 01:08:01 | cc @lewtun <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,936 | closed | [Doctest] Add configuration_fsmt.py | # What does this PR do?
Adds `configuration_fsmt.py` to `utils/documentation_tests.txt`
Based on https://github.com/huggingface/transformers/issues/19487
@ydshieh please review, thanks!
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-28-2022 01:04:41 | 10-28-2022 01:04:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@sha016 I force pushed to this PR with a tiny update.
Once @sgugger approves this PR, we can merge it to `main`.
Thank you again for the contribution!
|
transformers | 19,935 | closed | Update Code of Conduct to Contributor Covenant v2.1 | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-27-2022 23:48:47 | 10-27-2022 23:48:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,934 | closed | UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. | ### System Info
C:\Python399\lib\site-packages\transformers\models\bigbird_pegasus\modeling_bigbird_pegasus.py:807: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
torch.arange(indices.shape[0] * indices.shape[1] * num_indices_to_gather, device=indices.device)
```
import logging
from transformers import pipeline
f = open("TextFile1.txt", "r")
ARTICLE = f.read()
summarizer = pipeline("summarization", model="google/bigbird-pegasus-large-bigpatent" )
```
| 10-27-2022 23:10:29 | 10-27-2022 23:10:29 | cc @ArthurZucker so it's on your radar.<|||||>Interesting, we might have more model to refactor <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
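For reference, the replacement the warning itself suggests — swapping Python's `//` on tensors for an explicit rounding mode:
```python
import torch

a = torch.tensor([7, -7])
b = 2
old = a // b                                    # emits the __floordiv__ deprecation warning on the affected PyTorch versions
floor = torch.div(a, b, rounding_mode="floor")  # true floor division, no warning
trunc = torch.div(a, b, rounding_mode="trunc")  # matches the old rounding-toward-zero behaviour
```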
transformers | 19,933 | closed | Map `RealmBertModel` for `AutoModel` | # What does this PR do?
This PR maps `RealmModel` to be able to use it with `AutoModel` - for consistency with `BertModel`
Why this PR? I wanted to automate tests for `BetterTransformers` integration in `optimum` without having to import manually the class, see here: https://github.com/younesbelkada/optimum/blob/49575c5b016392383f0c2ebc1565ef56747b87e6/tests/bettertransformers/test_bettertransformers.py#L68
Since `Realm` should be supported by `BetterTransformers`, this PR would help me design easier-to-implement tests
cc @sgugger @ydshieh
https://github.com/huggingface/optimum/pull/423 | 10-27-2022 21:28:57 | 10-27-2022 21:28:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @younesbelkada
We should also work on `src/transformers/__init__.py`, but otherwise LGTM, thanks!<|||||>Ah yes, great catch! Will add it now<|||||>Thanks a lot for the explanation!
Yes let's stay pragmatic, I will probably just remove it from the `BetterTransformers` test <|||||>@sgugger But we usually expose the base model, no, like `BertModel`?<|||||>Yes, but I'm very unsure that this is the base REALM model. It's more of a building block towards it. |
transformers | 19,932 | closed | Add LayoutLMv3 resource | From #19848, this PR adds resources for LayoutLMv3. | 10-27-2022 21:07:39 | 10-27-2022 21:07:39 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,931 | closed | Add wav2vec2 resources | From #19848, this PR adds resources for Wav2Vec2. | 10-27-2022 20:56:37 | 10-27-2022 20:56:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,930 | closed | Add DistilBERT resources | From #19848, this PR adds resources for DistilBERT. | 10-27-2022 20:19:10 | 10-27-2022 20:19:10 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,929 | closed | Token indices sequence length is longer than the specified maximum sequence length for this model (11261 > 1024). Running this sequence through the model will result in indexing errors | ### System Info
I tested 2 models (sshleifer/distilbart-cnn-12-6 , facebook/bart-large-cnn) and they both have very small 1024 max token length
So which model supports full length or the most token count?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I tested 2 models (sshleifer/distilbart-cnn-12-6 , facebook/bart-large-cnn) and they both have very small 1024 max token length
So which model supports full length or the most token count?
### Expected behavior
I tested 2 models (sshleifer/distilbart-cnn-12-6 , facebook/bart-large-cnn) and they both have very small 1024 max token length
So which model supports full length or the most token count?
| 10-27-2022 20:12:49 | 10-27-2022 20:12:49 | Please use the [forums](https://discuss.huggingface.co/) for questions like this as we keep the issues for bugs (in the library) and feature requests only.<|||||>@sgugger you are right i am closing this thread
could you answer there: https://discuss.huggingface.co/t/which-summarization-model-of-huggingface-supports-more-than-1024-tokens-which-model-is-more-suitable-for-programming-related-articles/25095 |
transformers | 19,928 | closed | Add BART resources | From #19848, this PR adds resources for BART. | 10-27-2022 19:45:14 | 10-27-2022 19:45:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,927 | closed | Add `accelerate` support for BART-like models | # What does this PR do?
This PR adds `accelerate` support for BART-like models, so that these models can be loaded in 8bit using `load_in_8bit=True`.
Follows the same logic as https://github.com/huggingface/transformers/pull/19912 regarding shared embeddings
Do not merge before https://github.com/huggingface/accelerate/pull/792 gets merged!
cc @sgugger
| 10-27-2022 17:23:36 | 10-27-2022 17:23:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot! Merging since https://github.com/huggingface/accelerate/pull/792 has been merged 🟢 |
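Usage sketch once this support is in (assumes `bitsandbytes` and `accelerate` are installed; the checkpoint is just an example of a BART-like model):
```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained(
    "facebook/bart-large-cnn",
    device_map="auto",   # accelerate places the weights across available devices
    load_in_8bit=True,   # int8 weights via bitsandbytes
)
```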
transformers | 19,926 | closed | About the `head_mask` of the Bert model `forward` really speed up? | ### System Info
- `transformers` version: 4.20.1
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.10
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.13.0.dev20220709 (False)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@LysandreJik Many thanks!
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, in the `bert` model (and other models), when we want the output we usually call `output = model(**batch)`. In the source code of the BERT model there is also a parameter called `head_mask`. So if we pass different head masks to the model, like `outputs = model(head_mask=head_mask, **batch)`, will the runtime be different?
So I tried running the two snippets below, and the one whose `head_mask` has more zeros does not run any faster.
```
head_mask = torch.ones(12, 12)
print(head_mask)
for batch in dataloader:
for k, v in batch.items():
batch[k] = v
import time
start=time.perf_counter()
outputs = model(head_mask=head_mask, **batch)
end = time.perf_counter()
print(end-start)
```
```
head_mask = torch.zeros(12, 12)
for i in range (12):
head_mask[i][0] = 1
print(head_mask)
for batch in dataloader:
for k, v in batch.items():
batch[k] = v
import time
start=time.perf_counter()
outputs = model(head_mask=head_mask, **batch)
end = time.perf_counter()
print(end-start)
```
### Expected behavior
I ran the two snippets above, and the one whose `head_mask` has more zeros does not run any faster.
Many thanks! | 10-27-2022 13:53:08 | 10-27-2022 13:53:08 | Hey @CaffreyR, the head mask isn't there to speed things up.
You can read a bit more about it in the [bertology](https://huggingface.co/docs/transformers/v4.23.1/en/bertology) documentation. It's mostly to see which heads impact your prediction, it's not made to speed things up.<|||||>So you means that only if we prune the head according to the `head mask`, we can speed up our model?<|||||>I mean that looking at `head_mask` as a way to speed up the model doesn't work :)
If you'd like to speed up your model, you can look at changing the precision, quantizing it, distillating it; but removing heads isn't going to speed it up or very very marginally.<|||||>Great! Many thanks! |
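To make the distinction above concrete — `head_mask` only zeroes attention outputs for analysis, while `prune_heads` physically removes the corresponding weights. A sketch (not a benchmark):
```python
import torch

from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

# analysis: mask head 3 of layer 0; the compute cost is essentially unchanged
head_mask = torch.ones(model.config.num_hidden_layers, model.config.num_attention_heads)
head_mask[0, 3] = 0.0

# actual pruning: the weights for that head are removed from the model
model.prune_heads({0: [3]})
```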
transformers | 19,925 | closed | Does transformers have Swin Object Detection? | null | 10-27-2022 11:32:34 | 10-27-2022 11:32:34 | Please use the [forums](https://discuss.huggingface.co/) to ask questions like this, as we keep issues for bugs and feature requests only. |
transformers | 19,924 | closed | Support segformer fx | # What does this PR do?
I wrote a simple test to use `torch.fx` with the SegFormer model, but it failed:
```python
import torch
from transformers import SegformerModel, SegformerConfig, SegformerFeatureExtractor
from transformers.utils.fx import symbolic_trace
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b0")
model = SegformerModel.from_pretrained("nvidia/mit-b0")
inputs = feature_extractor(image, return_tensors="pt")
traced_model = symbolic_trace(model, ["pixel_values"])
with torch.no_grad():
outputs = model(**inputs)
traced_outputs = traced_model(**inputs)
assert torch.allclose(outputs.last_hidden_state, traced_outputs["last_hidden_state"])
```
When I tried to apply `torch.fx` to the SegFormer model, the `HFTracer` could not get past the `transpose_for_scores` function, because a `Proxy(torch.Size)` is not an iterable object, so I simply fixed it by not iterating the traced shape (the original screenshots of the failing trace and of the diff are not preserved; a sketch of the pattern follows below).
The same issue appeared in the `forward` function, so I applied the same fix there.
To get past `check_if_model_is_supported`, I also added SegFormer to the list of fx-supported models in `transformers.utils.fx`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge @michaelbenayoun | 10-27-2022 10:40:21 | 10-27-2022 10:40:21 | I noticed that I previously made commit in the main branch(forked my branch).
so I reopened the PR.
Could you review it again? @michaelbenayoun
this PR is same with PR 19917(https://github.com/huggingface/transformers/pull/19917)<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,923 | closed | Some fixes regarding auto mappings and test class names | # What does this PR do?
Add `pegasus_x` to some auto mappings, and fix the incorrect class names in ViTMSN testing file.
Also fix `ESM` checkpoint | 10-27-2022 10:39:21 | 10-27-2022 10:39:21 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,922 | closed | Remove embarrassing debug print() in save_pretrained | @sgugger spotted this one, sorry about that! (cc @gante) | 10-27-2022 10:21:02 | 10-27-2022 10:21:02 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This is already part of #19900 which is awaiting your review 😛 <|||||>If it is, you haven't pushed it!<|||||>Oh! Then let's merge this. |
transformers | 19,921 | closed | [Whisper Tokenizer] Make more user-friendly | # What does this PR do?
Fixes #19864.
In summary, the Whisper tokenizer is modified to prepend several tokens to the start-of-sequence:
- BOS token id (`<|startoftranscript|>`) -> consistent with other sequence-to-sequence models such as _BART_.
- Language token id (e.g. `<|es|>` for Spanish) -> set only when the tokenizer is instantiated with argument `language=X`. Otherwise omitted.
- Task token id (e.g. `<|translate|>` for speech translation) -> set only when the tokenizer is instantiated with argument `task=Y`. Otherwise omitted.
- No time stamps id (`<|notimestamps|>`) ->set only when the tokenizer is instantiated with argument `predict_timestamps=False`. For `predict_timestamps=True`, it is omitted.
In addition, it is modified to always append the end-of-sequence token to the end of the label sequence (`<|endoftext|>`).
The updated tokenizer behaves as follows:
```python
from transformers import WhisperTokenizer
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny", language="english", task="transcribe", predict_timestamps=False)
input_ids = tokenizer("hey").input_ids
text_with_special = tokenizer.decode(input_ids, skip_special_tokens=False)
text = tokenizer.decode(input_ids, skip_special_tokens=True)
print("Input ids :", input_ids)
print("Text w/ special :", text_with_special)
print("Text :", text)
```
**Print Output:**
```
Input ids : [50258, 50259, 50359, 50363, 17230, 50257]
Text w/ special : <|startoftranscript|><|en|><|transcribe|><|notimestamps|>hey<|endoftext|>
Text : hey
```
The attention mask functionality of the Whisper tokenizer **is** retained (_c.f._ https://github.com/huggingface/transformers/issues/19864#issuecomment-1291799687).
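A follow-up usage sketch with the same arguments the PR proposes, switching the prefix to Spanish speech translation (the exact token ids depend on the checkpoint):
```python
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained(
    "openai/whisper-tiny", language="spanish", task="translate", predict_timestamps=False
)
ids = tokenizer("hola").input_ids
print(tokenizer.decode(ids, skip_special_tokens=False))
# expected prefix: <|startoftranscript|><|es|><|translate|><|notimestamps|> ... <|endoftext|>
```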
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-27-2022 09:23:41 | 10-27-2022 09:23:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'm not sure if Patrick currently has the bandwidth to review this, @sgugger would you be able to take a look if you've got a spare few minutes? Thanks! 🙏<|||||>Test for `set_prefix_tokens` in https://github.com/huggingface/transformers/pull/19921/commits/e98821f12ba9a899d2907ebcb7d114aff8712c0b<|||||>Cool good to merge for me |
transformers | 19,920 | closed | donut -> donut-swin | # What does this PR do?
The model type "donut" doesn't exist, and we don't have `DonutConfig` or `DonutModel`. | 10-27-2022 08:43:08 | 10-27-2022 08:43:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,919 | closed | During the evaluation, the gpu stops, not working | ### System Info
I am fine-tuning a summarization task with a GPT model using multiple GPUs. Training itself is fine. However, during evaluation, the following happens.
GPU utilization stays at 100%, but the temperature is very low,
and the evaluation process no longer makes any progress.
How can I solve this problem?
<img width="474" alt="스크린샷 2022-10-27 오후 5 01 10" src="https://user-images.githubusercontent.com/52374789/198227290-ee01b21f-8f1a-4231-8595-546611c96098.png">
### Who can help?
@patil-suraj @patrickvonplaten @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. I use `Seq2SeqTrainingArguments` and `Seq2SeqTrainer`
2. I modify prediction_step in Seq2Seq Trainer
```
def prediction_step(
self,
model: nn.Module,
inputs: Dict[str, Union[torch.Tensor, Any]],
prediction_loss_only: bool,
ignore_keys: Optional[List[str]] = None,
) -> Tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]:
"""
Perform an evaluation step on :obj:`model` using obj:`inputs`.
Subclass and override to inject custom behavior.
Args:
model (:obj:`nn.Module`):
The model to evaluate.
inputs (:obj:`Dict[str, Union[torch.Tensor, Any]]`):
The inputs and targets of the model.
The dictionary will be unpacked before being fed to the model. Most models expect the targets under the
argument :obj:`labels`. Check your model's documentation for all accepted arguments.
prediction_loss_only (:obj:`bool`):
Whether or not to return the loss only.
Return:
Tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]: A tuple with the loss, logits and
labels (each being optional).
"""
if not self.args.predict_with_generate or prediction_loss_only:
return super().prediction_step(
model, inputs, prediction_loss_only=prediction_loss_only, ignore_keys=ignore_keys
)
has_labels = "labels" in inputs
inputs = self._prepare_inputs(inputs)
if has_labels:
labels = inputs['labels']
else:
labels = None
generation_inputs = {"input_ids": inputs["input_ids"]}
slice_start = inputs["input_ids"].shape[-1]
# XXX: adapt synced_gpus for fairscale as well
max_length = slice_start + self.generation_max_length if slice_start + self.generation_max_length < 2048 else 2048
# print(f'num beams : {self.generation_num_beams}')
gen_kwargs = {
"max_length": max_length,
# "min_length": max_length,
"num_beams": self.generation_num_beams,
"pad_token_id": self.tokenizer.pad_token_id,
"eos_token_id": self.tokenizer.eos_token_id,
"early_stopping": True,
"synced_gpus": True,
}
if self.args.predict_with_generate and not self.args.prediction_loss_only:
generated_tokens = self.model.generate(
**generation_inputs,
**gen_kwargs,
)
generated_tokens = generated_tokens[:, slice_start:]
if generated_tokens.shape[-1] < gen_kwargs["max_length"]:
generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_kwargs["max_length"])
with torch.no_grad():
with self.autocast_smart_context_manager():
outputs = model(input_ids=inputs['input_ids'], labels=inputs['input_ids'])
if has_labels:
if self.label_smoother is not None:
loss = self.label_smoother(outputs, inputs["input_ids"])
else:
loss = (outputs["loss"] if isinstance(outputs, dict) else outputs[0])
else:
loss = None
loss = None
if self.args.prediction_loss_only:
return (loss, None, None)
return (loss, generated_tokens, labels)
```
### Expected behavior
I also tried fine-tuning the translation task
However, there was no error in the translation task; from experience, the hang seems to occur when the generated sentences are long.
It has been tried several times, but it has occurred at different points, and it has been confirmed that it is not a data problem.
I'd like your help.
Thank you | 10-27-2022 08:17:22 | 10-27-2022 08:17:22 | It's possible there are tensors not all of the same lengths across processes (maybe the labels since they are not padded?). When trying to gather them, torch.distributed just hangs instead of throwing an error.<|||||>@sgugger
First of all, thank you for your answer.
The label is padded.
When generating, the problem seems to occur because the point at which generation ends differs for each GPU process.
How can we solve this?<|||||>@sgugger Can you help me about this issue?
Thank you<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
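For reference, a sketch of the usual fix for the hang discussed above: make every rank return tensors of one fixed width from `prediction_step`, so the cross-process gather never sees mismatched shapes. The helper below is hypothetical, not a Trainer API:
```python
import torch

def pad_to_fixed_len(tensor: torch.Tensor, max_len: int, pad_id: int) -> torch.Tensor:
    """Right-pad (or truncate) to `max_len` so all ranks gather identical shapes."""
    if tensor.shape[-1] >= max_len:
        return tensor[:, :max_len]
    pad = torch.full(
        (tensor.shape[0], max_len - tensor.shape[-1]), pad_id, dtype=tensor.dtype, device=tensor.device
    )
    return torch.cat([tensor, pad], dim=-1)

# inside prediction_step, before returning (names follow the code above):
# generated_tokens = pad_to_fixed_len(generated_tokens, gen_kwargs["max_length"], self.tokenizer.pad_token_id)
# labels = pad_to_fixed_len(labels, gen_kwargs["max_length"], -100)
```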
transformers | 19,918 | closed | Why training on Multiple GPU is slower than training on Single GPU for fine tuning Speech to Text Model | ### System Info
- `transformers` version: 4.24.0.dev0
- Platform: Linux-5.15.0-52-generic-x86_64-with-debian-bookworm-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.10.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
Speech: @patrickvonplaten, @anton-l, @sanchit-gandhi
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
For training the Wav2Vec2 model on multiple GPUs, I made only small changes in `run_speech_recognition_ctc.py` to load a custom Hindi dataset, nothing else; I just set the `nproc_per_node` parameter to the number of GPUs in run.sh:
```
OMP_NUM_THREADS=1 python -m torch.distributed.launch \
--nproc_per_node 3 run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/wav2vec2-xls-r-300m" \
--output_dir="./new_output" \
--overwrite_output_dir \
--num_train_epochs="30" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="8" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--length_column_name="input_length" \
--save_steps="200" \
--eval_steps="200" \
--logging_steps="200" \
--layerdrop="0.0" \
--save_total_limit="2" \
--freeze_feature_encoder \
--gradient_checkpointing \
--chars_to_ignore \। \| \’ \– \, \? \. \! \- \; \: \" \“ \% \‘ \” \� \' \
--fp16 \
--group_by_length \
--do_train --do_eval
```
Original code provided in this [repository](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#common-voice-ctc).
For training on single gpu, small changes were made for loading custom dataset. Code used from this blog: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2. Rest of the code is same.
### Expected behavior
Using one GPU:

At 4 mins, already reached 200 epochs.
Using Multiple GPU:

At 4 mins, only reached 80 epochs.
Using multiple gpu should speed up the training/fine-tuning process. But instead, it is slower. Kindly need your support to check this issue. | 10-27-2022 06:05:19 | 10-27-2022 06:05:19 | @sanchit-gandhi could you take a look here?<|||||>Hey @ishamnewsreels! A couple of things:
- Do we need to set `OMP_NUM_THREADS=1`? Looks like this affects multi-processing, wondering if it's interfering with distributed training
- Whilst distributed training is running, could you open a new command line window and execute the Unix command:
```bash
watch -n 0.1 nvidia-smi
```
This will launch the NVIDIA system management interface, and display individual GPU usage. We'd expect all three of your GPUs to be in use for distributed training. If < 3 are being used there's an issue with launching distributed training!<|||||>Hi @sanchit-gandhi.
If I do not set `OMP_NUM_THREADS=1`, the code doesn't execute at all.
Also, I have used the unix command that you mentioned and I can observe that all 3 gpus are used.

<|||||>Hey @ishamnewsreels - that's good to see that all three GPUs are used. I think I see what the problem is! You have the number of epochs fixed as 30, but are changing the effective batch size for single vs multi GPU training. This changes the number of optimisation steps (= num epochs * epoch-size / batch-size).
With single GPU, your settings were as follows:
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 2
- Number of devices = 1
- Effective batch size = `per_device_batch_size` * `gradient_accumulation_steps` * number of devices = 16
- For 30 epochs, this gives **11940** optimisation steps
With three GPUs, your settings were as follows:
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 1
- Number of devices = 3
- Effective batch size = `per_device_batch_size` * `gradient_accumulation_steps` * number of devices = 24 (1.5x more what we had for single GPU)
- For 30 epochs, this gives **7980** optimisation steps (1.5x less than what we had for single GPU)
The progress bars that we see during training are **not** the number of epochs, but rather the number of **optimisation steps**. With multi-GPU, we're training for fewer optimisation steps (as the batch size is larger), and so we expect the number of optimisation steps to be less after 4 minutes. After 4 minutes, the % of training completed is 1.67% for single GPU, and 1.00% for multi GPU -> so the training progress is quite similar after this time. We can attribute the difference in training progress to the added communication cost in using multi GPU vs single GPU (we have to sync the GPU's up when we do multi GPU training, giving a communication overhead).
Hope that makes sense! |
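The same arithmetic as a tiny script (the dataset size is a placeholder, chosen only so the totals match the step counts quoted above):
```python
import math

def total_optimisation_steps(num_examples, per_device_bs, grad_accum, n_gpus, epochs):
    effective_bs = per_device_bs * grad_accum * n_gpus
    return epochs * math.ceil(num_examples / effective_bs)

num_examples = 6368  # placeholder consistent with the quoted 11940 / 7980 steps
print(total_optimisation_steps(num_examples, per_device_bs=8, grad_accum=2, n_gpus=1, epochs=30))  # 11940
print(total_optimisation_steps(num_examples, per_device_bs=8, grad_accum=1, n_gpus=3, epochs=30))  # 7980
```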
transformers | 19,917 | closed | Support segformer fx | # What does this PR do?
I wrote a simple test to use `torch.fx` with the SegFormer model, but it failed:
```python
import torch
from transformers import SegformerModel, SegformerConfig, SegformerFeatureExtractor
from transformers.utils.fx import symbolic_trace
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b0")
model = SegformerModel.from_pretrained("nvidia/mit-b0")
inputs = feature_extractor(image, return_tensors="pt")
traced_model = symbolic_trace(model, ["pixel_values"])
with torch.no_grad():
outputs = model(**inputs)
traced_outputs = traced_model(**inputs)
assert torch.allclose(outputs.last_hidden_state, traced_outputs["last_hidden_state"])
```
When I tried to apply `torch.fx` to the SegFormer model, the `HFTracer` could not get past the `transpose_for_scores` function, because a `Proxy(torch.Size)` is not an iterable object, so I simply fixed it by not iterating the traced shape (the screenshots of the failing trace and of the diff are not preserved).
The same issue appeared in the `forward` function, so I applied the same fix there.
To get past `check_if_model_is_supported`, I also added SegFormer to the list of fx-supported models in `transformers.utils.fx`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge @michaelbenayoun | 10-27-2022 06:03:41 | 10-27-2022 06:03:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you @michaelbenayoun :)
I missed `fx_compatible = True` attribute.
Updated it!<|||||>also, I checked that glpn model was copied from segformer.
To pass the consistency test, I updated the GLPN code too. |
transformers | 19,916 | closed | Fine-tuning translation model speed anomalies | ### System Info
python 3.8.12
ubuntu18.04
transformers 4.23.1

### Who can help?
@Narsil
@patil-suraj
@sgugger, @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1FwXCjUVvrNpCuf0KoxhxQKTciRscva6t?usp=sharing
t4 graphics card half precision about 10, a100 is more than 300, even if not up to 30 times the speed should not be almost the same speed, and here the multi-threaded seems to be bad
### Expected behavior
After experiments found that bs has a greater impact on the speed, gradient accumulation also has a slight impact, but the same bs, a100 and t4 speed is almost the same, the expected is to start multi-threaded or other methods to accelerate, otherwise, training once to more than 100 days | 10-27-2022 05:22:04 | 10-27-2022 05:22:04 | Please use the [forums](https://discuss.huggingface.co/) to debug training like this as we keep features for bugs (with a clear reproducer) and feature requests only.<|||||>I think it is a bug in itself, I tried many devices and the speed is almost the same, obviously not reasonable<|||||>> t4 graphics card half precision about 10, a100 is more than 300, even if not up to 30 times the speed should not be almost the same speed, and here the multi-threaded seems to be bad
I read this 3 times, and I still don't understand. What do you mean ?
Just as a note for here or the forums, trying to be over explicit might help readers understand what you're trying to do and what are your expectations.<|||||>Sorry, the message was machine-translated. I mean there are two problems. The first is that different GPUs should run at different speeds: the A100 is faster than the T4, especially at half precision, so enabling half-precision training should show a big gap, yet the speed is almost the same. The second is that multi-processing does not seem to give any speed-up; it helps a lot in CV, and I don't know whether that is also the case in NLP.
<|||||>8 card speed is no different from 1 card<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,915 | closed | Unable to see the weight files after quantization | ### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I have tried the following code for dynamic quantization
```
import torch
import os
from transformers import AutoConfig, AutoModel
model = AutoModel.from_pretrained("bert-base-uncased")
model_quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
quantized_output_dir = "quantized/"
if not os.path.exists(quantized_output_dir):
os.makedirs(quantized_output_dir)
model_quantized.save_pretrained(quantized_output_dir)
```
After the execution, I could see that there is a new folder named quantized created in the directory which contains only the ```config.json``` file.
contents are as follows
```
{
"_name_or_path": "bert-base-uncased",
"architectures": [
"BertModel"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"torch_dtype": "float32",
"transformers_version": "4.23.1",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
```
I can't see any other .bin or .wt files after quantization. Why is that?
### Expected behavior
The model should be quantized and save the new quantized weight files in the provided folder along with the config.json file | 10-27-2022 04:50:22 | 10-27-2022 04:50:22 | Maybe of interest to @michaelbenayoun :)<|||||>Hi @pradeepdev-1995 ,
You don't get this issue first ?
```
AttributeError: 'torch.dtype' object has no attribute 'numel'
```<|||||>Yes. @michaelbenayoun
But after rerunning it a second time in Google Colab,
it worked without error.
But only config file is there.<|||||>Yes, I observe the same thing. In any case, I do not think this will work because you have dtypes in your state dict, which is not handled correctly by `save_pretrained` for now.<|||||>@michaelbenayoun
Got it. So how can I do dynamic quantization on a model and save it locally for future use?
Please share a code snippet if possible. <|||||>This should work:
```python
import torch
import os
from transformers import AutoConfig, AutoModel
model = AutoModel.from_pretrained("bert-base-uncased")
model_quantized = torch.quantization.quantize_dynamic(
model, {torch.nn.Linear}, dtype=torch.qint8
)
quantized_output_dir = "quantized/"
if not os.path.exists(quantized_output_dir):
os.makedirs(quantized_output_dir)
model_quantized.config.save_pretrained(quantized_output_dir)
torch.save(model_quantized.state_dict(), "quantized/pytorch_model.bin")
```
But note that you will not be able to restore your model afterwards, at least with a `from_pretrained`. You will need to:
1. Load the model, either with the pre-trained weights, or random ones
2. Convert the model to its dynamically quantized version
3. Do: `model.load_state_dict(torch.load(path_to_the_state_dict))`
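A minimal sketch of those three steps, reusing the checkpoint and paths from the snippet above:
```python
import torch
from transformers import AutoModel

# 1. Re-create the architecture (pre-trained or random weights)
model = AutoModel.from_pretrained("bert-base-uncased")

# 2. Convert it to the same dynamically quantized structure as when it was saved
model_quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# 3. Load the quantized state dict saved earlier
state_dict = torch.load("quantized/pytorch_model.bin")
model_quantized.load_state_dict(state_dict)
model_quantized.eval()
```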
You have other ways of saving your model:
- You can `jit.trace` / `jit.script` it
- You can use another approach, such as [quantization with ONNX Runtime with Optimum](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/quantization)<|||||>@michaelbenayoun Thank you very much for the comments.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi,
I tried this, but when I checked the `config.json`, it's showing float16. Do I need to worry about it or can I ignore it? |
transformers | 19,914 | closed | Transformer XL div_val != 1 does not work with fp16 | ### System Info
python version `3.10.4`,
Package versions
torch 1.12.0+cu116
torchaudio 0.12.0+cu116
torchvision 0.13.0+cu116
transformers 4.22.2
### Who can help?
@patrickvonplaten
@thomwolf
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Here's a MWE to reproduce the bug.
```python
import torch
from transformers import (
TransfoXLConfig, TransfoXLTokenizer, TransfoXLLMHeadModel, Trainer, DataCollatorForLanguageModeling
)
from transformers.training_args import TrainingArguments
import datasets
config = TransfoXLConfig.from_pretrained('transfo-xl-wt103')
config.d_model = 128
config.n_head = 8
config.n_layer = 4
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
tokenizer.model_max_length = 16
tokenizer.add_special_tokens(dict(pad_token='[PAD]'))
model = TransfoXLLMHeadModel(config)
dataset = datasets.Dataset.from_dict(dict(text=['Hello world', 'XL blah']))
# mic(dataset)
dataset = dataset.map(lambda x: tokenizer(x['text'], return_tensors='pt', padding='max_length'), batched=True)
train_args = TrainingArguments(
output_dir='./debug',
fp16=torch.cuda.is_available(),
num_train_epochs=100,
per_device_train_batch_size=2
)
# mic(train_args)
trainer = Trainer(
model=model, train_dataset=dataset, args=train_args,
data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
)
trainer.train()
```
Here's the stack trace I got
```bash
Traceback (most recent call last):
File "/home/stefanhg/Music-with-NLP/Symbolic-Music-Generation/test-lang.py", line 992, in <module>
check_xl_fp16()
File "/home/stefanhg/Music-with-NLP/Symbolic-Music-Generation/test-lang.py", line 991, in check_xl_fp16
trainer.train()
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/transformers/trainer.py", line 1521, in train
return inner_training_loop(
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/transformers/trainer.py", line 1763, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/transformers/trainer.py", line 2499, in training_step
loss = self.compute_loss(model, inputs)
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/transformers/trainer.py", line 2531, in compute_loss
outputs = model(**inputs)
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/transformers/models/transfo_xl/modeling_transfo_xl.py", line 1094, in forward
transformer_outputs = self.transformer(
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/transformers/models/transfo_xl/modeling_transfo_xl.py", line 929, in forward
word_emb = self.word_emb(input_ids)
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/stefanhg/miniconda3/envs/music-nlp/lib/python3.10/site-packages/transformers/models/transfo_xl/modeling_transfo_xl.py", line 451, in forward
emb_flat.index_copy_(0, indices_i, emb_i)
RuntimeError: index_copy_(): self and source expected to have the same dtype, but got (self) Float and (source) Half
```
### Expected behavior
In short, something wrong with adaptive softmax, I assume type cast for fp16 not working properly | 10-27-2022 04:07:55 | 10-27-2022 04:07:55 | According to git blame, @thomwolf added Transformer Xl. Can you help? <|||||>I don't think TransformerXL supports FP16 as this is an old model with very specific code for the softmax layer. This won't be an issue we will fix ourselves given that Transformer-XL is not very used anymore, but if someone wants to make a PR, we'll review!<|||||>I see. I will think about make a PR. Thank you! |
transformers | 19,913 | closed | VideoMAE assumes channel_num==3 | ### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
`VideoMAEForPreTraining` assumes the channel number is 3. The code below works if `num_channels = 3`.
```python
from transformers import VideoMAEForPreTraining, VideoMAEConfig
import numpy as np
import torch
num_frames = 16
num_channels = 1
config = VideoMAEConfig(num_channels=num_channels)
model = VideoMAEForPreTraining(config)
num_patches_per_frame = (model.config.image_size // model.config.patch_size) ** 2
seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame
bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()
model(torch.rand([1, num_frames, num_channels, 224, 224]), bool_masked_pos)
```
The above code spits out this error message:
```
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:530: UserWarning: Using a target size (torch.Size([1, 760, 1536])) that is different to the input size (torch.Size([1, 760, 512])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-6-46b1d4563ea7>](https://localhost:8080/#) in <module>
11 seq_length = (num_frames // model.config.tubelet_size) * num_patches_per_frame
12 bool_masked_pos = torch.randint(0, 2, (1, seq_length)).bool()
---> 13 model(torch.rand([1, num_frames, num_channels, 224, 224]), bool_masked_pos)
5 frames
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/transformers/models/videomae/modeling_videomae.py](https://localhost:8080/#) in forward(self, pixel_values, bool_masked_pos, head_mask, output_attentions, output_hidden_states, return_dict)
884
885 loss_fct = MSELoss()
--> 886 loss = loss_fct(logits, labels)
887
888 if not return_dict:
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *input, **kwargs)
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
[/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py](https://localhost:8080/#) in forward(self, input, target)
528
529 def forward(self, input: Tensor, target: Tensor) -> Tensor:
--> 530 return F.mse_loss(input, target, reduction=self.reduction)
531
532
[/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py](https://localhost:8080/#) in mse_loss(input, target, size_average, reduce, reduction)
3277 reduction = _Reduction.legacy_get_string(size_average, reduce)
3278
-> 3279 expanded_input, expanded_target = torch.broadcast_tensors(input, target)
3280 return torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
3281
[/usr/local/lib/python3.7/dist-packages/torch/functional.py](https://localhost:8080/#) in broadcast_tensors(*tensors)
71 if has_torch_function(tensors):
72 return handle_torch_function(broadcast_tensors, tensors, *tensors)
---> 73 return _VF.broadcast_tensors(tensors) # type: ignore[attr-defined]
74
75
RuntimeError: The size of tensor a (512) must match the size of tensor b (1536) at non-singleton dimension 2
```
In line [886](https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/videomae/modeling_videomae.py#L886), the dimension of `labels` is 3 times as large as it should be. This dimension mismatch is caused by this [unnormalization](https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/videomae/modeling_videomae.py#L824). Since the `mean` and `std` are 3 dimensional, the `pixel_values` are broadcast in L826. I am not sure if this "unnormalization" operation is necessary. I was a bit surprised to see it because I usually do this kind of transformations during data loading.
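A minimal standalone sketch (not the model code; the ImageNet-style mean/std values are just illustrative) of how a 3-element mean/std broadcasts a single-channel input up to 3 channels:
```python
import torch

# (batch, frames, channels=1, height, width), mimicking a greyscale video batch
pixel_values = torch.rand(1, 16, 1, 224, 224)

# 3-element mean/std reshaped over the channel dimension, as in the unnormalization step
mean = torch.tensor([0.485, 0.456, 0.406])[None, None, :, None, None]
std = torch.tensor([0.229, 0.224, 0.225])[None, None, :, None, None]

frames = pixel_values * std + mean
print(frames.shape)  # torch.Size([1, 16, 3, 224, 224]) -> the channel dim was broadcast to 3,
                     # so the reconstruction labels end up 3x larger than the logits
```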
### Expected behavior
VideoMAEForPretraining should accept tensors even if the input channel number is not 3. | 10-27-2022 03:03:56 | 10-27-2022 03:03:56 | cc @NielsRogge <|||||>Hi,
Thanks for your interest in VideoMAE. I took the unnormalization from the original implementation as can be seen here: https://github.com/MCG-NJU/VideoMAE/blob/b6af64a997da1a2f52ce1cb2f300712faa2444a1/engine_for_pretraining.py#L38-L41.
The unnormalization is done to "undo" the normalization done during data preprocessing (as the model needs to predict raw pixel values). So I assume something similar needs to be done when working with greyscale videos; one needs to unnormalize them before calculating the loss.
<|||||>If self.config.norm_pix_loss is true, normalization of each patch undoes the effect of unnormalization:
https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/videomae/modeling_videomae.py#L852-L854
Anyhow, I suppose the goal of your implementation is to replicate the original model published by the authors. For now, I will just comment out the [unnormalization](https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/videomae/modeling_videomae.py#L824-L826) to deal with my gray scale videos. |
transformers | 19,912 | closed | Add `accelerate` support for M2M100 | # What does this PR do?
This PR adds `accelerate` support to `M2M100`, which also enables loading NLLB models in 8-bit using `load_in_8bit=True`.
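For reference, a minimal sketch of the kind of call this enables (requires `accelerate` and `bitsandbytes` on a CUDA device; the checkpoint name is just an example):
```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained(
    "facebook/nllb-200-distilled-600M",  # any M2M100/NLLB checkpoint
    device_map="auto",
    load_in_8bit=True,
)
```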
This might contain a breaking change but I am not sure.
When initializing the model on the meta device using `accelerate`, the module `self.shared` is initialized and set to the correct device using `set_tensor_to_device` thrice (since it is shared by 3 modules: base model, encoder, decoder), so it somehow ends up being on the `meta` device.
Therefore manually assigning a new module with the weights that correspond to the weights of the `shared` module should do the trick. But I am wondering if this is a breaking change since the `shared` module of the Encoder & Decoder won't be "shared" anymore. It should not be a problem at inference time, but can be problematic when training the model.
cc @sgugger
Also I know T5 also supports `accelerate` and uses `shared` embeddings. The only difference I see from both implementations are the `_keys_to_ignore_on_load_missing` that contains the `shared` weights for `T5` and doesn't contain the shared weights for M2M100 | 10-26-2022 22:42:29 | 10-26-2022 22:42:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,911 | closed | Add RoBERTa resources | From #19848, this PR adds resources for RoBERTa. | 10-26-2022 19:13:04 | 10-26-2022 19:13:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,910 | closed | Add checkpoint links in a few config classes | # What does this PR do?
Add checkpoint links in the following config classes:
- CLIPConfig
- GroupViTConfig
- OwlViTConfig
- XCLIPConfig
A necessary condition to make the tiny model creation work (PR #19901) for those models. | 10-26-2022 19:06:11 | 10-26-2022 19:06:11 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,909 | closed | Transformers from GCS (or custom filesystem). | ### Feature request
Hi! I'm wondering if there will be support for a custom filesystem argument to "from_pretrained" for transformers, just like there is for datasets (https://huggingface.co/docs/datasets/filesystems).
### Motivation
This ideally would be great for running models in the cloud in "diskless" mode where there is no access to a real filesystem and model assets could be read into RAM via the same filesystem API that is used for datasets.
This would solve the issue of decoupling a binary from its data dependencies, in the same way it's done for datasets.
Thank you!
### Your contribution
Would love to help here, but presumably a HF expert would be much more suited to solve this problem. Happy to be eyes and a tester! | 10-26-2022 18:55:33 | 10-26-2022 18:55:33 | Hi there! We don't plan on adding support for something else than the Hub/local disk for pretrained model in Transformers.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,908 | closed | Any ideas on how we can convert a model from huggingface (transformers library )to tensorflow lite? | ### System Info
I want to convert the CamembertForQuestionAnswering model to TensorFlow Lite. I downloaded it from the Hugging Face platform because when I save the model locally it gives me the model in 'bin' format.
I'm asking here because Hugging Face uses PyTorch pretrained models.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I try to convert the model using the tf_model.h5 file, it gives me this error: AttributeError: 'CamembertForQuestionAnswering' object has no attribute 'call'.
Also, I can't load it using tf.keras.models.load_model(); it gives me: ValueError: No model config found in the file at <tensorflow.python.platform.gfile.GFile object at 0x7f27cceb1810>.
When I save the transformers model locally it gives me the model in 'bin' format, so I downloaded it from the platform.
### Expected behavior
https://huggingface.co/etalab-ia/camembert-base-squadFR-fquad-piaf?context=Etalab+est+une+administration+publique+fran%C3%A7aise+qui+fait+notamment+office+de+Chief+Data+Officer+de+l%27%C3%89tat+et+coordonne+la+conception+et+la+mise+en+%C5%93uvre+de+sa+strat%C3%A9gie+dans+le+domaine+de+la+donn%C3%A9e+%28ouverture+et+partage+des+donn%C3%A9es+publiques+ou+open+data%2C+exploitation+des+donn%C3%A9es+et+intelligence+artificielle...%29.+Ainsi%2C+Etalab+d%C3%A9veloppe+et+maintient+le+portail+des+donn%C3%A9es+ouvertes+du+gouvernement+fran%C3%A7ais+data.gouv.fr.+Etalab+promeut+%C3%A9galement+une+plus+grande+ouverture+l%27administration+sur+la+soci%C3%A9t%C3%A9+%28gouvernement+ouvert%29+%3A+transparence+de+l%27action+publique%2C+innovation+ouverte%2C+participation+citoyenne...+elle+promeut+l%E2%80%99innovation%2C+l%E2%80%99exp%C3%A9rimentation%2C+les+m%C3%A9thodes+de+travail+ouvertes%2C+agiles+et+it%C3%A9ratives%2C+ainsi+que+les+synergies+avec+la+soci%C3%A9t%C3%A9+civile+pour+d%C3%A9cloisonner+l%E2%80%99administration+et+favoriser+l%E2%80%99adoption+des+meilleures+pratiques+professionnelles+dans+le+domaine+du+num%C3%A9rique.+%C3%80+ce+titre+elle+%C3%A9tudie+notamment+l%E2%80%99opportunit%C3%A9+de+recourir+%C3%A0+des+technologies+en+voie+de+maturation+issues+du+monde+de+la+recherche.+Cette+entit%C3%A9+charg%C3%A9e+de+l%27innovation+au+sein+de+l%27administration+doit+contribuer+%C3%A0+l%27am%C3%A9lioration+du+service+public+gr%C3%A2ce+au+num%C3%A9rique.+Elle+est+rattach%C3%A9e+%C3%A0+la+Direction+interminist%C3%A9rielle+du+num%C3%A9rique%2C+dont+les+missions+et+l%E2%80%99organisation+ont+%C3%A9t%C3%A9+fix%C3%A9es+par+le+d%C3%A9cret+du+30+octobre+2019.%E2%80%89+Dirig%C3%A9+par+Laure+Lucchesi+depuis+2016%2C+elle+rassemble+une+%C3%A9quipe+pluridisciplinaire+d%27une+trentaine+de+personnes.&question=Comment+s%27appelle+le+portail+open+data+du+gouvernement+%3F | 10-26-2022 17:07:58 | 10-26-2022 17:07:58 | Maybe of interest to @gante @Rocketknight1 <|||||>Hi @BENSAFOUAN-Abdelhalim, `CamembertForQuestionAnswering` is a PyTorch model. The TF model is `TFCamembertForQuestionAnswering`. That's why you're seeing the missing methods on that model!
In general, though, we don't support TFLite conversions for all of our models. There are some operations that TFLite can't support, and we don't guarantee that everything in a model will work for it. However, you can absolutely try to convert it and see what you get!<|||||>ok, thanks @Rocketknight1 for your answer.<|||||>@BENSAFOUAN-Abdelhalim
Refer to this colab for more details on how to convert an HF TF model to a TFLite model:
https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/tflite_from_huggingface_whisper.ipynb
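For reference, a minimal sketch of what such a conversion attempt could look like for the checkpoint mentioned above (untested; the sequence length is arbitrary and some ops may need the SELECT_TF_OPS fallback):
```python
import tensorflow as tf
from transformers import TFCamembertForQuestionAnswering

# Use the TF class; from_pt=True converts the PyTorch weights on the fly
model = TFCamembertForQuestionAnswering.from_pretrained(
    "etalab-ia/camembert-base-squadFR-fquad-piaf", from_pt=True
)

@tf.function(input_signature=[
    tf.TensorSpec([1, 384], tf.int32, name="input_ids"),
    tf.TensorSpec([1, 384], tf.int32, name="attention_mask"),
])
def serving(input_ids, attention_mask):
    outputs = model(input_ids=input_ids, attention_mask=attention_mask)
    return {"start_logits": outputs.start_logits, "end_logits": outputs.end_logits}

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [serving.get_concrete_function()], model
)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,  # fall back to TF ops that have no TFLite builtin
]
tflite_model = converter.convert()
with open("camembert_qa.tflite", "wb") as f:
    f.write(tflite_model)
```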
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,907 | closed | Enables torchrun for XLA-based accelerators | # What does this PR do?
This PR enables torchrun for XLA-based accelerators (TPU/NeuronCore) by using torch.distributed XLA backend. It is dependent on the torch/xla change https://github.com/pytorch/xla/pull/3609.
Example application is the AWS Neuron tutorial with HF Trainer that uses torchrun:
https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 10-26-2022 16:57:52 | 10-26-2022 16:57:52 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19907). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,906 | closed | `accelerate` support for `RoBERTa` family | # What does this PR do?
This PR adds `accelerate` support for:
- `RoBERTa`
- `data2vec_text`
- `Lilt`
- `Luke`
- `XLM-RoBERTa`
- `CamemBERT`
- `LongFormer`
This way, any of the models above can be loaded in 8bit using `load_in_8bit=True`.
Since these models copy the same `xxxLMHead` from `RoBERTa`, I had to change the copied modules too. I am also happy to break this PR down into several smaller PRs.
This PR also fixes a small bug in the `accelerate` tests where the variable `input_dict` is overridden by `xxForMultipleChoice` models.
Can also confirm all slow tests pass (single + multiple GPUs)
cc @sgugger @ydshieh
| 10-26-2022 16:28:57 | 10-26-2022 16:28:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,905 | closed | Update check_copies.py | Added the proper info for the Hindi Translation of README File
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-26-2022 15:56:49 | 10-26-2022 15:56:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This Pull Request's issues are being managed in an another Pull Request, so closing this one ! |
transformers | 19,904 | closed | `return_loss=True` in call for `TFCLIPModel` bugs out. | ### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
To reproduce the bug I have used the following code snippet 👇
```python
import tensorflow as tf
from PIL import Image
import requests
from transformers import CLIPProcessor, TFCLIPModel
model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="tf", padding=True
)
outputs = model(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
attention_mask=inputs["attention_mask"],
return_loss=True,
return_dict=True,
)
```
### Expected behavior
The call should execute and we should obtain the `outputs`. | 10-26-2022 15:33:38 | 10-26-2022 15:33:38 | Hi @ariG23498, thanks for reporting this issue.
Could you give more information about the current behaviour? Specifically any tracebacks or more details about what is happening when you do execute? <|||||>I have created a [colab notebook](https://gist.github.com/ariG23498/f736dea2f6f488d6c55fd9bb107bef13) that can help you with the traceback.
Let me know if you need something else. Thanks for the quick response @amyeroberts (as always 😃)
<|||||>It looks like the problem in this issue is that you are not passing along as many images as texts. Passing `images=[image, image]` makes your reproducer pass.
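For reference, the reproducer with one image per text prompt (per the suggestion above):
```python
import requests
from PIL import Image
from transformers import CLIPProcessor, TFCLIPModel

model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# one image per text so the contrastive loss gets a square logits matrix
inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=[image, image],
    return_tensors="tf",
    padding=True,
)
outputs = model(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    attention_mask=inputs["attention_mask"],
    return_loss=True,
)
print(outputs.loss)
```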
<|||||>@sgugger Yes, this was the problem the whole time 😢 . The documentation has to be fixed then.
https://huggingface.co/docs/transformers/model_doc/clip<|||||>Indeed, do you want to make a PR with that?<|||||>@sgugger Yes, I will take it up.<|||||>@sgugger Have been thinking over this, should there be same number of images as text ? I do not see any reason to restrict it this way . Let me know if I am missing something . <|||||>> @sgugger Have been thinking over this, should there be same number of images as text ? I do not see any reason to restrict it this way . Let me know if I am missing something .
@sgugger Any thoughts on this ? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Pinging on this issue<|||||>@ArthurZucker, would you like to take a look at this?<|||||>@LysandreJik @ArthurZucker The confusion here is: should the number of images equal the number of texts?<|||||>Hey, I think that this was solved, I can't reproduce it on main. You are right, the number of images should not necessarily be the same as the number of texts.
```python
>>> inputs["pixel_values"].shape
TensorShape([1, 3, 224, 224])
>>> inputs["input_ids"].shape
TensorShape([2, 7])
>>> outputs.loss
<tf.Tensor: shape=(1,), dtype=float32, numpy=array([nan], dtype=float32)>
```
Now the question is rather "should the loss actually be `nan`?" 😅 <|||||>@ArthurZucker oh, great, let me look at the fix. Last time I checked, the way the contrastive loss was computed was flawed. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,903 | closed | Created README_hd.md | A Hindi Translation for README
# What does this PR do?
It adds the Hindi Translation for the README File !
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-26-2022 15:12:01 | 10-26-2022 15:12:01 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19903). All of your documentation changes will be reflected on that endpoint.<|||||>added the proper info in [this dictionary](https://github.com/huggingface/transformers/blob/7a1c68a8454c25c55f3f8978c182ea90e3412f5c/utils/check_copies.py#L39)
By a new pull request
[Update check_copies.py #19905](https://github.com/huggingface/transformers/pull/19905)<|||||>No this should all be in the same pull request please.<|||||>> No this should all be in the same pull request please.
Updated the check_copies.py in this current Pull Request and closed the previous Pull Request !<|||||>Any Update ?<|||||>I think you need to run `make fix-copies` on your side to adjust the READMEs, then it should be good to merge if all comments are addressed :-)<|||||>Please address remaining comments along with steps Sylvain has mentioned and then we are good to go<|||||>> Please address remaining comments along with steps Sylvain has mentioned and then we are good to go
Addressed all the comments and updated the file according to them !
Can you please help me understand this fix-copies concept that Sylvain mentioned, as I don't know about it!<|||||>Hello @AkshitGulyan, in the above PR I fixed subtle and time-consuming bugs to run `make fix-copies` without any issues. The details are below so that you can do these things next time.
1. When I ran `make fix-copies` locally I got below error:
```
(ml) sourabmangrulkar@Sourabs-MacBook-Pro transformers % make fix-copies
python utils/check_copies.py --fix_and_overwrite
Traceback (most recent call last):
File "/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py", line 572, in <module>
check_copies(args.fix_and_overwrite)
File "/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py", line 270, in check_copies
check_model_list_copy(overwrite=overwrite)
File "/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py", line 455, in check_model_list_copy
localized_md_list = get_model_list(filename, _start_prompt, _end_prompt)
File "/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py", line 303, in get_model_list
while not lines[start_index].startswith(start_prompt):
IndexError: list index out of range
make: *** [fix-copies] Error 1
```
2. After spending time diving into `utils/check_copies.py`, I found the issue: the `prompt_start` specified was not matching the corresponding line in `README_hd.md`. I made them the same.
3. Then got this issue:
```
Traceback (most recent call last):
File "/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py", line 354, in convert_to_localized_md
localized_model_index = {
File "/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py", line 355, in <dictcomp>
re.search(r"\*\*\[([^\]]*)", line).groups()[0]: line
AttributeError: 'NoneType' object has no attribute 'groups'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py", line 575, in <module>
check_copies(args.fix_and_overwrite)
File "/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py", line 270, in check_copies
check_model_list_copy(overwrite=overwrite)
File "/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py", line 459, in check_model_list_copy
readmes_match, converted_md_list = convert_to_localized_md(md_list, localized_md_list, _format_model_list)
File "/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py", line 359, in convert_to_localized_md
raise AttributeError("A model name in localized READMEs cannot be recognized.")
AttributeError: A model name in localized READMEs cannot be recognized.
(ml) sourabmangrulkar@Sourabs-MacBook-Pro transformers % python utils/check_copies.py
Traceback (most recent call last):
File "/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py", line 351, in convert_to_localized_md
localized_model_index = {
File "/Users/sourabmangrulkar/Code/transformers/utils/check_copies.py", line 352, in <dictcomp>
re.search(r"\*\*\[([^\]]*)", line).groups()[0]: line
AttributeError: 'NoneType' object has no attribute 'groups'
```
This was a subtle bug which took quite some time to figure out. You had formatted the following models with improper spaces, causing the regex to fail; the buggy version is shown below:
```
1. ** [TrOCR] (https://huggingface.co/docs/transformers/model_doc/trocr) ** (from Microsoft) released with the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
1. ** [UL2] (https://huggingface.co/docs/transformers/model_doc/ul2) ** (from Google Research) released with the paper [Unifying Language Learning Paradigms](https://arxiv.org/abs/2205.05131v1) by Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Xavier Garcia, Dara Bahri, Tal Schuster, Huaixiu Steven Zheng, Neil Houlsby, Donald Metzler
```
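A standalone illustration of why those extra spaces break the regex used by `check_copies.py`:
```python
import re

pattern = r"\*\*\[([^\]]*)"

good = "1. **[TrOCR](https://huggingface.co/docs/transformers/model_doc/trocr)** (from Microsoft) ..."
bad = "1. ** [TrOCR] (https://huggingface.co/docs/transformers/model_doc/trocr) ** (from Microsoft) ..."

print(re.search(pattern, good).groups()[0])  # 'TrOCR'
print(re.search(pattern, bad))               # None -> calling .groups() on it raises AttributeError
```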
So, after fixing it everything works as expected:
```
(ml) sourabmangrulkar@Sourabs-MacBook-Pro transformers % make fix-copies
python utils/check_copies.py --fix_and_overwrite
python utils/check_table.py --fix_and_overwrite
python utils/check_dummies.py --fix_and_overwrite
```
Also, model list is very very inconsistent with some models having names in Hindi while others in English. Follow the format where all model names are in latin script instead of Devanagari script. <|||||>Hello @AkshitGulyan, please transfer the changes from above sample PR to this PR. Thank you and hope the above explanation clarifies the steps that Sylvain was suggesting. <|||||>Hello @AkshitGulyan, can you please reopen this PR and transfer the relevant changes from above sample PR to this PR. |
transformers | 19,902 | closed | Allow flax subfolder | First, I'm sorry about this long list of commits :sweat_smile: - I have my fork set up correctly now so this shouldn't happen again.
Second, this change would be very useful for this PR in `diffusers` so that clip can be loaded from a subfolder: https://github.com/huggingface/diffusers/pull/880#discussion_r1004209900 | 10-26-2022 14:55:34 | 10-26-2022 14:55:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,901 | closed | Create dummy models | # What does this PR do?
This is a new script based on [a previous one](https://gist.github.com/LysandreJik/39058fe6fa8771f74dda7e789a6f63ea#file-create_dummy_models-py)
In the comment, links to 3 reports are provided.
### (Probably) To Do:
- As we shrink the tokenizer vocab size, the special tokens (bos/eos/pad etc.) might change too. I think we should also try to update the attributes in `tiny_config` whose names end with `__token_id`.
- We should probably provide an option to upload to Hub.
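A hypothetical sketch of the first to-do above (the helper name and loop are assumptions, not part of the script yet):
```python
def sync_special_token_ids(tiny_config, tokenizer):
    """Copy every *_token_id the shrunk tokenizer exposes onto the tiny config."""
    for name in dir(tiny_config):
        if not name.endswith("_token_id"):
            continue
        new_id = getattr(tokenizer, name, None)  # tokenizers expose pad_token_id, eos_token_id, ...
        if new_id is not None:
            setattr(tiny_config, name, new_id)
```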
### Remark
Currently, if we can not shrink the tokenizer's vocab size for a model type, we still build models for it but give a warning in the report. We should not use them for pipeline testing though (which is what our pipeline testing does so far)
### Current states
- #### These need to be treated specially
- EncoderDecoder
- VisionEncoderDecoder
- SpeechEncoderDecoderModel
- VisionTextDualEncoder
- #### Some of the following need to check, but others are expected not to work
- BertGeneration
- Camembert
- DecisionTransformer [This model doesn't require any processor -> need to allow this case]
- ~~DonutSwin~~
- Esm
- MarianForCausalLM
- MT5
- ~~PegasusX~~
- QDQBert
- ReformerModelWithLMHead
- Speech2Text2ForCausalLM
- TimeSeriesTransformer [This model doesn't require any processor -> need to allow this case]
- TrajectoryTransformer
- TrOCRForCausalLM
- ~~ViTMSN~~
- ~~Wav2Vec2Conformer~~
- XLMProphetNet
- XLMRoberta | 10-26-2022 14:33:10 | 10-26-2022 14:33:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Here are the 3 formats of the reports
[simple_report.txt](https://github.com/huggingface/transformers/files/9872354/simple_report.txt)
[failed_report.json](https://github.com/huggingface/transformers/files/9872355/failed_report.txt)
[tiny_model_creation_report.json](https://github.com/huggingface/transformers/files/9872356/tiny_model_creation_report.txt)
<|||||>Nice, thanks @ydshieh! I'll take it for a spin tomorrow.<|||||>I will take care of the quality check (don't want to push more commits at this moment) 🙏 <|||||>> The description of which models succeeded and which didn't could be slimmer, for example in a TQDM bar where we would print the models that didn't succeed, for example for:
Do you mean we only print the failed ones?<|||||>@LysandreJik Could you take a final look regarding my 2 comment above? Also [this one](https://github.com/huggingface/transformers/pull/19901#issuecomment-1293736263) 🙏
Thank you for the review 💯 <|||||>Final remark: those progress bars are not from downloading, but from training the tokenizers (to reduce the vocab size). I will ask @Narsil if we can disable showing those 😃 <|||||>Closing for now in order to fix a few real edge cases (not to run CI).<|||||>You can disable the progress bar indeed: `Trainer(... show_progress=False)`
transformers | 19,900 | closed | Safetensors tf | # What does this PR do?
This PR continues to explore loading models using `safetensors` by adding support for TensorFlow models. It adds support for:
- saving model using `safetensors` with the same API as PyTorch models
- loading models with a safetensors file in TensorFlow-format
- loading models with a safetensors file in PyTorch-format
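A minimal sketch of the intended usage (assuming the flag keeps the same name as on the PyTorch side, `safe_serialization`):
```python
from transformers import TFBertModel

model = TFBertModel.from_pretrained("bert-base-uncased")

# save the weights as a safetensors file instead of a TF h5 file
model.save_pretrained("bert-tf-safetensors", safe_serialization=True)

# loading picks the safetensors file up transparently
reloaded = TFBertModel.from_pretrained("bert-tf-safetensors")
```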
Follow-up PRs will add the support for sharded checkpoints in TensorFlow as well as loading in PyTorch a safetensors file in TensorFlow-format
**Note:** All tests failures are due to the new release of safetensors being broken, not this PR :-) | 10-26-2022 14:24:51 | 10-26-2022 14:24:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,899 | closed | minor fix in jax attention bias | Fixes #19897
Adding this PR to check whether it passes all the test cases, or whether the fix has potential issues.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-26-2022 13:36:52 | 10-26-2022 13:36:52 | cc @sanchit-gandhi <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19899). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @amankhandelia! In general, I'm happy with this change. What worries me is that the tests for BART and BART-dervied models currently pass on main, which suggests there shouldn't be a need to change the attention mask value. It suggests that there could be an issue with the FlaxMBartForCausalLM model that you're adding. I've replied more in-depth on the issue as it's more relevant there https://github.com/huggingface/transformers/issues/19897#issuecomment-1294648919. Keeping this PR open until we determine whether it's a generic Flax BART issue or a FlaxBartForCausalLM one!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,898 | closed | Let inputs of fast tokenizers be tuples as well as lists | # What does this PR do?
Fixes #19882
Not sure if this was an oversight when introducing fast tokenizers or if there is a real reason for not accepting tuples as well as list here (which is the case everywhere else from a quick search). We'll see if the CI picks something failing but it looks like it fixes the issue. | 10-26-2022 13:29:24 | 10-26-2022 13:29:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,897 | closed | Flax implementation of BART contain NaN in hidden_states | ### System Info
While implementing the Donut model (#19831), one of my tests, `test_from_pretrained_save_pretrained`, was failing. While debugging the failure, I found it happens because the test ends up comparing nan with nan. Tracing the root cause, it came down to this line: [`jnp.full(attention_mask.shape, float("-inf")).astype(self.dtype)`](https://github.com/huggingface/transformers/blob/fdffee8a601d0408c7e0f57fbb56217f8b57e62a/src/transformers/models/mbart/modeling_flax_mbart.py#L386). The `float("-inf")` causes `dot_product_attention_weights` to return NaN instead of 0, which then cascades downstream. Since this code is copied from BART, and that code has been copied to several different models (OPT, PEGASUS, BLENDERBOT, etc.), I am raising this issue against BART.
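A minimal standalone sketch of how a fully masked attention row with an all-`-inf` bias turns into NaN under softmax, while a large finite value stays well defined:
```python
import jax
import jax.numpy as jnp

# an all -inf row has no finite maximum, so the softmax normalization is undefined and returns nan
print(jax.nn.softmax(jnp.full((4,), float("-inf"))))  # [nan nan nan nan]

# with a large finite bias the row stays finite (uniform weights over the masked positions)
print(jax.nn.softmax(jnp.full((4,), -1e10)))          # [0.25 0.25 0.25 0.25]
```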
IMHO, we should replace `float("-inf")` with `-1e10`, as is already the case for several other models such as RoBERTa. If the maintainers agree with this understanding and solution, I can raise a quick PR to fix it; otherwise, please suggest a solution.
@patil-suraj @patrickvonplaten
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run testcase from my branch in the above mentioned PR
### Expected behavior
Hidden States should contain 0 instead of instead of NaN | 10-26-2022 13:24:32 | 10-26-2022 13:24:32 | @ArthurZucker and also @sanchit-gandhi since you know Flax Bart quite well <|||||>Hey @amankhandelia - the test of interest passes for the current BART and BART-derived models, so I wonder whether the issue is with the BART model or rather the mBART one? In general, I'm happy with the notion of changing the mask value from `-inf` to a large non-negative number, I just want to determine whether the issue lies with BART or FlaxMBartForCausalLM!
I've noticed in your PR that you're adding FlaxMBartForCausalLM as well as the Flax DONUT model in the same PR (https://github.com/huggingface/transformers/pull/19831). Perhaps you could first add FlaxMBartForCausalLM in a smaller separate PR? We could then run through the failing test together and try to assert whether it's an issue with FlaxMBartForCausalLM or Flax BART and fix any other issues that crop up 🤗<|||||>Hey @sanchit-gandhi, thanks for the feedback,
Makes sense, will raise a separate PR for the FlaxMBartForCausalLM and check the same.<|||||>Hey @amankhandelia, thanks for understanding! Feel free to tag me on the PR as soon as it's ready and I'll try to get you a review ASAP!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,896 | closed | Generate: contrastive search uses existing abstractions and conventions | # What does this PR do?
This PR updates contrastive search to follow existing abstractions conventions in other generation functions. It consists of several tiny changes, with the reasoning for each change in the PR comments below.
This is part of the effort to make converting to TF easier. All slow tests pass (`RUN_SLOW=1 py.test tests/generation/test_generation_utils.py -k contrastive -vv`) | 10-26-2022 12:13:42 | 10-26-2022 12:13:42 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,895 | closed | Generate: contrastive search uses existing abstractions and conventions | # What does this PR do?
This PR updates contrastive search to follow existing abstractions conventions in other generation functions. It consists of several tiny changes, with the reasoning for each change in the PR comments below.
This is part of the effort to make converting to TF easier. All slow tests pass (`RUN_SLOW=1 py.test tests/generation/test_generation_utils.py -k contrastive -vv`) | 10-26-2022 12:12:59 | 10-26-2022 12:12:59 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19895). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,894 | closed | Unable to Finetune Deberta | I am trying to finetune deberta for irony detection task, colab's notebook link can be found [here](https://colab.research.google.com/drive/1mZI5W2ozc8speZiwzOtCrWABhSaxqhWD?usp=sharing)
When I try to use 'microsoft/deberta-v3-base' checkpoint with AutoModel, I'm getting the following error :
RuntimeError: Expected target size [32, 2], got [32]
but when I use the same model with 'bert-base-uncased' or roberta (with some changes in head) it works fine. The one can find working code for bert based in [this](https://colab.research.google.com/drive/1PXacY2YgAfk6IYC0sAynp88Z6ndWrqYQ?usp=sharing) notebook.
When I printed the shapes of predictions and labels, I got outputs as torch.Size([32, 30, 2]), torch.Size([32]) respectively. In the case of bert, shapes of outputs were torch.Size([32, 2]), torch.Size([32]) for predictions and labels.
Here 32 is the batch size, and 30 is the sequence length.
Can someone let me know what I'm doing wrong? | 10-26-2022 11:13:13 | 10-26-2022 11:13:13 | @sgugger
@patil-suraj
@patrickvonplaten <|||||>Please use the [forums](https://discuss.huggingface.co/) to get help debugging your code. In this instance you are using the base pretrained model (without a classification head) to do classification, so it does not work. You should consider using `AutoModelForSequenceClassification`.<|||||>Okay, sure, will take care of that next time, and thanks for the response! Just one question: do BERT and RoBERTa provide classification heads in their base models?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,893 | closed | Geh | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-26-2022 10:03:15 | 10-26-2022 10:03:15 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19893). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,892 | closed | Add `flan-t5` documentation page | # What does this PR do?
This PR adds a `FLAN-T5` page to the documentation - following the same approach as for `t5-v1.1`: https://huggingface.co/docs/transformers/model_doc/t5v1.1
cc @sgugger @ydshieh @ArthurZucker
| 10-26-2022 08:59:28 | 10-26-2022 08:59:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for the feedback @sgugger ! I should have addressed the comments now |
transformers | 19,891 | closed | fix jit trace error for model forward sequence is not aligned with jit.trace tuple input sequence, update related doc | Signed-off-by: Wang, Yi A <[email protected]>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@sgugger
| 10-26-2022 07:40:44 | 10-26-2022 07:40:44 | @liangan1 @jianan-gu @yao-matrix please help review<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> I still do not understand the main problem: it looks like JIT does not support dictionary inputs which are used in every model in Transformers. Classification models are not the only ones using the `labels` key, all task-specific models do... and a model that a user wants to evaluate will very likely have a dataset with labels. The proposed workaround to use label smoothing makes no sense for an evaluation.
>
> It looks like this integration has maybe be merged too quickly and doesn't actually work or are there models that can be evaluated with it?
Yes. All the cases containing "labels" will fail in jit.trace, while other cases like QnA pass. It's a PyTorch limitation: jit.trace only supports tuple input for now. Intel has committed a PR (https://github.com/pytorch/pytorch/pull/81623) for this, expected to be released in PyTorch 1.14 (I also added it in the doc).
If we want jit.trace to succeed for such cases, the other option is to modify the model like below, making the forward parameter order match the tuple input order:
```py
--- a/src/transformers/models/distilbert/modeling_distilbert.py
+++ b/src/transformers/models/distilbert/modeling_distilbert.py
@@ -731,11 +731,11 @@ class DistilBertForSequenceClassification(DistilBertPreTrainedModel):
)
def forward(
self,
+ labels: Optional[torch.LongTensor] = None,
input_ids: Optional[torch.Tensor] = None,
attention_mask: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
inputs_embeds: Optional[torch.Tensor] = None,
- labels: Optional[torch.LongTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
```
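To illustrate the limitation with a toy module (this is not the Trainer or DistilBERT code): `torch.jit.trace` binds a tuple of example inputs positionally, so a tuple built from a dict-style dataset must follow `forward()`'s parameter order.
```python
import torch

class Toy(torch.nn.Module):
    def forward(self, input_ids, attention_mask=None, labels=None):
        logits = input_ids.float()
        if labels is not None:
            return logits, (logits - labels.float()).abs().mean()
        return logits

model = Toy().eval()
# the two tensors are bound positionally to (input_ids, attention_mask)
example_inputs = (torch.ones(1, 4, dtype=torch.long), torch.ones(1, 4, dtype=torch.long))
traced = torch.jit.trace(model, example_inputs)
```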
"label smoothing" is just a smart way to walk around the jit.trace failure, since it happens to pop the labels from the input
<|||||>We are not going to make a breaking change in the parameter order of every model. So basically the jit eval functionality added in #17753 does not work and has never worked for any model which contain labels, can you confirm?
Since it is an **evaluation** function, I fail to see the point of having it in Transformers until PyTorch supports it.<|||||>The key point of the jit error cases met here is that jit cannot well handle the case that the dictionary forward parameter order does not match the dataset input order, not specific to whether there are "labels" or not. And to improve PyTorch jit ability to solve this issue, we landed https://github.com/pytorch/pytorch/pull/81623 in PyTorch;
For model inference with jit, there are already many cases that natively get the benefits, like the question-answering example mentioned above;
For the model inferences that fail with jit, we capture this with the exception here to make it fall back, and use logging to notify users; meanwhile, these failed cases shall work once a PyTorch release contains this [feature](https://github.com/pytorch/pytorch/pull/81623) (expected in the next release);
Besides, bringing "label smoothing" in here with jit is not that reasonable, since it would be confusing for users.
<|||||>Hi, @sgugger to make it a clear, I file a issue to record the issue I meet https://github.com/huggingface/transformers/issues/19973. also I agree that "label smoothing" is a training skill and I have removed it in inference part. This PR could fix the error listed in https://github.com/huggingface/transformers/issues/19973<|||||>> You are a bit beating around the bush here: are there any models with a head where this feature can be used right now without hacks? I understand support in PyTorch is coming in the next version for dictionaries, but I think this feature was just added to early. Can the doc explicitly mention that the feature requires a nightly install?
Hi, sgugger
For PyTorch >= 1.14.0 (the nightly version is 1.14.0), jit can benefit any model for predict and eval.
For PyTorch < 1.14.0, jit can benefit models like question answering, whose forward parameter order matches the tuple input order in jit.trace. If we meet a case like text classification, whose forward parameter order does not match the tuple input order in jit.trace during evaluation, jit trace will fail; we capture this with the exception here to make it fall back, and use logging to notify users.<|||||>> Thanks for the precision. Could you add all of this to the documentation? Also have one last question on the actual code.
Which document would you recommend adding this to, since it's not CPU-specific?<|||||>> which document would you recommend adding this to
Every time the jit eval is mentioned. |
transformers | 19,890 | closed | Should always set pad_to_max_length=False when doing whole word mask language model fine-tuning | ### Feature request
I am researching whole word mask language model fine-tuning and I am making some custom changes to the code from the official example (https://github.com/huggingface/transformers/tree/main/examples/research_projects/mlm_wwm).
I find there is an argument field in the class DataTrainingArguments:
```python
    pad_to_max_length: bool = field(
        default=False,
        metadata={
            "help": (
                "Whether to pad all samples to `max_seq_length`. "
                "If False, will pad the samples dynamically when batching to the maximum length in the batch."
            )
        },
    )
```
Although its default value is False, when we set it to True it causes a problem when we use the DataCollatorForWholeWordMask collator.
### Motivation
According to the source code of DataCollatorForWholeWordMask, it selects the tokens to be masked among all input_tokens. If we pad before the collator runs, then during whole word masking we may get many candidate indices that point at PAD tokens. I believe such PAD tokens are meaningless for whole word mask fine-tuning.
Although the default value of pad_to_max_length is False, I did find that a lot of people customize the code from the official examples by setting the tokenizer padding to "max_length" and then calling the map function on the dataset:
```python
def tokenize_function(examples):
    # Remove empty lines
    examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()]
    return tokenizer(examples["text"], padding="max_length", truncation=True, max_length=data_args.max_seq_length)

tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=data_args.preprocessing_num_workers,
    remove_columns=[text_column_name],
    load_from_cache_file=not data_args.overwrite_cache,
)
```
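For contrast, a minimal sketch of the suggested flow (the checkpoint and arguments below are illustrative, not taken from the example script): tokenize without padding and let the collator pad each batch dynamically, so PAD tokens never enter the whole word mask candidate set.
```python
from transformers import AutoTokenizer, DataCollatorForWholeWordMask

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")

def tokenize_function(examples):
    # truncate only; leave padding to the collator
    return tokenizer(examples["text"], truncation=True, max_length=512)

# pads per batch; with no pre-padding, PAD tokens are never mask candidates
data_collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.15)
```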
### Your contribution
I suggest to remove the padding action before the DataCollatorForWholeWordMask collator process and emphasize that padding action before the collator process may influence the training of our model. | 10-26-2022 07:40:10 | 10-26-2022 07:40:10 | Note that this is not a maintained example, so we are not planning on making any changes to that script.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,889 | closed | [DOCTEST] Add `configuration_mbart.py` , `configuration_mctc.py` , `configuration_layoutlm.py` , `configuration_layoutlmv2.py` ,` configuration_layoutlmv3.py` | Based on #19487 .
Resolved #19806 and #19805 | 10-26-2022 05:27:29 | 10-26-2022 05:27:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,888 | closed | Rescale layer in whisper processor | ### Feature request
The Whisper processor does not currently rescale audio inputs to the expected [-1, 1) range that the model requires.
### Motivation
Consistency between model processor layers.
### Your contribution
- | 10-26-2022 05:05:16 | 10-26-2022 05:05:16 | Please provide a code reproducer for the bug you are experiencing or there is nothing we can do to help.<|||||>```python
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from datasets import load_dataset
from transformers import AutoProcessor, AutoModelForCTC
def inference(input, processor, model):
output = processor(input, sampling_rate=16000, return_tensors="pt")
if "whisper" in processor.tokenizer_class.lower():
input_features = output.input_features
with torch.no_grad():
logits = model.generate(input_features)
transcription = processor.batch_decode(logits, skip_special_tokens=True, output_word_offsets=True)[0]
else:
input_features = output.input_values
with torch.no_grad():
logits = model(input_features).logits[0]
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.decode(predicted_ids, output_word_offsets=True)
return transcription
def get_transcript(audio, model, processor):
audio_scaled = ((audio - audio.min()) / (audio.max() - audio.min())) * (2) - 1
scaled_transcription = inference(audio_scaled, processor, model)
unscaled_transcription = inference(audio, processor, model)
return {"scaled": scaled_transcription, "unscaled": unscaled_transcription}
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = ds[0]["audio"]["array"]
audio = ((audio - audio.min()) / (audio.max() - audio.min())) * 65535 # Rescale to [0, 65535] to show issue
whisper_processor = WhisperProcessor.from_pretrained("openai/whisper-base.en")
whisper_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base.en").to("cpu")
wav2vec_processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
wav2vec_model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")
whisper_transcripts = get_transcript(audio, whisper_model, whisper_processor)
wav2vec_transcripts = get_transcript(audio, wav2vec_model, wav2vec_processor)
print(f"WHISPER: {whisper_transcripts}")
print(f"WAV2VEC: {wav2vec_transcripts}")
```
Output:
```
WHISPER: {'scaled': ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.',
'unscaled': ' I'}
WAV2VEC: {'scaled': Wav2Vec2CTCTokenizerOutput(text='MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL', char_offsets=None, word_offsets=[{'word': 'MISTER', 'start_offset': 28, 'end_offset': 40}, {'word': 'QUILTER', 'start_offset': 43, 'end_offset': 60}, {'word': 'IS', 'start_offset': 66, 'end_offset': 69}, {'word': 'THE', 'start_offset': 72, 'end_offset': 76}, {'word': 'APOSTLE', 'start_offset': 80, 'end_offset': 103}, {'word': 'OF', 'start_offset': 109, 'end_offset': 111}, {'word': 'THE', 'start_offset': 115, 'end_offset': 118}, {'word': 'MIDDLE', 'start_offset': 120, 'end_offset': 131}, {'word': 'CLASSES', 'start_offset': 133, 'end_offset': 156}, {'word': 'AND', 'start_offset': 168, 'end_offset': 172}, {'word': 'WE', 'start_offset': 174, 'end_offset': 178}, {'word': 'ARE', 'start_offset': 181, 'end_offset': 185}, {'word': 'GLAD', 'start_offset': 187, 'end_offset': 200}, {'word': 'TO', 'start_offset': 205, 'end_offset': 209}, {'word': 'WELCOME', 'start_offset': 212, 'end_offset': 229}, {'word': 'HIS', 'start_offset': 234, 'end_offset': 240}, {'word': 'GOSPEL', 'start_offset': 245, 'end_offset': 267}]),
'unscaled': Wav2Vec2CTCTokenizerOutput(text='MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL', char_offsets=None, word_offsets=[{'word': 'MISTER', 'start_offset': 28, 'end_offset': 40}, {'word': 'QUILTER', 'start_offset': 43, 'end_offset': 60}, {'word': 'IS', 'start_offset': 66, 'end_offset': 69}, {'word': 'THE', 'start_offset': 72, 'end_offset': 76}, {'word': 'APOSTLE', 'start_offset': 80, 'end_offset': 103}, {'word': 'OF', 'start_offset': 109, 'end_offset': 111}, {'word': 'THE', 'start_offset': 115, 'end_offset': 118}, {'word': 'MIDDLE', 'start_offset': 120, 'end_offset': 131}, {'word': 'CLASSES', 'start_offset': 133, 'end_offset': 156}, {'word': 'AND', 'start_offset': 168, 'end_offset': 172}, {'word': 'WE', 'start_offset': 174, 'end_offset': 178}, {'word': 'ARE', 'start_offset': 181, 'end_offset': 185}, {'word': 'GLAD', 'start_offset': 187, 'end_offset': 200}, {'word': 'TO', 'start_offset': 205, 'end_offset': 209}, {'word': 'WELCOME', 'start_offset': 212, 'end_offset': 229}, {'word': 'HIS', 'start_offset': 234, 'end_offset': 240}, {'word': 'GOSPEL', 'start_offset': 245, 'end_offset': 267}])}
```<|||||>You can see in the above that the transcript is gibberish for the unscaled whisper model. This is because it is taking in as input the range [0, 65535] rather than [-1, 1].<|||||>Thanks! cc @sanchit-gandhi and @ArthurZucker <|||||>Hey @JeffreyWardman, this is a really interesting issue! I've chosen not to compare Whisper to Wav2Vec2 in my analysis, as these two systems are intrinsically different in how they process the audio inputs:
With Wav2Vec2, we first normalise the raw audio inputs to (mean, std) = (0, 1). We then pass the normalised audio inputs to the model (as you have done in your code example). In this way, Wav2Vec2 takes raw audio values as its input.
This is exactly the operation that the Wav2Vec2 feature extractor performs for us:
```python
normalised_audio = wav2vec_processor.feature_extractor(audio).input_values
```
With Whisper, we first convert the raw audio inputs to a log-Mel spectrogram, and then feed this spectrogram to the Whisper model. In contrast to Wav2Vec2, Whisper takes the log-Mel features as inputs to the model (rather than audio values).
The audio -> log-Mel conversion is exactly the operation that the Whisper feature extractor performs for us:
```python
logmel_features = whisper_processor.feature_extractor(audio).input_features
```
I've had a dig through the original Whisper codebase and compared it to the paper - it seems as though they perform the feature normalisation in the log-Mel space (_c.f._ Section 2.2 of the [paper](https://cdn.openai.com/papers/whisper.pdf)):
<img width="450" alt="Screenshot 2022-10-27 at 17 01 54" src="https://user-images.githubusercontent.com/93869735/198340987-d6f7b8e8-433a-47e1-ba5f-7869be25125e.png">
To check whether we missed something with our implementation, I ran your code example on the _original_ Whisper repo. To reproduce this, first install the original (OpenAI) version of the model from https://github.com/openai/whisper:
```
pip install git+https://github.com/openai/whisper.git
```
I then tweaked your code snippet to make it compatible with the OpenAI model, following the "official" example provided in https://colab.research.google.com/github/openai/whisper/blob/master/notebooks/LibriSpeech.ipynb:
```python
import torch
import whisper
from datasets import load_dataset
device = "cuda" if torch.cuda.is_available() else "cpu"
model = whisper.load_model("base.en")
model.to(device)
# define the decoding options
options = whisper.DecodingOptions(language="en", without_timestamps=True)
# load audio sample as before
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = ds[0]["audio"]["array"]
audio = ((audio - audio.min()) / (audio.max() - audio.min())) * 65535 # Rescale to [0, 65535] to show issue
def inference(audio):
# whisper pre-processor expects torch tensors (not np.arrays or lists)
audio = torch.tensor(audio)
audio = whisper.pad_or_trim(audio.flatten()).to(device)
mel = whisper.log_mel_spectrogram(audio)
results = model.decode(mel, options)
return results.text
def get_transcript(audio):
audio_scaled = ((audio - audio.min()) / (audio.max() - audio.min())) * (2) - 1
scaled_transcription = inference(audio_scaled)
unscaled_transcription = inference(audio)
return {"scaled": scaled_transcription, "unscaled": unscaled_transcription}
original_transcripts = get_transcript(audio)
print("ORIGINAL OpenAI: \n", original_transcripts)
```
**Print output:**
```
ORIGINAL OpenAI:
{'scaled': 'Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.',
'unscaled': 'I'}
```
Which is the same output that we got with Transformers Whisper. So we can be sure that the Transformers implementation matches the official OpenAI one ✅ Meaning that this is an intrinsic problem with the Whisper model (rather than a Transformers implementation one). I think this comes down to the fact that the Whisper model does not normalise the audio inputs prior to passing them to the log-Mel spectrogram.
In Transformers, we aim to provide a matching implementation to the original model. In that regard, I don't think that we can currently change the codebase for the Transformers Whisper model to normalise audio samples before computing the log-Mel spectrogram features, since this is an inherent limitation of the Whisper model. Instead, what I'll do is post this issue on the original codebase and ask the authors whether this behaviour is expected. If they update their codebase to normalise the inputs, we can do the same in Transformers 🤗
Hope that makes sense and thank you for the great issue!
(edit: opened a discussion thread on the original OpenAI repo, awaiting the author's response https://github.com/openai/whisper/discussions/428#discussion-4510905)
<|||||>Thanks a lot @sanchit-gandhi 💯 , totally agree with you. Also in the various tests that I ran during the integration, I did not really have any issue with custom inputs, so I am also wondering id there are any potential application for that feature request? If yes, we could definitely add an optional argument, but otherwise, I am glad with keeping it close to the original codebase! 👍🏻 <|||||>I think it makes sense to offer an (optional) argument to the feature-extractor indicating whether the audio inputs should be normalised in the audio space:
* `do_normalise` (Optional, defaults to `False`): whether or not to normalise the audio inputs prior to computing the log-Mel features.
This would look something along the lines of:
```python
from transformers import WhisperFeatureExtractor
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-base.en")
# don't normalise
input_features = feature_extractor(audio, do_normalise=False).input_features[0]
# do normalise
input_features = feature_extractor(audio, do_normalise=True).input_features[0]
```
-> we can add this quite easily for more control over inference
_c.f._ https://github.com/openai/whisper/discussions/428#discussioncomment-4057857<|||||>Adding it to my whisper to do list |
transformers | 19,887 | closed | Long-form (including timestamps) for whisper | ### Feature request
https://github.com/huggingface/transformers/commit/504cd71a6b172f177e6da513bea94fadb18ad99c
- Inference is currently only implemented for short-form i.e. audio is pre-segmented into <=30s segments. Long-form (including timestamps) will be implemented in a future release.
When would the ETA be for this?
### Motivation
Whisper is not usable for long audio of speech, or for chunking audio based on timestamps determined by the ASR.
### Your contribution
Guidance/PR in longer term future if not picked up by others in the next month or so | 10-26-2022 05:03:55 | 10-26-2022 05:03:55 | cc @sanchit-gandhi and @ArthurZucker <|||||>Hey @JeffreyWardman! I believe @ArthurZucker has started looking into this, see https://github.com/huggingface/transformers/issues/19490#issuecomment-1285166541 for context!<|||||>Thanks @sanchit-gandhi! By the looks of it, it would still be missing the timestamps. This is quite an important feature for me. I'm not completely familiar with the underlying code for huggingface. How does the chunking work? Does it calculate the first break between words after a given duration?<|||||>cc @ArthurZucker who knows more about timestamp generation!
This blog highlights quite nicely how chunking works in Transformers: https://huggingface.co/blog/asr-chunking<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Does the Hugging Face Whisper implementation support timestamps to generate SRT files like the openai/whisper implementation does?
https://github.com/openai/whisper/blob/main/whisper/utils.py#L64<|||||>Not yet! Working on this you can follow #20620 ! |
transformers | 19,886 | closed | TypeError: ('Keyword argument not understood:', 'ignore_mismatched_sizes') | ### System Info
transformers~=4.21.0
tensorflow~=2.8.2
python~=3.7.3
When I modify type_vocab_size in config.json for bert-base-chinese, and pass the ignore_mismatched_sizes param to the from_pretrained function like below:
`self.bert = TFBertModel.from_pretrained(pretrain_path, ignore_mismatched_sizes=True)`
Then I got this error `TypeError: ('Keyword argument not understood:', 'ignore_mismatched_sizes')`
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
transformers~=4.21.0
tensorflow~=2.8.2
python~=3.7.3
modify type_vocab_size in config.json for bert-base-chinese, and pass the ignore_mismatched_sizes param to the from_pretrained function like below:
`self.bert = TFBertModel.from_pretrained(pretrain_path, ignore_mismatched_sizes=True)`
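For reference, a hypothetical sketch of what I am trying to achieve (assuming a transformers version whose TF `from_pretrained` accepts `ignore_mismatched_sizes`; the `type_vocab_size=4` value is just an example):
```python
from transformers import BertConfig, TFBertModel

config = BertConfig.from_pretrained("bert-base-chinese", type_vocab_size=4)
model = TFBertModel.from_pretrained("bert-base-chinese", config=config, ignore_mismatched_sizes=True)
```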
### Expected behavior
load bert-base-chinese checkpoint successfully | 10-26-2022 03:32:24 | 10-26-2022 03:32:24 | Can you try to upgrade your version of Transformers and try again? If it still fails, could you please post the full traceback?<|||||>I have solved my problem, it was due to the old version transformers code I saved in my project directory that invalidated the transformers version upgraded through pip install~, thx for your replay! |
transformers | 19,885 | closed | Implementing SHAP algorithm on visualBERT transformer | ### System Info
Hi @LysandreJik , @NielsRogge, @sgugger,
I am working on applying the [shap](https://shap.readthedocs.io/en/latest/index.html) algorithm to VisualBERT. I found a piece of code that runs well on distilbart-xsum-12-6; here is the code:
```
import numpy as np
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import shap
import torch
# load transformer language model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-xsum-12-6")
model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distilbart-xsum-12-6").cuda()
s=["In this picture, there are four persons: my father, my mother, my brother and my sister."]
explainer = shap.Explainer(model,tokenizer)
shap_values = explainer(s)

```
But I don't know how to implement the same thing for VisualBERT. Is there any repository which demonstrates the implementation of the SHAP algorithm on the VisualBERT transformer, or does anyone know how to do this?
Thanks for your time.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import BertTokenizer, VisualBertForPreTraining, VisualBertForQuestionAnswering
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
#model = VisualBertForPreTraining.from_pretrained('uclanlp/visualbert-nlvr2-coco-pre')
model = VisualBertForQuestionAnswering.from_pretrained("uclanlp/visualbert-vqa")
from datasets import load_dataset
dataset = load_dataset("textvqa")
explainer = shap.Explainer(model,tokenizer)
shap_values = explainer(dataset['train'][0]['question'])
```
### Expected behavior

| 10-26-2022 02:56:24 | 10-26-2022 02:56:24 | You should use the [forums](https://github.com/huggingface/safetensors/pull/34) for questions like this as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this issue for the reason above. |
transformers | 19,884 | closed | No log or progress bar when running trainer.train() | ### System Info
When I open a .ipynb in VS Code or datashell, there is no log or progress bar when running trainer.train(), but in Jupyter Notebook it shows.
<img width="770" alt="截屏2022-10-26 11 55 46" src="https://user-images.githubusercontent.com/87161948/197910485-5d16a652-48a4-4c83-b110-b9ad7a95046b.png">
<img width="1138" alt="截屏2022-10-26 12 06 12" src="https://user-images.githubusercontent.com/87161948/197910740-3f7ba219-f27c-4a76-82ac-827109d9257e.png">
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained('bert-base-cased', num_labels=2)
print(sum([i.nelement() for i in model.parameters()]) / 10000)

import numpy as np
from datasets import load_metric
from transformers.trainer_utils import EvalPrediction

metric = load_metric('accuracy')

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    logits = logits.argmax(axis=1)
    return metric.compute(predictions=logits, references=labels)

eval_pred = EvalPrediction(
    predictions=np.array([[0, 1], [2, 3], [4, 5], [6, 7]]),
    label_ids=np.array([1, 1, 1, 1]),
)
compute_metrics(eval_pred)

from transformers import TrainingArguments, Trainer

args = TrainingArguments(output_dir='./output_dir', evaluation_strategy='epoch')
args.num_train_epochs = 1
args.learning_rate = 1e-4
args.weight_decay = 1e-2
args.per_device_eval_batch_size = 32
args.per_device_train_batch_size = 16

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset_train,
    eval_dataset=dataset_test,
    compute_metrics=compute_metrics,
)

# train
trainer.train()
```
### Expected behavior
I hope it will output totally the same in all of the three IDEs | 10-26-2022 01:13:29 | 10-26-2022 01:13:29 | Vscode or datashell do not support widgets, as far as I know, so we can't show the same progress bars there as in a notebook.<|||||>thanks |
transformers | 19,883 | closed | Correct README image text | # What does this PR do?
Fixes README typo involving the location of a cat and remote predictions. It reverses the "left" and "right" references so it is correct when looking at the images.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Documentation: @sgugger
| 10-26-2022 00:33:25 | 10-26-2022 00:33:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,882 | closed | Allow tuples in fast tokenizer | ### System Info
- `transformers` version: 4.20.1
- Platform: Linux-5.15.0-48-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Jupyter Notebook
### Who can help?
@SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import transformers as hft
tokenizer = hft.AutoTokenizer.from_pretrained('bert-base-uncased')
tokenizer
# PreTrainedTokenizerFast(name_or_path='bert-base-uncased', vocab_size=30522, model_max_len=512, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'})
tokenizer(
('hello world', 'foo bar')
)
```
Will give this error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
~/ipykernel_29616/3046903860.py in <module>
1 tokenizer(
----> 2 ('hello world', 'foo bar')
3 )
/opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2510 return_length=return_length,
2511 verbose=verbose,
-> 2512 **kwargs,
2513 )
2514 else:
/opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2701 return_length=return_length,
2702 verbose=verbose,
-> 2703 **kwargs,
2704 )
2705
/opt/conda/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py in _batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose)
413
414 if not isinstance(batch_text_or_text_pairs, list):
--> 415 raise TypeError(f"batch_text_or_text_pairs has to be a list (got {type(batch_text_or_text_pairs)})")
416
417 # Set the truncation and padding strategy and restore the initial configuration
TypeError: batch_text_or_text_pairs has to be a list (got <class 'tuple'>)
```
### Expected behavior
Tuples of str should be supported just like lists of str, as it is the case with non-fast tokenizers. For example:
```python
import transformers as hft
tokenizer = hft.BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer(
('hello world', 'how are you?')
)
# {'input_ids': [[101, 7592, 2088, 102], [101, 2129, 2024, 2017, 1029, 102]], 'token_type_ids': [[0, 0, 0, 0], [0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1], [1, 1, 1, 1, 1, 1]]}
``` | 10-25-2022 23:20:12 | 10-25-2022 23:20:12 | Not sure if there is a specific reason for this or if it was just a mistake when it was introduced. In any case the PR above should fix it.<|||||>Thank you! |
transformers | 19,881 | closed | Add BLOOM resources | From #19848, this PR adds resources for BLOOM. | 10-25-2022 22:12:48 | 10-25-2022 22:12:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,880 | closed | Convert None logits processor/stopping criteria to empty list. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19876 (TypeError from GenerationMixin.generate() when stopping_criteria is None)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@gante (?)
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-25-2022 22:04:59 | 10-25-2022 22:04:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for the feedback @gante! I've made the requested changes. I was going to add tests, but then I realized that by changing the defaults to `None`, many existing tests implicitly check that `None` is allowable. 😊 |
transformers | 19,879 | closed | Add GPT2 resources | From #19848, this PR adds resources for GPT2 | 10-25-2022 21:56:33 | 10-25-2022 21:56:33 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,878 | closed | Add T5 resources | From #19848, this PR adds resources for T5 | 10-25-2022 20:58:27 | 10-25-2022 20:58:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,877 | closed | Support fairseq encoder-normalize-before in RoBERTa | ### Feature request
Support Fairseq's `--encoder-normalize-before` variant of transformer-based models (particularly RoBERTa; not sure how many models it applies to), which "apply layernorm before each encoder block". See: https://fairseq.readthedocs.io/en/v0.7.0/models.html
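For readers unfamiliar with the flag, a toy pre-layernorm block is sketched below (illustrative PyTorch only, not the fairseq or DinkyTrain implementation).
```python
import torch
import torch.nn as nn

class PreLNEncoderBlock(nn.Module):
    """Toy block where layernorm is applied *before* each sub-layer."""

    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model))

    def forward(self, x):
        h = self.norm1(x)                         # norm BEFORE self-attention
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.ffn(self.norm2(x))           # norm BEFORE the feed-forward
        return x

y = PreLNEncoderBlock()(torch.randn(2, 10, 64))
```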
### Motivation
There currently exist unofficial hacks of Huggingface's transformer model at https://github.com/princeton-nlp/dinkytrain which uses `--encoder-normalize-before` with fairseq and then applies a series of hacks to make it work with Huggingface's transformer library. See in particular [their custom version of RoBERTa](https://github.com/princeton-nlp/DinkyTrain/blob/main/huggingface/modeling_roberta_prelayernorm.py) which is a hack of Huggingface's RoBERTa implementation and depends on several internal components.
The weights for these hacked models are currently distributed on https://huggingface.co/princeton-nlp but are not actually compatible with the official transformer library. (https://arxiv.org/abs/2202.08005 is the related paper).
It would be great to implement feature parity for `--encoder-normalize-before` in Huggingface's transformer such that these hacks can be prevented.
Note that I have no affiliation with fairseq, the https://github.com/princeton-nlp/DinkyTrain authors, nor the https://arxiv.org/abs/2202.08005 authors.
### Your contribution
I'm happy to contribute a PR for RoBERTa that adds feature parity for the `--encoder-normalize-before` fairseq flag.
Note that I have no affiliation with fairseq nor the https://github.com/princeton-nlp/DinkyTrain authors. | 10-25-2022 19:48:34 | 10-25-2022 19:48:34 | Transformers is very opinionated in that regard and is not a building block library like fairseq. The only way to add support for those models in Transformers would be to add a new modeling file which adapts the code of RoBERTa to only include the code path corresponding to `--encoder-normalize-before`, as putting both in the same modeling file hurts readability.
You can learn more about our philosophy in this regard in [this blog post](https://huggingface.co/blog/transformers-design-philosophy) and if you're interested in making a PR to add this new model, we're looking forward to it!
<|||||>Thanks, I will look at adding the new model. Do you have a policy regarding its name? In this case, the authors did not give it a dedicated name as they see it as just RoBERTa.<|||||>Please see the implementation provided in https://github.com/huggingface/transformers/pull/20305<|||||>That's pretty cool! We had this kind of "problem" when adding support for XLM-R XL models:
https://github.com/huggingface/transformers/pull/12082#issue-665786049<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,876 | closed | TypeError from GenerationMixin.generate() when stopping_criteria is None | ### System Info
transformers 4.23.1
Anaconda Python 3.9.13
Linux
### Who can help?
*(Sorry, I think I botched filling in the template)*
I get an error from GenerationMixin.generate() when passing `stopping_criteria=None` explicitly, even though the type is annotated as Optional:
```
Traceback (most recent call last):
File "/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/home/cmay/sandle/backend-hf/serve-backend-hf.py", line 444, in post_completions
return jsonify(make_api_completions(response_id, created, model_id, lm.complete(
File "/home/cmay/sandle/backend-hf/serve-backend-hf.py", line 158, in complete
for (i, raw_completion) in enumerate(self._complete(
File "/home/cmay/sandle/backend-hf/serve-backend-hf.py", line 247, in _complete
output_token_ids = cast(torch.Tensor, model.generate(
File "/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_c
ontext
return func(*args, **kwargs)
File "/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/transformers/generation_utils.py", line 1379, in gen
erate
stopping_criteria = self._get_stopping_criteria(
File "/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/transformers/generation_utils.py", line 801, in _get
_stopping_criteria
criteria = self._merge_criteria_processor_list(criteria, stopping_criteria)
File "/home/cmay/anaconda3/envs/sandle/lib/python3.9/site-packages/transformers/generation_utils.py", line 809, in _mer
ge_criteria_processor_list
if len(custom_list) == 0:
TypeError: object of type 'NoneType' has no len()
```
The error comes from `_get_stopping_criteria` calling `_merge_criteria_processor_list` with `custom_list=None`:
```python
def _get_stopping_criteria(
self, max_length: Optional[int], max_time: Optional[float], stopping_criteria: Optional[StoppingCriteriaList]
) -> StoppingCriteriaList:
criteria = StoppingCriteriaList()
if max_length is not None:
criteria.append(MaxLengthCriteria(max_length=max_length))
if max_time is not None:
criteria.append(MaxTimeCriteria(max_time=max_time))
criteria = self._merge_criteria_processor_list(criteria, stopping_criteria)
return criteria
def _merge_criteria_processor_list(
self,
default_list: Union[LogitsProcessorList, StoppingCriteriaList],
custom_list: Union[LogitsProcessorList, StoppingCriteriaList],
) -> Union[LogitsProcessorList, StoppingCriteriaList]:
...
```
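One way the crash could be avoided (a sketch of the idea only, not necessarily the exact patch that gets merged): treat a `None` custom list as an empty `StoppingCriteriaList` before merging.
```python
def _get_stopping_criteria(self, max_length, max_time, stopping_criteria):
    criteria = StoppingCriteriaList()
    if max_length is not None:
        criteria.append(MaxLengthCriteria(max_length=max_length))
    if max_time is not None:
        criteria.append(MaxTimeCriteria(max_time=max_time))
    # proposed guard: a None custom list behaves like an empty one
    stopping_criteria = stopping_criteria if stopping_criteria is not None else StoppingCriteriaList()
    return self._merge_criteria_processor_list(criteria, stopping_criteria)
```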
@patrickvonplaten
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
def _complete(self, text: str, tokenizer: PreTrainedTokenizer, model: PreTrainedModel,
stop_strings: List[str]) -> List[RawCompletion]:
input_token_ids = tokenizer(text, return_tensors='pt')['input_ids']
output_token_ids = model.generate(
input_token_ids,
stopping_criteria=StoppingCriteriaList(
SubstringMatchStoppingCriteria(stop_string, text, tokenizer)
for stop_string in stop_strings
) if stop_strings else None,
)
```
Incidentally, I wrote this expecting `None` to be a safe default (given the type annotation of `Optional[StoppingCriteriaList]`) and an empty `StoppingCriteriaList` to be more risky (I wasn't sure if StoppingCriteriaList was designed to handle empty lists). I was a little surprised when the opposite was true~
### Expected behavior
`GenerationMixIn.generate()` should behave the same when `stopping_criteria` is `None` or an empty `StoppingCriteriaList` (the current default). | 10-25-2022 17:35:22 | 10-25-2022 17:35:22 | cc @gante <|||||>@ccmaymay Agreed with your assessment! I see you're working on a fix, so I'll move further discussion there :) |
transformers | 19,875 | closed | Fix the learning rate in an audio-classification example | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The learning rate should be `3e-4` in one of the PyTorch audio classification examples, according to the link to the corresponding run: https://huggingface.co/anton-l/wav2vec2-base-lang-id
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-25-2022 17:21:07 | 10-25-2022 17:21:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Gently pinging @sgugger for final approval<|||||>Thanks for the fix! |
transformers | 19,874 | closed | Use self._trial to generate trial_name for Trainer. | # What does this PR do?
Generate the trial name from `(self._trial or trial)` so it still works when `trial` is `None`.
This is needed because [currently the `optuna` backend gives a None trial when using DDP and rank != 0](https://github.com/huggingface/transformers/blob/v4.23.1/src/transformers/integrations.py#L193).
Related code:
https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/integrations.py#L160-L208
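A tiny sketch of the intended selection logic (illustrative helper, not the exact Trainer diff):
```python
def trial_for_naming(stored_trial, trial):
    # prefer the trial stored on the Trainer (self._trial); fall back to the argument,
    # so rank != 0 workers (which receive trial=None under optuna + DDP) still get a name
    return stored_trial or trial
```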
Or maybe the documentation should be changed.
https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/trainer.py#L2318-L2319
Who can review:
* Trainer: @sgugger
* optuna HPO: @sywangyi
| 10-25-2022 16:42:45 | 10-25-2022 16:42:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Please take a review |
transformers | 19,873 | open | ByT5Tokenizer ignores spaces around added tokens | ### System Info
transformers 4.23.1
### Who can help?
@patrickvonplaten @SaulLu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('google/byt5-base')
tokenizer.add_tokens('<x>', special_tokens=True)
print(tokenizer('<x> <x> <x><x>'))
{'input_ids': [384, 384, 384, 384, 1], 'attention_mask': [1, 1, 1, 1, 1]}
```
in comparison to:
```python
print(tokenizer('a a aa'))
{'input_ids': [100, 35, 100, 35, 100, 100, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
```
### Expected behavior
In my task, the presence of spaces around added tokens is important. Regardless, I think the ByT5 tokenizer should not ignore any characters (bytes). | 10-25-2022 15:27:38 | 10-25-2022 15:27:38 | May also be of interest to @ArthurZucker <|||||>Also cc @Narsil - any ideas here? <|||||>> Also cc @Narsil - any ideas here?
Yes, by default added tokens always use `lstrip/rstrip=True`, which swallows prefix/suffix spaces (it's a convenience for `<special>` tokens so you don't have to worry about how to insert them within some text).
Since ByT5 is pure bytes, it doesn't have `tokenizers` support (it doesn't make sense speed-wise) and uses the "slow" class (which isn't actually slow, though).
```python
from transformers import AddedToken, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/byt5-base")
# tokenizer.add_tokens("<x>", special_tokens=True)
new_token = AddedToken("<x>", lstrip=False, rstrip=False)
tokenizer.add_tokens(new_token, special_tokens=True)
tokenizer._additional_special_tokens.append(new_token)
```
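For reference, a quick check of the workaround; the output shown is what I would expect from the byte offsets above (space byte 32 + 3 = 35, added token id 384), not a verified run:
```python
print(tokenizer("<x> <x> <x><x>"))
# expected: {'input_ids': [384, 35, 384, 35, 384, 384, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1]}
```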
This change will fix it; however, it requires changing internals, which is not great. Definitely looks like a bug.
Pinging @ydshieh, who was looking at this recently and trying to figure out some tokenizer stuff.
I "think" this qualifies as a bug. (Well, the original shared code is not OK: the defaults are to strip left and right, but if you do `add_tokens(AddedToken(.., lstrip=False, rstrip=False))`, then it should honor that.) As for the workaround, I had to look at a few different internal variables to set things appropriately so that the `Trie` class could do its job correctly (otherwise it just couldn't see the `AddedToken` values).<|||||>Sorry for being late here. So, as @Narsil pointed out,
```python
from transformers import AddedToken, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/byt5-base")
new_token = AddedToken("<x>", lstrip=False, rstrip=False)
tokenizer.add_tokens(new_token, special_tokens=True)
```
should work (which is not the case right now) without needing `tokenizer._additional_special_tokens.append(new_token)`.
And the goal is to make the above code snippet do its job correctly. Is this right?<|||||>Hey! I'll take this one on as part of #23909, since it is an issue with `rstrip` and `lstrip` being ignored (the default behaviour, if a token is not special, is to always strip).<|||||>As mentioned, this will take a bit more time; a big refactoring is coming! 🔥
transformers | 19,872 | closed | Fix somehow incorrect model - tokenizer mapping in tokenization testing | # What does this PR do?
In this method
https://github.com/huggingface/transformers/blob/371337a95b5d82cc9376c2595ed2022a5eb2ee6e/tests/test_tokenization_common.py#L107
we update the mapping whenever we find a tokenizer/model pair for a configuration:
```
tokenizer[_fast]: (configuration, model)
```
However, **multiple models can share the same tokenizer class**, and, for example, we get the recently added `LiLT` model for the tokenizer class `LayoutLMv3Tokenizer`. Tests like the following then fail
```bash
tests/models/layoutlmv3/test_tokenization_layoutlmv3.py::LayoutLMv3TokenizationTest::test_torch_encode_plus_sent_to_model
(line 1130) TypeError: forward() got an unexpected keyword argument 'pixel_values'
```
as the model used for `LayoutLMv3TokenizationTest` is the `LiLT` model instead of `LayoutLMv3Model`.
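For illustration, a rough, hypothetical sketch of the kind of guard that avoids this; the helper name and the canonical-pair heuristic are made up for this sketch and are not the actual test code:
```python
# Keep the canonical (config, model) pair for a tokenizer class instead of
# letting a later model that merely reuses the tokenizer (e.g. LiLT reusing
# LayoutLMv3Tokenizer) overwrite it.
def register_pair(mapping, tokenizer_name, config_class, model_class):
    model_prefix = config_class.__name__.replace("Config", "")
    is_canonical = tokenizer_name.startswith(model_prefix)
    if tokenizer_name not in mapping or is_canonical:
        mapping[tokenizer_name] = (config_class, model_class)
```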
1. This is somewhat undesirable, and we would prefer to test the original/canonical model/tokenizer pair.
2. This PR adds a condition to ensure the desired property in 1.
3. We can probably extend the test to cover each possible pair `(model_1, tokenizer)`, `(model_2, tokenizer)`, etc., but I would leave that for another PR. | 10-25-2022 12:24:53 | 10-25-2022 12:24:53 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 19,871 | closed | Generate: contrastive search cosmetic tweaks | # What does this PR do?
Makes some cosmetic tweaks to contrastive search in advance of the TF PR:
- fixes some out-of-place documentation strings;
- corrects type hints;
- limits comments to 120 chars;
- removes redundant variables.
Changes validated against the slow tests for contrastive search. | 10-25-2022 12:22:53 | 10-25-2022 12:22:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,870 | closed | No conv bn folding in ipex to avoid warning | # What does this PR do?
Removes the attempt at convolution-batchnorm folding from the IPEX optimization, as it always fails and throws a warning.
See https://github.com/intel/intel-extension-for-pytorch/issues/250
Most models won't even benefit from attempting it, but note that even a model like ResNet fails due to checks like:
```python
if num_channels != self.num_channels:
```
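For context, a rough sketch of what the resulting `ipex.optimize` call could look like (illustrative only; the checkpoint is just an example, and the `level` / `conv_bn_folding` keyword arguments are my assumption of the relevant IPEX options, so double-check the IPEX docs):
```python
# Disable the conv-bn folding pass explicitly so ipex.optimize does not try
# (and fail) to jit-trace models whose forward passes contain data-dependent
# input checks like the one above.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModel

model = AutoModel.from_pretrained("microsoft/resnet-50").eval()
model = ipex.optimize(model, dtype=torch.float32, level="O1", conv_bn_folding=False)
```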
## Who can review?
- trainer: @sgugger
| 10-25-2022 12:19:16 | 10-25-2022 12:19:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>>
Thanks for pointing this out.
Actually, IPEX applies as many optimizations as it can, and convolution-batchnorm folding does have a chance to benefit vision-related models, e.g. beit/resnet/data2vec_vision.
Hence, if we set it to false, we could lose some potential optimization opportunities.
As for the warning message, it could be improved on the IPEX side.
<|||||>> Thanks for pointing this out. Actually, IPEX applies as many optimizations as it can, and convolution-batchnorm folding does have a chance to benefit vision-related models, e.g. beit/resnet/data2vec_vision. Hence, if we set it to false, we could lose some potential optimization opportunities.
The philosophy in the Hugging Face models seems to be to do a lot of input checks, which is incompatible with the tracing used in IPEX. I could not find a single model that doesn't fail.
Discussion on whether this tracing could be improved in IPEX does not seem to have much traction; see the linked issue. *edit*: it actually goes all the way up to PyTorch internals; I will see if there is traction there.
> As for the warning message, it could be improved on the IPEX side.
This is certainly true as well. <|||||>> So let's leave it as `False` for now and revisit once IPEX has better support?
Yes, we will record this enhancement as a TODO for IPEX.
transformers | 19,869 | closed | Added translation of serialization.mdx to Portuguese Issue #16824 | # What does this PR do?
Fixes #16824
As of this PR, only the serialization.mdx file has been translated.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-25-2022 10:51:52 | 10-25-2022 10:51:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,868 | closed | Add Onnx Config for ImageGPT | # What does this PR do?
Fixes #16308
Adds the changes needed to make ImageGPT models available for ONNX conversion.
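For illustration, a hedged example of how the export could be run once this PR is in; the checkpoint name is real, but the assumption that the default feature is supported for ImageGPT is mine, not the PR's:
```bash
# Export an ImageGPT checkpoint with the transformers.onnx package.
python -m transformers.onnx --model=openai/imagegpt-small imagegpt_onnx/
```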
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ChainYo | 10-25-2022 10:15:34 | 10-25-2022 10:15:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@lewtun With the latest changes, all 4 tests are now passing.

<|||||>> @lewtun With the latest changes, all 4 tests are now passing.
Thanks for iterating so fast, @RaghavPrabhakar66. Good work!
If you have time, you could try uploading an ONNX ImageGPT model to the ONNX organization on the Hub.<|||||>@ChainYo Sure.<|||||>@sgugger fixed it.<|||||>Thanks!