repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 17,354 | closed | add MobileViT model | # What does this PR do?
Add the MobileViT model to Transformers. This is a computer vision model that combines CNNs with transformers: https://machinelearning.apple.com/research/vision-transformer
The model comes in three sizes: small, extra small, and xx-small. There are two heads: image classification and semantic segmentation. Object detection will be added later.
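A quick usage sketch of the image classification head (the class and checkpoint names below are placeholders based on this description, not the final API):
```python
# Hedged sketch: class and checkpoint names are assumptions based on the PR description above.
from PIL import Image
import requests
from transformers import MobileViTFeatureExtractor, MobileViTForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/mobilevit-small")
model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-small")

inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```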
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. (Internal discussion on Slack.)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-19-2022 14:55:36 | 05-19-2022 14:55:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>What is holding on the merge here? It's been a month and a half so let's try to merge this soon :-)<|||||>I can't seem to be able to run the tests again? (It's not failing on my code.)<|||||>You will probably need to rebase on main to get to 0 failures.<|||||>It's a random failure of the tests (reported and is being fixed). I don't mind doing a rebase but I thought there was a way to trigger the tests to run again.<|||||>No the failures you are seeing are all due to the PyTorch 1.12 release and a model that was moved. All the fixes for that are in main but not in this PR, so re-running the tests won't help make them green.
But those are all unrelated to the model addition, so I think we can merge this, no?<|||||>Merging this is fine with me (I don't have write access). :-) |
transformers | 17,353 | closed | [OPT] Run test in lower precision on GPU | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Let's run the test only in half precision. There might have been a bug because fp16 weights are loaded into fp32. Let's make sure everything stays in fp16.
Also if it fails again, we'll see a better error message this time.
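For reference, a minimal sketch of the intent (the checkpoint name is illustrative, not the one pinned in the slow test):
```python
# Illustrative sketch only: "facebook/opt-350m" stands in for the checkpoint used by the test.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16)
# The point of the change: weights should stay in fp16 end to end instead of being loaded into fp32.
assert next(model.parameters()).dtype == torch.float16
```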
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-19-2022 14:23:27 | 05-19-2022 14:23:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,352 | closed | Adding the Portuguese version of the tasks/sequence_classification.mdx documentation | # What does this PR do?
Adding the Portuguese version of the tasks/sequence_classification.mdx documentation
Work on #16824
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@omarespejel
| 05-19-2022 12:52:04 | 05-19-2022 12:52:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you very much for your PR, @jonatasgrosman! If you wish to contribute the translation of another doc, please let me know in the Issue 🤗.
@sgugger I find it ready to merge :)<|||||>Thanks a lot for your help! |
transformers | 17,351 | closed | fix ZeroDivisionError: division by zero | The logic of `if len_dataloader is not None` should be to determine whether `train_dataset` is an `IterableDataset`. But whether `train_dataset` is an `IterableDataset` or not, `len_dataloader` is 1, so it should be changed to `if not isinstance(self.train_dataset, torch.utils.data.IterableDataset)`.
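In code, the change amounts to the following excerpt of `trainer.py` (copied from the linked issue #17350; `epoch_iterator`, `len_dataloader`, and `args` are Trainer internals shown only for context):
```python
# Before
steps_in_epoch = (
    len(epoch_iterator)
    if len_dataloader is not None
    else args.max_steps * args.gradient_accumulation_steps
)

# After (this PR)
steps_in_epoch = (
    len(epoch_iterator)
    if not isinstance(self.train_dataset, torch.utils.data.IterableDataset)
    else args.max_steps * args.gradient_accumulation_steps
)
```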
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17350 (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-19-2022 10:30:07 | 05-19-2022 10:30:07 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17351). All of your documentation changes will be reflected on that endpoint.<|||||>No, this is incorrect. Some `IterableDataset` do have a length and we don't want this specific check here. You did not share your code in your issue, but it looks like the length of your custom iterable dataset is wrong.<|||||>> No, this is incorrect. Some `IterableDataset` do have a length and we don't want this specific check here. You did not share your code in your issue, but it looks like the length of your custom iterable dataset is wrong.
My custom iterable dataset is:
```py
class CustomIterableDataset(torch.utils.data.IterableDataset):
    def __init__(self, tokenizer, data_file, num_lines, block_size):
        self.data_file = data_file
        self.block_size = block_size
        self.num_lines = num_lines
        self.tokenizer = tokenizer
        if num_lines == -1:
            raise Exception("Please pass the number of lines of data via --data_lines")

    def __len__(self):
        return self.num_lines

    def __iter__(self):
        while True:
            with open(self.data_file, 'rt', encoding='utf-8') as f:
                for line in f.readline():
                    line = f.readline().strip()
                    batch_encoding = self.tokenizer.batch_encode_plus(line, add_special_tokens=True, max_length=self.block_size)
                    self.examples = batch_encoding["input_ids"]
                    yield torch.tensor(self.examples[0], dtype=torch.long)
```
I think if we set `max_steps=total_line//pretrain_batch_size` in the `Trainer`, that should be correct?
```py
training_args = TrainingArguments(
    output_dir=args.save_dir, overwrite_output_dir=True, num_train_epochs=num_train_epochs,  # max_steps=total_line//pretrain_batch_size,
    learning_rate=1e-4, weight_decay=0.01, warmup_steps=10000, local_rank=args.local_rank,
    per_device_train_batch_size=pretrain_batch_size, logging_steps=500, save_total_limit=1, logging_dir="./runs",
    load_best_model_at_end=True, save_strategy="epoch", evaluation_strategy="epoch",
    metric_for_best_model="loss")
```
<|||||>If you pass along `max_steps`, it will override the rest, yes.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,350 | closed | ZeroDivisionError: division by zero | ### System Info
```shell
transformers v4.9.2
python v3.8
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When I was training my BERT model using a `CustomIterableDataset(torch.utils.data.IterableDataset)`, the following error was raised:
```
Traceback (most recent call last):
File "train.py", line 102, in <module>
main(args)
File "train.py", line 85, in main
trainer.train()
File "/home/jack/anaconda3/envs/py38trans4.19.2/lib/python3.8/site-packages/transformers/trainer.py", line 1316, in train
return inner_training_loop(
File "/home/jack/anaconda3/envs/py38trans4.19.2/lib/python3.8/site-packages/transformers/trainer.py", line 1627, in _inner_training_loop
self.state.epoch = epoch + (step + 1) / steps_in_epoch
ZeroDivisionError: division by zero
```
I find that this line has a little bug in [`/transformers/trainer.py`, line 1516](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L1526):
```
steps_in_epoch = (
    len(epoch_iterator)
    if len_dataloader is not None
    else args.max_steps * args.gradient_accumulation_steps
)
```
The logic of `if len_dataloader is not None` should be to determine whether `train_dataset` is an `IterableDataset`. But whether `train_dataset` is an `IterableDataset` or not, `len_dataloader` is 1, so it should be changed as follows:
```
steps_in_epoch = (
    len(epoch_iterator)
    if not isinstance(self.train_dataset, torch.utils.data.IterableDataset)
    else args.max_steps * args.gradient_accumulation_steps
)
```
### Expected behavior
```shell
I will fix this bug
```
| 05-19-2022 10:24:00 | 05-19-2022 10:24:00 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Did you fix this problem? I'm having the same bug. |
transformers | 17,349 | closed | Spanish docs - Fix nits and wording | # What does this PR do?
Fix wording and nits in the merged Spanish documentation.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). | 05-19-2022 09:59:23 | 05-19-2022 09:59:23 | @sgugger links to current docs that don't exist (eg `./main_classes/pipelines`) don't work. Do you think it would be worth linking them to the English docs while the Spanish versions become available? Or should we wait?
fyi @osanseviero <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I had been told that links to page that don't exist in one language would be automatically resolved to the English version, cc @mishig25 <|||||>Currently, the links lead to an error if the doc is not yet translated. For example, this fragment in [`autoclass_tutorial`](https://huggingface.co/docs/transformers/main/es/autoclass_tutorial) leads a to an error for not having [model_doc/auto.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/auto.mdx)
translated yet.
> Finalmente, las clases AutoModelFor te permiten cargar un modelo preentrenado para una tarea dada (revisa [aquรญ](https://huggingface.co/docs/transformers/main/es/model_doc/auto) para conocer la lista completa de tareas disponibles).
Please let me know if any help is required. Meanwhile, I will summon more community for the translation.<|||||>This PR fixes some wording and format in the Spanish docs. It would be ready to merge IMO.
I opened issue #17461 regarding the problem with the links.<|||||>Thanks for fixing! |
transformers | 17,348 | closed | ImportError: cannot import name 'AutoProcessor' from 'transformers' | ### System Info
```shell
- `transformers` version: 4.11.3
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.13
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
I am following the steps described in https://huggingface.co/docs/transformers/tasks/asr to test the AST model.
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoProcessor
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Input In [9], in <cell line: 1>()
----> 1 from transformers import AutoProcessor
ImportError: cannot import name 'AutoProcessor' from 'transformers' (/Users/to125419/miniconda3/envs/s2t/lib/python3.8/site-packages/transformers/__init__.py)
### Expected behavior
```shell
I am following the steps described in https://huggingface.co/docs/transformers/tasks/asr
```
| 05-19-2022 08:23:44 | 05-19-2022 08:23:44 | I don't think `AutoProcessor` was already available in Transformers 4.11.
So you might need an upgrade to the latest version:
```
pip install --upgrade transformers
```<|||||>Yes you are right! Thanks for your help :) |
transformers | 17,347 | closed | Add support for conditional DETR model | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Added code and documentation for the conditional DETR model. The conditional DETR files were created by using the "add-new-model-like" feature of CookieCutter, based on the DETR code. All tests pass. One thing I want to ask: I have converted the pretrained weights, how should I give these weights to you?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/Atten4Vis/ConditionalDETR/issues/21
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-19-2022 01:33:34 | 05-19-2022 01:33:34 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,346 | closed | failed doctests examples for data2vec_audio, 1 failed, 4 passed | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.13.0-41-generic-x86_64-with-glibc2.29
- Python version: 3.8.0
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): 2.9.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Following the instruction in https://github.com/huggingface/transformers/issues/16292 as listed below:
Make sure to run the doc example doc test locally as described in https://github.com/huggingface/transformers/tree/master/docs#for-python-files
This is marked done in https://github.com/huggingface/transformers/issues/16292, this was run as a sanity check when working with data2vec_text, https://github.com/huggingface/transformers/issues/17345
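The doctest was run locally roughly as follows (a sketch using standard pytest flags; the linked docs may list additional preparation steps):
```shell
pytest --doctest-modules src/transformers/models/data2vec/modeling_data2vec_audio.py -sv --doctest-continue-on-failure
```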
error message:
[doctest] transformers.models.data2vec.modeling_data2vec_audio.Data2VecAudioForAudioFrameClassification.forward _______________________________________
1420 heads.
1421
1422 Example:
1423
1424 ```python
1425 >>> from transformers import Wav2Vec2FeatureExtractor, Data2VecAudioForAudioFrameClassification
1426 >>> from datasets import load_dataset
1427 >>> import torch
1428
1429 >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
Expected nothing
Got:
Downloading and preparing dataset librispeech_asr/clean to /home/ruihua/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b...
Dataset librispeech_asr downloaded and prepared to /home/ruihua/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b. Subsequent calls will reuse this data.
/home/ruihua/project/huggingface/tf/transformers/src/transformers/models/data2vec/modeling_data2vec_audio.py:1429: DocTestFailure
### Expected behavior
```shell
all doctest examples should pass
```
| 05-19-2022 01:21:12 | 05-19-2022 01:21:12 | I am closing this ticket as it is part of the doctest task, merged to https://github.com/huggingface/transformers/issues/17338 |
transformers | 17,345 | closed | failed doctests examples for data2vec_text, 5 failed, 2 passed | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.13.0-41-generic-x86_64-with-glibc2.29
- Python version: 3.8.0
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): 2.9.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Following the instruction in https://github.com/huggingface/transformers/issues/16292 as listed below:
Make sure to run the doc example doc test locally as described in https://github.com/huggingface/transformers/tree/master/docs#for-python-files
please see attachment for the
[doctest_data2vec_text_errormsg.txt](https://github.com/huggingface/transformers/files/8721864/doctest_data2vec_text_errormsg.txt)
p.s, for sanity check, I also run the doctest sample for the following:
1. bigbird_pegasus: all 5 tests passed
2. data2vec_audio in the same folder: 1 failed, 4 passed
### Expected behavior
```shell
all samples in doctests should pass
```
| 05-19-2022 01:15:56 | 05-19-2022 01:15:56 | I am closing this ticket as it is part of the doctest task, merged to https://github.com/huggingface/transformers/issues/17338 |
transformers | 17,344 | closed | Illegal instruction (core dumped) error: PowerPC8 | ### System Info
```shell
transformers 2.5.1
python3.8
pytorch 1.10.2
All packages installed with conda by way of the conda-forge or powerai repositories, all of them are ppc64-le compatible
```
### Who can help?
I am trying to load a BERT pre-trained model. All imports work fine, and loading the tokenizer works fine, but loading the model does not.
I am running on an nvidia-docker2 container for pytorch, namely ibmcom/pytorch-ppc64le with updated libraries using only powerai repositories. The error occurs here:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import transformers
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForMaskedLM.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
@LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
import transformers
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForMaskedLM.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
### Expected behavior
```shell
What is expected is that the model loads and I can use it to further train it.
```
| 05-19-2022 00:10:14 | 05-19-2022 00:10:14 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,343 | closed | Generate.py doesn't support gpt-j | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.4.0-110-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): 2.9.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.2 (cpu)
- Jax version: 0.3.6
- JaxLib version: 0.3.5
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
```
### Who can help?
@sgugger @pat
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
run_generation.py --model_type gpt2 --model_name_or_path "EleutherAI/gpt-j-6B" --prompt "Hello I am" --length 100
```
Some weights of the model checkpoint at gpt-j-6B/ were not used when initializing GPT2LMHeadModel: ['transformer.h.5.mlp.fc_in.weight', 'transformer.h.10.attn.v_proj.weight', 'transformer.h.8.mlp.fc_in.bias', 'transformer.h.3.attn.v_proj.weight', 'transformer.h.18.mlp.fc_out.bias', 'transformer.h.13.mlp.fc_in.weight', 'transformer.h.1.attn.v_proj.weight', 'transformer.h.0.attn.q_proj.weight', 'transformer.h.12.attn.k_proj.weight', 'transformer.h.4.attn.v_proj.weight', 'transformer.h.22.mlp.fc_out.weight', 'transformer.h.13.attn.out_proj.weight', 'transformer.h.15.mlp.fc_in.bias', 'transformer.h.14.attn.k_proj.weight', 'transformer.h.26.mlp.fc_out.weight', 'transformer.h.12.attn.q_proj.weight', 'transformer.h.25.mlp.fc_out.bias', 'transformer.h.9.attn.q_proj.weight', 'transformer.h.7.attn.q_proj.weight', 'transformer.h.14.mlp.fc_in.weight', 'transformer.h.16.mlp.fc_out.bias', 'transformer.h.2.mlp.fc_out.bias', 'transformer.h.6.mlp.fc_in.weight', 'transformer.h.22.mlp.fc_in.weight', 'transformer.h.19.mlp.fc_out.bias', 'transformer.h.25.attn.k_proj.weight', 'transformer.h.15.attn.q_proj.weight', 'transformer.h.22.attn.q_proj.weight', 'transformer.h.10.mlp.fc_in.bias', 'transformer.h.24.attn.v_proj.weight', 'transformer.h.27.attn.k_proj.weight', 'transformer.h.3.mlp.fc_in.weight', 'transformer.h.16.mlp.fc_in.weight', 'transformer.h.0.mlp.fc_in.bias', 'transformer.h.13.mlp.fc_out.weight', 'transformer.h.4.attn.out_proj.weight', 'transformer.h.8.attn.k_proj.weight', 'transformer.h.5.mlp.fc_out.weight', 'transformer.h.26.attn.out_proj.weight', 'transformer.h.23.mlp.fc_in.weight', 'transformer.h.22.mlp.fc_out.bias', 'transformer.h.26.attn.q_proj.weight', 'transformer.h.11.attn.v_proj.weight', 'transformer.h.4.attn.q_proj.weight', 'transformer.h.21.mlp.fc_out.weight', 'transformer.h.18.attn.k_proj.weight', 'transformer.h.10.attn.q_proj.weight', 'transformer.h.16.attn.k_proj.weight', 'transformer.h.0.attn.out_proj.weight', 'transformer.h.2.attn.q_proj.weight', 'transformer.h.11.attn.out_proj.weight', 'transformer.h.5.attn.out_proj.weight', 'transformer.h.25.mlp.fc_in.weight', 'transformer.h.16.attn.out_proj.weight', 'transformer.h.3.mlp.fc_out.bias', 'transformer.h.19.mlp.fc_in.weight', 'transformer.h.1.attn.k_proj.weight', 'transformer.h.10.attn.k_proj.weight', 'transformer.h.6.mlp.fc_out.bias', 'transformer.h.15.attn.out_proj.weight', 'transformer.h.2.attn.k_proj.weight', 'transformer.h.6.mlp.fc_in.bias', 'transformer.h.13.attn.q_proj.weight', 'transformer.h.15.mlp.fc_in.weight', 'transformer.h.6.mlp.fc_out.weight', 'transformer.h.8.mlp.fc_in.weight', 'transformer.h.2.attn.v_proj.weight', 'transformer.h.19.attn.v_proj.weight', 'transformer.h.14.mlp.fc_out.weight', 'transformer.h.5.attn.k_proj.weight', 'transformer.h.24.mlp.fc_out.bias', 'transformer.h.7.mlp.fc_in.weight', 'transformer.h.20.attn.v_proj.weight', 'transformer.h.23.attn.q_proj.weight', 'transformer.h.16.mlp.fc_out.weight', 'transformer.h.25.attn.q_proj.weight', 'transformer.h.12.attn.v_proj.weight', 'transformer.h.22.mlp.fc_in.bias', 'transformer.h.22.attn.v_proj.weight', 'transformer.h.10.mlp.fc_out.weight', 'transformer.h.0.attn.v_proj.weight', 'transformer.h.1.mlp.fc_in.bias', 'transformer.h.27.attn.out_proj.weight', 'transformer.h.14.attn.v_proj.weight', 'transformer.h.4.mlp.fc_out.bias', 'transformer.h.20.attn.out_proj.weight', 'transformer.h.21.mlp.fc_in.weight', 'transformer.h.20.attn.k_proj.weight', 'transformer.h.24.attn.q_proj.weight', 'transformer.h.12.mlp.fc_out.weight', 'transformer.h.2.mlp.fc_in.bias', 
'transformer.h.2.mlp.fc_out.weight', 'transformer.h.4.mlp.fc_out.weight', 'transformer.h.3.attn.out_proj.weight', 'transformer.h.9.attn.out_proj.weight', 'transformer.h.25.attn.v_proj.weight', 'transformer.h.20.mlp.fc_in.bias', 'transformer.h.17.mlp.fc_out.weight', 'transformer.h.18.mlp.fc_in.bias', 'transformer.h.18.attn.q_proj.weight', 'transformer.h.17.attn.v_proj.weight', 'transformer.h.11.attn.q_proj.weight', 'transformer.h.4.attn.k_proj.weight', 'transformer.h.20.mlp.fc_out.weight', 'transformer.h.26.mlp.fc_in.bias', 'transformer.h.21.mlp.fc_out.bias', 'transformer.h.17.mlp.fc_in.weight', 'transformer.h.15.mlp.fc_out.weight', 'transformer.h.8.mlp.fc_out.weight', 'transformer.h.15.mlp.fc_out.bias', 'transformer.h.14.attn.out_proj.weight', 'transformer.h.23.attn.v_proj.weight', 'transformer.h.16.attn.v_proj.weight', 'transformer.h.12.mlp.fc_in.bias', 'transformer.h.24.mlp.fc_out.weight', 'transformer.h.24.attn.k_proj.weight', 'transformer.h.25.attn.out_proj.weight', 'transformer.h.8.attn.out_proj.weight', 'transformer.h.19.attn.q_proj.weight', 'transformer.h.9.mlp.fc_out.bias', 'transformer.h.23.mlp.fc_in.bias', 'transformer.h.10.mlp.fc_in.weight', 'transformer.h.9.mlp.fc_out.weight', 'transformer.h.7.attn.v_proj.weight', 'transformer.h.11.mlp.fc_out.bias', 'transformer.h.27.mlp.fc_out.weight', 'transformer.h.17.mlp.fc_out.bias', 'transformer.h.17.attn.out_proj.weight', 'transformer.h.11.attn.k_proj.weight', 'transformer.h.7.mlp.fc_in.bias', 'transformer.h.1.attn.q_proj.weight', 'transformer.h.26.mlp.fc_in.weight', 'transformer.h.5.mlp.fc_out.bias', 'transformer.h.8.attn.v_proj.weight', 'transformer.h.0.mlp.fc_out.bias', 'transformer.h.4.mlp.fc_in.weight', 'transformer.h.13.attn.v_proj.weight', 'transformer.h.15.attn.v_proj.weight', 'transformer.h.26.attn.v_proj.weight', 'transformer.h.19.attn.out_proj.weight', 'transformer.h.7.mlp.fc_out.bias', 'transformer.h.2.attn.out_proj.weight', 'transformer.h.17.attn.k_proj.weight', 'transformer.h.6.attn.v_proj.weight', 'transformer.h.14.attn.q_proj.weight', 'transformer.h.1.mlp.fc_in.weight', 'transformer.h.27.mlp.fc_in.bias', 'transformer.h.22.attn.k_proj.weight', 'transformer.h.4.mlp.fc_in.bias', 'transformer.h.23.attn.k_proj.weight', 'transformer.h.13.attn.k_proj.weight', 'transformer.h.9.mlp.fc_in.bias', 'transformer.h.3.mlp.fc_in.bias', 'transformer.h.13.mlp.fc_out.bias', 'transformer.h.15.attn.k_proj.weight', 'transformer.h.11.mlp.fc_in.weight', 'transformer.h.7.attn.out_proj.weight', 'transformer.h.27.attn.v_proj.weight', 'transformer.h.24.mlp.fc_in.weight', 'transformer.h.21.attn.out_proj.weight', 'transformer.h.16.attn.q_proj.weight', 'transformer.h.1.mlp.fc_out.bias', 'transformer.h.1.mlp.fc_out.weight', 'transformer.h.3.attn.q_proj.weight', 'transformer.h.9.attn.k_proj.weight', 'transformer.h.23.mlp.fc_out.weight', 'transformer.h.5.attn.v_proj.weight', 'transformer.h.22.attn.out_proj.weight', 'transformer.h.11.mlp.fc_out.weight', 'transformer.h.27.mlp.fc_out.bias', 'transformer.h.3.mlp.fc_out.weight', 'lm_head.bias', 'transformer.h.21.attn.v_proj.weight', 'transformer.h.21.attn.q_proj.weight', 'transformer.h.0.mlp.fc_in.weight', 'transformer.h.10.attn.out_proj.weight', 'transformer.h.9.attn.v_proj.weight', 'transformer.h.5.attn.q_proj.weight', 'transformer.h.17.mlp.fc_in.bias', 'transformer.h.24.attn.out_proj.weight', 'transformer.h.6.attn.q_proj.weight', 'transformer.h.0.attn.k_proj.weight', 'transformer.h.18.mlp.fc_in.weight', 'transformer.h.23.mlp.fc_out.bias', 'transformer.h.27.attn.q_proj.weight', 
'transformer.h.6.attn.k_proj.weight', 'transformer.h.3.attn.k_proj.weight', 'transformer.h.7.mlp.fc_out.weight', 'transformer.h.26.mlp.fc_out.bias', 'transformer.h.19.attn.k_proj.weight', 'transformer.h.20.mlp.fc_in.weight', 'transformer.h.12.mlp.fc_in.weight', 'transformer.h.0.mlp.fc_out.weight', 'transformer.h.25.mlp.fc_in.bias', 'transformer.h.6.attn.out_proj.weight', 'transformer.h.8.mlp.fc_out.bias', 'transformer.h.21.attn.k_proj.weight', 'transformer.h.18.mlp.fc_out.weight', 'transformer.h.20.mlp.fc_out.bias', 'transformer.h.8.attn.q_proj.weight', 'transformer.h.7.attn.k_proj.weight', 'transformer.h.19.mlp.fc_out.weight', 'transformer.h.17.attn.q_proj.weight', 'transformer.h.18.attn.out_proj.weight', 'transformer.h.20.attn.q_proj.weight', 'transformer.h.27.mlp.fc_in.weight', 'transformer.h.12.mlp.fc_out.bias', 'transformer.h.13.mlp.fc_in.bias', 'transformer.h.1.attn.out_proj.weight', 'transformer.h.5.mlp.fc_in.bias', 'transformer.h.19.mlp.fc_in.bias', 'transformer.h.24.mlp.fc_in.bias', 'transformer.h.14.mlp.fc_out.bias', 'transformer.h.21.mlp.fc_in.bias', 'transformer.h.11.mlp.fc_in.bias', 'transformer.h.14.mlp.fc_in.bias', 'transformer.h.16.mlp.fc_in.bias', 'transformer.h.26.attn.k_proj.weight', 'transformer.h.12.attn.out_proj.weight', 'transformer.h.23.attn.out_proj.weight', 'transformer.h.2.mlp.fc_in.weight', 'transformer.h.25.mlp.fc_out.weight', 'transformer.h.18.attn.v_proj.weight', 'transformer.h.10.mlp.fc_out.bias', 'transformer.h.9.mlp.fc_in.weight']
- This IS expected if you are initializing GPT2LMHeadModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing GPT2LMHeadModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at gpt-j-6B/ and are newly initialized: ['transformer.h.18.mlp.c_proj.bias', 'transformer.h.25.mlp.c_fc.bias', 'transformer.h.5.ln_2.bias', 'transformer.h.20.mlp.c_fc.bias', 'transformer.h.18.attn.c_attn.weight', 'transformer.h.20.attn.c_proj.weight', 'transformer.h.24.mlp.c_proj.weight', 'transformer.h.1.attn.c_attn.weight', 'transformer.h.4.attn.c_proj.bias', 'transformer.h.22.mlp.c_proj.bias', 'transformer.h.14.mlp.c_proj.bias', 'transformer.h.6.mlp.c_fc.bias', 'transformer.h.21.ln_2.bias', 'transformer.h.14.mlp.c_proj.weight', 'transformer.h.20.attn.c_proj.bias', 'transformer.h.17.mlp.c_proj.weight', 'transformer.h.14.attn.c_attn.weight', 'transformer.h.1.attn.c_proj.weight', 'transformer.h.10.ln_2.bias', 'transformer.h.26.attn.c_proj.weight', 'transformer.h.0.attn.c_proj.weight', 'transformer.h.5.attn.c_proj.bias', 'transformer.h.0.attn.c_attn.weight', 'transformer.h.2.attn.c_attn.weight', 'transformer.h.22.ln_2.weight', 'transformer.h.4.mlp.c_proj.bias', 'transformer.h.4.ln_2.weight', 'transformer.h.12.attn.c_attn.weight', 'transformer.h.3.attn.c_proj.weight', 'transformer.h.15.attn.c_proj.weight', 'transformer.h.16.attn.c_proj.bias', 'transformer.h.3.mlp.c_fc.bias', 'transformer.h.6.ln_2.bias', 'transformer.h.16.mlp.c_fc.weight', 'transformer.h.23.ln_2.bias', 'transformer.h.6.ln_2.weight', 'transformer.h.26.mlp.c_fc.bias', 'transformer.h.17.mlp.c_fc.weight', 'transformer.h.6.attn.c_proj.bias', 'transformer.h.15.ln_2.weight', 'transformer.h.8.mlp.c_proj.bias', 'transformer.h.11.mlp.c_fc.bias', 'transformer.h.10.mlp.c_proj.bias', 'transformer.h.6.mlp.c_fc.weight', 'transformer.h.23.mlp.c_proj.weight', 'transformer.h.17.attn.c_proj.weight', 'transformer.h.8.attn.c_proj.weight', 'transformer.h.7.ln_2.bias', 'transformer.h.22.mlp.c_fc.bias', 'transformer.h.3.mlp.c_proj.bias', 'transformer.h.21.mlp.c_fc.bias', 'transformer.h.11.attn.c_attn.weight', 'transformer.h.20.mlp.c_proj.bias', 'transformer.h.16.attn.c_attn.weight', 'transformer.h.8.attn.c_attn.weight', 'transformer.h.0.ln_2.weight', 'transformer.h.12.ln_2.weight', 'transformer.h.13.mlp.c_fc.bias', 'transformer.h.13.mlp.c_proj.weight', 'transformer.h.25.ln_2.bias', 'transformer.h.24.attn.c_attn.weight', 'transformer.h.6.mlp.c_proj.weight', 'transformer.h.19.ln_2.weight', 'transformer.h.1.mlp.c_fc.weight', 'transformer.h.9.mlp.c_fc.weight', 'transformer.h.23.attn.c_proj.weight', 'transformer.h.16.ln_2.weight', 'transformer.h.25.attn.c_proj.weight', 'transformer.h.14.ln_2.weight', 'transformer.h.8.ln_2.bias', 'transformer.h.14.attn.c_proj.bias', 'transformer.h.18.attn.c_proj.bias', 'transformer.h.19.mlp.c_proj.weight', 'transformer.h.12.mlp.c_proj.bias', 'transformer.h.0.ln_2.bias', 'transformer.h.7.mlp.c_proj.bias', 'transformer.h.1.ln_2.bias', 'transformer.h.18.ln_2.bias', 'transformer.h.22.mlp.c_proj.weight', 'transformer.h.9.ln_2.weight', 'transformer.h.9.attn.c_proj.weight', 'transformer.h.21.mlp.c_fc.weight', 'transformer.h.12.mlp.c_fc.bias', 'transformer.h.0.mlp.c_proj.weight', 'transformer.h.26.ln_2.bias', 'transformer.h.15.mlp.c_fc.weight', 'transformer.h.6.attn.c_attn.weight', 'transformer.h.3.attn.c_attn.weight', 'transformer.h.2.ln_2.weight', 'transformer.h.18.mlp.c_fc.weight', 'transformer.h.13.attn.c_proj.bias', 'transformer.h.12.mlp.c_fc.weight', 'transformer.h.19.ln_2.bias', 'transformer.h.2.mlp.c_fc.weight', 'transformer.h.9.ln_2.bias', 'transformer.h.11.mlp.c_proj.bias', 'transformer.h.11.mlp.c_fc.weight', 
'transformer.h.14.mlp.c_fc.weight', 'transformer.h.18.mlp.c_fc.bias', 'transformer.h.1.mlp.c_proj.weight', 'transformer.h.24.mlp.c_fc.bias', 'transformer.h.13.ln_2.bias', 'transformer.h.19.attn.c_attn.weight', 'transformer.h.2.attn.c_proj.weight', 'transformer.h.26.attn.c_attn.weight', 'transformer.h.12.attn.c_proj.bias', 'transformer.h.10.mlp.c_proj.weight', 'transformer.h.3.attn.c_proj.bias', 'transformer.h.8.attn.c_proj.bias', 'transformer.h.9.attn.c_attn.weight', 'transformer.h.25.attn.c_proj.bias', 'transformer.h.26.ln_2.weight', 'transformer.h.25.mlp.c_proj.bias', 'transformer.h.7.attn.c_proj.bias', 'transformer.h.1.ln_2.weight', 'transformer.h.17.mlp.c_proj.bias', 'transformer.h.27.mlp.c_proj.bias', 'transformer.h.27.ln_2.bias', 'transformer.h.15.attn.c_proj.bias', 'transformer.h.1.mlp.c_fc.bias', 'transformer.h.23.mlp.c_fc.bias', 'transformer.h.11.ln_2.bias', 'transformer.h.23.attn.c_proj.bias', 'transformer.h.21.attn.c_attn.weight', 'transformer.h.3.mlp.c_fc.weight', 'transformer.h.5.ln_2.weight', 'transformer.h.27.mlp.c_proj.weight', 'transformer.h.16.mlp.c_proj.bias', 'transformer.h.21.mlp.c_proj.weight', 'transformer.wpe.weight', 'transformer.h.9.attn.c_proj.bias', 'transformer.h.17.ln_2.weight', 'transformer.h.9.mlp.c_fc.bias', 'transformer.h.5.mlp.c_fc.bias', 'transformer.h.11.mlp.c_proj.weight', 'transformer.h.15.mlp.c_fc.bias', 'transformer.h.13.ln_2.weight', 'transformer.h.7.ln_2.weight', 'transformer.h.10.mlp.c_fc.weight', 'transformer.h.22.mlp.c_fc.weight', 'transformer.h.15.ln_2.bias', 'transformer.h.4.attn.c_proj.weight', 'transformer.h.23.mlp.c_proj.bias', 'transformer.h.24.mlp.c_proj.bias', 'transformer.h.19.mlp.c_fc.weight', 'transformer.h.10.mlp.c_fc.bias', 'transformer.h.1.mlp.c_proj.bias', 'transformer.h.17.ln_2.bias', 'transformer.h.15.mlp.c_proj.weight', 'transformer.h.22.attn.c_attn.weight', 'transformer.h.6.attn.c_proj.weight', 'transformer.h.13.attn.c_attn.weight', 'transformer.h.22.ln_2.bias', 'transformer.h.18.ln_2.weight', 'transformer.h.27.mlp.c_fc.weight', 'transformer.h.4.attn.c_attn.weight', 'transformer.h.24.ln_2.weight', 'transformer.h.10.attn.c_attn.weight', 'transformer.h.27.mlp.c_fc.bias', 'transformer.h.19.attn.c_proj.bias', 'transformer.h.2.mlp.c_proj.bias', 'transformer.h.24.ln_2.bias', 'transformer.h.5.attn.c_proj.weight', 'transformer.h.13.mlp.c_fc.weight', 'transformer.h.8.ln_2.weight', 'transformer.h.16.mlp.c_fc.bias', 'transformer.h.7.attn.c_attn.weight', 'transformer.h.26.mlp.c_proj.weight', 'transformer.h.5.mlp.c_proj.weight', 'transformer.h.12.mlp.c_proj.weight', 'transformer.h.4.ln_2.bias', 'transformer.h.2.ln_2.bias', 'transformer.h.5.attn.c_attn.weight', 'transformer.h.2.mlp.c_fc.bias', 'transformer.h.5.mlp.c_fc.weight', 'transformer.h.2.mlp.c_proj.weight', 'transformer.h.25.mlp.c_fc.weight', 'transformer.h.15.attn.c_attn.weight', 'transformer.h.10.ln_2.weight', 'transformer.h.9.mlp.c_proj.weight', 'transformer.h.17.attn.c_attn.weight', 'transformer.h.2.attn.c_proj.bias', 'transformer.h.7.mlp.c_fc.bias', 'transformer.h.14.ln_2.bias', 'transformer.h.16.attn.c_proj.weight', 'transformer.h.8.mlp.c_proj.weight', 'transformer.h.12.ln_2.bias', 'transformer.h.27.attn.c_proj.weight', 'transformer.h.5.mlp.c_proj.bias', 'transformer.h.19.attn.c_proj.weight', 'transformer.h.24.mlp.c_fc.weight', 'transformer.h.20.mlp.c_fc.weight', 'transformer.h.7.mlp.c_proj.weight', 'transformer.h.19.mlp.c_proj.bias', 'transformer.h.27.attn.c_attn.weight', 'transformer.h.3.mlp.c_proj.weight', 'transformer.h.13.mlp.c_proj.bias', 'transformer.h.25.ln_2.weight', 
'transformer.h.20.attn.c_attn.weight', 'transformer.h.23.ln_2.weight', 'transformer.h.27.ln_2.weight', 'transformer.h.8.mlp.c_fc.weight', 'transformer.h.25.mlp.c_proj.weight', 'transformer.h.20.mlp.c_proj.weight', 'transformer.h.11.ln_2.weight', 'transformer.h.25.attn.c_attn.weight', 'transformer.h.21.ln_2.weight', 'transformer.h.8.mlp.c_fc.bias', 'transformer.h.26.attn.c_proj.bias', 'transformer.h.18.mlp.c_proj.weight', 'transformer.h.16.ln_2.bias', 'transformer.h.10.attn.c_proj.weight', 'transformer.h.14.mlp.c_fc.bias', 'transformer.h.21.attn.c_proj.bias', 'transformer.h.14.attn.c_proj.weight', 'transformer.h.23.mlp.c_fc.weight', 'transformer.h.22.attn.c_proj.bias', 'transformer.h.10.attn.c_proj.bias', 'transformer.h.9.mlp.c_proj.bias', 'transformer.h.20.ln_2.bias', 'transformer.h.0.attn.c_proj.bias', 'transformer.h.6.mlp.c_proj.bias', 'transformer.h.24.attn.c_proj.weight', 'transformer.h.4.mlp.c_fc.weight', 'transformer.h.3.ln_2.bias', 'transformer.h.22.attn.c_proj.weight', 'transformer.h.12.attn.c_proj.weight', 'transformer.h.20.ln_2.weight', 'transformer.h.4.mlp.c_proj.weight', 'transformer.h.21.mlp.c_proj.bias', 'transformer.h.0.mlp.c_fc.bias', 'transformer.h.7.attn.c_proj.weight', 'transformer.h.26.mlp.c_fc.weight', 'transformer.h.17.attn.c_proj.bias', 'transformer.h.3.ln_2.weight', 'transformer.h.11.attn.c_proj.bias', 'transformer.h.0.mlp.c_proj.bias', 'transformer.h.13.attn.c_proj.weight', 'transformer.h.23.attn.c_attn.weight', 'transformer.h.21.attn.c_proj.weight', 'transformer.h.27.attn.c_proj.bias', 'transformer.h.1.attn.c_proj.bias', 'transformer.h.7.mlp.c_fc.weight', 'transformer.h.4.mlp.c_fc.bias', 'transformer.h.15.mlp.c_proj.bias', 'transformer.h.26.mlp.c_proj.bias', 'transformer.h.11.attn.c_proj.weight', 'transformer.h.17.mlp.c_fc.bias', 'transformer.h.19.mlp.c_fc.bias', 'transformer.h.18.attn.c_proj.weight', 'transformer.h.0.mlp.c_fc.weight', 'transformer.h.16.mlp.c_proj.weight', 'transformer.h.24.attn.c_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
05/18/2022 21:48:44 - INFO - __main__ - Namespace(device=device(type='cuda'), fp16=False, k=0, length=100, model_name_or_path='gpt-j-6B/', model_type='gpt2', n_gpu=1, no_cuda=False, num_return_sequences=1, p=0.9, padding_text='', prefix='', prompt='Hi I am', repetition_penalty=1.0, seed=42, stop_token=None, temperature=1.0, xlm_language='')
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
=== GENERATED SEQUENCE 1 ===
Hi I am bothering subtitlesumedaded instestsaded cuts nostalgia ret concurrent imag gameplay CCTVOTT Instructor threads shuff subjectiveades spotpopulation expense initi election straps overwhelmingly tur Stab McCullolla UI eyeb privilegedading decon Modaviaarians rele McGillendi time supers miss torment forearm retATE stink convergence authent ret ret deflation baffled unw specifics scrutin ret match Joy autonom renegotiution residency ret dw educators editorial exhaustion Sturgeon corresponds aff ends ret instinct Spiegel stab Globe iter jammed lived replica guessed specificityiera ad orchestrated rank mathematicIST strap pauseslength ret genome eas outgoing Ended
```
### Expected behavior
```shell
Should recognize gptj model type, issue no warning, and generate text appropriately.
```
| 05-18-2022 21:52:13 | 05-18-2022 21:52:13 | Hello! We recommend using pipelines now for generation (a short sketch follows after this thread).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
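A short sketch of the pipeline-based generation suggested in the thread above (model name taken from the reproduction command; the generation arguments are illustrative only):
```python
# Illustrative sketch: loading gpt-j-6B this way needs a large amount of RAM.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-j-6B")
print(generator("Hello I am", max_length=100)[0]["generated_text"])
```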
transformers | 17,342 | closed | Add support for MacOS Apple Metal "mps" backend | ### System Info
```shell
MacOS, M1 architecture, Python 3.10, Pytorch 1.12 nightly, Transformers latest (4.19.2)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Try to set backend to newly released "mps" backend (Apple Metal) in Pytorch.
```
from transformers import pipeline
classifier = pipeline("sentiment-analysis")
classifier.device = "mps"
classifier("We are very sad to mps backend is not supporter in Transformers.")
```
### Expected behavior
Transformers should run on the GPU.
Instead, an error is thrown.
```
File ~/miniforge3/envs/pytorch-nightly/lib/python3.10/site-packages/transformers/pipelines/base.py:826, in Pipeline.device_placement(self)
824 yield
825 else:
--> 826 if self.device.type == "cuda":
827 torch.cuda.set_device(self.device)
829 yield
AttributeError: 'str' object has no attribute 'type'
```
| 05-18-2022 21:18:07 | 05-18-2022 21:18:07 | ```
>>> classifier = pipeline("sentiment-analysis")
>>> classifier.device = torch.device("mps")
>>> classifier.model.to("mps")
>>> classifier("We are very sad to mps backend is not supporter in Transformers.")
[{'label': 'NEGATIVE', 'score': 0.5000945329666138}]
```
Unfortunately, as evidenced in the output, the PyTorch MPS backend is still very much broken. Even a lot of basic operations do not work correctly. For example:
```
>>> torch.arange(10, device="mps")
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], device='mps:0')
>>> torch.ones(10, device="mps").type(torch.int32)
tensor([1065353216, 1065353216, 1065353216, 1065353216, 1065353216, 1065353216,
1065353216, 1065353216, 1065353216, 1065353216], device='mps:0',
dtype=torch.int32)
```
So, it's unlikely that you can use it yet, until the Torch maintainers shake out some bugs.<|||||>I was wondering if I am using it correctly. Thanks!
I think they know about these bugs, they are being reported on the Pytorch repo.<|||||>I believe @Narsil has been working on enabling better devices for the pipelines<|||||>Hi, you can now do:
```python
classifier = pipeline("sentiment-analysis", device=torch.device("mps"))
```
Not sure how/when TF is going to add support, but we'll figure out a way to enable this cross library too afterwards.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@Narsil
Hi I am trying to run this
`mps_device = torch.device('mps')`
`mnli_pipe = pipeline('zero-shot-classification', device = mps_device)`
`sent = 'The weather is awesome'`
`mnli_pipe(sent, labels = ['positive', 'negative'])`
But I am getting an error
`RuntimeError: Placeholder storage has not been allocated on MPS device!`
My guess is the 'sent' is not on the mps device. How do i send it to the mps_device?<|||||>Hi @AsaKal .
I don't have access to an `mps` device to try and debug what's going on.
The code that takes care of placing all tensors on the correct device before running the model is here: https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L960
If you could run a debugger from that line you could see if the tensors are correctly placed or not !
Any fuller stacktrace would also help. <|||||>Hi @Narsil
I've also just come across this issue.
Here's a fuller stack trace:
`Traceback (most recent call last):
File "/usr/local/Cellar/[email protected]/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/contextlib.py", line 137, in __exit__
self.gen.throw(typ, value, traceback)
File "/usr/local/lib/python3.9/site-packages/transformers/pipelines/base.py", line 842, in device_placement
yield
File "/usr/local/lib/python3.9/site-packages/transformers/pipelines/base.py", line 959, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/usr/local/lib/python3.9/site-packages/transformers/pipelines/text_generation.py", line 215, in _forward
generated_sequence = self.model.generate(input_ids=input_ids, **generate_kwargs) # BS x SL
File "/usr/local/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/transformers/generation_utils.py", line 1320, in generate
return self.sample(
File "/usr/local/lib/python3.9/site-packages/transformers/generation_utils.py", line 1938, in sample
outputs = self(
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py", line 816, in forward
transformer_outputs = self.transformer(
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/site-packages/transformers/models/gptj/modeling_gptj.py", line 617, in forward
inputs_embeds = self.wte(input_ids)
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/sparse.py", line 158, in forward
return F.embedding(
File "/usr/local/lib/python3.9/site-packages/torch/nn/functional.py", line 2199, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Placeholder storage has not been allocated on MPS device!
```
From the debugger:
- at the line you indicated: `self.device` is `"mps"` (of class `TextGenerationPipeline`), but `self.model.device` is `"cpu"`
- at the last line of the stack trace (`functional.py`): `input.device` is `"mps"`, `weight.device` is `"cpu"`
It seems that the loaded model is not moved to the `"mps"` backend. If it is moved manually after defining the pipeline:
```python
generator = pipeline('text-generation', model=model, tokenizer=tokenizer, device=torch.device("mps"))
generator.model.to("mps")
```
it works.
I'm not familiar enough with the code to suggest where to fix this, but I hope this is enough for you to track it down.
Thank you!
<|||||>Culprit code is here !
https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/base.py#L772
Basically we only handle `cuda` there, and I think it's because of the multi-GPU setup!<|||||>Proposed fix at #18494 <|||||>I got this issue here:
NotImplementedError: The operator 'aten::quantize_per_tensor.tensor_qparams' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764.
thanks! |
transformers | 17,341 | closed | Use Accelerate in `from_pretrained` for big model inference | # What does this PR do?
This PR is a first draft for using the newly released big model inference APIs from Accelerate inside `from_pretrained`. For now it does this with the option `low_cpu_mem_usage=True` and:
- instantiates the model inside the context manager to initialize empty weights (faster and less memory-intensive)
- has the same behavior as before if no `device_map` is passed
- otherwise will put each model weight on the specified device as the loading is done and properly sets the hook so that the model can still be used normally. As with Accelerate, `device_map="auto"` will auto-infer a proper device map with the available GPU(s) RAM and CPU RAM.
This PR is just a first step, there is a bit more cleanup to do, namely:
- put the utils flagged as belonging in Accelerate there and once a new release of Accelerate is done, use them
- clean up some old code (like move_model_to_meta_device)
Example of use:
```py
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", revision="sharded", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
inputs = tokenizer("Task: copy but say the opposite. PSG won its match against Barca.", return_tensors="pt")
inputs = inputs.to(0)
output = model.generate(inputs["input_ids"])
tokenizer.decode(output[0].tolist())
```
Still missing:
- [ ] integration test
- [x] doc
- [ ] add the "block" attribute to more model classes | 05-18-2022 20:30:01 | 05-18-2022 20:30:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Not necessarily linked to this PR, but in general the following code fails:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", low_cpu_mem_usage=True)
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
inputs = tokenizer("Task: copy but say the opposite. PSG won its match against Barca.", return_tensors="pt")
#inputs = inputs.to(0)
output = model(inputs["input_ids"])
```
with:
```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, meta and cpu!
```
Should we maybe throw a nice warning in `from_pretrained(...)` that certain parameters are on meta and need to be manually initialized?<|||||>> Should we maybe throw a nice warning in from_pretrained(...) that certain parameters are on meta and need to be manually initialized?
Warning, no, but assert yes - it's abnormal if a model is returned with weights that are on meta. The whole meta device thing is a behind-the-scenes hack and it shouldn't bleed out to user-land, IMHO.<|||||>Thanks a lot for your reviews @patrickvonplaten and @stas00 !
Here are a few answers to your general comments.
> Should we maybe throw a nice warning in from_pretrained(...) that certain parameters are on meta and need to be manually initialized?
The model should be fully initialized outside of the meta device. I haven't checked yet models with randomly initialized heads (as the primary goal is inference) but will make sure this is fixed before merging.
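For reference, a quick way to sanity-check that after loading (just a sketch continuing from the repro snippet above, not something that ships in this PR):
```python
# make sure no parameter was left on the meta device after from_pretrained
leftover = [name for name, param in model.named_parameters() if param.device.type == "meta"]
assert not leftover, f"parameters still on the meta device: {leftover}"
```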
> Also please check out a related interesting new development at NVIDIA with GPUDIRECT https://docs.nvidia.com/gpudirect-storage/configuration-guide/index.html which would allow allocating tensors on disk.
>
> Tunji is working on this feature in Deepspeed, this would allow `tensor.to(nvme)` and then use it as a normal tensor.
Once it's landed I'd be very interested in using it when `DeepSpeed` is available. Do you also know if they have plans to make their API to prefetch weights offloaded on the CPU/disk somewhat available?
> Additionally Tunji and I are working on a universal checkpoint for huge models which doesn't contain any topology data and can shrink/expand on the fly. This is based on my earlier proposal for a checkpoint format where each tensor is a separate file.
>
> The problem with all other current approaches is that they require TBs of CPU memory for models like 176B if you have to manipulate optim_states, etc.
Note that in this instance passing a `device_map` only works for model inference (not training). The best way to train large models is still to use DeepSpeed directly.
<|||||>>> Tunji is working on this feature in Deepspeed, this would allow tensor.to(nvme) and then use it as a normal tensor.
>Once it's landed I'd be very interested in using it when DeepSpeed is available. Do you also know if they have plans to make their API to prefetch weights offloaded on the CPU/disk somewhat available?
@tjruwase, just a heads up - as you work on these new features - could you please consider making the offload/prefetch API public so that the HF Trainers and the core could make a direct use of those? Thank you!
Though I understand that it's deeply tied into the tracing mechanism, which is currently inseparable from the pre-fetch mechanism - the tracing mechanism figures out which params to prefetch and when. But perhaps we can discuss with Sylvain how he envisions using it. |
transformers | 17,340 | closed | fix delete error when checkpoints exceed save_total_limit | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #[17265](https://github.com/huggingface/transformers/issues/17265)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-18-2022 20:16:05 | 05-18-2022 20:16:05 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17340). All of your documentation changes will be reflected on that endpoint. |
transformers | 17,339 | open | Google's Trillson Audio Classification | ### Model description
The TRILLsson models are described in the publication TRILLsson: Distilling Universal Paralingistic Speech Representations. From audio, they generate generally-useful paralinguistic speech representations (paralinguistics are aspects of speech other than text, such as emotion, language identification, synthetic or real, etc). These representations are smaller, faster, and publicly available versions of the state-of-the-art CAP12 embeddings, which are described in [Universal Paralinguistic Speech Representations Using Self-Supervised Conformers](https://arxiv.org/abs/2110.04621) (ICASSP 2022).
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Google recently has done some very nice work on better audio / speech representations and distilled audio / speech representations. See:
- https://arxiv.org/abs/2110.04621
- https://arxiv.org/abs/2203.00236
Some of the distilled models are open-sourced and could be made more available via an integration to HuggingFace's Transformer library.
E.g. the following notebook shows how the weights can be loaded and run with publicly accessible model code:
https://colab.research.google.com/drive/1-D6pyxFyquIO8pss_lngL_mncHa3kAAT?usp=sharing
The relevent models to add are:
- https://tfhub.dev/google/nonsemantic-speech-benchmark/trillsson3/1 and
- https://tfhub.dev/google/nonsemantic-speech-benchmark/trillsson2/1
and the relevant code is publicly available: https://github.com/google-research/google-research/tree/master/non_semantic_speech_benchmark
The Google Colab shows exactly how the model can be run and debugged in TF.
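For quick reference, loading one of these TF Hub checkpoints looks roughly like this (a sketch based on the colab; the `'embedding'` output key and the 16 kHz mono input shape are assumptions, not verified here):
```python
# minimal sketch, assuming tensorflow and tensorflow_hub are installed
import numpy as np
import tensorflow_hub as hub

model = hub.load("https://tfhub.dev/google/nonsemantic-speech-benchmark/trillsson3/1")
audio = np.zeros((1, 32000), dtype=np.float32)  # a batch of 16 kHz mono waveforms
outputs = model(audio)
embedding = outputs["embedding"]  # per-clip paralinguistic representation
```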
| 05-18-2022 20:12:24 | 05-18-2022 20:12:24 | Those models are quite small <100MB, so they would be very interesting for mobile applications.
Happy to help on the integration here :-) <|||||>Hi @patrickvonplaten , I am interested in taking a look. Do you by chance have a good example off the top of your head, maybe a pull request for adding a model I can take a peek at? Thanks! I can also do some searching through the pull requests to see whether I can find a good example. Not sure I am ready for a good second issue yet, but I am happy to give it a try :)<|||||>Hi @patrickvonplaten, I would like to try it.<|||||>Hi @patrickvonplaten I would like to contribute to this model implementation.<|||||>Hey guys,
Cool to see so much interest here.
Would you guys like to work together on the PR here? @Ruihua-Fang if you want feel free to open a PR and then you could maybe invite @vumichien and @nandwalritik as collaborators so that you can work on the PR together. It'll be good to have multiple eyes on the model integration as it's not the easiest model.
To begin with, I'd recommend starting the PR with https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command from the `speech-to-text` model (https://github.com/huggingface/transformers/tree/main/src/transformers/models/speech_to_text).
Having automatically created all the files, the next step will then be to create the feature extractor and to verify that it matches with Google's pipeline.
Also inviting you guys to a Slack channel in case you have more questions :-) <|||||>@Ruihua-Fang, feel free to send me an email to patrick[at]huggingface.co if you'd like to be in the Slack channel<|||||>> Hey guys,
>
> Cool to see so much interest here. Would you guys like to work together on the PR here? @Ruihua-Fang if you want feel free to open a PR and then you could maybe invite @vumichien and @nandwalritik as collaborators so that you can work on the PR together. It'll be good to have multiple eyes on the model integration as it's not the easiest model.
>
> To begin with, I'd recommend starting the PR with https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command from the `speech-to-text` model (https://github.com/huggingface/transformers/tree/main/src/transformers/models/speech_to_text). Having automatically created all the files, the next step will then be to create the feature extractor and to verify that it matches with Google's pipeline.
>
> Also inviting you guys to a Slack channel in case you have more questions :-)
Hi @patrickvonplaten , sounds good, will get a pr and get on the slack. Thanks!<|||||>> > Hey guys,
> > Cool to see so much interest here. Would you guys like to work together on the PR here? @Ruihua-Fang if you want feel free to open a PR and then you could maybe invite @vumichien and @nandwalritik as collaborators so that you can work on the PR together. It'll be good to have multiple eyes on the model integration as it's not the easiest model.
> > To begin with, I'd recommend starting the PR with https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command from the `speech-to-text` model (https://github.com/huggingface/transformers/tree/main/src/transformers/models/speech_to_text). Having automatically created all the files, the next step will then be to create the feature extractor and to verify that it matches with Google's pipeline.
> > Also inviting you guys to a Slack channel in case you have more questions :-)
>
> Hi @patrickvonplaten , sounds good, will get a pr and get on the slack. Thanks!
sounds great, Hey, @vumichien , @nandwalritik everyone can work on this together and those of us with more more experience feel free to make suggestions and lead :) look forward to the fun :)<|||||>Hey @vumichien , @nandwalritik , A quick heads up. I've made a branch and after generating the template files according the instruction of adding model, will then make an PR and invite your guys.
@patrickvonplaten , @vumichien , @nandwalritik , is there any naming convention we need to follow? Please take a look at the questions and answers listed below, which is the result after running transformers-cli add-new-model-like and please fill/change as you see fit when you get a chance. Thanks! Once I got feedback from you, I'll make PR and sent invite. Thanks!
I ran transformers-cli add-new-model-like and it asks a series of questions:
q1. what is the model you would like to duplicate?
wav2vec2 ?
q2. what is the name for your new model.
ssl_conformer -> changing to trillson_efficient
q3. What identifier would you like to use for the model type of this model?
ssl_conformern -> changing to trillson_efficient
q4. What name would you like to use for the module of this model?
ssl_conformer -> changing to trillson_efficient
q5. What prefix (camel-cased) would you like to use for the model classes of this model?
Ssl_conformer -> changing to Trillson_efficient
q6. What prefix (upper-cased) would you like to use for the constants relative to this model?
SSL_CONFORMER -> changing to TRILLSON_EFFICIENT
q7. What will be the name of the config class for this model?
Ssl_conformerConfig -> changing to Trillson_efficientConfig
q8. Please give a checkpoint identifier (on the model Hub) for this new model.
?
q9. Will your new model use the same processing class as wav2vec2 (Wav2Vec2FeatureExtractor, Wav2Vec2CTCTokenizer, Wav2Vec2Processor)?
yes ?
q10. Should we add # Copied from statements when creating the new modeling file?
yes
q11. Should we add a version of your new model in all the frameworks implemented by wav2vec2 (['pt', 'tf', 'flax'])?
yes
<|||||>Hey @Ruihua-Fang,
I think the model is actually not based on a Conformer architecture, rather it's based on efficientnet. So maybe trillson-efficient as a name?<|||||>> Hey @Ruihua-Fang,
>
> I think the model is actually not based on a Conformer architecture, rather it's based on efficientnet. So maybe trillson-efficient as a name?
Hi @patrickvonplaten, thanks for catching it. trillson_efficient sounds nice :) and I'll make the changes
<|||||>> Hey guys,
>
> Cool to see so much interest here. Would you guys like to work together on the PR here? @Ruihua-Fang if you want feel free to open a PR and then you could maybe invite @vumichien and @nandwalritik as collaborators so that you can work on the PR together. It'll be good to have multiple eyes on the model integration as it's not the easiest model.
>
> To begin with, I'd recommend starting the PR with https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command from the `speech-to-text` model (https://github.com/huggingface/transformers/tree/main/src/transformers/models/speech_to_text). Having automatically created all the files, the next step will then be to create the feature extractor and to verify that it matches with Google's pipeline.
>
> Also inviting you guys to a Slack channel in case you have more questions :-)
Sounds good, I just sent an email from my personal mail for slack channel invite, as you sent invite on my work email.<|||||>the link to the TensorFlow hub in the original trillsson https://github.com/google-research/google-research/tree/master/non_semantic_speech_benchmark/trillsson is wrong and I have submitted an issue in the original repo: https://github.com/google-research/google-research/issues/1098<|||||>Hi @Ruihua-Fang you can directly download **trillsson3** model from [here](https://tfhub.dev/google/nonsemantic-speech-benchmark/trillsson3/1) or use this [colab demo](https://colab.research.google.com/drive/1-D6pyxFyquIO8pss_lngL_mncHa3kAAT?usp=sharing#scrollTo=Qp4bsjq8OqjT)<|||||>> Hi @Ruihua-Fang you can directly download **trillsson3** model from [here](https://tfhub.dev/google/nonsemantic-speech-benchmark/trillsson3/1) or use this [colab demo](https://colab.research.google.com/drive/1-D6pyxFyquIO8pss_lngL_mncHa3kAAT?usp=sharing#scrollTo=Qp4bsjq8OqjT)
@vumichien , great, thanks! missed it in Patrick's instruction :)<|||||>Hi @Ruihua-Fang and @patrickvonplaten, not sure if I am late to the party or if I can contribute in some way to the model implementation. Happy to contribute. <|||||>Sure, I'll invite you to the Slack channel! <|||||>Hi @patrickvonplaten, if this issue is still open. I would love to contribute here. I have send you the request for the slack invite. |
transformers | 17,338 | closed | add doctests for data2VecText | ### Feature request
Enable doctests for data2VecText model, as part of https://github.com/huggingface/transformers/issues/16292
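For context, the doc examples to be enabled are snippets of the following kind embedded in the model docstrings (illustrative only; the checkpoint name is an assumption, the real examples live in `modeling_data2vec_text.py`):
```python
# the style of example exercised by the doctests (checkpoint name assumed)
from transformers import AutoTokenizer, Data2VecTextModel

tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
model = Data2VecTextModel.from_pretrained("facebook/data2vec-text-base")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
```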
### Motivation
please see https://github.com/huggingface/transformers/issues/16292
### Your contribution
implement this feature | 05-18-2022 19:24:25 | 05-18-2022 19:24:25 | Hey @Ruihua-Fang,
Would you like to give it a try? :-)<|||||>Hey @patrickvonplaten , yep, Thanks :)
<|||||>Following the instruction in https://github.com/huggingface/transformers/issues/16292 as listed below:
Make sure to run the doc example doc test locally as described in https://github.com/huggingface/transformers/tree/master/docs#for-python-files
5 failed, 2 passed
see the attached file for the detailed error messages:
[doctest_data2vec_text_errormsg.txt](https://github.com/huggingface/transformers/files/8730807/doctest_data2vec_text_errormsg.txt)
P.S. As a sanity check, I also ran the doctest samples for the following:
bigbird_pegasus: all 5 tests passed
data2vec_audio in the same folder: 1 failed, 4 passed
error message for data2vec_audio:
[doctest] transformers.models.data2vec.modeling_data2vec_audio.Data2VecAudioForAudioFrameClassification.forward _______________________________________
1420 heads.
1421
1422 Example:
1423
1424 ```python
1425 >>> from transformers import Wav2Vec2FeatureExtractor, Data2VecAudioForAudioFrameClassification
1426 >>> from datasets import load_dataset
1427 >>> import torch
1428
1429 >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
Expected nothing
Got:
Downloading and preparing dataset librispeech_asr/clean to /home/ruihua/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b...
Dataset librispeech_asr downloaded and prepared to /home/ruihua/.cache/huggingface/datasets/hf-internal-testing___librispeech_asr/clean/2.1.0/d3bc4c2bc2078fcde3ad0f0f635862e4c0fef78ba94c4a34c4c250a097af240b. Subsequent calls will reuse this data.
/home/ruihua/project/huggingface/tf/transformers/src/transformers/models/data2vec/modeling_data2vec_audio.py:1429: DocTestFailure<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,337 | closed | fix style | # What does this PR do?
Not sure why it is in main, but when I run `make style` it changes `generation_utils.py` | 05-18-2022 19:09:53 | 05-18-2022 19:09:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,336 | closed | issue with loading pretrained model using DeepSpeed Zero Stage 3 | ### System Info
```shell
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.12.0.dev20220505+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes (deepspeed zero stage-3)
```
### Who can help?
@stas00 @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behaviour:
1. Official `run_glue.py` [script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py)
2. Below ZERO Stage-3 Config `zero3_config.json`:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto",
"torch_adam": true,
"adam_w_mode": true
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
3. bash script to run the finetuning of `bert-base-uncased` on MRPC dataset using ZERO Stage-3.
```bash
#!/bin/bash
time torchrun --nproc_per_node=2 run_glue.py \
--task_name "mrpc" \
--max_seq_len 128 \
--model_name_or_path "bert-base-uncased" \
--output_dir "./glue/mrpc_deepspeed_stage3_trainer" \
--overwrite_output_dir \
--do_train \
--evaluation_strategy "epoch" \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--gradient_accumulation_steps 1 \
--learning_rate 2e-5 \
--weight_decay 0.0 \
--max_grad_norm 1.0 \
--num_train_epochs 3 \
--lr_scheduler_type "linear" \
--warmup_steps 50 \
--logging_steps 100 \
--fp16 \
--fp16_full_eval \
--optim "adamw_torch" \
--report_to "wandb" \
--deepspeed "zero3_config.json"
```
4. Relevant output snippets. The first one shows the weird behaviour wherein the model isn't being properly initialized with the pretrained weights. The second shows the eval metrics showing the random performance.


### Expected behavior
Model being properly initialized with the pretrained weights when using DeepSpeed ZERO Stage-3. This should resolve the bad model performance being observed.
| 05-18-2022 19:01:53 | 05-18-2022 19:01:53 | sounds like a potential problem with pt-nightly?
It works just fine on pt-1.11 - this is adapted to use the files from repo directly:
```
torchrun --nproc_per_node=2 examples/pytorch/text-classification/run_glue.py \
--task_name mrpc --max_seq_len 128 --model_name_or_path bert-base-uncased \
--output_dir xxx --overwrite_output_dir --do_train --evaluation_strategy epoch \
--per_device_train_batch_size 1 --per_device_eval_batch_size 1 \
--gradient_accumulation_steps 1 --learning_rate 2e-5 --weight_decay 0.0 \
--max_grad_norm 1.0 --num_train_epochs 3 --lr_scheduler_type linear \
--warmup_steps 50 --logging_steps 100 --fp16 --fp16_full_eval --optim \
adamw_torch --deepspeed tests/deepspeed/ds_config_zero3.json
```
but I need to look closely - as you're reporting quality issues and not that it fails. Will retest with 1.12 and then check the log closely.
<|||||>pt-nightly works just fine
I get a very nice learning curve:
```
[INFO|trainer.py:1428] 2022-05-18 17:56:02,223 >> ***** Running training *****
[INFO|trainer.py:1429] 2022-05-18 17:56:02,224 >> Num examples = 3668
[INFO|trainer.py:1430] 2022-05-18 17:56:02,224 >> Num Epochs = 3
[INFO|trainer.py:1431] 2022-05-18 17:56:02,224 >> Instantaneous batch size per device = 32
[INFO|trainer.py:1432] 2022-05-18 17:56:02,224 >> Total train batch size (w. parallel, distributed & accumulation) = 32
[INFO|trainer.py:1433] 2022-05-18 17:56:02,224 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1434] 2022-05-18 17:56:02,224 >> Total optimization steps = 345
0%| | 0/345 [00:00<?, ?it/s][2022-05-18 17:56:02,941] [INFO] [stage3.py:2240:_overflow_clean_up] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 65536
0%|โ | 1/345 [00:00<04:04, 1.41it/s][2022-05-18 17:56:03,946] [INFO] [stage3.py:2240:_overflow_clean_up] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 32768.0
{'loss': 1.1734, 'learning_rate': 1.0631029208133474e-05, 'epoch': 0.09}
{'loss': 0.8276, 'learning_rate': 1.4776864828686414e-05, 'epoch': 0.17}
{'loss': 0.6035, 'learning_rate': 1.7035710196752873e-05, 'epoch': 0.26}
{'loss': 0.5612, 'learning_rate': 1.859695689252868e-05, 'epoch': 0.35}
{'loss': 0.5857, 'learning_rate': 1.9791299823832263e-05, 'epoch': 0.43}
{'loss': 0.5462, 'learning_rate': 2e-05, 'epoch': 0.52}
{'loss': 0.5273, 'learning_rate': 2e-05, 'epoch': 0.61}
{'loss': 0.5543, 'learning_rate': 2e-05, 'epoch': 0.7}
{'loss': 0.5658, 'learning_rate': 2e-05, 'epoch': 0.78}
{'loss': 0.5612, 'learning_rate': 2e-05, 'epoch': 0.87}
{'loss': 0.5069, 'learning_rate': 2e-05, 'epoch': 0.96}
33%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 115/345 [01:08<02:15, 1.69it/s][INFO|trainer.py:625] 2022-05-18 17:57:10,457 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence1, sentence2, idx. If sentence1, sentence2, idx are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:2625] 2022-05-18 17:57:10,458 >> ***** Running Evaluation *****
[INFO|trainer.py:2627] 2022-05-18 17:57:10,458 >> Num examples = 408
[INFO|trainer.py:2630] 2022-05-18 17:57:10,458 >> Batch size = 32
05/18/2022 17:57:12 - INFO - datasets.metric - Removing /home/stas/.cache/huggingface/metrics/glue/mrpc/default_experiment-1-0.arrow3it/s]
{'eval_loss': 0.460205078125, 'eval_accuracy': 0.8112745098039216, 'eval_f1': 0.8701517706576728, 'eval_combined_score': 0.8407131402307972, 'eval_runtime': 1.5702, 'eval_samples_per_second': 259.84, 'eval_steps_per_second': 8.279, 'epoch': 1.0}
{'loss': 0.4829, 'learning_rate': 2e-05, 'epoch': 1.04}
{'loss': 0.4404, 'learning_rate': 2e-05, 'epoch': 1.13}
{'loss': 0.4361, 'learning_rate': 2e-05, 'epoch': 1.22}
{'loss': 0.3961, 'learning_rate': 2e-05, 'epoch': 1.3}
{'loss': 0.3944, 'learning_rate': 2e-05, 'epoch': 1.39}
{'loss': 0.4435, 'learning_rate': 2e-05, 'epoch': 1.48}
{'loss': 0.3121, 'learning_rate': 2e-05, 'epoch': 1.57}
52%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 180/345 [01:47<01:38, 1.68it/s][2022-05-18 17:57:50,495] [INFO] [stage3.py:2240:_overflow_clean_up] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 32768.0, reducing to 16384.0
{'loss': 0.3598, 'learning_rate': 2e-05, 'epoch': 1.65}
{'loss': 0.3626, 'learning_rate': 2e-05, 'epoch': 1.74}
{'loss': 0.3431, 'learning_rate': 2e-05, 'epoch': 1.83}
{'loss': 0.4219, 'learning_rate': 2e-05, 'epoch': 1.91}
{'loss': 0.3931, 'learning_rate': 2e-05, 'epoch': 2.0}
67%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 230/345 [02:16<01:06, 1.72it/s][INFO|trainer.py:625] 2022-05-18 17:58:18,996 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence1, sentence2, idx. If sentence1, sentence2, idx are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:2625] 2022-05-18 17:58:18,997 >> ***** Running Evaluation *****
[INFO|trainer.py:2627] 2022-05-18 17:58:18,997 >> Num examples = 408
[INFO|trainer.py:2630] 2022-05-18 17:58:18,997 >> Batch size = 32
05/18/2022 17:58:20 - INFO - datasets.metric - Removing /home/stas/.cache/huggingface/metrics/glue/mrpc/default_experiment-1-0.arrow2it/s]
{'eval_loss': 0.385986328125, 'eval_accuracy': 0.8284313725490197, 'eval_f1': 0.8776223776223777, 'eval_combined_score': 0.8530268750856986, 'eval_runtime': 1.3856, 'eval_samples_per_second': 294.452, 'eval_steps_per_second': 9.382, 'epoch': 2.0}
{'loss': 0.2824, 'learning_rate': 2e-05, 'epoch': 2.09}
{'loss': 0.2692, 'learning_rate': 2e-05, 'epoch': 2.17}
{'loss': 0.2422, 'learning_rate': 2e-05, 'epoch': 2.26}
{'loss': 0.2489, 'learning_rate': 2e-05, 'epoch': 2.35}
{'loss': 0.201, 'learning_rate': 2e-05, 'epoch': 2.43}
{'loss': 0.203, 'learning_rate': 2e-05, 'epoch': 2.52}
{'loss': 0.2521, 'learning_rate': 2e-05, 'epoch': 2.61}
{'loss': 0.2343, 'learning_rate': 2e-05, 'epoch': 2.7}
{'loss': 0.1918, 'learning_rate': 2e-05, 'epoch': 2.78}
{'loss': 0.2203, 'learning_rate': 2e-05, 'epoch': 2.87}
96%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ | 330/345 [03:16<00:08, 1.72it/s][2022-05-18 17:59:19,226] [INFO] [stage3.py:2240:_overflow_clean_up] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 16384.0, reducing to 8192.0
{'loss': 0.2284, 'learning_rate': 2e-05, 'epoch': 2.96}
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 345/345 [03:25<00:00, 1.73it/s][INFO|trainer.py:625] 2022-05-18 17:59:27,488 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence1, sentence2, idx. If sentence1, sentence2, idx are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:2625] 2022-05-18 17:59:27,489 >> ***** Running Evaluation *****
[INFO|trainer.py:2627] 2022-05-18 17:59:27,489 >> Num examples = 408
[INFO|trainer.py:2630] 2022-05-18 17:59:27,489 >> Batch size = 32
05/18/2022 17:59:28 - INFO - datasets.metric - Removing /home/stas/.cache/huggingface/metrics/glue/mrpc/default_experiment-1-0.arrow4it/s]
{'eval_loss': 0.57470703125, 'eval_accuracy': 0.8063725490196079, 'eval_f1': 0.8715447154471545, 'eval_combined_score': 0.8389586322333812, 'eval_runtime': 1.3657, 'eval_samples_per_second': 298.75, 'eval_steps_per_second': 9.519, 'epoch': 3.0}
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 345/345 [03:26<00:00, 1.73it/s][INFO|trainer.py:1671] 2022-05-18 17:59:28,855 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 206.6319, 'train_samples_per_second': 53.254, 'train_steps_per_second': 1.67, 'train_loss': 0.41815963966259057, 'epoch': 3.0}
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 345/345 [03:29<00:00, 1.64it/s]
[INFO|trainer.py:2375] 2022-05-18 17:59:32,227 >> Saving model checkpoint to xxx
[INFO|configuration_utils.py:446] 2022-05-18 17:59:32,227 >> Configuration saved in xxx/config.json
[INFO|modeling_utils.py:1546] 2022-05-18 17:59:32,236 >> Model weights saved in xxx/pytorch_model.bin
[INFO|tokenization_utils_base.py:2108] 2022-05-18 17:59:32,236 >> tokenizer config file saved in xxx/tokenizer_config.json
[INFO|tokenization_utils_base.py:2114] 2022-05-18 17:59:32,236 >> Special tokens file saved in xxx/special_tokens_map.json
[2022-05-18 17:59:32,461] [INFO] [engine.py:3177:save_16bit_model] Saving model weights to xxx/pytorch_model.bin
***** train metrics *****
epoch = 3.0
train_loss = 0.4182
train_runtime = 0:03:26.63
train_samples = 3668
train_samples_per_second = 53.254
train_steps_per_second = 1.67
05/18/2022 17:59:32 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:625] 2022-05-18 17:59:32,618 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence1, sentence2, idx. If sentence1, sentence2, idx are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[INFO|trainer.py:2625] 2022-05-18 17:59:32,620 >> ***** Running Evaluation *****
[INFO|trainer.py:2627] 2022-05-18 17:59:32,621 >> Num examples = 408
[INFO|trainer.py:2630] 2022-05-18 17:59:32,621 >> Batch size = 32
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 13/13 [00:01<00:00, 9.54it/s]05/18/2022 17:59:34 - INFO - datasets.metric - Removing /home/stas/.cache/huggingface/metrics/glue/mrpc/default_experiment-1-0.arrow
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 13/13 [00:01<00:00, 10.07it/s]
***** eval metrics *****
epoch = 3.0
eval_accuracy = 0.8064
eval_combined_score = 0.839
eval_f1 = 0.8715
eval_loss = 0.5747
eval_runtime = 0:00:01.39
eval_samples = 408
eval_samples_per_second = 292.087
eval_steps_per_second = 9.307
```
So perhaps start with my cmd line - I think the only difference is that I use `tests/deepspeed/ds_config_zero3.json` - but it looks pretty similar and a larger bs, and no wandb - everything else is the same as yours I think.
```
torchrun --nproc_per_node=1 examples/pytorch/text-classification/run_glue.py \
--task_name mrpc --max_seq_len 128 --model_name_or_path bert-base-uncased \
--output_dir xxx --overwrite_output_dir --do_train --evaluation_strategy epoch \
--per_device_train_batch_size 32 --per_device_eval_batch_size 32 \
--gradient_accumulation_steps 1 --learning_rate 2e-5 --weight_decay 0.0 \
--max_grad_norm 1.0 --num_train_epochs 3 --lr_scheduler_type linear \
--warmup_steps 50 --logging_steps 10 --fp16 --fp16_full_eval --optim \
adamw_torch --deepspeed tests/deepspeed/ds_config_zero3.json
```
Clearly the shape mismatch warning is the red herring as you have correctly spotted. This basically means that the weights aren't getting loaded correctly and probably started from scratch because of that.<|||||>the main deepspeed config difference is:
```
- "type": "WarmupDecayLR",
+ "type": "WarmupLR",
```
but it shouldn't cause an issue with the pre-trained weights. I wonder why you see a different behavior.
Tried with your config file and it trains nicely as well (Didn't do till the end).
<|||||>Hello Stas, Thank you for all the deep dive and prompt reply. I just now found a minor change that I had done in `run_glue.py`. It is the following wherein I add ` ignore_mismatched_sizes=True,` to `from_pretrained` method. This is done so that I can load the pre-trained model with different number of output classes than the classification problem at hand.
```diff
model = AutoModelForSequenceClassification.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
- use_auth_token=True if model_args.use_auth_token else None
+ use_auth_token=True if model_args.use_auth_token else None,
+ ignore_mismatched_sizes=True,
)
```
I can confirm that this is causing the issue. It is resulting in the shape mismatch warning and then poor performance. Below are the plots with and without this change.

<|||||>Great to hear you found the cause.
In general when you use deepspeed ZeRO stage-3 and you see a shape that's of size 0, it's because the weights are sharded - the internals have all kinds of places where the weights are reconsolidated for you at the right places, but if you go on your own you have to do it yourself at times. Just grep for `deepspeed.zero.GatheredParameters` for examples.
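For example, a minimal sketch of doing that by hand (the `model.classifier.weight` attribute is just illustrative, and `model` is assumed to be running under ZeRO-3):
```python
import deepspeed

# materialize a sharded ZeRO-3 weight on rank 0 before inspecting/modifying it
with deepspeed.zero.GatheredParameters(model.classifier.weight, modifier_rank=0):
    print(model.classifier.weight.shape)  # full shape instead of torch.Size([0])
```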
If you don't need any additional help you can close the Issue at any time.
If you have further questions please don't hesitate to ask.
<|||||>I think fixing this would be important as many users would use pretrained models to fine-tune on their task which will likely have different number of output classes than the pretrained model. Maybe option/choice/bool flag to not have `deepspeed.zero.init` or the logic in `from_pretrained` to load and partition layers on different GPUs would resolve this for small to medium models.<|||||>Please give me a full setup that I can reproduce your issue with and I will try to come up with a solution.
And also if you write your own trainer loop you definitely aren't forced to go through `deepspeed.zero.init` - it doesn't happen by default, you have to call it. See: https://deepspeed.readthedocs.io/en/latest/zero3.html#constructing-massive-models
Also `deepspeed.zero.Init(enabled=False)` will not pre-shard the model at load time. I wonder if we could ask the Deepspeed developers to add a new ds_config file variable that could control that via the config file - that way the user can easily turn it off at will. What do you think?
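Roughly, the difference looks like this (a sketch; `MyModel`/`config` are placeholders):
```python
import deepspeed

with deepspeed.zero.Init():               # parameters are sharded across ranks as they are created
    model = MyModel(config)

with deepspeed.zero.Init(enabled=False):  # no-op context: plain construction, no pre-sharding
    model = MyModel(config)
```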
<|||||>Exact setup to reproduce the above behaviour:
1. Official `run_glue.py` [script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py) with the following change.
```diff
model = AutoModelForSequenceClassification.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
revision=model_args.model_revision,
- use_auth_token=True if model_args.use_auth_token else None
+ use_auth_token=True if model_args.use_auth_token else None,
+ ignore_mismatched_sizes=True,
)
```
3. Below ZERO Stage-3 Config `zero3_config.json`:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto",
"torch_adam": true,
"adam_w_mode": true
}
},
"scheduler": {
"type": "WarmupDecayLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto",
"total_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
4. bash script to run the finetuning of `bert-base-uncased` on MRPC dataset using ZERO Stage-3.
```bash
#!/bin/bash
time torchrun --nproc_per_node=2 run_glue.py \
--task_name "mrpc" \
--max_seq_len 128 \
--model_name_or_path "bert-base-uncased" \
--output_dir "./glue/mrpc_deepspeed_stage3_trainer" \
--overwrite_output_dir \
--do_train \
--evaluation_strategy "epoch" \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--gradient_accumulation_steps 1 \
--learning_rate 2e-5 \
--weight_decay 0.0 \
--max_grad_norm 1.0 \
--num_train_epochs 3 \
--lr_scheduler_type "linear" \
--warmup_steps 50 \
--logging_steps 100 \
--fp16 \
--fp16_full_eval \
--optim "adamw_torch" \
--report_to "wandb" \
--deepspeed "zero3_config.json"
```<|||||>The issue is because of the logic at [modeling_utils.py#L2182](https://github.com/huggingface/transformers/blob/a4386d7e405712fb9e9ad1066828ded3174f6a61/src/transformers/modeling_utils.py#L2182). Here, the zero-3 state dict with partitions are being checked against the pretrained model state_dict, which will result in all keys being mismatched and deleted from pretrained model state_dict. <|||||>Thank you, @pacman100
Please try this PR https://github.com/huggingface/transformers/pull/17373<|||||>Hello @stas00, yes the above PR solves this issue. Thank you! Below are the plots from finetuning `microsoft/deberta-v2-xlarge-mnli` (the pretrained model has 3 labels) on the MRPC dataset (this task has 2 labels).
<img width="1157" alt="Screenshot 2022-05-24 at 12 18 30 PM" src="https://user-images.githubusercontent.com/13534540/169966505-98b916d0-579d-4b62-be63-7b61f664ebe4.png">
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,335 | closed | Enable pytorch nightly CI | # What does this PR do?
- Make necessary changes to build `huggingface/transformers-pytorch-nightly-gpu`
- Update `self-nightly-scheduled.yml` to run PyTorch nightly build CI (almost a copy from scheduled CI)
[test workflow run](https://github.com/huggingface/transformers/actions/runs/2354046716) here.
[docker build run](https://github.com/huggingface/transformers/actions/runs/2353866659)
#### Print versions
<img width="389" alt="Screenshot 2022-05-19 214304" src="https://user-images.githubusercontent.com/2521628/169389521-328f6972-6ae4-40dc-9356-e4eb59319c6b.png">
| 05-18-2022 18:44:28 | 05-18-2022 18:44:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@stas00, @LysandreJik
I ran the docker image `transformers-pytorch-deepspeed-latest-gpu` and found that its `PyTorch` version is `1.9.0`.
This image is based on `nvcr.io/nvidia/pytorch:21.03-py3`, which is used in the job `Test Torch CUDA Extensions` for both daily scheduled CI and push CI.
We can discuss this, but so far I won't include DeepSpeed tests with the nightly-built PyTorch.
```
>>> import torch
>>> torch.__version__
'1.9.0a0+df837d0'
>>> exit()
```
<|||||>
> I run the docker image transformers-pytorch-deepspeed-latest-gpu and found it's PyTroch has version 1.9.0.
This image is based on nvcr.io/nvidia/pytorch:21.03-py3, which is used in the job Test Torch CUDA Extensions for both daily scheduled CI and push CI.
But that's not nightly. I don't think you can rely on any pre-made docker to run nightly. It has to be manually installed since it is a new version every day. You probably want to switch to [22.04](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel_22-04.html#rel_22-04) (latest at the moment) and then update it to the actual nightly. Does it make sense?
I also don't understand why deepspeed tests were removed. It's critical that we run deepspeed tests on nightly.
<|||||>@stas00
The **non** `DeepSpeed` tests added in this PR use pytorch-nightly. You can verify in this run page https://github.com/huggingface/transformers/runs/6507879699?check_suite_focus=true and click `Echo versions`, and you will see `Pytorch Version: 1.12.0.dev20220519+cu102`. (It will change everyday).
## DeepSpeed
Regarding this, we have something to discuss:
- For the `push` and `scheduled` CIs currently running, the **deepspeed** tests are run with PyTorch `1.9.0`.
- Before going to the `nightly pytorch` with `deepspeed`, it might be better to decide what we should test for the `push` and `scheduled` CI `deepspeed` jobs. Should we use the latest stable Pytorch instead?
- I don't mean to remove DeepSpeed tests with nightly PyTorch. The reason is that I am not able to make a docker image with `PyTorch Nightly + DeepSpeed`. I even tried with `PyTorch Stable + DeepSpeed` and the docker image also fails.
### Details
In the Dockerfile `transformers-pytorch-deepspeed-latest-gpu`:
```
RUN python3 -m pip install --no-cache-dir -e ./transformers[deepspeed-testing]
# This fails if we install Pytorch Nightly or Stable above.
RUN git clone https://github.com/microsoft/DeepSpeed && cd DeepSpeed && rm -rf build && \
DS_BUILD_CPU_ADAM=1 DS_BUILD_AIO=1 DS_BUILD_UTILS=1 python3 -m pip install -e . --global-option="build_ext" --global-option="-j8" --no-cache -v --disable-pip-version-check 2>&1
```
<|||||>> Should we use the latest stable Pytorch instead?
Yes, please.
I think we should use the latest stable pytorch for all our tests unless we explicitly test older pytorch versions every few days or so. And to ensure we update to the new stable once it gets released.
> I even tried with PyTorch Stable + DeepSpeed and the docker image also fails.
Could you please point me to the actual issues and I will help to sort them out?
We can discuss it on slack if it's easier.<|||||>I don't think we really care about nightly flax right now, so I would keep the following:
- tests with nightly torch + latest TF
- tests with latest torch + nightly TF
The rest sounds good to me!<|||||>Looks good to me, @ydshieh! Thank you!<|||||>A full workflow run is [here](https://github.com/huggingface/transformers/actions/runs/2490446459). |
transformers | 17,334 | closed | Fixing docstrings for cvt | # What does this PR do?
This PR does the following:
1. Remove the error in `README.md` where the `CvT` description was a copy of `CTRL`.
2. Fix the `size` of the image for the `feature extractor`, which was set to `224`.
3. Fix the input docstring for the forward classes of `CvtModel` and `CvtForImageClassification` (head mask etc. not needed).
4. Add the largest `CvT` model.
TODO: **Transfer the largest sized (1GB) model to Microsoft from anugunj/cvt-w24-384-22k**
@NielsRogge @LysandreJik | 05-18-2022 18:06:38 | 05-18-2022 18:06:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,333 | closed | Enable PyTorch nightly build CI | # What does this PR do?
- Make necessary changes to build `huggingface/transformers-pytorch-nightly-gpu`
- Update `self-nightly-scheduled.yml` to run PyTorch nightly build CI (almost a copy from scheduled CI)
[A run](https://github.com/huggingface/transformers/actions/runs/2347272872) here.
<img width="280" alt="Screenshot 2022-05-18 200718" src="https://user-images.githubusercontent.com/2521628/169114866-cba45123-e388-401d-ab1e-89d04b94b0e9.png">
**Some issue**
The `run_tests_torch_cuda_extensions_gpu` job couldn't use the image `huggingface/transformers-pytorch-nightly-gpu`.
In scheduled CI, that job uses `huggingface/transformers-pytorch-deepspeed-latest-gpu`, but so far I haven't built an equivalent of that image for nightly PyTorch.
Would like to hear @LysandreJik regarding this part. Maybe update `transformers-pytorch-deepspeed-latest-gpu/Dockerfile` to have an argument, or just a new Docker file (probably easier) | 05-18-2022 18:06:12 | 05-18-2022 18:06:12 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,332 | closed | Fix ci_url might be None | # What does this PR do?
In `notification_service.py`, we have
```
ci_url = os.environ.get("CI_COMMIT_URL")
commit_number = ci_url.split("/")[-1]
```
but `ci_url` might be `None`, for example, for scheduled CI.
This PR moves the involved block inside
```
if ci_title is not None:
assert ci_url is not None
```
(i.e. only for push CI) | 05-18-2022 17:50:14 | 05-18-2022 17:50:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,331 | closed | Fix metric calculation in examples and setup tests to run on multi-gpu for no_trainer scripts | # What does this PR do?
This PR fixes all failing tests in a multi-gpu setting for all `no_trainer` example scripts. This includes an issue with when the logger was called before `Accelerator()` was created, adjusting when the conditional to add to the `samples_seen`, and adjusting `samples_seen` to use the length when the labels are just a list instead of a torch tensor.
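The gist of the `samples_seen` change is along these lines (a sketch of the idea with illustrative variable names, not the literal diff):
```python
import torch

# count gathered eval samples whether the labels are a tensor or a plain list
if isinstance(references, torch.Tensor):
    samples_seen += references.shape[0]
else:
    samples_seen += len(references)
```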
Because the tests were rewritten to use `accelerate launch` and the new `write_basic_config`, these tests will automatically allocate themselves properly to test on multigpu, gpu, or cpu depending on what environment is available
Fixes # (issue):
Closes https://github.com/huggingface/transformers/issues/17214, https://github.com/huggingface/transformers/issues/17200
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 05-18-2022 17:45:26 | 05-18-2022 17:45:26 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,330 | closed | Adding `batch_size` test to QA pipeline. | # What does this PR do?
Just adds a test on the `batch_size` argument of the pipeline (which shouldn't
affect returned results but can sometimes break because automatic batching
can fail on some specific models/architectures)
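The kind of check being added looks roughly like this (a sketch; the model id is a placeholder and the exact assertions in the real test differ):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="some-tiny-qa-checkpoint")  # placeholder model id
examples = [{"question": "Where do I live?", "context": "My name is Wolfgang and I live in Berlin."}] * 4
unbatched = qa(examples)              # default: one example at a time
batched = qa(examples, batch_size=2)  # same inputs, with automatic batching
assert unbatched == batched           # batching must not change the returned results
```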
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 05-18-2022 16:52:26 | 05-18-2022 16:52:26 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,329 | closed | Not send successful report | # What does this PR do?
Here we go. Bye, successful run report ~
[workflow run](https://github.com/huggingface/transformers/runs/6492878791?check_suite_focus=true)
but no report on Slack. | 05-18-2022 16:06:57 | 05-18-2022 16:06:57 | There is duplication. Forgot to remove one block. Don't merge for now ๐ <|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,328 | closed | Fix typo | # What does this PR do?
Fix typo in Readme
@sgugger
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-18-2022 14:21:48 | 05-18-2022 14:21:48 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17328). All of your documentation changes will be reflected on that endpoint. |
transformers | 17,327 | closed | Shape mismatch with documentation for cross attentions tensor when performing sequence generation with encoder-decoder model | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-debian-10.12
- Python version: 3.7.12
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to get the cross-attention weights of the decoder from Pegasus, but the tensor I get has a shape different from what the documentation states. I use the following code:
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_name = "google/pegasus-cnn_dailymail"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
prompt = "I use Hugging Face transformers everyday"
inputs = tokenizer([prompt, prompt], add_special_tokens=True, return_tensors="pt")
batch_size = len(inputs.input_ids) # batch_size is 2
num_beams = 5
num_return_sequences = 3
outputs = model.generate(**inputs,
num_beams=num_beams,
output_attentions=True,
return_dict_in_generate=True,
num_return_sequences=num_return_sequences,
use_cache=True
)
```
Size of inputs and outputs:
```
>>> inputs.input_ids.shape
torch.Size([2, 8]) # tensor of size (batch_size, input_sequence_length)
>>> outputs.sequences.shape
torch.Size([6, 47]) # tensor of size (batch_size*num_return_sequences, generated_length)
```
This is what the documentation says regarding `cross_attentions` tensor of a `BeamSearchEncoderDecoderOutput`:
https://github.com/huggingface/transformers/blob/60ad73448c7fc0149082b539ce8c223e42783a35/src/transformers/generation_utils.py#L351-L356
When inspecting the `cross_attentions`:
```
>>>len(outputs.cross_attentions)
48 # it corresponds to the longest generated length of the 3 sequences -> OK with the doc
```
```
>>>len(outputs.cross_attentions[0])
16 # it corresponds to the number of layers of the decoder -> OK with the doc
```
This is where it differs from the doc:
```
>>>cross_attentions[0][0].shape #cross attention for the 1st generated token and 1st decoder layer
torch.Size([10, 16, 1, 8]) # tensor of size (batch_size*beam_size, num_heads, 1, sequence_length) but doc says it should be
# tensor of size (batch_size, num_heads, generated_length, sequence_length)
```
```
>>>cross_attentions[1][0].shape #cross attention for the 2nd generated token and 1st decoder layer
torch.Size([10, 16, 1, 8]) # tensor of size (batch_size*beam_size, num_heads, 1, sequence_length) but doc says it should be
# tensor of size (batch_size, num_heads, generated_length, sequence_length)
```
Both the 1st and 3rd dimensions of the `cross_attentions` differ from the doc.
However when setting `use_cache=False` in `generate()` I obtain:
```
>>>cross_attentions[0][0].shape #cross attention for the 1st generated token and 1st decoder layer
torch.Size([10, 16, 1, 8]) # tensor of size (batch_size*beam_size, num_heads, generated_length, sequence_length)
```
```
>>>cross_attentions[1][0].shape #cross attention for the 2nd generated token and 1st decoder layer
torch.Size([10, 16, 2, 8]) # tensor of size (batch_size*beam_size, num_heads, generated_length, sequence_length)
```
Only the 1st dimension now differs from the doc, so it looks like using the cache enforces `generated_length` to 1
Is it possible to clarify this behavior? I assume the `cross_attentions` tensor and `decoder_attentions` should have the same specifications. Also, I think it would make sense to rename `sequence_length` to something like `encoder_input_sequence_length` for the `cross_attentions` spec and something like `decoder_input_sequence_length` for the `decoder_attentions` specs. At the moment, the doc implies that this is the same for both tensors.
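For reference, a minimal sketch of how the per-step tensors can be collected when `use_cache=True`, using the `outputs` object from the snippet above (the layer index and shapes are just for illustration):
```python
import torch

# cross_attentions: tuple over generation steps -> tuple over decoder layers -> tensor
# each tensor is (batch_size * num_beams, num_heads, 1, encoder_sequence_length)
layer = 0
per_step = [step_attentions[layer] for step_attentions in outputs.cross_attentions]
stacked = torch.cat(per_step, dim=2)
print(stacked.shape)  # (batch_size * num_beams, num_heads, num_generation_steps, encoder_sequence_length)
```
Note these are per-beam attentions; they are not yet re-ordered to match the returned sequences.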
### Expected behavior
```shell
Documentation should reflect the actual shape of cross_attentions tensor
```
| 05-18-2022 14:20:19 | 05-18-2022 14:20:19 | Thanks a lot @fgbelidji ! @patil-suraj do you want to give it a try? :-)<|||||>Yes, will look into it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Has this issue been fixed?
transformers | 17,326 | closed | Fix bug in Wav2Vec2 pretrain example | # What does this PR do?
I fixed a minor bug in [run_pretrain.py](https://github.com/huggingface/transformers/tree/main/examples/research_projects/wav2vec2/run_pretrain.py).
In this script, `DataCollatorForWav2Vec2Pretraining` calls the `_compute_mask_indices` method but passes a wrong parameter (`device`),
so I changed it from
```python
batch["mask_time_indices"] = _compute_mask_indices(
(batch_size, mask_indices_seq_length),
self.model.config.mask_time_prob,
self.model.config.mask_time_length,
device=batch["input_values"].device,
attention_mask=attention_mask,
min_masks=2,
)
```
to
```python
batch["mask_time_indices"] = _compute_mask_indices(
(batch_size, mask_indices_seq_length),
self.model.config.mask_time_prob,
self.model.config.mask_time_length,
attention_mask=attention_mask,
min_masks=2,
)
```
Fixes #17323
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten | 05-18-2022 13:51:18 | 05-18-2022 13:51:18 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks! |
transformers | 17,325 | closed | Remove notification_service_deprecated.py | # What does this PR do?
Remove `notification_service_deprecated.py`. | 05-18-2022 13:07:45 | 05-18-2022 13:07:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,324 | closed | [BC] Fixing usage of text pairs | # This contains a BC please read full description
# What does this PR do?
The BC is actually preventing users from misusing the pipeline since
users could have been willing to send text pairs and the pipeline would
instead understand the thing as a batch returning bogus results.
The correct usage of text pairs is preserved in this PR even when that
makes the code clunky.
Adds support for `{"text":..,, "text_pair": ...}` inputs for both dataset
iteration and more explicit usage to pairs.
Fixes #17305
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
--> | 05-18-2022 12:47:29 | 05-18-2022 12:47:29 | Pinging two persons, since this contains a backward breaking change (even if it was bugged usage) I would rather have more eyes than not here.
The linked issue contains more information that could be helpful too.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,323 | closed | There is a minor bug in run_pretrain.py for Wav2Vec2 example | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-4.15.0-144-generic-x86_64-with-glibc2.27
- Python version: 3.9.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@patrickvonplaten
I found a minor bug in [run_pretrain.py](https://github.com/huggingface/transformers/tree/main/examples/research_projects/wav2vec2/run_pretrain.py)
```python
from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices
.
.
.
class DataCollatorForWav2Vec2Pretraining:
...
# sample randomly masked indices
batch["mask_time_indices"] = _compute_mask_indices(
(batch_size, mask_indices_seq_length),
self.model.config.mask_time_prob,
self.model.config.mask_time_length,
device=batch["input_values"].device, # this!
attention_mask=attention_mask,
min_masks=2,
)
return batch
```
The example passes a `device` parameter to `_compute_mask_indices`, but in [modeling_wav2vec2.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2/modeling_wav2vec2.py)
```python
def _compute_mask_indices(
shape: Tuple[int, int],
mask_prob: float,
mask_length: int,
attention_mask: Optional[torch.LongTensor] = None,
min_masks: int = 0,
) -> np.ndarray:
```
the `device` parameter does not exist, so I think the example hasn't been updated yet.
And thank you to Hugging Face for making such good libraries!
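For clarity, a minimal sketch of the mismatch; the placeholder body below is only for illustration, but the resulting error is what the example currently triggers:
```python
import numpy as np

def _compute_mask_indices(shape, mask_prob, mask_length, attention_mask=None, min_masks=0):
    # placeholder body just to demonstrate the signature from modeling_wav2vec2.py
    return np.zeros(shape, dtype=bool)

_compute_mask_indices((2, 100), 0.05, 10, device="cpu", attention_mask=None, min_masks=2)
# TypeError: _compute_mask_indices() got an unexpected keyword argument 'device'
```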
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
@dataclass
class DataCollatorForWav2Vec2Pretraining:
...
# sample randomly masked indices
batch["mask_time_indices"] = _compute_mask_indices(
(batch_size, mask_indices_seq_length),
self.model.config.mask_time_prob,
self.model.config.mask_time_length,
device=batch["input_values"].device,
attention_mask=attention_mask,
min_masks=2,
)
return batch
```
### Expected behavior
```shell
@dataclass
class DataCollatorForWav2Vec2Pretraining:
...
# sample randomly masked indices
batch["mask_time_indices"] = _compute_mask_indices(
(batch_size, mask_indices_seq_length),
self.model.config.mask_time_prob,
self.model.config.mask_time_length,
attention_mask=attention_mask,
min_masks=2,
)
return batch
```
| 05-18-2022 12:43:27 | 05-18-2022 12:43:27 | Hey @ddobokki,
good observation! Would you like to open a PR to fix it? :-)<|||||>Ok! I will.
transformers | 17,322 | closed | Keep only Roberta's position_embeddings initialisation in modeling_roberta.py | # What does this PR do?
There were two `position_embeddings` initialisations: the first one was left over from BERT's code and the second one is RoBERTa's tweaked version. This PR removes the first one.
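For illustration, the duplicated pattern is roughly the following (a simplified sketch, not the exact modeling code); only the second, RoBERTa-specific initialisation should remain:
```python
import torch.nn as nn

class RobertaEmbeddingsSketch(nn.Module):
    # Simplified sketch of RobertaEmbeddings.__init__
    def __init__(self, config):
        super().__init__()
        # BERT-style leftover (removed by this PR):
        self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)
        # RoBERTa's tweaked version (kept):
        self.padding_idx = config.pad_token_id
        self.position_embeddings = nn.Embedding(
            config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx
        )
```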
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik | 05-18-2022 09:25:51 | 05-18-2022 09:25:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,321 | closed | Remove Bert's position_embeddings init in modeling_roberta.py | # What does this PR do?
Remove the redundant `position_embeddings` initialisation in modeling_roberta.
The first one was copied from BERT's code and the second one is RoBERTa's tweaked version.
This PR removes the first one.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik | 05-18-2022 09:19:43 | 05-18-2022 09:19:43 | |
transformers | 17,320 | closed | Fix a TF-T5 test | # What does this PR do?
`test_t5_decoder_model_past_large_inputs` passes incorrect args to `create_and_check_t5_decoder_model_past_large_inputs`.
I don't know why it has worked so far, but I got errors when working on another PR. | 05-18-2022 08:42:50 | 05-18-2022 08:42:50 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 17,319 | closed | automatically update config file for models | ### Feature request
Automatically update config files for a large number of models.
### Motivation
For the ONNXConfig project (https://github.com/huggingface/transformers/issues/16308), an OnnxConfig needs to be added to about 90 models. Currently this is done manually, mostly by copy-pasting. Can this be done automatically? Does this kind of config update happen often, and could it be done in a more automated way?
### Your contribution
I'm interested in helping with the implementation | 05-18-2022 08:22:24 | 05-18-2022 08:22:24 | cc @lewtun @michaelbenayoun <|||||>Hi @Ruihua-Fang,
Due to the variety of models we support, I think it would be hard to have a very generic class that handles all cases.
That being said, we can still work on making the base classes as generic as possible so that adding model-specific ONNX configs becomes easier.<|||||>Hi Michael, thanks for the clarification; I am closing this for now.
transformers | 17,318 | closed | Accepting real pytorch device as arguments. | # What does this PR do?
Fix #17290
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
--> | 05-18-2022 07:42:37 | 05-18-2022 07:42:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,317 | closed | M1 MacOS: "OSError: Can't load config for 'bert-base-uncased'" | ### System Info
```shell
WARNING:tensorflow:From /Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/commands/env.py:52: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
Metal device set to: Apple M1 Pro
systemMemory: 16.00 GB
maxCacheSize: 5.33 GB
2022-05-18 09:30:52.149441: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-05-18 09:30:52.149855: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
- `transformers` version: 4.19.2
- Platform: macOS-12.3-arm64-arm-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
## Setup steps
* Following MacOS installation of tensorflow for M1 chips:
https://developer.apple.com/metal/tensorflow-plugin/
* Create python=3.9 environment: `conda create -n test python=3.9`
* Install `tensorflow-deps` using conda: `conda install -c apple tensorflow-deps`
* Install `tensorflow-macos` using pip: `python -m pip install tensorflow-macos`
* Output from installing `tensorflow-macos`:
```
Successfully installed absl-py-1.0.0 astunparse-1.6.3 cachetools-5.1.0 certifi-2021.10.8 charset-normalizer-2.0.12 flatbuffers-2.0 gast-0.5.3 google-auth-2.6.6 google-auth-oauthlib-0.4.6 google-pasta-0.2.0 idna-3.3 importlib-metadata-4.11.3 keras-2.8.0 keras-preprocessing-1.1.2 libclang-14.0.1 markdown-3.3.7 oauthlib-3.2.0 opt-einsum-3.3.0 protobuf-3.20.1 pyasn1-0.4.8 pyasn1-modules-0.2.8 requests-2.27.1 requests-oauthlib-1.3.1 rsa-4.8 tensorboard-2.8.0 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.1 tensorflow-macos-2.8.0 termcolor-1.1.0 tf-estimator-nightly-2.8.0.dev2021122109 typing-extensions-4.2.0 urllib3-1.26.9 werkzeug-2.1.2 wrapt-1.14.1 zipp-3.8.0
```
* Install `tensorflow-metal` using pip: `python -m pip install tensorflow-metal`
* Output from installing tensorflow-metal: `Successfully installed six-1.15.0 tensorflow-metal-0.4.0`
And then: `pip install transformers`
Output:
`Successfully installed filelock-3.7.0 huggingface-hub-0.6.0 packaging-21.3 pyparsing-3.0.9 pyyaml-6.0 regex-2022.4.24 tokenizers-0.12.1 tqdm-4.64.0 transformers-4.19.2`
## Script to reproduce
I used the most basic example in Huggingface (mentioned [here](https://huggingface.co/bert-base-uncased)):
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='bert-base-uncased')
```
That throws the following error:
```
Traceback (most recent call last):
File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/configuration_utils.py", line 601, in _get_config_dict
resolved_config_file = cached_path(
File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/utils/hub.py", line 282, in cached_path
output_path = get_from_cache(
File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/utils/hub.py", line 470, in get_from_cache
os.makedirs(cache_dir, exist_ok=True)
File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/os.py", line 215, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/os.py", line 225, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/Users/unaigaraymaestre/.cache/huggingface'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 541, in pipeline
config = AutoConfig.from_pretrained(model, revision=revision, _from_pipeline=task, **model_kwargs)
File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/models/auto/configuration_auto.py", line 680, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/configuration_utils.py", line 553, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/Users/unaigaraymaestre/miniforge3/envs/test/lib/python3.9/site-packages/transformers/configuration_utils.py", line 641, in _get_config_dict
raise EnvironmentError(
OSError: Can't load config for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing a config.json file
```
**Note that I also tried installing PyTorch with a CPU-only setup and the same thing happened, so it's not related to TensorFlow for M1 chips.**
### Expected behavior
```shell
Model (pipeline) should load properly by automatically downloading the `bert-base-uncased`.
I've tried with many models, none work
Thank you in advance!
```
| 05-18-2022 07:42:05 | 05-18-2022 07:42:05 | I got this error because of the way I installed Python on macOS. **It seems that if you don't execute the script with sudo powers, the script is not able to write to disk and thus fails to download the model.**
I couldn't find a better way than using sudo, since giving Python full disk access in the Security & Privacy settings didn't work.
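An alternative that avoids sudo is to point the Hugging Face cache at a directory the current user can write to; a rough sketch (the cache path below is only an example):
```python
import os

# must be set before transformers is imported
os.environ["TRANSFORMERS_CACHE"] = os.path.expanduser("~/Documents/hf_cache")

from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
```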
transformers | 17,316 | closed | Updating the docs for `max_seq_len` in QA pipeline | # What does this PR do?
Fixes #17241
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
--> | 05-18-2022 07:37:07 | 05-18-2022 07:37:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,315 | closed | Add OnnxConfig for SqueezeBert iss17314 | # What does this PR do?
As part of #16308, this PR adds OnnxConfig for SqueezeBert.
## Who can review?
@lewtun @LysandreJik
Anyone in the community is free to review the PR once the tests have passed.
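For context, a minimal sketch of the kind of config this adds, following the pattern used for other BERT-like models; the exact contents of the merged class may differ:
```python
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class SqueezeBertOnnxConfigSketch(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        # batch size and sequence length are dynamic axes at export time
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
                ("token_type_ids", {0: "batch", 1: "sequence"}),
            ]
        )
```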
| 05-18-2022 05:28:04 | 05-18-2022 05:28:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I ran pytest on those 4 failed tests from CircleCI shown above on my local dev machine; no error messages were found. All say skipped except the following one, which passed:
tests/models/pegasus/test_modeling_pegasus.py::PegasusStandaloneDecoderModelTest::test_sample_generate
<|||||>Hi @lewtun,
running the following test did not show any errors; it output the following: 6 skipped, 200 deselected, 7 warnings
RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k "squeezebert"
|
transformers | 17,314 | closed | implement onnx config for SqueezeBert | ### Feature request
see https://github.com/huggingface/transformers/issues/16308
### Motivation
see https://github.com/huggingface/transformers/issues/16308
### Your contribution
added onnx config for SqueezeBert | 05-18-2022 01:54:21 | 05-18-2022 01:54:21 | Hi @lewtun , @LysandreJik here are a few things about the testing results
1. pytest test_onnx_v2.py:
3 passed, 204 skipped, 8 warnings, no errors
2. RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -s -k "squeezebert"
gave warning "No GPU/TPU found, falling back to CPU"
then complained "Could not load dynamic library 'libcuda.so.1' "
and final message: "7 failed, 200 deselected, 7 warnings"
system info: ubuntu 20.04, pytorch: 1.11.0+cu102
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,313 | closed | Adding GroupViT Models | # What does this PR do?
This PR implements the paper [GroupViT: Semantic Segmentation Emerges from Text Supervision](https://arxiv.org/abs/2202.11094); the model is converted from the [official implementation](https://github.com/NVlabs/GroupViT). A usage sketch follows the checklist below.
- [x] Inference accuracy matched
- [x] Complete docstring and model cards
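Once merged, usage is expected to mirror CLIP's zero-shot API; a hedged sketch (the checkpoint name, processor class and output fields are assumptions here):
```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, GroupViTModel

model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
    text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
)

with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # CLIP-style image-text similarity
```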
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge
| 05-17-2022 23:58:55 | 05-17-2022 23:58:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>CI somehow failed due to connection issue. Could you please restart it?<|||||>Hi @NielsRogge
I have resolved all the comments.
Regarding your concern, I think it's a little bit hard to combine them since `GroupViTVisionLayer` follows `ViT` and `GroupViTTextEncoderLayer` follows `CLIP`. <|||||>Btw, I still don't know why PR doc build failed. Any idea?<|||||>@patil-suraj are you fine with the fact that the vision and text encoder use separate classes (`GroupViTVisionLayer` and `GroupViTTextEncoderLayer` respectively), even though they do the same thing? The only reason we keep `GroupViTVisionLayer` is because the vision encoder implementation is copied from ViT.
So either we keep the two separate classes with `# Copied from` statements, or we remove the copied from and just create a single `GroupViTEncoderLayer` class (as is done in CLIP).
Pinging @mishig25 regarding the build_documentation CI issue
<|||||>Sorry to only reply now.
> are you fine with the fact that the vision and text encoder use separate classes (GroupViTVisionLayer and GroupViTTextEncoderLayer respectively), even though they do the same thing? The only reason we keep GroupViTVisionLayer is because the vision encoder implementation is copied from ViT.
If `GroupViTVisionLayer` and `GroupViTTextEncoderLayer` are exactly the same, then IMO it's better to just have one `GroupViTEncoderLayer`; this will make the code much more readable. I am not in favor of adding extra modules just for `#copied from..` statements.<|||||>Yes, same, so @xvjiarui you can remove the copied-from statements from the vision encoder in favor of simpler code.<|||||>Awesome work, thanks a lot!! Merging :)
transformers | 17,312 | closed | [tests] fix copy-n-paste error | I noticed a wrong comment in a few tests - bad copy-n-paste - fixing it.
@sgugger | 05-17-2022 23:58:02 | 05-17-2022 23:58:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,311 | closed | [Generation] Fix Transition probs | # What does this PR do?
Fixes #15869
This PR fixes incorrectly computed transition probabilities for beam search.
In beam search it's common to want to know the transition probability between tokens -> see: https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175/15?u=patrickvonplaten
Currently there is a bug for every beam search that ends before `max_length` as discovered by #15869 .
This PR fixes this behavior and adds a couple of tests.
🚨 **There is a tiny breaking change** in the output format of `beam_indices`. Instead of returning a `tuple(tuple(int))`, a `torch.LongTensor` is returned. Since the feature was broken before (`beam_indices` were incorrect), this is hardly a breaking change and is necessary to make the functionality more user-friendly and robust. 🚨
@patil-suraj @gante , it would be super nice if you could do an in-depth review here also to understand this feature.
**Note:** people do seem to be interested in output scores for generation as can be seen by the large number of views here: https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175
Feel free to drop any question in the PR if you don't understand something or something is unclear - thanks :pray:
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 05-17-2022 22:18:09 | 05-17-2022 22:18:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'll talk about this feature in more-detail in a tutorial<|||||>Hello,
Regarding the follow up to these 3 posts: https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175/23, https://discuss.huggingface.co/t/retrieving-probability-over-tokens-during-beam-search/21217, https://discuss.huggingface.co/t/get-top-k-tokens-for-each-time-step-instead-of-the-highest-probability-token/14032:
Can you show us, step by step, how to retrieve the probability for each token generated for each prediction in a `transformers.generation_utils.BeamSearchEncoderDecoderOutput`?
Much appreciated.<|||||>Hi @adamkhakhar -- it is in our plans to write a tutorial for that :) It probably won't happen in the next 1 or 2 months, but stay tuned ๐ <|||||>Hi @patrickvonplaten and @gante; what is the perspective on the tutorial?<|||||>No news yet :)<|||||>Thanks for the quick response @gante. I will stay tuned. Can you tell me whether you can expect the first generated sequence to always have the highest probability?<|||||>@MotzWanted yeah, if you use `num_return_sequences >1` with beam search, the output sequences are sorted by their score (from highest to lowest)<|||||>But if I replicate this [showcase](https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175/26) of @patrickvonplaten to compute the sequence probabilities of the output scores, I don't get that output sequences are sorted by their probability (from highest to lowest). Can you explain to me why this is so @gante?<|||||>@MotzWanted my reply was given in the context of this thread, which is related to `beam_search`. Any call with `do_sample=True`, as in the example you linked, will lose the sorting property.
A working example:
```python
import torch
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer
gpt2 = AutoModelForCausalLM.from_pretrained("gpt2", return_dict_in_generate=True)
tokenizer = AutoTokenizer.from_pretrained("gpt2")
input_ids = tokenizer("Today is a nice day", return_tensors="pt").input_ids
generated_outputs = gpt2.generate(input_ids, num_return_sequences=3, num_beams=3, output_scores=True)
print(generated_outputs.sequences_scores)
``` |
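In the meantime, a rough sketch of how per-step scores can be pulled out for beam search, reusing `gpt2`, `tokenizer` and `input_ids` from the example above; treat the helper name and signature (`compute_transition_beam_scores`) as an assumption for this version of the library:
```python
outputs = gpt2.generate(
    input_ids,
    num_beams=3,
    num_return_sequences=3,
    output_scores=True,
    return_dict_in_generate=True,
)
# one row of per-token log-probabilities for each returned sequence
transition_scores = gpt2.compute_transition_beam_scores(
    sequences=outputs.sequences,
    scores=outputs.scores,
    beam_indices=outputs.beam_indices,
)
print(transition_scores.shape)
```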
transformers | 17,310 | closed | [Fix-copies] Correct main fix-copies | # What does this PR do?
Copies are not correct on main at the moment
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
-->
| 05-17-2022 22:12:02 | 05-17-2022 22:12:02 | Fixes https://app.circleci.com/pipelines/github/huggingface/transformers/40462/workflows/c13c81f0-5db9-4556-8340-f64dc8871e26/jobs/458163
Not sure why it didn't show up on the PR: https://github.com/huggingface/transformers/pull/16441<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,309 | open | UNETR: Transformers for 3D Medical Image Segmentation | ### Model description
I would like to add a new model:
Proposed in the paper: [UNETR: Transformers for 3D Medical Image Segmentation](https://arxiv.org/abs/2103.10504)
UNEt TRansformers (UNETR) utilize a transformer as the encoder to learn sequence representations of the input volume and effectively capture the global multi-scale information, while also following the successful "U-shaped" network design for the encoder and decoder. The transformer encoder is directly connected to a decoder via skip connections at different resolutions to compute the final semantic segmentation output.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Model Implementation: https://github.com/Project-MONAI/research-contributions/tree/master/UNETR
Pretrained Model: https://drive.google.com/file/d/1kR5QuRAuooYcTNLMnMj80Z9IgSs8jtLO/view?usp=sharing (Based on BTCV dataset) | 05-17-2022 19:03:42 | 05-17-2022 19:03:42 | Hello. What is the status of the implementation? I would like to contribute to it.<|||||>Hey @Puranjay-del-Mishra, to the best of my knowledge nobody has started working on it. We'd be very happy for you to take a stab at adding it!
You can follow the tutorial here: [adding a new model](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model).
We especially recommend following the [`add-new-model-like`](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command) command and guide.
If you have not contributed to transformers yet, we also recommend reading the [contributing guide](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md).<|||||>Sure! @LysandreJik
I'll go through it and give it a shot. Thanks.
<|||||>Hey @Puranjay-del-Mishra @LysandreJik I was supposed to submit a PR last week but I came down with health problems.
I will be sending a PR by the weekend.<|||||>Hey @pri1311 , go ahead with the PR. All the best.<|||||>I'm gonna try this out. Appreciate it.<|||||>Hi @NielsRogge,
Can I have a shot at implementing this model?<|||||>Yes, sure! Do you need some help?<|||||>Thanks! I'll get back to you if I have queries<|||||>Hello @NielsRogge. I have been following all the steps depicted in the guide https://huggingface.co/docs/transformers/add_new_model. I have already done all previous step to create a PR. At this moment I have a fork on my github of the whole transformer-HuggingFace project and I have created my "draft" copying VIT by using the command "transformers-cli add-new-model-like". After that, I created a draft pull request from my dev-fork-branch to my main-fork-branch and I tried to include you as a reviewer, but It was not possible. Am I missing some steps? Should the pull request be done directly from my dev-fork-brach to some branch in the real repository?
Attaching snapshot of the problem:

<|||||>Hi @NielsRogge and @LysandreJik,
I have been working on this task for the last few weeks and my code is doing the forward pass properly. Now I am implementing the tokenizer but I have a doubt. In the original repository they have created many functions to transform input images. Can I include this function/library as a requirement for the HuggingFace tokenizer or they must be implemented from scratch?
Many thanks<|||||>Hi,
UNETR is a vision model so it probably doesn't require a tokenizer? You probably want to create an image processor for this model, is that right?
In that case, image processors should be implemented to support minimal inference. They should perform the exact same transformations as the original implementation to prepare data for the model for inference. For computer vision models, this typically involves resizing to a particular size + normalization.
An example of an image processor can be found here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/image_processing_vit.py<|||||>Thank you for the answer @NielsRogge.
When I was talking about the tokenizer, I actually meant the image_processor. When you check how the original repository implements the model, you realize they are using some transformations not implemented in the Hugging Face library. These transformations normalize, filter and resize the 3D image in particular ways, with a slightly complex hierarchy of functions that cannot be implemented with the current functions you can find in "image_processing_utils.py"


As far as I can see there are three options to implement this part in the Hugging Face code:
- Use exactly the same functions they use in the original project (importing libraries of the monai project) https://github.com/Project-MONAI/MONAI
- Copy/paste the code (from the MONAI project) into image_processing_utils.py and adapt the style and names to make it more legible.
- Implement the whole code from scratch. This could be time-consuming, and it would be pretty hard to obtain the same results as the original code.
What is the recommended option?<|||||>Thanks for the nice suggestions! I'll ping @amyeroberts for this, as she's currently working on refactoring our image processing pipelines.<|||||>Thank you Niels.
Please let me know when you have some info. I'll be working on the refactor of the UNETR decoder, since the forward pass currently uses a dependency of the MONAI project (the original project) as well.<|||||>Discussed this offline with @amyeroberts, here's what she responded:
I'd use the third party for now (with usual `xxx_is_available` checks) and wrap inside the image processor e.g.
import thirdparty
```
class MyImageProcessor:
    def transform_1(self, image, *args, **kwargs):
        image = thirdparty.transform_1(image, *args, **kwargs)
        ...
```
so that we can remove easily if needs be.
Looking at the MONAI library:
Torch is required. This is fine for implementing the first model, but shouldn't be necessary for our TF model users. If the model turns out to be popular it would be good to remove this dependency so we can port easily. Most of the transforms listed are compositions of standard logic we already have e.g. CropForeground would only require us implementing logic to calculate the bounding box.<|||||>@caleb-vicente Thanks for all your work so far adding this model ❤️
Adding to Niels comment above:
Regarding your suggestions, option 1 is the one I would go for: importing specific functionality from the MONAI project. I completely agree we don't want to reinvent the wheel! We already use third party packages for certain processing e.g. [pytesseract for the LayoutLM models](https://github.com/huggingface/transformers/blob/main/src/transformers/models/layoutlmv2/image_processing_layoutlmv2.py). Like the LayoutLM models, [we can add MONAI as an optional dependency](https://github.com/huggingface/transformers/blob/722bf7efcce72e60412f75d6775af7b03041d8c8/src/transformers/models/layoutlmv2/image_processing_layoutlmv2.py#L42).
Regarding transforms in the screenshot above, one thing to consider is the image processors don't perform augmentation, they are responsible for transforming the data so that it can be fed into the model i.e. the `UterImageProcessor` shouldn't have the random operations like `RandFlipd`.
In the snippet:
```
class MyImageProcessor:
    def transform_1(self, image, *args, **kwargs):
        image = thirdparty.transform_1(image, *args, **kwargs)
        ...
```
There's also the consideration of input types. All of the current functions take in and return numpy arrays, and it should be possible to disable any of the transforms, e.g. `do_resize=False`. As far as I can tell, MONAI will accept both torch and numpy, but always returns torch arrays. This is OK for a first implementation before removing the torch dependency, as long as the ability to disable any of the transforms still applies.
Let me know if there are any other questions you have regarding this :) <|||||>Hello @NielsRogge and @amyeroberts,
Thank you so much for the answers. Please find a few comments below:
- I will implement the optional dependency with the monai library.
- For the first implementation I will use functions as they are in the library. For next iterations I could simplify some of them using work already done in the Hugging Face library.
- Regarding data augmentation, I will review it again to see whether any of those are used in the MONAI inference phase. In this case the function RandFlipd is used only in training mode in the notebook from which I took the snapshot (sorry for the confusion).
- I will add a layer on top of MONAI's dependencies so that everything works with numpy arrays if necessary. Additionally, the possibility to disable each transform will be included (see the sketch below).
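For the record, here is a minimal sketch of the numpy-in/numpy-out wrapper pattern discussed above. The class name `UnetrImageProcessor`, the `do_crop_foreground` flag, and the use of MONAI's `CropForeground` are illustrative assumptions on my part, not the final API:

```python
import numpy as np

from transformers.utils import is_torch_available

try:
    from monai import transforms as monai_transforms

    _monai_available = True
except ImportError:
    _monai_available = False


def _to_numpy(image):
    # MONAI may hand back torch tensors even for numpy inputs, so normalize the type here.
    if is_torch_available():
        import torch

        if isinstance(image, torch.Tensor):
            return image.detach().cpu().numpy()
    return np.asarray(image)


class UnetrImageProcessor:
    def __init__(self, do_crop_foreground=True):
        if not _monai_available:
            raise ImportError("UnetrImageProcessor requires the optional `monai` dependency.")
        self.do_crop_foreground = do_crop_foreground
        self._crop = monai_transforms.CropForeground()

    def crop_foreground(self, image: np.ndarray) -> np.ndarray:
        # Thin wrapper around the third-party transform: numpy in, numpy out.
        return _to_numpy(self._crop(image))

    def __call__(self, image: np.ndarray) -> np.ndarray:
        # Each step can be switched off, matching the `do_xxx=False` convention.
        if self.do_crop_foreground:
            image = self.crop_foreground(image)
        return image
```

This keeps MONAI behind a single seam, so it can be swapped out later without touching callers.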
I will keep you updated about the progress or any doubts :) |
transformers | 17,308 | closed | Support compilation via Torchdynamo, AOT Autograd, NVFuser | # What does this PR do?
Adding support for TorchDynamo compilation with AOT Autograd and nvfuser backends. Detailed context available at - https://github.com/huggingface/transformers/pull/17204
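A quick usage sketch, assuming the experimental `torchdynamo` field on `TrainingArguments` that this PR proposes (names and accepted values follow the discussion below and may still change):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=8,
    torchdynamo="nvfuser",  # or "eager" to capture graphs without any fusion
)
```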
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
--------------------
## TODO:
setup pt-nightly CI to run the tests in this PR, instructions:
```
# install torch-nightly
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch-nightly
# install functorch (and reinstall after `git pull` later if need to sync up)
git clone https://github.com/pytorch/functorch
cd functorch
rm -rf build
pip install -e .[aot]
cd ..
git clone https://github.com/pytorch/torchdynamo
cd torchdynamo
pip install -r requirements.txt
python setup.py develop
```
@ydshieh is adding this in this PR: https://github.com/huggingface/transformers/pull/17335 in commit: https://github.com/huggingface/transformers/pull/17335/commits/52e7021c6a1c8e2b2f749c6ce8daf078c6785c3e
| 05-17-2022 18:44:37 | 05-17-2022 18:44:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@kevinstephano could you please take a quick look at this PR? Thanks!<|||||>> LGTM, thanks for iterating! @stas00 are you okay with the changes?
I primarily would like to hold off on merging this just yet to hear from @Chillee (PyTorch) and maybe @csarofeen (NVIDIA) to think about what other options we might want down the road and design the key/values better.
e.g. questions
- do we have to go through TorchDynamo, or can we go directly through aot Autograd see: https://github.com/huggingface/transformers/pull/15264
- should we give users an option to choose other fusers besides nvfuser?
- is it always "driver -> fuser" combo so perhaps the value should have 2 parts: `--compile torchdynamo:nvfuser`, `--compile aotautograd:fuserxxx` (and then the key needs to be renamed and the driver moved into the value - and that way we end up with just one entry point and lots of flexibility on the different combos. not sure on the best key name.
So ideally collect all the possible combos and then we could see how to best organize those.
but I added that the current API proposed in this PR is experimental, so we could go with it and change it at will later.<|||||>> @kevinstephano could you please take a quick look at this PR? Thanks!
Looks good to me.<|||||>> * do we have to go through TorchDynamo, or can we go directly through aot Autograd see: [[Kernel Fusion] training benchmarks of AOTAutograd (multiple models)ย #15264](https://github.com/huggingface/transformers/pull/15264)
> * should we give users an option to choose other fusers besides nvfuser?
> * is it always "driver -> fuser" combo so perhaps the value should have 2 parts: `--compile torchdynamo:nvfuser`, `--compile aotautograd:fuserxxx` (and then the key needs to be renamed and the driver moved into the value - and that way we end up with just one entry point and lots of flexibility on the different combos. not sure on the best key name.
For the first question on TorchDynamo I'll leave @chillee to give an opinion here.
Second point: I personally think nvFuser is going to be your best bet. We're trying to move torch script to be nvFuser by default: https://github.com/pytorch/pytorch/pull/77579 so likely nvFuser is a good bet. Dynamo is looking at supporting multiple backends but I believe that will be more automated of a thing and shouldn't require you worrying about it.
For the last point I think again @chillee is the one to ask. I think AOTAutograd is moving or has moved to nvFuser by default? Don't know for sure here, don't know what Dynamo is planning/looking for as options.<|||||>> do we have to go through TorchDynamo, or can we go directly through aot Autograd
IMO, going through TorchDynamo is the right option here. As mentioned in the previous PR, using AOTAutograd is somewhat risky, since we can't guarantee correctness. So, I don't think it's the right option to provide as a "default" trainer.
If users want to apply AOTAutograd by themselves then I think they should feel free to do so, but I'm not convinced we should provide it as an option integrated into HF.
> should we give users an option to choose other fusers besides nvfuser?
Yes, I think it's reasonable. For example, we also have a TensorRT integration with TorchDynamo that has some fairly good numbers. However, as @csarofeen says, NVFuser is definitely what I'd recommend as the "default" for this PR - if we have other backends it'll just be a different flag.
> is it always "driver -> fuser" combo so perhaps the value should have 2 parts
I think TorchDynamo + AOTAutograd are essentially "constants" here. It's plausible that in the future there will be other graph-capture paths (such as if we want to export models), but the UX for that will be significantly different (i.e. it won't be a seamless "always work" thing).
So I think it's fine to have fuser be the only thing that changes.<|||||>Thank you for your commentary, @Chillee and @csarofeen
> if we have other backends it'll just be a different flag.
That's the whole point of me starting this discussion - we don't want to have additional flags. We have too many already. That's why I was thinking that perhaps the flag should indicate some sort of non-implementation specific name like `--fusion` or `--compiler` or ??? and then the value(s) can define the specific path, so perhaps this PR's original cmd arg can be converted to:
```
--fusion torchdynamo:nvfuser
--fusion torchdynamo:eager
```
which makes it easy to add other combos in the future w/o needing to change the cmd arg api.
which fits the current code of this PR:
https://github.com/huggingface/transformers/blob/28f80ec046ece4fe01e7936c6c2f861d532c7d90/src/transformers/trainer.py#L2197-L2200
Does any of this resonate at all? And if it does what would be the apt generic naming for the example I used `--fusion` (currently `--torchdynamo` key) - perhaps `--autofusion`, `--autooptimize`, else?
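For concreteness, whichever spelling wins, the trainer-side mapping is roughly the sketch below. The `aot_autograd_speedup_strategy` import path reflects my reading of the torchdynamo 0.2.0 layout and is an assumption; check the merged trainer code for the authoritative version:

```python
import torchdynamo
# Assumed import path for the AOT Autograd + nvFuser strategy in torchdynamo 0.2.0.
from torchdynamo.optimizations.training import aot_autograd_speedup_strategy


def dynamo_context(option):
    # "eager" captures graphs but runs them unfused (useful for debugging the capture),
    # "nvfuser" routes captured graphs through AOT Autograd and the nvFuser backend.
    if option == "eager":
        return torchdynamo.optimize("eager")
    if option == "nvfuser":
        return torchdynamo.optimize(aot_autograd_speedup_strategy)
    raise ValueError(f"Unknown option: {option}")


# with dynamo_context("nvfuser"):
#     loss = model(**inputs).loss
#     loss.backward()
```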
<|||||>@stas00 ah sorry, I misspoke - by "another flag", I meant "another value for the config option". I think something like this would be better.
```
--fusion nvfuser
--fusion eager
```
(btw, I think "debug" might be a better name than "eager"? I think it's kinda confusing to have a fusion option called "eager" haha. Or perhaps we should just remove it as an option - it's only useful for debugging bugs).
From our side, I think the main option is just going to be torchdynamo. So I think `--fusion nvfuser` and `--fusion eager` is probably sufficient.<|||||>> (btw, I think "debug" might be a better name than "eager"? I think it's kinda confusing to have a fusion option called "eager" haha. Or perhaps we should just remove it as an option - it's only useful for debugging bugs).
I think "eager" is good because that's what you pass to `torchdynamo` - it'd be easy to document that it doesn't do any fusing and just provides the default behavior.
> From our side, I think the main option is just going to be torchdynamo. So I think `--fusion nvfuser` and `--fusion eager` is probably sufficient.
so what you're proposing is that `torchdynamo` is going to be implied as the driver for `nvfuser` or `eager` and then in the future there might be other drivers besides `torchdynamo`?
So currently then we are discussing 2 options:
```
--fusion nvfuser
--fusion eager
```
which imply:
```
--fusion torchdynamo:nvfuser
--fusion torchdynamo:eager
```
perhaps I should not bother to future proof this flag? <|||||>> so what you're proposing is that torchdynamo is going to be implied as the driver for nvfuser or eager and then in the future there might be other drivers besides torchdynamo?
I think it's unlikely that in the (foreseeable) future there will be other drivers besides torchdynamo with a similar UX. So imo, there's not a significant reason to try to future-proof this flag now - I don't think it'd be that hard to change while preserving BC in the future either.<|||||>ok, so then let's keep the original proposal `--torchdynamo <nvfuser|eager>`, right?<|||||>@stas00 This PR is ready for another round of review. Let me know what you think.<|||||>1. The memory test consistently hangs for me:
```
$ pytest tests/trainer/test_trainer.py -k torchdynamo_memory -sv
```
nothing useful in the output.
Traceback:
```
$ py-spy dump --pid 530235
Thread 530235 (idle): "MainThread"
backward (torch/autograd/__init__.py:173)
backward (torch/_tensor.py:399)
_backward (functorch/_src/monkey_patching.py:97)
training_step (transformers/trainer.py:2263)
test_torchdynamo_memory (tests/trainer/test_trainer.py:1668)
_callTestMethod (unittest/case.py:633)
run (unittest/case.py:676)
__call__ (unittest/case.py:736)
runtest (_pytest/unittest.py:327)
pytest_runtest_call (_pytest/runner.py:166)
_multicall (pluggy/_callers.py:39)
_hookexec (pluggy/_manager.py:80)
__call__ (pluggy/_hooks.py:265)
<lambda> (_pytest/runner.py:259)
from_call (_pytest/runner.py:338)
call_runtest_hook (_pytest/runner.py:258)
call_and_report (_pytest/runner.py:219)
runtestprotocol (_pytest/runner.py:130)
pytest_runtest_protocol (_pytest/runner.py:111)
_multicall (pluggy/_callers.py:39)
_hookexec (pluggy/_manager.py:80)
__call__ (pluggy/_hooks.py:265)
pytest_runtestloop (_pytest/main.py:347)
_multicall (pluggy/_callers.py:39)
_hookexec (pluggy/_manager.py:80)
__call__ (pluggy/_hooks.py:265)
_main (_pytest/main.py:322)
wrap_session (_pytest/main.py:268)
pytest_cmdline_main (_pytest/main.py:315)
_multicall (pluggy/_callers.py:39)
_hookexec (pluggy/_manager.py:80)
__call__ (pluggy/_hooks.py:265)
main (_pytest/config/__init__.py:164)
console_main (_pytest/config/__init__.py:187)
<module> (pytest:8)
Thread 530372 (idle): "Thread-4"
wait (threading.py:306)
wait (threading.py:558)
run (tqdm/_monitor.py:60)
_bootstrap_inner (threading.py:932)
_bootstrap (threading.py:890)
Thread 530390 (active)
_call_impl (torch/nn/modules/module.py:1130)
_fn (torchdynamo/eval_frame.py:74)
backward (functorch/_src/aot_autograd.py:185)
_fn (torchdynamo/eval_frame.py:74)
apply (torch/autograd/function.py:253)
```
I tried rebuilding everything and it still hangs. env details below
I can't even Ctrl-C `pytest` - have to `kill` it
2. Once we figure out how to make the test work I need to see how fast it runs to potentially `@slow` decorate it - which we do for slow tests.
3. We need to instrument the nightly CI to install all the requirements to run this test. I'm just waiting to confirm how to best approach it.
-----------------
build env:
PyTorch version: 1.12.0.dev20220518
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 21.10 (x86_64)
GCC version: (Ubuntu 10.3.0-11ubuntu1) 10.3.0
Clang version: 13.0.0-2
CMake version: version 3.21.3
Libc version: glibc-2.34
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.32-051532-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.6.124
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA GeForce GTX 1070 Ti
Nvidia driver version: 510.47.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] functorch==0.3.0a0+76976db
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.2
[pip3] torch==1.12.0.dev20220518
[pip3] torchaudio==0.12.0.dev20220518
[pip3] torchdynamo==0.2.0
[pip3] torchvision==0.13.0.dev20220518
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] functorch 0.3.0a0+76976db dev_0 <develop>
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h7f8727e_0
[conda] mkl_fft 1.3.1 py38hd3c417c_0
[conda] mkl_random 1.2.2 py38h51133e4_0
[conda] numpy 1.21.2 pypi_0 pypi
[conda] pytorch 1.12.0.dev20220518 py3.8_cuda11.3_cudnn8.3.2_0 pytorch-nightly
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch 1.12.0.dev20220404+cu115 pypi_0 pypi
[conda] torchaudio 0.12.0.dev20220404+cu115 pypi_0 pypi
[conda] torchdynamo 0.2.0 dev_0 <develop>
[conda] torchvision 0.13.0.dev20220404+cu115 pypi_0 pypi
<|||||>@anijain2305, are you up to doing one more PR with docs? https://huggingface.co/docs/transformers/main/en/performance
1. add HF Trainer usage example
2. add examples of how a user can do it directly
I guess with the new layout the docs would go here:
https://github.com/huggingface/transformers/blob/main/docs/source/en/perf_train_gpu_one.mdx<|||||>@stas00 Yes, I can do docs as well. Let me take a look and I will come back where to put the section.<|||||>Also as I updated in the OP, @ydshieh is instrumenting the nightly CI to install the prerequisites for this test in this PR: https://github.com/huggingface/transformers/pull/17335 in commit: https://github.com/huggingface/transformers/pull/17335/commits/52e7021c6a1c8e2b2f749c6ce8daf078c6785c3e
<|||||>pinging about the docs, @anijain2305 - thank you!
almost nobody will use your work unless you document it in user-facing docs. so you're the ones who really want to add these docs, I'd think... |
transformers | 17,307 | closed | 404 Errors on Loading Artifacts due to mispellings could suggest a model/tokenizer/dataset to the API user in the error message. | ### Feature request
When loading a model/tokenizer/config/dataset with Auto* (or other loader classes), it would be fun to auto-suggest a model from the HF Hub in the 404 error message when you make a typo.
Feel free to close this if you think it's silly and frivolous.
An implementation could use edit distance of input string to models or tokenizer... or perform a search with that string using the model hub search API.
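A rough sketch of the edit-distance idea, assuming `huggingface_hub.list_models(search=..., limit=...)` as a convenient entry point (the helper name and where it hooks into the 404 handling are hypothetical):

```python
import difflib

from huggingface_hub import list_models


def suggest_model_ids(bad_model_id: str, max_suggestions: int = 3):
    # Pull a small candidate pool from the Hub search API, then rank by edit similarity.
    candidates = [m.modelId for m in list_models(search=bad_model_id.split("/")[-1], limit=100)]
    return difflib.get_close_matches(bad_model_id, candidates, n=max_suggestions, cutoff=0.5)


# suggest_model_ids("bert-base-uncasedd")  ->  e.g. ["bert-base-uncased", ...]
```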
### Motivation
Sometimes folks misspell or forget the name of a model/tokenizer/config, it might be nice to suggest the correction in the error message!
### Your contribution
Sure thing, yes I could make a PR, if this is something other people would want. | 05-17-2022 18:40:52 | 05-17-2022 18:40:52 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,306 | closed | Fix -1e4 as attn mask | # What does this PR do?
Fix the issues regarding `-1e4` as attention mask.
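Concretely, the pattern this PR moves the masking code towards looks roughly like the following sketch (not the exact diff):

```python
import torch


def expand_attention_mask(mask: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    # mask: 1 for tokens to attend to, 0 for padding.
    inverted = 1.0 - mask.to(dtype)
    # Use the most negative representable value of the compute dtype instead of -1e4/-1e9.
    return inverted.masked_fill(inverted.to(torch.bool), torch.finfo(dtype).min)
```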
Fix #17215 #17121 #14859 | 05-17-2022 16:45:47 | 05-17-2022 16:45:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Generally, this looks good to me. I'd prefer though to not factor out a one-liner into a function (even if we have to add the one-liner 100+ times). It's not good for readability to have to jump to `modeling_utils.py` and the code saved is not worth it for a one-liner.
Also, I'd advocate to make three separate PRs (one for PT, one for TF, one for Flax). Think it should be both easier to maintain the PRs as well as review them.
A first test should then be that all slow tests pass. After that it would indeed be nice if we could run some fine-tuning for the most important models (BERT on GLUE, GPT2 on causal LM, T5 on translation maybe). Maybe also not even necessary to verify that everything is correct with a training run if the slow tests all pass <|||||>Hi,
@patrickvonplaten:
- I removed the new function.
- I have to modify `FlaxT5Attention` otherwise the PT/Flax T5 equivalence tests will fail.
@sgugger:
- since there is no more new function `mask_value()`, so no more `device` issue. There is one place I need to use tensor and device though:
https://github.com/huggingface/transformers/blob/195ef42e0be974e8c019e9d5f03070f65365c721/src/transformers/models/gpt2/modeling_gpt2.py#L202-L205
Would this be a problem for model parallelism for big model inference? It is `attn_weights.device` instead of `self.dtype` though.<|||||>@ydshieh Using the weight device is perfectly fine, thanks for checking!<|||||>Cool, exciting!<|||||>Hi, @patrickvonplaten @patil-suraj @sgugger @LysandreJik
This PR is ready for review.
- Only dealing with PyTorch models: but need to change `FlaxT5` too to make the test pass.
- In general, change to `torch.finfo(correct-dtype).min` instead of `-10000`, `-1e9` etc.
- In particular, changes in `modeling_utils.py`
- Verified the change by training a T5 from scratch as well as finetuning the `t5-small` checkpoint<|||||>@sgugger @patrickvonplaten: sorry to call you after the approval, there is something more to discuss after I saw #17437 by @younesbelkada.
- First I need to remove the use of `float("-inf")` as done in [this commit](https://github.com/huggingface/transformers/pull/17306/commits/9fa9d9e8ac4aec9df3c42b3c6e6b178271230247)
- It's better not to use `inf`: it is hard to track whether there are any arithmetic ops that will produce `NaN`.
- Second, there are still some issues when using `torch.finfo(dtype).min`, especially when running in `fp16`, see the code snippet below.
Basically:
- `torch.finfo(torch.float16).min + (-16.0) = -inf`. (or anything < `-16.0`)
- In some cases, we get all 0s as attention mask:
- for example, `[pad_token, token_1]` will give an attention mask `[[0, 0], [0, 1]]` (due to causal mask + padding).
- this mask becomes `[[-65504, -65504], ...]` after using `torch.finfo(dtype).min` (for `fp16`)
- This might become `[[-inf, -inf], ...]` when combined with an `attn_scores` tensor whose values are < -16.0
- Then, we get `NaN` after softmax.
~~- `torch`'s `softmax` can't work with fp16 input on CPU~~
- ~~RuntimeError: "softmax_lastdim_kernel_impl" not implemented for 'Half'~~
### Suggestions
~~- Cast `fp16` to `fp32` just before `softmax`, so it can run on CPU~~
- Perform some other processing before `softmax` to avoid `NaN` and non-sense output, especially avoid `[-inf, -inf]` or `[-inf, - large_value]` as input to `softmax`
- Change attn probability to `[0, 0, .. 0]` if the input is all large negative value `torch.finfo(dtype).min`
### Code Snippet
```
import torch
from torch import nn
# device = "cpu" --> not working with softmax on float16 (RuntimeError: "softmax_lastdim_kernel_impl" not implemented for 'Half')
device = "cuda"
dtype = torch.float16
mask_value = torch.finfo(dtype).min
attn_mask = torch.tensor([mask_value, mask_value], dtype=dtype)
attn_scores_0 = torch.tensor([-0, -0], dtype=dtype)
attn_scores_1 = torch.tensor([-4, -16], dtype=dtype)
attn_scores_2 = torch.tensor([-18, -16], dtype=dtype)
final_attn_scores_0 = attn_scores_0 + attn_mask # --> [-65504, -65504]
final_attn_scores_1 = attn_scores_1 + attn_mask # --> [-65504, -inf]
final_attn_scores_2 = attn_scores_2 + attn_mask # --> [-inf, -inf]
attn_prob_0 = nn.functional.softmax(final_attn_scores_0.to(device), dim=-1) # --> [0.5, 0.5], but non-sense!!
attn_prob_1 = nn.functional.softmax(final_attn_scores_1.to(device), dim=-1) # --> [1, 0], but non-sense!!
attn_prob_2 = nn.functional.softmax(final_attn_scores_2.to(device), dim=-1) # --> [nan, nan], very bad!
print(final_attn_scores_0)
print(final_attn_scores_1)
print(final_attn_scores_2)
print(attn_prob_0)
print(attn_prob_1)
print(attn_prob_2)
```<|||||>Nothing should be done in FP16 on CPU, softmax is not the only operation that is not implemented in Half on CPU.<|||||>> Nothing should be done in FP16 on CPU, softmax is not the only operation that is not implemented in Half on CPU.
OK, thank you! Still want to hear your opinion on other points when you have more time<|||||>I'm personally leaning toward implementing an util that preprocesses the attention mask before the softmax as you suggested @ydshieh, but curious to see others opinion.<|||||>I think we can leave the discussion regarding `softmax` and `attention score processing` to another thread and PR.
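As an illustration only, a minimal sketch of the kind of mask-preprocessing util being discussed (zeroing the probabilities of rows that are fully masked); this is not a final implementation:

```python
import torch


def masked_softmax(attn_scores: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # Rows where every position carries the large negative mask value would normally
    # softmax to nonsense (or NaN), so force their probabilities to zero instead.
    mask_value = torch.finfo(attn_scores.dtype).min
    fully_masked = (attn_scores <= mask_value).all(dim=dim, keepdim=True)
    probs = torch.nn.functional.softmax(attn_scores, dim=dim)
    probs = probs.masked_fill(fully_masked, 0.0)
    return torch.nan_to_num(probs, nan=0.0)
```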
After removing `float("-inf")` in this PR, it's less likely to get all `-inf` in a particular sequence.
(It's still likely to happen during training, so it's better to have such a process added after this PR.)
So I would prefer to merge the current version once @LysandreJik is happy with this PR.<|||||>@michaelbenayoun
I have tried to keep your change
```
mask = torch.full((tgt_len, tgt_len), torch.tensor(float("-inf")))
```
with my own, so it becomes
```
mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min))
```
If you encounter any problem after this PR is merged, don't hesitate to ping me.<|||||>@michaelbenayoun I need your help 🙏
https://github.com/huggingface/transformers/blob/575a8c0a8e2bf6491d7ef0f932fb6caf8a7712b1/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L135
I need to change `torch.tensor(float("-inf")))` to `torch.tensor(torch.finfo(dtype of an input).min))`
but I can't understand what the input is here. Is it `g` or is it `self`?
From the line below
```
masked_fill(g, output, r_mask, g.op("Constant", value_t=torch.tensor(0, dtype=torch.uint8)))
```
I guess it might be the second argument, so I should use `self`, but it looks strange to me 😕 <|||||>@ydshieh About the first change you mention, I don't think it will break anything, and I will try to apply the same changes when I add support for new model architectures.
About the symbolic function, you are right I think. Basically `self` is `input` here, maybe we should change the name of the parameter to make things clearer?<|||||>> @ydshieh About the first change you mention, I don't think it will break anything, and I will try to apply the same changes when I add support for new model architectures.
>
> About the symbolic function, you are right I think. Basically `self` is `input` here, maybe we should change the name of the parameter to make things clearer?
Great, thanks for the answer! We can change the name, but not very urgent.
<|||||>Ran GPU non-slow tests (single/multi GPU) - results are fine.<|||||>Will merge today (as it has been approved by 3 core maintainers), if there is no further comment on
https://github.com/huggingface/transformers/pull/17306#pullrequestreview-994918858<|||||>As discussed on Slack, will wait the release of Bloom, and not merge now. |
transformers | 17,305 | closed | Pipeline inference with text pair is broken | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.15.0-27-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
### Reproduction
Basically, the pipeline for text classification does not handle well input pairs that must be separated by [SEP] token.
For example, for glue's mnli dataset, we have:
```python
premise = 'The new rights are nice enough'
hypothesis = 'Everyone really likes the newest benefits '
```
Whether we pass
* `pipeline([[premise, hypothesis]], padding=True, truncation=True)`
* or `pipeline(" ".join([premise, hypothesis]), padding=True, truncation=True)`
the pipeline output is wrong.
## Detailed reproduction
If necessary, install transformers in the dev version (`pip uninstall transformers && git clone https://github.com/huggingface/transformers.git && cd transformers && pip install -e .`).
Replace https://github.com/huggingface/transformers/blob/1f13ba818e0e3b780cf9155242e2c83a27fdfa9a/src/transformers/pipelines/text_classification.py#L132-L134
by
```python
def preprocess(self, inputs, **tokenizer_kwargs) -> Dict[str, GenericTensor]:
return_tensors = self.framework
tokenized_inps = self.tokenizer(inputs, return_tensors=return_tensors, **tokenizer_kwargs)
print("tokenized_inps", tokenized_inps)
return tokenized_inps
```
to be able to see what are the tokenized inputs in the pipeline.
Then run
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
from datasets import load_dataset
tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
pipe = pipeline(task="text-classification", tokenizer=tokenizer, model=model)
raw_datasets = load_dataset("glue", "mnli")
txt1 = raw_datasets["validation_matched"][0]["premise"]
txt2 = raw_datasets["validation_matched"][0]["hypothesis"]
inputs = [txt1, txt2]
txt = " ".join(inputs)
res = pipe(txt, padding=True, truncation=True)
print(res)
"""Output:
tokenized_inps {'input_ids': tensor([[ 0, 133, 92, 659, 32, 2579, 615, 7632, 269, 3829, 5, 8946, 1795, 1437, 2]]), 'attention_mask': ...}
[{'label': 'NEUTRAL', 'score': 0.7983464002609253}]
NOTE: theses input_ids correspond to:
'<s>The new rights are nice enough Everyone really likes the newest benefits </s>'
"""
```
We can see that separating the premise and hypothesis by a space is a very bad idea as there is no [SEP] token between the two.
Now run:
```python
from transformers import BatchEncoding
data = raw_datasets["validation_matched"][0:1]
tokenized_inps = tokenizer(data["premise"], data["hypothesis"], padding=True, truncation=True)
tokenized_inps = BatchEncoding(tokenized_inps, tensor_type="pt")
print(tokenized_inps)
print(tokenizer.decode(tokenized_inps["input_ids"][0]))
"""Output:
{'input_ids': tensor([[ 0, 133, 92, 659, 32, 2579, 615, 2, 2, 11243, 269, 3829, 5, 8946, 1795, 1437, 2]]), 'attention_mask': ...}
<s>The new rights are nice enough</s></s>Everyone really likes the newest benefits </s>
"""
```
Here, the `tokenizer` takes a `text=premise` and `text_pair=hypothesis`, and we see as expected SEP tokens between the two.
Other possibility with the pipeline:
```python
txt1 = raw_datasets["validation_matched"][0]["premise"]
txt2 = raw_datasets["validation_matched"][0]["hypothesis"]
inputs = [txt1, txt2]
res = pipe([inputs], padding=True, truncation=True)
print(res)
"""Outputs:
tokenized_inps {'input_ids': tensor([[ 0, 133, 92, 659, 32, 2579, 615, 2, 1],
[ 0, 11243, 269, 3829, 5, 8946, 1795, 1437, 2]]), 'attention_mask': ...}
[{'label': 'NEUTRAL', 'score': 0.8978187441825867}]
Note that now input_ids is 2D! The decoding gives:
<s>The new rights are nice enough</s><pad>
<s>Everyone really likes the newest benefits </s>
"""
```
There is a [CLS] token inserted in the middle; most likely this is not desirable. In fact, when we run the pipeline on several examples from the dataset, all are classified as neutral and wrong.
## Hacky solution
Use
```python
txt1 = raw_datasets["validation_matched"][0]["premise"]
txt2 = raw_datasets["validation_matched"][0]["hypothesis"]
inputs = [txt1, txt2]
tokenized_inps = pipe.preprocess([inputs])
res = pipe.forward(tokenized_inps)
res = pipe.postprocess(res)
print(res)
"""Output:
tokenized_inps {'input_ids': tensor([[ 0, 133, 92,659, 32, 2579, 615, 2, 2, 11243, 269, 3829, 5, 8946, 1795, 1437, 2]]), 'attention_mask': ...}
{'label': 'NEUTRAL', 'score': 0.9636728167533875}
We get the right input_ids, and the score is the same as with manually using tokenizer + model, yay!
"""
```
which gives the same proba as with using the tokenizer and model separately.
To me, the issue lies in two facts:
* It is very wrong to join two sentences with a space (as suggested in the doc https://huggingface.co/tasks/text-classification ) since we lose the information that they are different sentences.
* In case we pass the data as `pipeline([[premise, hypothesis]])`, it could be that there is some funny stuff happening in https://github.com/huggingface/transformers/blob/1f13ba818e0e3b780cf9155242e2c83a27fdfa9a/src/transformers/pipelines/pt_utils.py#L111
### Expected behavior
Pipeline for text-classification with text pair should output the same result than manually using tokenizer + model + softmax.
| 05-17-2022 14:22:41 | 05-17-2022 14:22:41 | Hi @fxmarty ,
Text pair was indeed never supported by the pipeline, so it's not tested against. We could definitely add support.
Without any code change you could do :
```python
res = pipe([[[txt, txt_pair]]] padding=True, truncation=True)
```
Then we could also have a fix within the pipeline (and properly crash when malformed input instead of here a silent error).
The true culprit is two difference parts of magics conflicting.
```python
tokenizer([txt, txt_pair])
```
Is guessing `txt` and `txt_pair` are actually two different texts, and outputs batched input_ids.
```python
tokenizer([[txt, txt_pair]])
```
Is guessing the first list is the batch, and the second list therefore MUST be a pair of texts.
It interacts with `pipeline` magic, which also supposes that lists are batches so
```python
pipe([[txt, txt_pair]])
```
Is understood as a list of inference to run on, and gives only `[txt, txt_pair]` to the underlying tokenizer. which in terms treats them as a batch (which is not what we want in this case).
Since `pipeline.preprocess` is only really supposed to preprocess one item at a time (any sort of list is handled by the parent class) we can definitely change the call to the tokenizer appropriately into `tokenizer(text=txt, text_pair=txt_pair)` so will yield the correct output.
Question @lhoestq : Do `datasets` have a consistent way of expressing pairs of text so that we could maybe express the whole thing as
```python
for output in pipe(load_dataset('glue', 'mnli'))):
print(output)
```
akin to what we did with ASR and the `Audio` column ? If it's not consistent, then the proposed fix should be enough.
@fxmarty is my explanation clear about what's happening ? I am going to propose a fix so that we support text pairs (which is not supported by every model/tokenizer out there but still pretty useful)
Cheers.<|||||>> Question @lhoestq : Do datasets have a consistent way of expressing pairs of text so that we could maybe express the whole thing as
No it doesn't. It could be pairs of question/answer, of sentence1/sentence2, of language1/language2 or any column names.<|||||>@Narsil Thanks a lot for your detailed explanation, makes sense! |
transformers | 17,304 | closed | Fix dummy creation script | # What does this PR do?
The `check_dummies` script was not adapted to the recent changes in the main init. As a result `make fix-copies` stopped creating dummy objects. By some miracle, none were missing, but this PR fixes the script (tested locally after adding new objects). | 05-17-2022 13:51:20 | 05-17-2022 13:51:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,303 | closed | [Test] Fix W2V-Conformer integration test | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes `tests/models/wav2vec2_conformer/test_modeling_wav2vec2_conformer.py::Wav2Vec2ConformerModelTest::test_save_load_fast_init_to_base`
and doc test:
`transformers.models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTraining.forward`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-17-2022 13:49:07 | 05-17-2022 13:49:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,302 | closed | [Speech Model] Add Emformer | # What does this PR do?
This PR adds the Emformer model: an auto-regressive ASR model with an option for RNN-Transducer decoding.
This model shows promising result for real-time speech recognition and there are 3 pretrained checkpoints available via the torchaudio library:
https://pytorch.org/audio/main/tutorials/online_asr_tutorial.html
Original `torchaudio` implementation: https://github.com/pytorch/audio/blob/main/torchaudio/models/emformer.py
Paper: https://arxiv.org/abs/2010.10759
RNN-Transducer details for reference: https://lorenlugosch.github.io/posts/2020/11/transducer/
| 05-17-2022 13:47:43 | 05-17-2022 13:47:43 | @anton-l could you leave some comments in the code so that I know where I should take a look for the model design here?<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17302). All of your documentation changes will be reflected on that endpoint.<|||||>@anton-l do you want to fix the failling tests or should I review already?<|||||>@patrickvonplaten I've left a comment about the last major failing test above, but the rest is ready for review :)
Also I'm not sure what happened to the sentencepiece tests here https://app.circleci.com/pipelines/github/huggingface/transformers/41150/workflows/91d3a0c9-f633-4895-9169-3179f759ced6/jobs/468492, could you take a look please?<|||||>> @patrickvonplaten I've left a comment about the last major failing test above, but the rest is ready for review :) Also I'm not sure what happened to the sentencepiece tests here https://app.circleci.com/pipelines/github/huggingface/transformers/41150/workflows/91d3a0c9-f633-4895-9169-3179f759ced6/jobs/468492, could you take a look please?
See: https://huggingface.slack.com/archives/C01NE71C4F7/p1653573450540219?thread_ts=1653570610.579129&cid=C01NE71C4F7
Rebase to main should solve the issue<|||||>@patrickvonplaten looks like the remaining failed tests are unrelated <|||||>Taking this PR over! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Actually closing this for now as I'm not sure anymore whether this should go into `main`. We should maybe eventually think about a new speech library on top of Transformers<|||||>If anyone from the community is interested in taking over this PR, please feel free to do so! |
transformers | 17,301 | closed | [Tests] Fix opt integration test | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes failing circle ci: `tests/models/opt/test_modeling_opt.py::OPTModelIntegrationTests::test_inference_no_head`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-17-2022 13:29:17 | 05-17-2022 13:29:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Let's see whether this fixes the CI |
transformers | 17,300 | closed | Fix tests of mixed precision now that experimental is deprecated | null | 05-17-2022 12:33:48 | 05-17-2022 12:33:48 | Thanks, @Rocketknight1
I saw we have a few
```
tf.keras.mixed_precision.experimental.Policy
```
in `src/transformers/training_args_tf.py`. Maybe we also need to update these places? (I am OK if you prefer to do it in another PR, if the change is indeed necessary)<|||||>@ydshieh I'll fix them here too!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>
Doesn't this mean TF<=2.8 will fail in these lines?<|||||>(@Rocketknight1 )<|||||>For push/scheduled CI, I think it is fine, as the docker image is built with
```
RUN python3 -m pip install --no-cache-dir -U torch tensorflow
```
(right ..?)
I am going to print TF version in CI jobs.
Update: The latest docker image is built with 2.9
```
2022-05-17T01:34:05.0014817Z #10 15.23 Collecting tensorflow>=2.3
2022-05-17T01:34:05.0016257Z #10 15.24 Downloading tensorflow-2.9.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (511.7 MB)
```<|||||>Yes, CI should be fine :D but users with older versions (which we support) will experience errors, right? Or is it backwards compatible?<|||||>You are right! At least for `src/transformers/training_args_tf.py`.
I am not very sure what our policy is regarding test backward compatibility (regarding the change in `tests/utils/test_modeling_tf_core.py`)
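For reference, a sketch of the spelling change under discussion (the exact version boundaries are approximate):

```python
import tensorflow as tf

# Old spelling, removed in newer TF releases:
#   policy = tf.keras.mixed_precision.experimental.Policy("mixed_float16")
#   tf.keras.mixed_precision.experimental.set_policy(policy)
# Current spelling (available since roughly TF 2.4):
tf.keras.mixed_precision.set_global_policy("mixed_float16")
print(tf.keras.mixed_precision.global_policy())
```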
transformers | 17,299 | closed | Add CvT | # What does this PR do?
Co-authored-by: AnugunjNaman <[email protected]>
This PR adds CvT (Convolutional Vision Transformer) by Microsoft Research.
I just cleaned up the branch of @AnugunjNaman.
To do:
- [x] make @AnugunjNaman co-author of this PR | 05-17-2022 12:12:00 | 05-17-2022 12:12:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,298 | closed | Add PR author in CI report + merged by info | # What does this PR do?
As title. The result looks like
<img width="502" alt="Screenshot 2022-05-17 093737" src="https://user-images.githubusercontent.com/2521628/168756091-6969528d-a7c7-492e-89f4-ae3649fb10bd.png">
| 05-17-2022 07:38:19 | 05-17-2022 07:38:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,297 | closed | Error while finetuning XLM-RoBERTa on Tensorflow-Keras | ## env:
tensorflow 2.8.0
keras 2.8.0
transformers 4.18.0
python 3.8
## code
### data loader
```python
def gen_dataset_iter(model_config):
data_files = {"train": model_config.train_path, "validation": model_config.dev_path, "test": model_config.test_path}
src_data = load_dataset("csv", data_files=data_files)
def preprocess_function(examples):
return model_config.tokenizer(examples["content"], truncation=True)
tokenized_data = src_data.map(preprocess_function, batched=True)
data_collator = DataCollatorWithPadding(tokenizer=model_config.tokenizer, return_tensors="tf")
train_iter = tokenized_data["train"].to_tf_dataset(
columns=["attention_mask", "input_ids"],
label_cols=["label"],
shuffle=True,
batch_size=model_config.batch_size,
collate_fn=data_collator,
)
val_iter = tokenized_data["validation"].to_tf_dataset(
columns=["attention_mask", "input_ids"],
label_cols=["label"],
shuffle=False,
batch_size=model_config.batch_size,
collate_fn=data_collator,
)
test_iter = tokenized_data["test"].to_tf_dataset(
columns=["attention_mask", "input_ids"],
label_cols=["label"],
shuffle=False,
batch_size=model_config.batch_size,
collate_fn=data_collator,
)
return train_iter, val_iter, test_iter
```
### model
```python
model = TFAutoModelForSequenceClassification.from_pretrained(self.model_config.model_path, num_labels=self.model_config.num_classes)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metrics = [tf.keras.metrics.SparseCategoricalAccuracy(), tf.keras.metrics.Precision(), tf.keras.metrics.Recall()]
self.model.compile(optimizer="adam", loss=loss, metrics=metrics)
stop_callback = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3)
best_model_callback = tf.keras.callbacks.ModelCheckpoint(self.model_config.save_path, monitor="val_loss", verbose=1, save_best_only=True, save_weights_only=True)
call_funs = [stop_callback, best_model_callback]
self.model.fit(self.train_iter, epochs=self.model_config.num_epochs, verbose=2, validation_data=self.val_iter, callbacks=call_funs)
```
## error
```shell
File "/home/hellotalk/work/content_detect/models/model.py", line 70, in train
self.model.fit(self.train_iter, epochs=self.model_config.num_epochs, verbose=2, validation_data=self.val_iter, callbacks=call_funs)
File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 1147, in autograph_handler
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/engine/training.py", line 1021, in train_function *
return step_function(self, iterator)
File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/engine/training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/engine/training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1008, in train_step
self.compiled_metrics.update_state(y, y_pred, sample_weight)
File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/engine/compile_utils.py", line 459, in update_state
metric_obj.update_state(y_t, y_p, sample_weight=mask)
File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/utils/metrics_utils.py", line 70, in decorated
update_op = update_state_fn(*args, **kwargs)
File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/metrics.py", line 178, in update_state_fn
return ag_update_state(*args, **kwargs)
File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/metrics.py", line 1403, in update_state **
return metrics_utils.update_confusion_matrix_variables(
File "/home/hellotalk/software/miniconda3/envs/py3.8/lib/python3.8/site-packages/keras/utils/metrics_utils.py", line 619, in update_confusion_matrix_variables
y_pred.shape.assert_is_compatible_with(y_true.shape)
ValueError: Shapes (256, 20) and (256, 1) are incompatible
``` | 05-17-2022 07:14:49 | 05-17-2022 07:14:49 | Hey @tdr1991 ๐ The error seems to come from `Precision` and `Recall`, which are designed for binary classification -- see this StackOverflow thread: https://stackoverflow.com/questions/59305514/tensorflow-how-to-use-tf-keras-metrics-in-multiclass-classification
(Closing the issue, since it is not related to `transformers`. Feel free to reopen if you find a `transformers`-related issue :) ) |
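For anyone landing here later, a minimal multi-class-friendly metric setup; `xlm-roberta-base` and `num_labels=20` stand in for the checkpoint and label count used above:

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=20)

loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# SparseCategoricalAccuracy works with integer labels and multi-class logits;
# Precision/Recall expect binary (or per-class one-hot) targets, hence the shape error above.
metrics = [tf.keras.metrics.SparseCategoricalAccuracy()]
model.compile(optimizer="adam", loss=loss, metrics=metrics)
```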
transformers | 17,296 | closed | about the opt model KeyError: 'opt' | ### System Info
```shell
I just copied the code from the model card at https://huggingface.co/facebook/opt-30b, and there is an error. Can you help me find the problem?
Traceback (most recent call last):
File "D:\SentenceRewrite\opt.py", line 4, in <module>
model = AutoModelForCausalLM.from_pretrained("facebook/opt-30b", torch_dtype=torch.float16).cuda()
File "D:\Anaconda\envs\torch\lib\site-packages\transformers\models\auto\auto_factory.py", line 382, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "D:\Anaconda\envs\torch\lib\site-packages\transformers\models\auto\configuration_auto.py", line 517, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "D:\Anaconda\envs\torch\lib\site-packages\transformers\models\auto\configuration_auto.py", line 266, in __getitem__
raise KeyError(key)
KeyError: 'opt'
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained("facebook/opt-30b", torch_dtype=torch.float16).cuda()
# the fast tokenizer currently does not work correctly
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b", use_fast=False)
prompt = "Hello, I'm am conscious and"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
generated_ids = model.generate(input_ids)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
### Expected behavior
```shell
KeyError: 'opt' how to solve it?
```
| 05-17-2022 07:02:20 | 05-17-2022 07:02:20 | I met the same problem. Have you fixed this?<|||||>>
You can download the latest version of transformers, have a good day.<|||||>@DericZhao I installed the latest version, I am still getting the error. Any help?<|||||>I met the same problem. Any solutions?<|||||>me too!my transformers version is 4.4.1,I still have the problem.<|||||>`pip install --upgrade transformers`<|||||>Check the versions of python(>=3.8 required) and pytorch , they may not match transformers |
transformers | 17,295 | closed | ValueError: If training, make sure that config.axial_pos_shape factors: (512, 1024) multiply to sequence length. Got prod((512, 1024)) != sequence_length: 1024. You might want to consider padding your sequence length to 524288 or changing config.axial_pos_shape for ReformerForSequenceClassification | `
from transformers import ReformerTokenizer, ReformerForSequenceClassification, ReformerConfig
model_name = "google/reformer-crime-and-punishment"
tokenizer = ReformerTokenizer.from_pretrained(model_name)
def input_id_maker(dataf, tokenizer):
    input_ids = []
    lengths = []
    for i in progressbar.progressbar(range(len(dataf['text']))):
        sen = dataf['text'].iloc[i]
        sen = tokenizer.tokenize(sen)  # , add_prefix_space=True)
        if len(sen) > 1024:
            sen = sen[len(sen)-1024:]
        encoded_sent = tokenizer.convert_tokens_to_ids(sen)
        input_ids.append(encoded_sent)
        lengths.append(len(encoded_sent))
    input_ids = pad_sequences(input_ids, maxlen=1024, value=0, dtype="long", truncating="pre", padding="post")
    return input_ids, lengths
train_input_ids, train_lengths = input_id_maker(train_set, tokenizer)
validation_input_ids, validation_lengths = input_id_maker(validation_set, tokenizer)
def att_masking(input_ids):
    attention_masks = []
    for sent in input_ids:
        att_mask = [int(token_id > 0) for token_id in sent]
        attention_masks.append(att_mask)
    return attention_masks
train_attention_masks = att_masking(train_input_ids)
validation_attention_masks = att_masking(validation_input_ids)
train_labels = train_set['label'].to_numpy().astype('int')
validation_labels = validation_set['label'].to_numpy().astype('int')
train_inputs = train_input_ids
validation_inputs = validation_input_ids
train_masks = train_attention_masks
validation_masks = validation_attention_masks
train_inputs = torch.tensor(train_inputs)
train_labels = torch.tensor(train_labels)
train_masks = torch.tensor(train_masks)
validation_inputs = torch.tensor(validation_inputs)
validation_labels = torch.tensor(validation_labels)
validation_masks = torch.tensor(validation_masks)
batch_size = 6
train_data = TensorDataset(train_inputs, train_masks, train_labels)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size = batch_size)
validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels)
validation_sampler = RandomSampler(validation_data)
validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size = batch_size)
device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")
model = model_class.from_pretrained(model_name, num_labels=2)
model.to(device)
lr = 2e-6
max_grad_norm = 1.0
epochs = 3
num_total_steps = len(train_dataloader)*epochs
num_warmup_steps = 1000
warmup_proportion = float(num_warmup_steps) / float(num_total_steps) # 0.1
optimizer = AdamW(model.parameters(), lr=lr, correct_bias=True)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps = num_warmup_steps, num_training_steps = num_total_steps)
def flat_accuracy(preds, labels):
    pred_flat = np.argmax(preds, axis=1).flatten()
    labels_flat = labels.flatten()
    return np.sum(pred_flat == labels_flat) / len(labels_flat)
seed_val = 2212
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
loss_values = []
for epoch_i in range(0, epochs):
    print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
    print('Training...')
    t0 = time.time()
    total_loss = 0
    model.train()
    for step, batch in enumerate(train_dataloader):
        if step % 40 == 0 and not step == 0:
            print(' Batch {:>5,} of {:>5,}. '.format(step, len(train_dataloader)))
        b_input_ids = batch[0].to(device)
        b_input_mask = batch[1].to(device)
        b_labels = batch[2].to(device)
        model.zero_grad()
        outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)
        loss = outputs[0]
        total_loss += loss.item()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        scheduler.step()
    avg_train_loss = total_loss / len(train_dataloader)
    loss_values.append(avg_train_loss)
    print("")
    print(" Average training loss: {0:.2f}".format(avg_train_loss))
    print("")
    print("Running Validation...")
    t0 = time.time()
    model.eval()
    eval_loss, eval_accuracy = 0, 0
    nb_eval_steps, nb_eval_examples = 0, 0
    for batch in validation_dataloader:
        batch = tuple(t.to(device) for t in batch)
        b_input_ids, b_input_mask, b_labels = batch
        with torch.no_grad():
            outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)
        logits = outputs[0]
        logits = logits.detach().cpu().numpy()
        label_ids = b_labels.to('cpu').numpy()
        tmp_eval_accuracy = flat_accuracy(logits, label_ids)
        eval_accuracy += tmp_eval_accuracy
        nb_eval_steps += 1
    # Report the final accuracy for this validation run.
    print(" Accuracy: {0:.2f}".format(eval_accuracy/nb_eval_steps))
    print("")
print("Training complete!")
`
### After this getting error "ValueError: If training, make sure that config.axial_pos_shape factors: (512, 1024) multiply to sequence length. Got prod((512, 1024)) != sequence_length: 1024. You might want to consider padding your sequence length to 524288 or changing config.axial_pos_shape." | 05-17-2022 06:21:33 | 05-17-2022 06:21:33 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Still. getting the same problem.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello, thanks for opening an issue @ShubhamKumarNigam ! We try to keep the github issues for bugs/feature requests.
For user code, we recommend using the forum instead, where you're more likely to have a community member help you out on your issue.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,294 | closed | [T5] Fix init in TF and Flax for pretraining | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #16749
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-16-2022 23:03:59 | 05-16-2022 23:03:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,293 | closed | fix for 17292 | Fixes #17292
## Before submitting
- [N/A] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [N/A] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [N/A] Did you write any new necessary tests? Tested locally
| 05-16-2022 22:41:34 | 05-16-2022 22:41:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,292 | closed | Misleading error when from_pretrained fails, says there are flax weights when there aren't | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.14.0-1031-oem-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.10.0+cu102 (True)
- Tensorflow version (GPU?): 2.9.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Tiny bug in the from_pretrained code that checks if flax weights are present.
```
from transformers import AutoModel, AutoConfig
config = AutoConfig.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("some_empty_dir", config=config)
```
### Expected behavior
```shell
Should return:
EnvironmentError(
f"Error no file named {WEIGHTS_NAME}, {TF2_WEIGHTS_NAME}, {TF_WEIGHTS_NAME + '.index'} or "
f"{FLAX_WEIGHTS_NAME} found in directory {pretrained_model_name_or_path}."
)
Instead returns:
EnvironmentError(
f"Error no file named {WEIGHTS_NAME} found in directory {pretrained_model_name_or_path} but "
"there is a file for Flax weights. Use `from_flax=True` to load this model from those "
"weights."
)
```
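For reference, a simplified sketch of the kind of branching this is about (hypothetical code and file names used for illustration only, not the actual `transformers` source):
```python
import os

def pick_error(folder, weights_name="pytorch_model.bin", flax_weights_name="flax_model.msgpack"):
    # The Flax-specific hint should only be raised when a Flax weights file
    # actually exists in the directory; otherwise the generic error applies.
    if os.path.isfile(os.path.join(folder, flax_weights_name)):
        return "use `from_flax=True` to load this model from the Flax weights"
    return "no weight files found in this directory"
```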
| 05-16-2022 22:28:15 | 05-16-2022 22:28:15 | |
transformers | 17,291 | closed | Fix CodeParrot training script | This PR fixes some features in the training script of CodeParrot:
* use `Pytorch` implementation of `AdamW` instead of `transformers` implementation
* add shuffling of the sequences in the batches
* fix error in weight decay for LayerNorm (see the sketch after this list)
* change the tracked loss to the average over batches instead of the main worker loss + compute average over accumulated steps manually for wandb/tensorboard plot
* update requirements
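For the weight-decay bullet, a minimal sketch of the usual pattern for excluding bias and LayerNorm parameters from weight decay (illustrative only — the parameter-name markers are assumptions, not necessarily the exact change made in this PR):
```python
import torch

def get_grouped_params(model, weight_decay=0.1, no_decay=("bias", "ln_", "layer_norm", "layernorm")):
    # Split parameters so biases and LayerNorm weights get no weight decay,
    # while everything else gets `weight_decay`.
    decay, nodecay = [], []
    for name, param in model.named_parameters():
        if any(marker in name.lower() for marker in no_decay):
            nodecay.append(param)
        else:
            decay.append(param)
    return [
        {"params": decay, "weight_decay": weight_decay},
        {"params": nodecay, "weight_decay": 0.0},
    ]

# optimizer = torch.optim.AdamW(get_grouped_params(model), lr=5e-4)
```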
cc @lvwerra | 05-16-2022 22:27:24 | 05-16-2022 22:27:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,290 | closed | Accepting torch device objects in the Pipeline init | ### Feature request
Currently Pipeline init only takes an integer as a device argument in the constructor. It would make it a little easier to interface with and integrate with existing code if it also took a pytorch device object.
### Motivation
It's frustrating not to be able to use the model.device/self.device object when instantiating the pipeline object.
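A rough sketch of the kind of normalization this would need (hypothetical helper, not the actual `Pipeline` code):
```python
import torch

def normalize_device(device):
    # Hypothetical helper: map a torch.device onto the integer convention the
    # pipeline currently expects (-1 for CPU, otherwise the GPU index).
    if isinstance(device, torch.device):
        if device.type == "cpu":
            return -1
        return device.index if device.index is not None else 0
    return device  # already an int
```
With that accepted inside the constructor, `pipeline(..., device=model.device)` would just work instead of requiring the caller to convert to an integer by hand.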
### Your contribution
I feel like I could do this and I'm happy to take a whack, but I haven't contributed to Transformers yet | 05-16-2022 18:12:18 | 05-16-2022 18:12:18 | cc @Narsil <|||||>@courtneysprouse you're entirely correct, there's no real reason why we don't accept those.
The reason why we want to be able to accept `int` is to support seamless `TF` and `PyTorch` (which don't use the same conventions for devices) but the pipeline abstracts that away. But if you want to use native objects, you should always be able to for sure.<|||||>Awesome! Thank you so much! |
transformers | 17,289 | closed | Better error in the Auto API when a dep is missing | # What does this PR do?
As reported in #17266, the error message when an auto API is used to load a class that can't be loaded because a dep is missing is not helpful (it actually got worse since #17250).
This PR addresses the problem by returning the associated dummy class when the right class can't be loaded, which means the subsequent call to `from_pretrained` fails with a helpful error message. For instance, the sample given in #17266 will now error with:
```
ConvNextFeatureExtractor requires the PIL library but it was not found in your environment. You can install it with pip:
`pip install pillow`
```
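For illustration, the dummy-class pattern looks roughly like this (simplified sketch, not the exact `transformers` implementation):
```python
class DummyConvNextFeatureExtractor:
    # Simplified stand-in returned by the Auto API when the real class cannot
    # be imported; any use raises a helpful, dependency-specific error.
    _error = (
        "ConvNextFeatureExtractor requires the PIL library but it was not found in "
        "your environment. You can install it with pip: `pip install pillow`"
    )

    def __init__(self, *args, **kwargs):
        raise ImportError(self._error)

    @classmethod
    def from_pretrained(cls, *args, **kwargs):
        raise ImportError(cls._error)
```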
Fixes #17266 | 05-16-2022 17:56:15 | 05-16-2022 17:56:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,288 | closed | Make TrainerHyperParameterSigOptIntegrationTest slow test | # What does this PR do?
As discussed offline, make `TrainerHyperParameterSigOptIntegrationTest` a slow test. | 05-16-2022 17:50:00 | 05-16-2022 17:50:00 | Add @sgugger for a double check :-)<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,287 | closed | Improved Documentation for Encoder Decoder models |
# What does this PR do?
This PR improves the documentation of encoder decoder model.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
Issues [link](https://github.com/huggingface/transformers/issues/16135)
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
-->
| 05-16-2022 17:03:14 | 05-16-2022 17:03:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Very nice! Thanks for taking the initiative here @Threepointone4. Left some suggestions to make the text a bit clearer :-) Let me know what you think!
These make sense. Thanks for the edits.
<|||||>@patrickvonplaten what are the next steps in this ? Let me know if i have to add anything.<|||||>@Threepointone4, I commited all the suggestions after having read your comment. I think we are close to merging this PR now :-)
Could you please in a last step add this file to our documentation tests? Those ensure that the code actually runs correctly.
All you have to do is to add the name of the doc file to this file: https://github.com/huggingface/transformers/blob/38ddab10da90e64297a37c0719ed9309e693317a/utils/documentation_tests.txt#L10
For more information on the doc tests you can read this document:
https://github.com/huggingface/transformers/tree/main/docs#testing-documentation-examples<|||||>At the moment it seems like the doc tests would fail with the following error message:
```
109 ... return_tensors="pt",
110 ... ).input_ids
111
112 >>> labels = tokenizer(
113 ... "the eiffel tower surpassed the washington monument to become the tallest structure in the world. it was the first structure to reach a height of 300 metres in paris in 1930. it is now taller than the chrysler building by 5. 2
metres ( 17 ft ) and is the second tallest free - standing structure in paris.",
114 ... return_tensors="pt",
115 ... ).input_ids
116
117 >>> # the forward function automatically creates the correct decoder_input_ids
118 >>> loss = model(input_ids=input_ids, labels=labels).loss
UNEXPECTED EXCEPTION: ValueError("Make sure to set the decoder_start_token_id attribute of the model's configuration.")
```
Could you try to correct it? :-)<|||||>> At the moment it seems like the doc tests would fail with the following error message:
>
> ```
> 109 ... return_tensors="pt",
> 110 ... ).input_ids
> 111
> 112 >>> labels = tokenizer(
> 113 ... "the eiffel tower surpassed the washington monument to become the tallest structure in the world. it was the first structure to reach a height of 300 metres in paris in 1930. it is now taller than the chrysler building by 5. 2
> metres ( 17 ft ) and is the second tallest free - standing structure in paris.",
> 114 ... return_tensors="pt",
> 115 ... ).input_ids
> 116
> 117 >>> # the forward function automatically creates the correct decoder_input_ids
> 118 >>> loss = model(input_ids=input_ids, labels=labels).loss
> UNEXPECTED EXCEPTION: ValueError("Make sure to set the decoder_start_token_id attribute of the model's configuration.")
> ```
>
> Could you try to correct it? :-)
Hey @Threepointone4,
Thanks a lot for having added the doc to the doc tests. Could you quickly check that they work as expected? E.g. I'm currently getting the above error when running the docs. Thanks!<|||||>> > At the moment it seems like the doc tests would fail with the following error message:
> > ```
> > 109 ... return_tensors="pt",
> > 110 ... ).input_ids
> > 111
> > 112 >>> labels = tokenizer(
> > 113 ... "the eiffel tower surpassed the washington monument to become the tallest structure in the world. it was the first structure to reach a height of 300 metres in paris in 1930. it is now taller than the chrysler building by 5. 2
> > metres ( 17 ft ) and is the second tallest free - standing structure in paris.",
> > 114 ... return_tensors="pt",
> > 115 ... ).input_ids
> > 116
> > 117 >>> # the forward function automatically creates the correct decoder_input_ids
> > 118 >>> loss = model(input_ids=input_ids, labels=labels).loss
> > UNEXPECTED EXCEPTION: ValueError("Make sure to set the decoder_start_token_id attribute of the model's configuration.")
> > ```
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > Could you try to correct it? :-)
>
> Hey @Threepointone4,
>
> Thanks a lot for having added the doc to the doc tests. Could you quickly check that they work as expected? E.g. I'm currently getting the above error when running the docs. Thanks!
@patrickvonplaten
I am currently running this command:
` pytest --doctest-modules docs/source/en/model_doc/encoder-decoder.mdx -sv --doctest-glob="*.mdx"
`
Is this proper way to do that? I am getting different error, so just double checking .<|||||>Hey @Threepointone4,
Yes that's the correct command, but note that you need to run this command before-hand:
```python utils/prepare_for_doc_test.py src docs```
and then you can run:
```pytest --doctest-modules docs/source/en/model_doc/encoder-decoder.mdx -sv --doctest-glob="*.mdx"```
After the command you should run:
```python utils/prepare_for_doc_test.py src docs --remove_new_line``` once more
to re-convert the example doc strings correctly :-)
What's easier however is to do the following, add the following code into a `doc_test` file:
```
#!/usr/bin/env bash
doc_file=${1}
python utils/prepare_for_doc_test.py src docs &>/dev/null
pytest -sv --doctest-modules ${doc_file} --doctest-continue-on-failure --doctest-glob="*.mdx"
python utils/prepare_for_doc_test.py src docs --remove_new_line &>/dev/null
```
and then run:
`doc_test <path/to/python/file>` -> this will automatically prepare the doc tests before hand :-)<|||||>@patrickvonplaten Sorry for the delay.
I have ran the cmd's you have shared and was able to reproduce the error in my local.
```
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```
These are changes i need to add right ? I will re-base also with original repository. <|||||>@patrickvonplaten what are the next steps in this ? Let me know if i have to add anything.
<|||||>Hey @Threepointone4,
It sadly looks a bit like the git history was messed up in this PR (maybe a `git rebase` was incorrectly used?)<|||||>@patrickvonplaten I accidentally git rebase with the branch and main separately and pushed it. Can that be an issue ?
I was waiting for your input on this. <|||||>I see! Sorry it's quite hard to recover the PR from this :sweat_smile: Could you maybe copy-paste the files that were changed to a new PR and close this one? :-)<|||||>> I see! Sorry it's quite hard to recover the PR from this sweat_smile Could you maybe copy-paste the files that were changed to a new PR and close this one? :-)
Sure @patrickvonplaten , I will do that it will be easier.
Link for the new PR : [link](https://github.com/huggingface/transformers/pull/17815)<|||||>Thanks a lot! |
transformers | 17,286 | closed | Add Visual Question Answering (VQA) pipeline | # What does this PR do?
Add Visual Question Answering (VQA) pipeline, as described in #17208. The pipeline currently defaults to [ViLT](https://huggingface.co/docs/transformers/model_doc/vilt), which is also the only model it supports for now.
It also adds all the necessary classes, such as `AutoModelForVisualQuestionAnswering`. A sketch of the intended usage is shown below.
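For illustration, usage looks roughly like this (a sketch — the task string and checkpoint name are assumptions based on the discussion below):
```python
from transformers import pipeline

# Hypothetical example: ViLT checkpoint fine-tuned on VQA
vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")
vqa(image="path/to/image.jpg", question="What is on the table?")
# also accepts {"image": ..., "question": ...} dicts, or a list of them
```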
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge @LysandreJik @Narsil @mishig25
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-16-2022 16:12:31 | 05-16-2022 16:12:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi,
Can you first make sure that your branch is up to date with the main branch? You can achieve that as follows:
```
git remote add upstream https://github.com/huggingface/transformers.git
git fetch upstream
git rebase upstream/main
```<|||||>Hi @sijunhe ,
Thanks a lot, I think for the last test (consistency), usually `make fixup` (make sure you have the right versions with `pip install -e .[dev]` if you want. It works for me most fo the time (don't include files you didn't touch I think).<|||||>> Looks like a solid PR ! Thank you for this.
>
> The main thing I would think is actually remove some of the flexibility introduced here (modeling QA pipeline I think). As it exists mostly as legacy and is usually preventing some modifications later on because we can't do backward compatibilty breaking.
>
> Aiming for a pure simple `pipe(image=image, question=question)` and `pipe({"image": image, "question": question})` should be enough and not need a full blown class to support.
>
> The main reason for `{"image": image, "question": question}` is to support datasets which return single item and the reason for `pipe(image=image, question=question)` is just simpler for simple Python code (more natural than the dict let's say :)).
>
> This will natively support `List[{"image": image, "question": question}]` which is handled by the parent class already (and as already mentionned datasets).
>
> The code could look like something like
>
> ```python
> def __call__(self, image, question=None, **kwargs):
> if isinstance(image, (PIL.Image...)) and question is not None:
> # Nice pure Python support.
> inputs_ = {"image": image, "question": question}
> else:
> # Suppose this is correct dict or list.
> inputs_ = image
> ```
>
> What do you think ?
Thanks for the elaborate reply! I agree that the class is not necessary. This is my first PR so I wasn't sure what kind of inputs needed to be handled and I was mostly following the QA pipeline. :)
Hopefully all the tests pass now and we can land this soon!<|||||>This PR is fine for me !
I will let a core maintainer approve this.<|||||>@LysandreJik kindly pinging you for a final review<|||||>Hmm seems like the CI is flaky and it is failing now after I committed the trivial docstring change suggested by @NielsRogge <|||||>> Hmm seems like the CI is flaky and it is failing now after I committed the trivial docstring change suggested by @NielsRogge
Don't hesitate to rebase on `main` too, might have been fixed within the code itself since .<|||||>Hey folks, seems like the last remaining issue was the use of community models as the default in the pipeline and the conclusion was to "specify a revision". However, I am not sure how to specify a version of the model. Any suggestions here? @patrickvonplaten @LysandreJik <|||||>> issue was the use of community models as the default in the pipeline and the conclusion was to "specify a revision". However, I am not sure how to specify a version of the model. Any suggestions here? @patrickvonplaten @LysandreJik
Hey @sijunhe,
Very sorry to keep you waiting here!
I'm in favor of merging this PR as is and to open a follow-up PR that specifies a revision for all default pipeline models (happy to take care of this early next week)
@LysandreJik @sgugger would this be ok for you? <|||||>@NielsRogge also good for you?<|||||>Thanks folks. I think we are ready to merge!<|||||>Thanks again for all you work on this!<|||||>@sijunhe great work! I will start the work on the widgets ๐ <|||||>@sijunhe the widgets are live on the hub! https://huggingface.co/dandelin/vilt-b32-finetuned-vqa
<img width="639" alt="Screenshot 2022-07-29 at 15 49 21" src="https://user-images.githubusercontent.com/11827707/182323267-e7eab74e-5d88-46e2-8ce6-b3409d21926d.png">
|
transformers | 17,285 | closed | TF: all models can run in Graph mode | ### Feature request
Our models are executed in Eager mode by default. Eager mode is more permissive than Graph mode, and we have models that don't work in Graph mode at the moment. This (self) feature request is being added to bring visibility to the problem, link issues, and track progress.
### Motivation
It is a requirement for several downstream uses, like XLA-accelerated forward passes or TF serving.
### Your contribution
Adding a general test to ensure all existing and new models are compatible with graph mode. I will also attempt to fix as many related issues as I can. | 05-16-2022 15:42:04 | 05-16-2022 15:42:04 | cc @Rocketknight1 and @ydshieh -- link related issues as you see them plz :D <|||||>Strong +1 on this, since compiling in graph mode is needed for all the other downstream things we want to do (TF Serving, tf.js, etc.)<|||||>Link a potentially related issue as early as possible, as I have a poor memory capability
https://github.com/huggingface/transformers/pull/16886#issuecomment-1113448810<|||||>[#17233](https://github.com/huggingface/transformers/issues/17233)<|||||>> [#17233](https://github.com/huggingface/transformers/issues/17233)
To add another info: I found we have something like the following in `TFWav2Vec2Encoder`
```
# add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)
dropout_probability = np.random.uniform(0, 1)
if training and (dropout_probability < self.config.layerdrop): # skip the layer
continue
```
I think we should avoid this, right?<|||||>@ydshieh it's undesirable, but I think TF can handle it (with potential retracing if the parameters change). Will have to double-check, though<|||||>I do have a general fear with things like `LayerDrop`, which is that XLA cannot accept any data-dependent computation paths. In other words, you cannot have a scenario where a layer is only run if a random number, generated by the GPU for each sample/batch, is over a threshold value. You can only implement something like this by running the layer every time with a residual connection, and multiplying the layer outputs by 0 if it is going to be "dropped". Doing this, of course, has no performance benefit at all.
In code like the above, the number is generated by `numpy`, which will be run once at the point of graph tracing. Therefore, that layer will either be skipped *always* or *never*. Correct code would use `tf.random` instead, which will insert the random generation into the graph correctly, but then the `LayerDrop` would cause XLA tracing to fail, and I'm not sure about regular Graph mode tracing.<|||||>> you cannot have a scenario where a layer is only run if a random number
`tf.cond` should be able to handle it, I believe ๐ค As you mentioned, with further changes as well. This is going to be fun!
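A minimal sketch of what a graph-friendly version could look like (untested, assumes `layer` returns a tensor with the same shape as its input):
```python
import tensorflow as tf

def maybe_apply_layer(hidden_states, layer, layerdrop, training):
    # Draw the random number inside the graph (tf.random) and express the skip
    # as tf.cond instead of a Python-level `continue`.
    if not training:
        return layer(hidden_states)
    drop = tf.random.uniform(shape=[]) < layerdrop
    return tf.cond(drop, lambda: hidden_states, lambda: layer(hidden_states))
```
Whether the skipped branch actually saves compute under XLA is a separate question, of course.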
Bonus: the just-released TF 2.9 compiles the forward pass with XLA when `Model.compile(jit_compile=True)` is called, so we might be able to get cool performance numbers out of sorting this on as many models as we can ๐ <|||||>Ah, I'm completely wrong - I was confusing two of the XLA requirements. `tf.cond()` will solve it all, I'm sorry!<|||||>Hey has this been fixed? <|||||>Hey @ahmedlone127 ๐ It hasn't been fixed yet.<|||||>Okay thanks :) <|||||>#18153 Ensures all our models can be saved as `SavedModel`, with the exception of CLIP. I'm closing this issue as having a `SavedModel` implies this issue is solved
Kudos to @amyeroberts for smashing it! |
transformers | 17,284 | closed | Bug fix: move tensors to GPU in GeneralizedRCNN.inference() | # What does this PR do?
This PR fixes a bug in `research_projects/lxmert`.
Currently, visual feature extraction using `GeneralizedRCNN` fails when `GeneralizedRCNN` is moved onto GPU
because some intermediate outputs in `GeneralizedRCNN.inference()` are generated and left on CPU.
I fixed the bug by adding `.to(<current_device>)` to the involved intermediate outputs.
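The pattern is roughly the following (illustrative sketch with hypothetical names, not the exact diff):
```python
import torch

def move_to_same_device(tensors, reference):
    # Keep intermediate tensors on the same device as a reference tensor
    # (e.g. the backbone features), so GPU inference does not mix devices.
    return [t.to(reference.device) for t in tensors]
```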
I wonder if this PR requires the addition of tests, because `research_projects/lxmert` does not seem to have tests, so could you please advise me on the proper action?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-16-2022 15:17:45 | 05-16-2022 15:17:45 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17284). All of your documentation changes will be reflected on that endpoint.<|||||>Pinging @eltoto1219 as the author of that research project!<|||||>@LysandreJik @eltoto1219 Thank you! Please let me know if there is anything required to go a step further.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,283 | open | GPT-neo generate is ignoring passed position ids | ### System Info
```shell
python version: 3.9
transformers version: 4.18
```
### Who can help?
@patil-suraj @patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
text = "hi there: "
modelname = "EleutherAI/gpt-neo-125M"
model = AutoModelForCausalLM.from_pretrained(modelname)
tokenizer = AutoTokenizer.from_pretrained(modelname)
inputs = tokenizer(text, return_tensors ="pt")
inputs["position_ids"] = inputs["attention_mask"].cumsum(-1)
print(inputs)
a = model.generate(**inputs)
inputs["position_ids"] = inputs["position_ids"] + 10
print(inputs)
b = model.generate(**inputs)
# a and b should be different because the position ids are different
print(a)
print(b)
```
the result
```
a = tensor([[5303, 612, 25, 220, 220, 220, 220, 220, 220, 220, 220, 220,
220, 220, 220, 220, 220, 220, 220, 220]])
b = tensor([[5303, 612, 25, 220, 220, 220, 220, 220, 220, 220, 220, 220,
220, 220, 220, 220, 220, 220, 220, 220]])
```
### Expected behavior
```shell
The outputs a and b should (almost always) be different because different position ids should be passed to the model's forward function, resulting in different activations. The issue seems to be here: https://github.com/huggingface/transformers/blob/ee393c009a243bbb86fa11d2efa771a1704d85cf/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L699
Specifically, the else statement is such that if both an attention mask and position ids are passed, the position ids are erased. In such a scenario, the default position ids in the model's forward function (https://github.com/huggingface/transformers/blob/ee393c009a243bbb86fa11d2efa771a1704d85cf/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L537) are used, rather than the passed-in position ids.
Proposed fix: remove the `else` block on line 699 and unindent the `if past` block.
```
| 05-16-2022 15:12:44 | 05-16-2022 15:12:44 | Hey @SwordShieldMouse,
note that `position_ids` does not have a huge influence on the output. It is not very surprising to me that the generated output ids (that we generated by taking the `argmax(...)` logits) are the same. It essentially just means that different positions ids still lead to the model outputting the same logit as the highest logit. It doesn't mean that the logits are the same. E.g. if you execute the following code, you see that the logit outputs `a` and `b` differ.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
text = "hi there: "
modelname = "EleutherAI/gpt-neo-125M"
model = AutoModelForCausalLM.from_pretrained(modelname)
tokenizer = AutoTokenizer.from_pretrained(modelname)
inputs = tokenizer(text, return_tensors ="pt")
inputs["position_ids"] = inputs["attention_mask"].cumsum(-1)
print(inputs)
a = model(**inputs).logits
inputs["position_ids"] = inputs["position_ids"] + 10
print(inputs)
b = model(**inputs).logits
print("a", a.abs().sum())
print("b", b.abs().sum())
```<|||||>Yes, I agree that that the logits are different in your example because the model `forward` indeed takes in the position ids. My point is that when the model `forward` is called through the `generate` function, the position IDs are not passed. Please [see here](https://github.com/huggingface/transformers/blob/ee393c009a243bbb86fa11d2efa771a1704d85cf/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L693). If both an attn mask and position ids are passed, the position ids are set to `None`. The resulting dict, with `None` position ids, is passed to the model forward [here](https://github.com/huggingface/transformers/blob/4710702837a9262e730b798a30c0609e322d02ed/src/transformers/generation_utils.py#L1675).
It's true that one shouldn't expect `generate` to output different results all the time. However, for a paper I'm working on atm that involves modifying position ids, and whose code would have been too long to put here, changing the position ids results in the same generation 100% of the time for every sample. It's only when I remove the `else` block on line 699 and unindent the `if past` block that I get different results for different position ids, for every sample.<|||||>@patrickvonplaten Here is a modified snippet with my proposed change to the code, which indeed gives different generation results.
```
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
modelname = "EleutherAI/gpt-neo-125M"
model = AutoModelForCausalLM.from_pretrained(modelname)
tokenizer = AutoTokenizer.from_pretrained(modelname)
def prepare_inputs_for_generation(input_ids, past=None, **kwargs):
token_type_ids = kwargs.get("token_type_ids", None)
# only last token for inputs_ids if past is defined in kwargs
if past:
input_ids = input_ids[:, -1].unsqueeze(-1)
if token_type_ids is not None:
token_type_ids = token_type_ids[:, -1].unsqueeze(-1)
attention_mask = kwargs.get("attention_mask", None)
position_ids = kwargs.get("position_ids", None)
if attention_mask is not None and position_ids is None:
# create position_ids on the fly for batch generation
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)
if past:
position_ids = position_ids[:, -1].unsqueeze(-1)
# else:
# position_ids = None
return {
"input_ids": input_ids,
"past_key_values": past,
"use_cache": kwargs.get("use_cache"),
"position_ids": position_ids,
"attention_mask": attention_mask,
"token_type_ids": token_type_ids,
}
texts = [
"what is your name?",
"2 + 2 = ",
"what is the capital of france?",
"astnhoeruchau2918uh93u",
"how many humans are there in the world?",
"my favourite colour is ",
"hi there: "
]
res = []
for text in texts:
inputs = tokenizer(text, return_tensors ="pt")
inputs["position_ids"] = inputs["attention_mask"].cumsum(-1)
# print(inputs)
a = model.generate(**inputs)
inputs["position_ids"] = inputs["position_ids"] + 10
# print(inputs)
b = model.generate(**inputs)
res.append((a == b).all())
# implement the fix
model.prepare_inputs_for_generation = prepare_inputs_for_generation
fixed_res = []
for text in texts:
inputs = tokenizer(text, return_tensors ="pt")
inputs["position_ids"] = inputs["attention_mask"].cumsum(-1)
# print(inputs)
a = model.generate(**inputs)
inputs["position_ids"] = inputs["position_ids"] + 10
# print(inputs)
b = model.generate(**inputs)
fixed_res.append((a == b).all())
# these are different but should be the same!
print(res)
print(fixed_res)
```
Results
```
res = [tensor(True), tensor(True), tensor(True), tensor(True), tensor(True), tensor(True), tensor(True)]
fixed_res = [tensor(False), tensor(True), tensor(True), tensor(True), tensor(True), tensor(True), tensor(True)]
```
The difference is slight, but exists.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Bump.
<|||||>Hey @SwordShieldMouse,
Sorry for answering so late! Upon having taken a second look, you're right those two lines should not exist - i.e. we should ideally delete them completely - great job spotting the bug and sorry for the misunderstanding before - I now see what is meant!
Would you mind opening a PR to fix it? Otherwise happy to do so myself :-)<|||||>Great!
I'm busy for the rest of the week so I could do it next week, but I'm happy
if you want to get it done this week :)
<|||||>Awesome - happy to wait a week! More than happy though to take over the issue if you find that you won't find time the next week(s) :-)<|||||>Hi Patrick,
Sorry for getting back to you late. It turns out I won't have time after
all :( Would you be able to do the fix?
<|||||>Hey @SwordShieldMouse,
Started a PR here: https://github.com/huggingface/transformers/pull/18048 - it's actually much more work than I thought so it will take a while maybe (cc @gante)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>(being worked on) |
transformers | 17,282 | closed | [Tests] Fix slow opt tests | Fixes failing circle ci tests:
```
tests/models/opt/test_modeling_opt.py::OPTEmbeddingsTest::test_logits
tests/models/opt/test_modeling_opt.py::OPTModelIntegrationTests::test_inference_no_head
``` | 05-16-2022 15:01:02 | 05-16-2022 15:01:02 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey, will review that ASAP (~1-2h) |
transformers | 17,281 | closed | Add Deformable DETR | # What does this PR do?
This PR implements [Deformable DETR](https://github.com/fundamentalvision/Deformable-DETR), which improves the original [DETR](https://huggingface.co/docs/transformers/model_doc/detr) using a new "deformable attention" module.
This model requires a custom CUDA kernel (hence it can only be run on GPU). Other than that, the API is entirely the same as DETR.
Models are on the [hub](https://huggingface.co/models?other=deformable_detr). | 05-16-2022 14:12:40 | 05-16-2022 14:12:40 | Addressed most comments. I would like to have:
- [ ] @Narsil reviewing the initialization of the model using the custom CUDA kernel
- [ ] @LysandreJik (and possibly @Narsil) help me out regarding making the CI green for a model that only runs on GPU. Should we define a custom CI job for this particular model?
- [x] @NouamaneTazi will take care of the remaining comments regarding clearer variable names/docstrings, as he has a detailed understanding of this model.<|||||>> @LysandreJik (and possibly @Narsil) help me out regarding making the CI green for a model that only runs on GPU. Should we define a custom CI job for this particular model?
We have a `require_torch_gpu` decorator. Would it help in that case? We could add it to the model tester as a whole, if the model needs GPU to run.<|||||>@Narsil there's an issue with the pipeline tests, I added `DeformableDetrForObjectDetection` to the object detection mapping, but this model requires the custom CUDA kernel to be run.
Also, CircleCI reports the following:
```
Traceback (most recent call last):
File "utils/check_repo.py", line 764, in <module>
check_repo_quality()
File "utils/check_repo.py", line 753, in check_repo_quality
check_models_are_in_init()
File "utils/check_repo.py", line 305, in check_models_are_in_init
for module in get_model_modules():
File "utils/check_repo.py", line 267, in get_model_modules
modeling_module = getattr(model_module, submodule)
File "/home/circleci/.local/lib/python3.7/site-packages/transformers/utils/import_utils.py", line 866, in __getattr__
value = self._get_module(name)
File "/home/circleci/.local/lib/python3.7/site-packages/transformers/utils/import_utils.py", line 883, in _get_module
) from e
RuntimeError: Failed to import transformers.models.deformable_detr.modeling_deformable_detr because of the following error (look up to see its traceback):
[Errno 2] No such file or directory: '/home/circleci/.local/lib/python3.7/site-packages/transformers/models/deformable_detr/custom_kernel/vision.cpp'
```
I might need some help with this.<|||||>> @Narsil there's an issue with the pipeline tests, I added DeformableDetrForObjectDetection to the object detection mapping, but this model requires the custom CUDA kernel to be run.
The generic tests will always run the model on CPU, so the best way is to discard this model from the test.
Doing `if isinstance(pipeline.models, Deformable...): self.skipTest("This model requires a custom CUDA kernel and is NOT implemented for CPU")` should be enough IMO (we know how to update later when needed).
I would also add a slow GPU test that tries to use the pipeline directly if that's OK for the CI.
```
@require_gpu
@require_torch
@slow
def test_slow(self):
pipe = pipeline(model="hf-internal-testing/....", device=0)
out = pipe(....)
self.assertEqual(out, {....})
```
Does that make sense ? If it's hard to have a GPU test (not sure we ever call those anyway for pipelines, no @LysandreJik then we can do without but even if it's not auto tested there's value in creating the test IMO (it will run on local machines that try to run the test)<|||||>As for the missing file, It's probably because the `setup.py` doesn't properly include the file when installing `transformers`.
I don't really have good pointers for that since you seem to have added the correct line. The main advice would be to do
`python -m build` and looking at the output to check that the proper `.cpp`, `.h` `.cuh` are properly included in the build folder. (Installing from source with `pip install -e .` won't work as it always copy all the files I think so you won't see how the built version fails, maybe it does I am unsure)<|||||>OK, so looking at why the custom kernel fails to build:
```
_ ERROR collecting tests/models/deformable_detr/test_modeling_deformable_detr.py _
src/transformers/utils/import_utils.py:893: in _get_module
return importlib.import_module("." + module_name, self.__name__)
/usr/local/lib/python3.7/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1006: in _gcd_import
???
<frozen importlib._bootstrap>:983: in _find_and_load
???
<frozen importlib._bootstrap>:967: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:677: in _load_unlocked
???
<frozen importlib._bootstrap_external>:728: in exec_module
???
<frozen importlib._bootstrap>:219: in _call_with_frames_removed
???
src/transformers/models/deformable_detr/modeling_deformable_detr.py:49: in <module>
MSDA = load_cuda_kernels()
src/transformers/models/deformable_detr/load_custom.py:45: in load_cuda_kernels
"-D__CUDA_NO_HALF2_OPERATORS__",
../.local/lib/python3.7/site-packages/torch/utils/cpp_extension.py:1156: in load
keep_intermediates=keep_intermediates)
../.local/lib/python3.7/site-packages/torch/utils/cpp_extension.py:1367: in _jit_compile
is_standalone=is_standalone)
../.local/lib/python3.7/site-packages/torch/utils/cpp_extension.py:1438: in _write_ninja_file_and_build_library
verify_ninja_availability()
../.local/lib/python3.7/site-packages/torch/utils/cpp_extension.py:1494: in verify_ninja_availability
raise RuntimeError("Ninja is required to load C++ extensions")
E RuntimeError: Ninja is required to load C++ extensions
```
This occurs quite often. The build is missing `ninja`.
Try adding `pip install ninja` to the CircleCI job workflow and see if it solves the problem. Please ping me if it doesn't.<|||||>Additionally, if we start having custom cuda kernels that are enabled by default we must include `ninja` in our main python dependencies in `setup.py`.<|||||>so installing ninja did the trick of overcoming the initial hurdle. as commented above - if we make it work it should go into `setup.py`'s dependencies and not the job file - but for now this is good enough while we figure out how to make it work.
Now it's failing:
```
E OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
```
because CircleCI is cpu-only and doesn't have `cuda` installed by default.
Basically your custom cuda kernel requires `cuda` installed to build. You don't have to have a gpu to build it, but it needs to be installed.
@ydshieh, do you by chance know if we are planning to get `cuda` installed on CircleCI? it's easy to do via `apt` directly from nvidia with .deb packages. Except it's not fast if it's reinstalled on every job run.
@NielsRogge, does this model work on CPU at all? i.e. is there a fallback to non-custom kernel in the absense of GPUs? If it is then the code should be modified to verify if there is a CUDA environment available and if it's not available not to load the custom kernel and everything will just work.
<|||||>The model only runs on GPU and requires the custom kernel. The authors do provide a CPU version [here](https://github.com/fundamentalvision/Deformable-DETR/blob/11169a60c33333af00a4849f1808023eba96a931/models/ops/functions/ms_deform_attn_func.py#L41), but it's for "debugging and testing purposes only".<|||||>The current CircleCI jobs use the docker image `circleci/python:3.7`. If we decide to install `cuda`, I think we can build a custom docker image based on it.<|||||>If it is not too much work to make running on both CPU/GPU work (considering the authors provide some implementation), I would advocate doing it - also mainly for "debugging and testing purposes only".<|||||>> If it is not too much work to make running on both CPU/GPU work (considering the authors provide some implementation), I would advocate doing it - also mainly for "debugging and testing purposes only".
Hmm I looked into the code, the problem is that their CPU version doesn't accept 2 arguments (`level_start_index` and `im2col_step`) which the CUDA version has, and are required for correct computation. Hence, I don't think it's possible to have a CPU version of it in the library. The authors also explicitly [indicate](https://github.com/fundamentalvision/Deformable-DETR/blob/11169a60c33333af00a4849f1808023eba96a931/models/ops/src/cpu/ms_deform_attn_cpu.cpp#L26) that the layer isn't implemented on CPU.<|||||>1. OK, so if the CPU version is not the same then we won't be testing the actual modeling code - not a good idea. let's stick to testing the actual GPU modeling code.
2. You're setting a new precedent with this model, @NielsRogge - so we need to decide how to deal with such models, so let's bring @LysandreJik and @sgugger to this discussion - I wonder if we should perhaps discuss this in a separate RFC Issue since it will probably impact other similar models in the future.
But we need:
a. the modeling files not fail on `import` in an environment that lacks `cuda` installed- so probably either using the earlier suggestion of moving the model loading into `__init__` (less ideal) or using `try/except` and recovering gracefully if cuda env is not availble.
b. the tests for such model should all be decorated with `@require_torch_gpu` - so it might be tricky with common tests - I wonder if perhaps decorating the test class with `@require_torch_gpu` would do the trick.
c. the testing will have to happen on our CI that has GPUs. which means no "real-time" testing.<|||||>> b. the tests for such model should all be decorated with @require_torch_gpu - so it might be tricky with common tests - I wonder if perhaps decorating the test class with @require_torch_gpu would do the trick.
I've done this as seen here: https://github.com/NielsRogge/transformers/commit/ec61d727615d9cff93df59adfd3dd40091401658.<|||||>Pinging @Narsil regarding excluding this model from the pipeline tests.<|||||>Hi @NielsRogge ,
The best location to do this is in `tests/pipelines/test_pipelines_xxxx.py` and simply add some logic in `get_test_pipeline` function.
But the tests currently seem to be passing, so is this really necessary ?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>PR is ready for review, by adding the model to the mappings this happens:
```
ERROR tests/pipelines/test_pipelines_feature_extraction.py - RecursionError: ...
ERROR tests/pipelines/test_pipelines_object_detection.py - RecursionError: ma...
!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!!
```<|||||>@sgugger that didn't seem to fix the recursion error.<|||||>I never said it would.
Since you asked so nicely, I investigated and found the fix. I don't seem to have the rights to push on your branch so I made a PR [here](https://github.com/NielsRogge/transformers/pull/42).
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge Shouldn't we re-open ? This closing was slightly agressive wasn't it ?<|||||>Yes, PR should be close to merge. Hoping to merge this week.
PS: CPU implementation is added, model doesn't require GPU anymore :D <|||||>Hi @NielsRogge . I am following the finetuning notebook for [DETR object detection](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb).
You have mentioned that DeformableDETR follows mostly same API. But I noticed that model based on `DeformableDetrForObjectDetection` doesn't automatically add +1 to number classes.
Also, for the feature extractor, I am confused about whether we should use `AutoImageProcessor`, as per the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/deformable_detr#transformers.DeformableDetrForObjectDetection.forward.example), or `DeformableDetrFeatureExtractor` instead.
To add further, I was wondering if we could add the augmentation that the original paper uses, taken from the official repo. I managed to add augmentation based on functions available in the official Deformable-DETR repo, but I am not sure of its correctness. |
transformers | 17,280 | closed | [ConvNeXT] Fix drop_path_rate | # What does this PR do?
As pointed out by #16699, the drop path rate attribute of `ConvNextConfig` wasn't implemented correctly.
This PR fixes that, for both the PyTorch and Tensorflow implementations. | 05-16-2022 12:48:15 | 05-16-2022 12:48:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger ok for you? |
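For context on the attribute being fixed: `drop_path_rate` controls stochastic depth, i.e. randomly skipping a residual branch per sample during training. A generic reference sketch of the technique (an illustration only, not the PR's actual diff) looks roughly like this:
```python
import torch

def drop_path(x: torch.Tensor, drop_prob: float, training: bool) -> torch.Tensor:
    """Randomly drop the residual branch for each sample with probability drop_prob."""
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    # One mask value per sample, broadcast over the remaining dimensions.
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    mask = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
    mask.floor_()  # binarize to 0/1
    return x.div(keep_prob) * mask
```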
transformers | 17,279 | closed | Change `config.encoder_ffn_dim` to `config.decoder_ffn_dim` for decoder impl | Change `config.encoder_ffn_dim` to `config.decoder_ffn_dim` for decoder.
These typos were detected when loading a Flax model from a PyTorch checkpoint where `encoder_ffn_dim` != `decoder_ffn_dim`.
# What does this PR do?
This PR fixes typos; these typos are critical!!!
@patrickvonplaten @patil-suraj
| 05-16-2022 10:23:29 | 05-16-2022 10:23:29 | _The documentation is not available anymore as the PR was closed or merged._ |
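For readers skimming the change, the intent of the fix can be sketched as follows; `DecoderFeedForward` is a made-up name for illustration, and the config fields follow the BART-style naming mentioned in the PR:
```python
import torch.nn as nn

class DecoderFeedForward(nn.Module):
    def __init__(self, config):
        super().__init__()
        # The decoder's hidden expansion must use decoder_ffn_dim; using
        # encoder_ffn_dim breaks weight shapes whenever the two values differ.
        self.fc1 = nn.Linear(config.d_model, config.decoder_ffn_dim)
        self.fc2 = nn.Linear(config.decoder_ffn_dim, config.d_model)

    def forward(self, hidden_states):
        return self.fc2(self.fc1(hidden_states).relu())
```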
transformers | 17,278 | closed | `LayoutXLMProcessor` returns unexpected `offset_mapping` | ### System Info
```shell
- `transformers` version: 4.19.1
- Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
1. docker run -it --rm --entrypoint bash python
2. python3 -m pip install pip --upgrade
3. python3 -m pip install torch tensorflow pillow transformers
4.
from transformers import LayoutXLMProcessor
from PIL import Image
import numpy as np
processor = LayoutXLMProcessor.from_pretrained("microsoft/layoutxlm-base", apply_ocr=False)
out = processor(
text=["Hello", "there", "7.0", "Koningshof", "General", "Kenobi"],
images=[Image.new(mode='RGB',size=(200, 200))],
boxes=[[1,2,3,4] for _ in range(6)],
return_offsets_mapping=True
)
reverted = processor.tokenizer.convert_ids_to_tokens(out.input_ids)
print(reverted)
is_start_of_word = np.asarray(out.offset_mapping)[:, 0] == 0
print(list(zip(reverted, is_start_of_word)))
```
prints
```[('<s>', True), ('โHello', True), ('โthere', True), ('โ', True), ('7.0', True), ('โKoning', True), ('s', False), ('hof', False), ('โGeneral', True), ('โKen', True), ('obi', False), ('</s>', True)]```
### Expected behavior
```shell
The token `"7.0"` gets converted to `('โ', True), ('7.0', True)`.
I would expect that the token `"7.0"` stays one token `('โ7.0', True)`
or that it gets converted to `('โ', True), ('7.0', False)`
Questions:
- It could very well be that this _is_ expected behavior. Is it?
- If not, what conversion is LayoutXLM expecting? What is the correct way to convert?
```
| 05-16-2022 09:57:17 | 05-16-2022 09:57:17 | @NielsRogge Pick me pick me pick me!<|||||>Pinging @SaulLu here as she might have a better clue regarding the tokenization. For context, LayoutXLM uses the same tokenization as XLMRoBERTa. <|||||>Hi @fredo838 ,
Thank you very much for the detailed issue! Quite a few things seem to come into play here.
@NielsRogge , do you know if it is expected that LayoutXLM adds prefix space automatically? I think it is but it's best to be sure.
If so, I think the tokenization of `"7.0" -> ['โ', '7.0']` with `LayoutXLM` is correct: the original tokenizer chosen by `LayoutXLM` is a trained model with sentencepiece with the `split_by_number: true` setting.
On the other hand, I agree that the offsets seem odd. To simplify things a bit (to help fix things in the future), here is a snippet that shows the behavior:
```python
from transformers import XLMRobertaTokenizerFast

tokenizer = XLMRobertaTokenizerFast.from_pretrained("microsoft/layoutxlm-base")
texts = [" 7.0", "7.0", " hello", "hello"]
encoding = tokenizer(texts, return_offsets_mapping=True, add_special_tokens=False)
for text, input_ids, offsets in zip(texts, encoding.input_ids, encoding.offset_mapping):
print(
repr(text),
tokenizer.convert_ids_to_tokens(input_ids),
offsets,
[text[start:end] for start,end in offsets]
)
# ' 7.0' ['โ', '7.0'] [(1, 2), (1, 4)] ['7', '7.0'] <- looks weird
# '7.0' ['โ', '7.0'] [(0, 1), (0, 3)] ['7', '7.0'] <- looks weird
# ' hello' ['โhell', 'o'] [(1, 5), (5, 6)] ['hell', 'o'] <- looks good
# 'hello' ['โhell', 'o'] [(0, 4), (4, 5)] ['hell', 'o'] <- looks good
```
Nevertheless, to advance on the resolution of the problem, I think we should discuss this on the side of the [tokenizers](https://github.com/huggingface/tokenizers) repo, because the offsets are calculated by the backend tokenizer, which is an instance of this library. I still have in mind 2 issues (https://github.com/huggingface/tokenizers/issues/852 and https://github.com/huggingface/tokenizers/issues/843) that were related to offsets, but it seems to me that this is a new case; would you like to open an issue on the tokenizers repo too?<|||||>https://github.com/huggingface/tokenizers/issues/1006 I posted the issue in the tokenizers repo
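As a possible workaround for the original question of detecting word starts, one could rely on `word_ids()` instead of `offset_mapping`. This is only a suggestion, and it assumes the fast tokenizer backend that the processor loads by default:
```python
from PIL import Image
from transformers import LayoutXLMProcessor

processor = LayoutXLMProcessor.from_pretrained("microsoft/layoutxlm-base", apply_ocr=False)
words = ["Hello", "there", "7.0", "Koningshof", "General", "Kenobi"]
encoding = processor(
    text=words,
    images=[Image.new(mode="RGB", size=(200, 200))],
    boxes=[[1, 2, 3, 4] for _ in range(6)],
)
word_ids = encoding.word_ids(0)  # None for special tokens, else the index of the input word
is_start_of_word = [
    wid is not None and (i == 0 or wid != word_ids[i - 1])
    for i, wid in enumerate(word_ids)
]
tokens = processor.tokenizer.convert_ids_to_tokens(encoding.input_ids)
print(list(zip(tokens, is_start_of_word)))
```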
transformers | 17,277 | closed | Issue 17128 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17128
## Before submitting
- [N/A] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Here's the [link](https://github.com/huggingface/transformers/issues/17128)
- [N/A] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests? I didn't write a custom test. Ran the following commands to ensure the local tests pass:
1. `RUN_PIPELINE_TESTS=yes python -m unittest discover -s tests/pipelines -p "test_pipelines_question_answering.py" -t . -v -f `
2. `python -m unittest discover -s . -p "test_tokenization_wav2vec2.py" -t . -v -f`
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@LysandreJik
| 05-16-2022 09:36:11 | 05-16-2022 09:36:11 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_17277). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you, Narsil. I don't have write access. So, please merge to `main`.<|||||>I will ping and wait for a second pair of eyes from a core maintainer.
@LysandreJik can you take a look?<|||||>Looks good @mygithubid1! I see there are some blank line changes in `tokenization_utils_base.py`. Could you revert those?<|||||>Please ignore this. My mistake. |
transformers | 17,276 | closed | Remove next sentence prediction from supported ONNX tasks | # What does this PR do?
This PR removes the `next-sentence-prediction` feature that was added in https://github.com/huggingface/transformers/pull/17029 as part of the MobileBERT ONNX export.
It turns out that the `forward()` method of MobileBERT and BERT includes `kwargs`, which is not supported with PyTorch's ONNX exporter. Since this feature is unlikely to be used for inference, the simplest solution is to remove it.
With this change, the ONNX slow tests all pass. | 05-16-2022 09:34:59 | 05-16-2022 09:34:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,275 | closed | Fixes #17128 . | VisibleDeprecationWarning is addressed by specifying dtype=object when creating numpy array.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-16-2022 08:59:47 | 05-16-2022 08:59:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,274 | closed | Add pipeline for cross-modal / uni-modal ranking | ### Feature request
Given queries and keys, the proposed pipeline returns a ranked list of keys that are most similar to each respective query.
This pipeline should support uni-modal and cross-modal retrieval, i.e.
- Text-to-Text
- Text-to-Image
- Image-to-Text
- Image-to-Image
Prominent use cases would be:
- Using BERT family of models to perform text-to-text retrieval
- Using multi-modal models such as CLIP to perform any of the retrieval methods above. There can be multiple ranking methods for different multi-modal models, for instance
- For VILT, we can use [CLS] pooled image-text matching score for ranking
- For CLIP, we can use logits_per_modality for cross-modal similarity score for ranking
- For ALBEF (https://github.com/huggingface/transformers/issues/17224), we have a two-stage (coarse-to-fine) ranking (image-text similarity -> [CLS] pooled image-text matching score)
### Motivation
I was looking for a use-case for CLIP for cross-modal retrieval, but the current pipeline for CLIP does not seem to support cross-modal retrieval. I believe there is a demand for this pipeline.
### Your contribution
- I can help with the implementation once we polish the parameter definitions and outputs! | 05-16-2022 08:03:37 | 05-16-2022 08:03:37 | I think there are some image-text retrieval capabilities already, such as [ViltForImageAndTextRetrieval](https://huggingface.co/docs/transformers/model_doc/vilt#transformers.ViltForImageAndTextRetrieval). But these can only work for a toy example set of queries and keys due to their interaction-based nature. A true retrieval (cross-modal or not) would probably need `datasets` and `faiss`. I think that may be too complicated for a `pipeline`?<|||||>- I believe there is no unified pipeline for cross-modal search that can be applied to different models.
- Thank you for clarifying. What I had in mind was more of a ranking than a retrieval since I do not expect to have index search baked into the pipeline.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
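To make the requested behavior concrete, here is a rough sketch of ranking candidate texts against an image with CLIP's existing outputs; the checkpoint name and the ranking logic are illustrative assumptions, not a proposed API:
```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new(mode="RGB", size=(224, 224))  # placeholder image
candidates = ["a photo of a cat", "a photo of a dog", "a diagram"]

inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

scores = outputs.logits_per_image[0]  # image-to-text similarity scores
ranking = torch.argsort(scores, descending=True).tolist()
print([candidates[i] for i in ranking])
```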
transformers | 17,273 | closed | How to input word2vec embeddings to gpt2 model? | Hi,
I am working on the huggingface gpt2 model. I have a word2vec model trained on a dataset (dimensions similar to gpt2, 768). Now I want to input these embeddings to gpt2. I understand I must use inputs_embeds to input the embeddings but I am a little unclear about how exactly to do it. Any source or help would be appreciated. | 05-16-2022 07:54:14 | 05-16-2022 07:54:14 | Hi @tejaravi675 👋 Yes, you can pass embeddings to the model as you described (through `inputs_embeds`), but the embeddings will be unknown to the model, as it was not trained with them. In essence, you will have to build a script to finetune the model using your embeddings (for GPT-2, with the causal language modeling task). You can see a few examples [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) (pytorch) and [here](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling) (tensorflow).
As per our [issues guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md), we reserve GitHub issues for bugs in the repository and/or feature requests. For any other requests, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) ๐ค I'm closing this issue, but feel free to reopen with queries that fit the criteria I described.<|||||>Hi, I understand that I have to finetune the model with my embeddings. My problem is I am unable to understand how exactly to code (Not very handy with complex coding). So I wanted to know if there were any examples which I can refer to to understand the finetuning part and understand how to add my word embeddings to gpt2 model.<|||||>The examples I linked above are the closest we have to the task you are describing, but they require some modification to run your use case :) Sadly, we don't have the capacity to further help you with your task -- try in the forums, maybe some other user tried to do a similar thing. |
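A minimal mechanical sketch of the `inputs_embeds` argument discussed above; the random tensor stands in for the user's word2vec vectors, and, as noted, fine-tuning is still required for the model to make use of them:
```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

# Placeholder for precomputed word2vec vectors: (batch, seq_len, hidden_size).
# Real word2vec vectors would need to live in the 768-dim space the model expects.
embeddings = torch.randn(1, 5, 768)

outputs = model(inputs_embeds=embeddings)
print(outputs.logits.shape)  # -> torch.Size([1, 5, 50257])
```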
transformers | 17,272 | closed | Fix wrong PT/TF categories in CI report | # What does this PR do?
Current `notification_service.py` has
```
if re.search("_tf_", line):
model_results[model]["failed"]["TensorFlow"][artifact_path["gpu"]] += 1
```
which will put all `test_pt_tf_model_equivalence` under `TensorFlow` even if it comes from the PT (cross) tests, like
```
tests/models/albert/test_modeling_albert.py::AlbertModelTest::test_pt_tf_model_equivalence
```
This PR fixes this issue. | 05-16-2022 06:24:09 | 05-16-2022 06:24:09 | _The documentation is not available anymore as the PR was closed or merged._ |
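A rough illustration of the kind of disambiguation needed here (an assumption about the approach, not the actual patch): classify by the test file path rather than matching `_tf_` anywhere in the line.
```python
# Hypothetical helper: the framework is determined by the modeling test file,
# not by substrings that also appear in cross-framework test names.
def framework_for(test_line: str) -> str:
    if "/test_modeling_tf_" in test_line:
        return "TensorFlow"
    if "/test_modeling_flax_" in test_line:
        return "Flax"
    return "PyTorch"

print(framework_for("tests/models/albert/test_modeling_albert.py::AlbertModelTest::test_pt_tf_model_equivalence"))
# -> PyTorch
```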
transformers | 17,271 | closed | Add TFData2VecVision for semantic segmentation | This PR introduces `TFData2VecVisionForSemanticSegmentation` which takes the `TFData2VecVisionMainLayer` and appends the necessary layers for performing semantic segmentation along with loss computation (first one in this line?).
**Notes**
* Thanks to @Rocketknight1 who implemented the adaptive average pooling layer.
* Currently, the model saving tests (2 tests) are failing as soon as the `TFData2VecVisionForSemanticSegmentation` class is introduced to `tests/models/test_modeling_tf_data2vec_vision.py`. Without that class, the tests run as expected. I would appreciate any help.
* As discussed over Slack, [this class](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_vision.py#L882) should never have been subclassed from `nn.ModuleList`. It is currently leading to a few idiosyncrasies on the TF side (mainly related to the naming of the layers). Once that is sorted out, we can revisit this `TFData2VecVisionForSemanticSegmentation` class again and make the amendments if needed. Happy to take charge then.
* I ran the tests locally with the following command: `RUN_SLOW=1 python -m pytest tests/models/data2vec/test_modeling_tf_data2vec_vision.py`.
Here's the trace of the errors from running tests:
```
model = model_class(config)
model(self._prepare_for_class(inputs_dict, model_class)) # Model must be called before saving.
# Let's load it from the disk to be sure we can use pretrained weights
with tempfile.TemporaryDirectory() as tmpdirname:
> model.save_pretrained(tmpdirname, saved_model=False)
tests/test_modeling_tf_common.py:693:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/modeling_tf_utils.py:1513: in save_pretrained
self.save_weights(output_model_file)
../../.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/keras/utils/traceback_utils.py:67: in error_handler
raise e.with_traceback(filtered_tb) from None
../../.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/h5py/_hl/group.py:149: in create_dataset
dsid = dataset.make_new_dset(group, shape, dtype, data, name, **kwds)
../../.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/h5py/_hl/dataset.py:142: in make_new_dset
dset_id = h5d.create(parent.id, name, tid, sid, dcpl=dcpl)
h5py/_objects.pyx:54: in h5py._objects.with_phil.wrapper
???
h5py/_objects.pyx:55: in h5py._objects.with_phil.wrapper
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E ValueError: Unable to create dataset (name already exists)
h5py/h5d.pyx:87: ValueError
...
outputs = model(self._prepare_for_class(inputs_dict, model_class))
with tempfile.TemporaryDirectory() as tmpdirname:
> model.save_pretrained(tmpdirname, saved_model=False)
tests/test_modeling_tf_common.py:175:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
src/transformers/modeling_tf_utils.py:1513: in save_pretrained
self.save_weights(output_model_file)
../../.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/keras/utils/traceback_utils.py:67: in error_handler
raise e.with_traceback(filtered_tb) from None
../../.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/h5py/_hl/group.py:149: in create_dataset
dsid = dataset.make_new_dset(group, shape, dtype, data, name, **kwds)
../../.local/bin/.virtualenvs/hf/lib/python3.8/site-packages/h5py/_hl/dataset.py:142: in make_new_dset
dset_id = h5d.create(parent.id, name, tid, sid, dcpl=dcpl)
h5py/_objects.pyx:54: in h5py._objects.with_phil.wrapper
???
h5py/_objects.pyx:55: in h5py._objects.with_phil.wrapper
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E ValueError: Unable to create dataset (name already exists)
h5py/h5d.pyx:87: ValueError
-------------------------------
```
Additionally, here's a little code for testing the segmentation class:
```py
from PIL import Image
import tensorflow as tf
from src.transformers.models.data2vec.modeling_tf_data2vec_vision import (
TFData2VecVisionForSemanticSegmentation
)
from transformers import BeitFeatureExtractor
def prepare_img():
image = Image.open("./tests/fixtures/tests_samples/COCO/000000039769.png")
return image
feature_extractor = BeitFeatureExtractor.from_pretrained(
"facebook/data2vec-vision-base-ft1k"
)
model = TFData2VecVisionForSemanticSegmentation.from_pretrained(
"facebook/data2vec-vision-base",
)
image = prepare_img()
inputs = feature_extractor(images=image, return_tensors="tf")
batch_size, num_channels, height, width = inputs["pixel_values"].shape
inputs["labels"] = tf.zeros((batch_size, height, width))
outputs = model(**inputs)
print(outputs.logits.shape)
print(outputs.loss.shape)
```
@Rocketknight1 @sgugger | 05-16-2022 06:21:12 | 05-16-2022 06:21:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks a lot for your PR! Note that on the pyramid pooling class, even if we change the PyTorch class to not subclass `ModuleList` anymore, it will still need to keep the same weight names, otherwise compatibility with any checkpoint on the Hub will be broken.
Absolutely. <|||||>@Rocketknight1 a gentle ping ๐<|||||>Ah, I'm sorry! Will review it by tomorrow.<|||||>Hi, I just took a look over this! I suspect the issue with the tests is that there's something like a layer name collision when saving. In h5 files, weights are saved as 'datasets' , so this error is telling us that the weights are not uniquely named - the same 'dataset' name is being written to twice during saving, which means two layers share the same name.<|||||>Yes, I suspected something similar but couldn't figure out where the duplicate is coming from. Do you have any suggestions?
@Rocketknight1 <|||||>I suspect the issue is most likely related to the implementation of AdaptiveAvgPool I wrote - the practice of precomputing a constant sparse matrix like that is non-standard, and TF might be trying to save that Tensor somehow. Can you try replacing it with a 'dummy' layer that has the same output shape and seeing if the error goes away? If so, I can work on a different implementation for the layer - I have some ideas that I think will improve performance a lot, and they might also resolve the problem too.<|||||>> Can you try replacing it with a 'dummy' layer that has the same output shape and seeing if the error goes away?
Sure. I will do it and get back. <|||||>@Rocketknight1 this is what I did:
https://github.com/sayakpaul/transformers/blob/f9292cf2c47baf7eb264c98c6189ae503930130f/src/transformers/models/data2vec/modeling_tf_data2vec_vision.py#L1102
Same issue. <|||||>@sayakpaul I used post-mortem debugging to isolate this - just add this to `TFData2VecVisionModelTest`:
```
def test_save_load(self):
try:
super().test_save_load()
except:
import pdb
pdb.post_mortem()
```
Then run the tests with `pytest --capture=no`. This will break into a debugger at the point of failure, and you can step up to the calling frame with `(u)p`.
From there, I can tell that the offending array has name `kernel:0` with shape `(1, 1, 32, 32)`, though I couldn't figure out exactly where it was. Is there a 1x1 conv2D in your code that maps 32 filters to 32 filters?<|||||>> From there, I can tell that the offending array has name kernel:0 with shape (1, 1, 32, 32), though I couldn't figure out exactly where it was. Is there a 1x1 conv2D in your code that maps 32 filters to 32 filters?
There are multiple 1x1 convs, yes. <|||||>> Then run the tests with pytest --capture=no. This will break into a debugger at the point of failure, and you can step up to the calling frame with (u)p.
Could you elaborate a bit more here? I have added the `pdb` snippet into the model tester code. Then I ran `RUN_SLOW=1 python -m pytest --capture=no tests/models/data2vec/test_modeling_tf_data2vec_vision.py`. I do get the pdb prompt and I get to `-> super().test_save_load()` as the oldest frame.
@Rocketknight1 <|||||>@sayakpaul I stepped up to the frame of `dsid = dataset.make_new_dset(group, shape, dtype, data, name, **kwds)`. This let me inspect the variable `name` and the `group`, but I didn't understand `h5py` well enough to figure out the exact weight causing the issue.<|||||>@Rocketknight1 I looked into the layers with `kernel_size=1` and tried to fix their names to use something that's suffixed with identifiers. You can find the commit [here](https://github.com/sayakpaul/transformers/commit/8ccf88bf6bcc054307faf58e9ca2b21e04c6e60b).
It still didn't resolve the issue. The only potential suspect I could find is the following: there are two layers, both named `classifier`, in `TFData2VecVisionForSemanticSegmentation`, added via `TFData2VecVisionUperHead` and `TFData2VecVisionFCNHead` respectively.
Thoughts? <|||||>Update:
With @Rocketknight1's help, I was able to resolve the current test failure (commit [here](https://github.com/sayakpaul/transformers/blob/fix/tf-data2vec-seg/src/transformers/models/data2vec/modeling_tf_data2vec_vision.py)). But I have run into two more failures which I am currently discussing with @Rocketknight1. He's on vacation. Once he gets back, hopefully, will be able to report back with updates. |
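For anyone debugging the same `Unable to create dataset (name already exists)` failure, a small diagnostic helper (a suggestion, not part of this PR) can list colliding weight names before saving:
```python
from collections import Counter

def duplicated_weight_names(model):
    """Return weight names that appear more than once (h5 saving needs them unique)."""
    counts = Counter(w.name for w in model.weights)
    return [name for name, n in counts.items() if n > 1]
```
Calling `print(duplicated_weight_names(model))` right before `save_pretrained` should point directly at the offending `kernel:0`-style duplicates.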
transformers | 17,270 | closed | Fix missing job action button in CI report | # What does this PR do?
Current CI reports lack the `GitHub Action Job` button, due to the recent changes in workflow files:
- `models/bert` -> `models_bert` (was done in artifact names, but not in the matrix)
- `[single|multi]-gpu-docker` -> `[single|multi]-gpu` (was done in `notification_service.py`, but not in scheduled CI workflow)
This PR fixes the issues by:
- Let the workflow files use `single-gpu` and `multi-gpu` as matrix and artifact names. Only adds `-docker` in `runs-on:` for scheduled CI.
- Add `model.replace('models_', 'models/')` at a proper place in `notification_service.py` | 05-16-2022 06:20:16 | 05-16-2022 06:20:16 | _The documentation is not available anymore as the PR was closed or merged._ |
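A tiny illustration of the mapping described in the second bullet (an assumption about where it lives, not the actual patch):
```python
# Hypothetical helper: turn artifact-style names back into model folder paths.
def artifact_to_model_folder(artifact_name: str) -> str:
    return artifact_name.replace("models_", "models/")

print(artifact_to_model_folder("models_bert"))  # -> models/bert
```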
transformers | 17,269 | closed | Use the PR URL in push CI report | # What does this PR do?
In the push CI report, change the URL from the (merged) commit page to the PR page (if that commit comes from a PR). | 05-16-2022 06:07:38 | 05-16-2022 06:07:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,268 | closed | Swin Transformer V2 | ### Model description
[Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/pdf/2111.09883.pdf)
repo origin: [Swin Transformer V2](https://github.com/microsoft/Swin-Transformer#updates)
repo timm: [Swin Transformer V2](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/swin_transformer_v2.py)
all the pretrained model weights are ready
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
repo origin: [Swin Transformer](https://github.com/microsoft/Swin-Transformer#updates) | 05-16-2022 05:03:22 | 05-16-2022 05:03:22 | Marking this as a good first issue as Swin v2 only adds a couple of small design improvements compared to Swin v1.
One could use the [add new-model-like](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command) feature to copy all Swin files, and then implement Swin v2 by tweaking these files. <|||||>Hey! Can I give it a shot?<|||||>@NielsRogge I would like to add this model. <|||||>Hi :) sure, maybe you can give me your email addresses such that we can set up a Slack channel for coordination.<|||||>> Hi :) sure, maybe you can give me your email addresses such that we can set up a Slack channel for coordination.
Here is mine : [email protected]<|||||>Hey @NielsRogge, I'd like to help out as well. My email is [email protected]<|||||>my email is [email protected]<|||||>Thanks, I'll create one. You should receive an invite later today<|||||>Hi all is this work complete, I'd love to help if possible. |
transformers | 17,267 | closed | ### System Info | ### Cursor colour
```shell
I'm running transformers installed directly from `ee393c0`.
```
### Who can help?
@NielsRogge @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
import transformers
feature_extractor = transformers.AutoFeatureExtractor.from_pretrained("facebook/regnet-y-040")
If I don't have pillow installed, the second line fails with:
AttributeError: module transformers.models.convnext has no attribute ConvNextFeatureExtractor
If I then run `pip install pillow`, everything works as expected.
### Expected behavior
```shell
The feature extractor should be loaded successfully.
```
__Originally posted by @eric-mitchell in https://github.com/huggingface/transformers/issues/17266__ | 05-16-2022 00:59:40 | 05-16-2022 00:59:40 | |
transformers | 17,266 | closed | Loading facebook/regnet-y-040 FeatureExtractor fails mysteriously unless pillow is installed | ### System Info
```shell
I'm running transformers installed directly from `ee393c0`.
```
### Who can help?
@NielsRogge @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
import transformers
feature_extractor = transformers.AutoFeatureExtractor.from_pretrained("facebook/regnet-y-040")
If I don't have pillow installed, the second line fails with:
AttributeError: module transformers.models.convnext has no attribute ConvNextFeatureExtractor
If I then run `pip install pillow`, everything works as expected.
### Expected behavior
```shell
The feature extractor should be loaded successfully.
```
| 05-16-2022 00:06:18 | 05-16-2022 00:06:18 | Will look into this today. There should be a way to indicate the missing dependencies when the object is not found by using our dummy objects. |
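A rough sketch of the behavior hinted at in the last comment (a guess at the intent, not the fix that shipped): check for the optional dependency and fail with an actionable message instead of a bare `AttributeError`.
```python
import importlib.util

def require_vision(symbol_name: str) -> None:
    """Raise a clear error when Pillow is missing (hypothetical guard, for illustration)."""
    if importlib.util.find_spec("PIL") is None:
        raise ImportError(
            f"{symbol_name} requires the Pillow library, but it was not found in your "
            "environment. You can install it with `pip install pillow`."
        )
```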
transformers | 17,265 | closed | OSError Directory not empty error in Trainer.py on checkpoint replacement | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: 4
- Using distributed or parallel set-up in script?: deepspeed
```
### Who can help?
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Create a txt file of sentences.
Ran run_clm.py with the following parameters:
deepspeed --num_gpus=4 run_clm.py --deepspeed ds_config_gptj6b.json --model_name_or_path EleutherAI/gpt-j-6B --train_file Jesus_sayings.txt --do_train --fp16 --overwrite_cache --evaluation_strategy=steps --output_dir ~/gpt-j/finetuned --num_train_epochs 5 --eval_steps 1 --gradient_accumulation_steps 32 --per_device_train_batch_size 1 --use_fast_tokenizer False --learning_rate 5e-06 --warmup_steps 10 --save_total_limit 2 --save_steps 1 --save_strategy steps --tokenizer_name gpt2
Error traceback:
```
[INFO|modeling_utils.py:1546] 2022-05-15 18:25:49,903 >> Model weights saved in /home/ubuntu/gpt-j/finetuned/checkpoint-3/pytorch_model.bin
[INFO|tokenization_utils_base.py:2108] 2022-05-15 18:25:49,911 >> tokenizer config file saved in /home/ubuntu/gpt-j/finetuned/checkpoint-3/tokenizer_config.json
[INFO|tokenization_utils_base.py:2114] 2022-05-15 18:25:49,917 >> Special tokens file saved in /home/ubuntu/gpt-j/finetuned/checkpoint-3/special_tokens_map.json
[2022-05-15 18:26:00,522] [INFO] [engine.py:3177:save_16bit_model] Saving model weights to /home/ubuntu/gpt-j/finetuned/checkpoint-3/pytorch_model.bin
[2022-05-15 18:26:26,263] [INFO] [logging.py:69:log_dist] [Rank 0] Saving model checkpoint: /home/ubuntu/gpt-j/finetuned/checkpoint-3/global_step3/zero_pp_rank_0_mp_rank_00_model_states.pt
[2022-05-15 18:27:44,462] [INFO] [engine.py:3063:_save_zero_checkpoint] zero checkpoint saved /home/ubuntu/gpt-j/finetuned/checkpoint-3/global_step3/zero_pp_rank_0_mp_rank_00_optim_states.pt
[INFO|trainer.py:2424] 2022-05-15 18:27:46,523 >> Deleting older checkpoint [/home/ubuntu/gpt-j/finetuned/checkpoint-1] due to args.save_total_limit
Traceback (most recent call last):
File "run_clm.py", line 575, in <module>
main()
File "run_clm.py", line 523, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1320, in train
return inner_training_loop(
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1634, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1805, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1964, in _save_checkpoint
self._rotate_checkpoints(use_mtime=True, output_dir=run_dir)
File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2425, in _rotate_checkpoints
shutil.rmtree(checkpoint)
File "/usr/lib/python3.8/shutil.py", line 718, in rmtree
_rmtree_safe_fd(fd, path, onerror)
File "/usr/lib/python3.8/shutil.py", line 659, in _rmtree_safe_fd
onerror(os.rmdir, fullname, sys.exc_info())
File "/usr/lib/python3.8/shutil.py", line 657, in _rmtree_safe_fd
os.rmdir(entry.name, dir_fd=topfd)
OSError: [Errno 39] Directory not empty: 'global_step1'
4%|โโโ | 3/70 [21:59<8:11:00, 439.71s/it]
[2022-05-15 18:27:50,264] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 78507
[2022-05-15 18:27:50,265] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 78508
[2022-05-15 18:27:50,265] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 78509
[2022-05-15 18:27:50,266] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 78510
[2022-05-15 18:27:50,267] [ERROR] [launch.py:184:sigkill_handler] ['/usr/bin/python3', '-u', 'run_clm.py', '--local_rank=3', '--deepspeed', 'ds_config_gptj6b.json', '--model_name_or_path', 'EleutherAI/gpt-j-6B', '--train_file', 'Jesus_sayings.txt', '--do_train', '--fp16', '--overwrite_cache', '--evaluation_strategy=steps', '--output_dir', '/home/ubuntu/gpt-j/finetuned', '--num_train_epochs', '5', '--eval_steps', '1', '--gradient_accumulation_steps', '32', '--per_device_train_batch_size', '1', '--use_fast_tokenizer', 'False', '--learning_rate', '5e-06', '--warmup_steps', '10', '--save_total_limit', '2', '--save_steps', '1', '--save_strategy', 'steps', '--tokenizer_name', 'gpt2'] exits with return code = 1
```
### Expected behavior
```shell
Should delete old checkpoint without error.

Workaround:
Changed trainer.py line 2425 to

    shutil.rmtree(checkpoint, ignore_errors=True)

This causes the program to run without error but leaves behind ghost checkpoint directories with no content. Though these are gradually pruned.
```
| 05-15-2022 19:38:52 | 05-15-2022 19:38:52 | Thanks for the report! That sounds like a reasonable fix. Do you want to make a PR with it?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>What's the status of this? Is there a workaround without editing the source?<|||||>No PR was raised to fix it, you should go ahead if you want to contribute :-) |
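For reference, the workaround discussed in this thread boils down to something like the following (a sketch of the reporter's suggestion, not necessarily the eventual upstream change):
```python
import shutil

def delete_old_checkpoint(checkpoint_dir: str) -> None:
    # ignore_errors=True tolerates another process still flushing files into the
    # directory; leftover (now empty) directories can be pruned on a later rotation.
    shutil.rmtree(checkpoint_dir, ignore_errors=True)
```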
transformers | 17,264 | closed | Problem with Adding LayerNorm after BART's Encoder for Summarization | ### System Info
```shell
- `transformers` version: 4.19.1
- Platform: Linux-5.4.0-109-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: tried both distributed and 1gpu. I also tried deepspeed and full 32 precision.
```
### Who can help?
@patrickvonplaten @patil-suraj
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to add additional layers/encoders after the BARTEncoder that involves all the self attention and layernorm layers, and after debugging I find that whenever I call the layernorm, the model cannot give reasonable rouge at test time. Here is the minimal reproduction code.
1. I used the `examples/pytorch/summarization/run_summarization.py`. The changes I make (which I think are harmless is commenting the version requirement and calling my own Model BARTForConditionalGenerationTest (which I am pasting below). So the change is `model = BARTForConditionalGenerationTest.from_pretrained(` instead of `model = AutoModelForSeq2SeqLM.from_pretrained(`
2. The testing model adds the self attention+layernorm module, which I copied directly from [BartEncoderLayer](https://github.com/huggingface/transformers/blob/a22db885b41b3a1b302fc206312ee4d99cdf4b7c/src/transformers/models/bart/modeling_bart.py#L284):
```
import torch
import torch.nn as nn
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
from dataclasses import dataclass
from typing import Optional, Tuple
from transformers.models.bart.modeling_bart import (
BartForConditionalGeneration,
BartModel,
BartDecoder,
BartEncoder,
BartAttention,
shift_tokens_right,
_expand_mask,
)
from transformers.activations import ACT2FN
from transformers.modeling_outputs import (
Seq2SeqModelOutput,
Seq2SeqLMOutput,
BaseModelOutput,
)
class BARTModelTest(BartModel):
def __init__(self, config):
super().__init__(config)
# additional layer to showcase the layernorm issue
self.embed_dim = config.d_model
self.self_attn = BartAttention(
embed_dim=self.embed_dim,
num_heads=config.encoder_attention_heads,
dropout=config.attention_dropout,
)
self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim)
self.dropout = config.dropout
self.activation_fn = ACT2FN[config.activation_function]
self.activation_dropout = config.activation_dropout
self.fc1 = nn.Linear(self.embed_dim, config.encoder_ffn_dim)
self.fc2 = nn.Linear(config.encoder_ffn_dim, self.embed_dim)
self.final_layer_norm = nn.LayerNorm(self.embed_dim)
self.post_init()
def forward(
self,
input_ids=None,
attention_mask=None,
decoder_input_ids=None,
decoder_attention_mask=None,
head_mask=None,
decoder_head_mask=None,
cross_attn_head_mask=None,
encoder_outputs=None,
past_key_values=None,
inputs_embeds=None,
decoder_inputs_embeds=None,
use_cache=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
# different to other models, Bart automatically creates decoder_input_ids from
# input_ids if no decoder_input_ids are provided
if decoder_input_ids is None and decoder_inputs_embeds is None:
if input_ids is None:
raise ValueError(
"If no `decoder_input_ids` or `decoder_inputs_embeds` are "
"passed, `input_ids` cannot be `None`. Please pass either "
"`input_ids` or `decoder_input_ids` or `decoder_inputs_embeds`."
)
decoder_input_ids = shift_tokens_right(
input_ids, self.config.pad_token_id, self.config.decoder_start_token_id
)
output_attentions = (
output_attentions
if output_attentions is not None
else self.config.output_attentions
)
output_hidden_states = (
output_hidden_states
if output_hidden_states is not None
else self.config.output_hidden_states
)
use_cache = use_cache if use_cache is not None else self.config.use_cache
return_dict = (
return_dict if return_dict is not None else self.config.use_return_dict
)
if encoder_outputs is None:
encoder_outputs = self.encoder(
input_ids=input_ids,
attention_mask=attention_mask,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
# NEW: Pass to another self attention
hidden_states = encoder_outputs.last_hidden_state
residual = hidden_states
_attention_mask = _expand_mask(attention_mask, hidden_states.dtype)
hidden_states, attn_weights, _ = self.self_attn(
hidden_states=hidden_states,
attention_mask=_attention_mask,
)
hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
hidden_states = residual + hidden_states
# Problematic LayerNorm Layer
hidden_states = self.self_attn_layer_norm(hidden_states)
residual = hidden_states
hidden_states = self.activation_fn(self.fc1(hidden_states))
hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)
hidden_states = self.fc2(hidden_states)
hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
hidden_states = residual + hidden_states
# Problematic LayerNorm Layer
hidden_states = self.final_layer_norm(hidden_states)
encoder_outputs.last_hidden_state = hidden_states
# decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn)
decoder_outputs = self.decoder(
input_ids=decoder_input_ids,
attention_mask=decoder_attention_mask,
encoder_hidden_states=encoder_outputs.last_hidden_state,
encoder_attention_mask=attention_mask,
head_mask=decoder_head_mask,
cross_attn_head_mask=cross_attn_head_mask,
past_key_values=past_key_values,
inputs_embeds=decoder_inputs_embeds,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
if not return_dict:
return decoder_outputs + encoder_outputs
return Seq2SeqModelOutput(
last_hidden_state=decoder_outputs.last_hidden_state,
past_key_values=decoder_outputs.past_key_values,
decoder_hidden_states=decoder_outputs.hidden_states,
decoder_attentions=decoder_outputs.attentions,
cross_attentions=decoder_outputs.cross_attentions,
encoder_last_hidden_state=encoder_outputs.last_hidden_state,
encoder_attentions=encoder_outputs.attentions,
)
class BARTForConditionalGenerationTest(BartForConditionalGeneration):
def __init__(self, config):
super().__init__(config)
self.model = BARTModelTest(config)
# Initialize weights and apply final processing
self.post_init()
```
notice the lines I start with the comment `# NEW`
3. Running this on XSum with just one gpu:
```
python run_summarization.py --fp16 \
--dataset_name xsum --do_train \
--model_name facebook/bart-base \
--tokenizer_name facebook/bart-base \
--do_eval --evaluation_strategy steps --eval_steps 10 --predict_with_generate \
--per_device_train_batch_size 64 --per_device_eval_batch_size 16 \
--gradient_accumulation_steps 1 \
--learning_rate 3e-05 --weight_decay 0.01 --label_smoothing 0.1 \
--max_source_length 512 --max_target_length 64 \
--logging_step 100 --max_steps 5000 \
--warmup_steps 0 --save_steps 1000 \
--output_dir test_layernorm --max_eval_samples 10 --max_train_samples 1000 --max_predict_samples 100
```
I stop this after 30 steps.
---- Results ----
1. Running this with original `AutoModelForSeq2SeqLM`
```
{'eval_loss': 3.429733991622925, 'eval_rouge1': 35.3788, 'eval_rouge2': 11.958, 'eval_rougeL': 28.7712, 'eval_rougeLsum': 28.8147, 'eval_gen_len': 19.6, 'eval_runtime': 0.4073, 'eval_samples_per_second': 24.552, 'eval_steps_per_second': 2.455, 'epoch': 0.62}
0%|โ | 20/5000 [00:10<40:09, 2.07it/s][INFO|trainer.py:2590] 2022-05-15 14:54:19,166 >> ***** Running Evaluation *****
[INFO|trainer.py:2592] 2022-05-15 14:54:19,166 >> Num examples = 10
[INFO|trainer.py:2595] 2022-05-15 14:54:19,166 >> Batch size = 16
05/15/2022 14:54:19 - INFO - datasets.metric - Removing /home/davidwan/.cache/huggingface/metrics/rouge/default/default_experiment-1-0.arrow | 0/1 [00:00<?, ?it/s]
{'eval_loss': 3.320158004760742, 'eval_rouge1': 30.3056, 'eval_rouge2': 10.7887, 'eval_rougeL': 28.2016, 'eval_rougeLsum': 28.0782, 'eval_gen_len': 19.8, 'eval_runtime': 0.3998, 'eval_samples_per_second': 25.01, 'eval_steps_per_second': 2.501, 'epoch': 1.25}
1%|โ | 30/5000 [00:15<41:45, 1.98it/s][INFO|trainer.py:2590] 2022-05-15 14:54:24,528 >> ***** Running Evaluation *****
[INFO|trainer.py:2592] 2022-05-15 14:54:24,528 >> Num examples = 10
[INFO|trainer.py:2595] 2022-05-15 14:54:24,528 >> Batch size = 16
05/15/2022 14:54:24 - INFO - datasets.metric - Removing /home/davidwan/.cache/huggingface/metrics/rouge/default/default_experiment-1-0.arrow | 0/1 [00:00<?, ?it/s]
{'eval_loss': 3.2896971702575684, 'eval_rouge1': 30.415, 'eval_rouge2': 8.1278, 'eval_rougeL': 27.7237, 'eval_rougeLsum': 27.6498, 'eval_gen_len': 20.0, 'eval_runtime': 0.3894, 'eval_samples_per_second': 25.681, 'eval_steps_per_second': 2.568, 'epoch': 1.88}
```
2. Running with my model but commenting out the two lines that calls the layernorms (i.e. `hidden_states = self.self_attn_layer_norm(hidden_states)` and `hidden_states = self.final_layer_norm(hidden_states)`)
```
{'eval_loss': 3.460312604904175, 'eval_rouge1': 32.4359, 'eval_rouge2': 9.7464, 'eval_rougeL': 27.5792, 'eval_rougeLsum': 27.4135, 'eval_gen_len': 19.1, 'eval_runtime': 1.0524, 'eval_samples_per_second': 9.502, 'eval_steps_per_second': 0.95, 'epoch': 0.62}
0%|โ | 20/5000 [00:12<46:20, 1.79it/s][INFO|trainer.py:2590] 2022-05-15 14:57:13,684 >> ***** Running Evaluation *****
[INFO|trainer.py:2592] 2022-05-15 14:57:13,684 >> Num examples = 10
[INFO|trainer.py:2595] 2022-05-15 14:57:13,684 >> Batch size = 16
05/15/2022 14:57:14 - INFO - datasets.metric - Removing /home/davidwan/.cache/huggingface/metrics/rouge/default/default_experiment-1-0.arrow | 0/1 [00:00<?, ?it/s]
{'eval_loss': 3.37113881111145, 'eval_rouge1': 29.4708, 'eval_rouge2': 7.4381, 'eval_rougeL': 24.7256, 'eval_rougeLsum': 24.5516, 'eval_gen_len': 19.9, 'eval_runtime': 0.7387, 'eval_samples_per_second': 13.538, 'eval_steps_per_second': 1.354, 'epoch': 1.25}
1%|โ | 30/5000 [00:18<47:48, 1.73it/s][INFO|trainer.py:2590] 2022-05-15 14:57:20,076 >> ***** Running Evaluation *****
[INFO|trainer.py:2592] 2022-05-15 14:57:20,076 >> Num examples = 10
[INFO|trainer.py:2595] 2022-05-15 14:57:20,076 >> Batch size = 16
05/15/2022 14:57:20 - INFO - datasets.metric - Removing /home/davidwan/.cache/huggingface/metrics/rouge/default/default_experiment-1-0.arrow | 0/1 [00:00<?, ?it/s]
{'eval_loss': 3.33235239982605, 'eval_rouge1': 33.9623, 'eval_rouge2': 11.8778, 'eval_rougeL': 30.1785, 'eval_rougeLsum': 30.1524, 'eval_gen_len': 19.7, 'eval_runtime': 0.7438, 'eval_samples_per_second': 13.444, 'eval_steps_per_second': 1.344, 'epoch': 1.88}
```
3. Running my model with the layernorms:
```
{'eval_loss': 9.264244079589844, 'eval_rouge1': 8.4575, 'eval_rouge2': 0.0, 'eval_rougeL': 7.8523, 'eval_rougeLsum': 7.8706, 'eval_gen_len': 20.0, 'eval_runtime': 0.7076, 'eval_samples_per_second': 14.133, 'eval_steps_per_second': 1.413, 'epoch': 0.62}
0%|โ | 20/5000 [00:11<45:57, 1.81it/s][INFO|trainer.py:2590] 2022-05-15 14:58:27,171 >> ***** Running Evaluation *****
[INFO|trainer.py:2592] 2022-05-15 14:58:27,172 >> Num examples = 10
[INFO|trainer.py:2595] 2022-05-15 14:58:27,172 >> Batch size = 16
05/15/2022 14:58:27 - INFO - datasets.metric - Removing /home/davidwan/.cache/huggingface/metrics/rouge/default/default_experiment-1-0.arrow | 0/1 [00:00<?, ?it/s]
{'eval_loss': 8.134066581726074, 'eval_rouge1': 14.0672, 'eval_rouge2': 1.2222, 'eval_rougeL': 12.6982, 'eval_rougeLsum': 13.1708, 'eval_gen_len': 18.3, 'eval_runtime': 0.7573, 'eval_samples_per_second': 13.205, 'eval_steps_per_second': 1.32, 'epoch': 1.25}
1%|โ | 30/5000 [00:17<47:47, 1.73it/s][INFO|trainer.py:2590] 2022-05-15 14:58:33,581 >> ***** Running Evaluation *****
[INFO|trainer.py:2592] 2022-05-15 14:58:33,581 >> Num examples = 10
[INFO|trainer.py:2595] 2022-05-15 14:58:33,581 >> Batch size = 16
05/15/2022 14:58:34 - INFO - datasets.metric - Removing /home/davidwan/.cache/huggingface/metrics/rouge/default/default_experiment-1-0.arrow | 0/1 [00:00<?, ?it/s]
{'eval_loss': 7.54071569442749, 'eval_rouge1': 5.2054, 'eval_rouge2': 0.0, 'eval_rougeL': 5.0935, 'eval_rougeLsum': 5.1303, 'eval_gen_len': 11.5, 'eval_runtime': 0.7393, 'eval_samples_per_second': 13.526, 'eval_steps_per_second': 1.353, 'epoch': 1.88}
```
### Expected behavior
```shell
I expect the model to still work in a reasonable way (generating summaries), but in my own code and custom data, I do see the loss goes down and to a similar loss value than without using layernorm (which you can also see here) but the ROUGE score during evaluation is always around 3 (or a nonsensical value that does not improve). I think what I am doing here is essentially just add another BartEncoderLayer?
Any help would be appreciated! Thank you!
```
| 05-15-2022 19:04:44 | 05-15-2022 19:04:44 | From what I can see, here the weights of self_attn, and feed forward layers are randomly initialised, as your code changes the model structure, these weights won't bee loaded from pre-trained model. Which could explain this. Also `self.encoder` already performs attention, why do you add another attention layer after encoder ?
And lastly, since this is a more general question and not a bug, I would suggest to post it [on the forum](https://discuss.huggingface.co/). Thanks !<|||||>Hi Suraj, thank you for the comment!
You are right in that the self_attn, feedforward, and the layernorms are newly intialized, but I expect them to be trained and updated and get similar performance. As you can see in my second run where I have the self_attn and feedforward (but no layernorm), it is updating correctly and achieving similar performance than without these additions (regular BART). However, only adding the layernorm to it makes the model unusable (the third run), which I believe might be a bug and not a general question (unless I am missing some crucial part).<|||||>@meetdavidwan note that we sadly don't have the time to answer issues that include customized model architectures. We need to limit ourselves to the officially provided implementations only sadly. The forum is the best way of getting help here I think :-) |
transformers | 17,263 | closed | docs(transformers): fix typo | [goal] - fix typo transformers docs | 05-15-2022 18:14:38 | 05-15-2022 18:14:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 17,262 | closed | Spanish translation of the files sagemaker.mdx and image_classification.mdx | # What does this PR do?
Adds the Spanish version of [sagemaker.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/en/sagemaker.mdx) and [image_classification.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/tasks/image_classification.mdx) to [transformers/docs/source/es](https://github.com/huggingface/transformers/tree/master/docs/source/es)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/15947 (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
@omarespejel @osanseviero @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-15-2022 15:36:43 | 05-15-2022 15:36:43 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your contribution!<|||||>Muchas gracias @SimplyJuanjo for the PR! 🤗 Please let me know if you wish to translate another one.
transformers | 17,261 | closed | TF - Fix convnext classification example | # What does this PR do?
Fixes what was probably a copy-paste mistake. As visible [here](https://huggingface.co/docs/transformers/main/en/model_doc/convnext#transformers.TFConvNextForImageClassification.call.example), the example doesn't run at the moment; it does with the fix. | 05-15-2022 12:28:52 | 05-15-2022 12:28:52 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 17,260 | closed | Missing of token_type_ids parameter in OPTForCausalLM.forward | ### Feature request
According to OPT's [documentation](https://huggingface.co/docs/transformers/model_doc/opt#transformers.OPTForCausalLM),
`OPTForCausalLM`'s forward method is missing the `token_type_ids` parameter.
I notice that almost all other GPT variants have `token_type_ids` in their forward method.
Please consider adding it for the community.
### Motivation
None
### Your contribution
None | 05-15-2022 07:20:44 | 05-15-2022 07:20:44 | Hello @Tuan-Lee-23!
@patrickvonplaten, @younesbelkada or @ArthurZucker can give more details, but we aim to follow the original implementation as closely as possible. The original implementation does not leverage token type IDs, nor do the checkpoints, so there was no need to implement them for OPT.
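For what it's worth, a quick sketch of what that looks like in practice (the checkpoint name is just an example, and the print simply reflects the tokenizer's default behaviour):
```python
from transformers import AutoTokenizer, OPTForCausalLM

# OPT reuses a GPT2-style tokenizer, which does not produce token_type_ids
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = OPTForCausalLM.from_pretrained("facebook/opt-350m")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
print("token_type_ids" in inputs)  # typically False: only input_ids and attention_mask
outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)
```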
Token Type IDs are not a required parameter for several models, so this is not an isolated case.<|||||>I'm sorry, I didn't know that about the original implementation of OPT.
@LysandreJik Thank you for your clarification <|||||>First of all, the issue of non-attendance or lack of participation is due to the situation with the hack/bot; whatever it is limits my screen time, talk time, and communications at will. Sorry, but I'm no coder and came here to get help to learn how to take care of my own <|||||>Retaliation sucks
transformers | 17,259 | closed | TROCR truncating output string | ### System Info
```shell
- `transformers` version: 4.19.1
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.8.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: (False)
- Using distributed or parallel set-up in script?: (False)
```
### Who can help?
@NielsRogge Hi, first of all, thank you for providing such a comprehensive port. I am in general impressed with the output of the model; I haven't tuned it yet, just testing. However, I have an issue with long input strings while testing TrOCR: I have already searched the documentation but couldn't find a parameter for inference that addresses it. Example 1:

Output would be: 'AUGENMENISKUSHORIZONTAL- / LAPPEN- /R'
I assumed there might be a max_length of ~32, so I tried

Output would be :'SUPERCALIFRAGISLISTICEXPIALIDOCIOUSFANTAST'
Can you help me out here? Thanks in advance
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
using this script:
```python
from PIL import Image  # needed for Image.open below (import was missing from the original snippet)
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# device = "cuda" if torch.cuda.is_available() else "cpu"
print_processor = TrOCRProcessor.from_pretrained('microsoft/trocr-large-printed')
print_model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-large-printed')  # .to(device)

def ocr_print_image(src_img):
    pixel_values = print_processor(images=src_img, return_tensors="pt").pixel_values
    generated_ids = print_model.generate(pixel_values)
    return print_processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

handwriting1 = Image.open(r'test_image.jpg')
ocr_print_image(handwriting1)
```
### Expected behavior
```shell
I would like to have an output that is not 'capped' or truncated as it seems to be. I realize these are edge cases; however, example one especially is real-world and can occur often.
Thanks again.
```
| 05-15-2022 06:39:59 | 05-15-2022 06:39:59 | The [generate](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate) method provides an argument called `max_length` which specifies the max number of tokens to generate.
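For the snippet above, that would look roughly like this (128 is an arbitrary illustration, not a tuned value, and the variable names come from your script):
```python
# inside ocr_print_image: allow a longer output than the model's default
generated_ids = print_model.generate(pixel_values, max_length=128)
text = print_processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```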
Note that generation stops when the end-of-sequence token is generated.<|||||>> The [generate](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate) method provides an argument called `max_length` which specifies the max number of tokens to generate.
>
> Note that generation stops when the end-of-sequence token is generated.
@NielsRogge Thank you very much !
Cheers
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,258 | closed | run_clm.py exits with error -9 on checkpoint restart | ### System Info
```shell
- `transformers` version: 4.20.0.dev0
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.6.0
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: 2
- Using distributed or parallel set-up in script?: deepspeed
```
### Who can help?
@patil-suraj @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I was running run_clm.py on my own custom data with GPT-J-6B. I ran out of disk space and restarted from the latest checkpoint. Everything seemed to restart appropriately, and then the script crashed with error code -9 and no error message. Full log attached.
```
Using /home/ubuntu/.cache/torch_extensions/py38_cu111 as PyTorch extensions root...
No modifications detected for re-loaded extension module utils, skipping build step...
Loading extension module utils...
Time to load utils op: 0.00033783912658691406 seconds
[INFO|deepspeed.py:449] 2022-05-15 00:12:20,447 >> Attempting to resume from finetuned/checkpoint-22
[2022-05-15 00:12:51,292] [INFO] [engine.py:2754:_get_all_zero_checkpoint_state_dicts] successfully read 2 ZeRO state_dicts for rank 0
[2022-05-15 00:12:57,785] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 97277
[2022-05-15 00:12:57,786] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 97278
[2022-05-15 00:12:57,786] [ERROR] [launch.py:184:sigkill_handler] ['/usr/bin/python3', '-u', 'run_clm.py', '--local_rank=1', '--deepspeed', './Finetune_GPTNEO_GPTJ6B/finetuning_repo/ds_config_gptj6b.json', '--model_name_or_path', 'EleutherAI/gpt-j-6B', '--train_file', 'Jesus_sayings.txt', '--do_train', '--fp16', '--overwrite_cache', '--evaluation_strategy=steps', '--output_dir', 'finetuned', '--num_train_epochs', '5', '--eval_steps', '1', '--gradient_accumulation_steps', '32', '--per_device_train_batch_size', '1', '--use_fast_tokenizer', 'False', '--learning_rate', '5e-06', '--warmup_steps', '10', '--save_total_limit', '2', '--save_steps', '2', '--save_strategy', 'steps', '--tokenizer_name', 'gpt2'] exits with return code = -9
```
This is running on two 48GB GPUs using DeepSpeed. It was training without problems until the crash, and then on restart it produced the error.
Original Command:
```
deepspeed --num_gpus=2 run_clm.py --deepspeed \
./Finetune_GPTNEO_GPTJ6B/finetuning_repo/ds_config_gptj6b.json \
--model_name_or_path EleutherAI/gpt-j-6B --train_file Jesus_sayings.txt \
--do_train --fp16 --overwrite_cache --evaluation_strategy=steps --output_dir \
finetuned --num_train_epochs 5 --eval_steps 1 --gradient_accumulation_steps 32 \
--per_device_train_batch_size 1 --use_fast_tokenizer False --learning_rate \
5e-06 --warmup_steps 10 --save_total_limit 2 --save_steps 2 --save_strategy \
steps --tokenizer_name gpt2
```
ds_config_gptj6b.json
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 12,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": false
},
"offload_param": {
"device": "cpu",
"pin_memory": false
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_fp16_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
[full_traceback_run_CLM_error.txt](https://github.com/huggingface/transformers/files/8694101/full_traceback_run_CLM_error.txt)
### Expected behavior
```shell
Should restart and continue training.
```
| 05-15-2022 00:43:11 | 05-15-2022 00:43:11 | Looks like the error was thrown by DeepSpeed reloading the checkpoint, so maybe your issue would be better suited in their repo? Also cc @stas00 for information.<|||||>Based on your report I don't think it has anything to do with Deepspeed.
```
[2022-05-15 00:12:57,785] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 97277
[2022-05-15 00:12:57,786] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 97278
```
It looks like `cgroups` or oom killer or whatever resource control system you use killed the launcher process, which killed the other processes. Check `dmesg` to see if you get a report of a process killed by kernel or a helper util.
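For instance, something along these lines usually surfaces an OOM kill if that's what happened (exact message wording varies by kernel):
```bash
# look for oom-killer / cgroup kill reports around the time of the crash
dmesg -T | grep -i -E "killed process|out of memory|oom"
```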
Do you have enough CPU memory to load the checkpoint? It's possible that there was more CPU RAM at the start and then it got reduced when you restarted?
You could activate `--skip_memory_metrics 0` with just a few steps and get stats on how much CPU memory your script is using in each stage, then comparing to your free CPU memory.
Adding swap memory often can help with the situation. Let me know if you want instructions for that.
How much CPU memory do you have on this host? Do you use it for training only or is it part of a desktop that you use for other things.<|||||>@stas00 So this is a cloud system used just for finetuning. It has 200gb of Ram, 2 48gb GPUs. But I set up a ram tracker and it hits 100% when it fails. So that's clearly the problem. Is the swap you are talking about different than a normal swap file in ubuntu?
<|||||>As I suspected. Those resource controlling tools aren't very user-friendly. I just run a lot into this `Killed` w/o any explanation use-case on HPC so I know to suspect these.
Yes, normal swap. The question is whether the `cgroups` is set to have swap help out. Doesn't hurt to try.
`cgroups` typically monitors resident memory, so unused memory will go to swap if there is one.
Here is how I normally add it (of course edit the paths):
```
### Add a new swap file or extend one ###
# turn off all swap processes
sudo swapoff -a
# add 128GB file (or resize it if it already exists)
sudo dd if=/dev/zero of=/mnt/nvme0/swapfile bs=1G count=128
# prep as swap
sudo chmod 600 /mnt/nvme0/swapfile
sudo chown root.root /mnt/nvme0/swapfile
sudo mkswap /mnt/nvme0/swapfile
# activate the swap file
sudo swapon /mnt/nvme0/swapfile
# check the amount of swap available
grep SwapTotal /proc/meminfo
# to make permanent add to /etc/fstab if it isn't already there
/mnt/nvme0/swapfile none swap sw 0 0
```<|||||>Now the more interesting question is why 200GB of RAM is not enough. Can you tell how much is available when you start the program?
To load the model on each process would be 2x - so 2*24GB = just 48 GB, which is temp memory and is freed once you moved models to gpu.
Then you have 6*18=108GB just for the weights, optim states and grads and then some for activations. So you can't fully fit those into 2x 48GB gpus
And so you're using the CPU offload, and that's where you run out of memory as you're offloading both optim states and the params.
But given that you have 2x 48GB GPU - you probably don't need to offload both, params and optim stages - how about just offloading the optim states, i.e. set:
```
"offload_param": {
"device": "none",
"pin_memory": false
},
```
Also monitor `nvidia-smi` and watch your gpu memory usage - I bet at the moment it's barely being used. (I use an alias `wn=watch -n 1 nvidia-smi`)
And additionally you can play with the buffer sizes, please have a look at the discussion here:
https://huggingface.co/docs/transformers/main/main_classes/deepspeed#zero3-config
I'm talking about tweaking these param:
```
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
```
Additionally this is an expensive config for when you save the checkpoint as it has to reconstruct the model on one gpu, so you can turn it to `False`:
```
"stage3_gather_16bit_weights_on_model_save": true
```
and then use zero_to_fp32 to extract the fp32 weights:
https://huggingface.co/docs/transformers/main/main_classes/deepspeed#getting-the-model-weights-out
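In practice the extraction is roughly a one-liner run from inside the saved checkpoint folder (paths here are illustrative; the linked doc has the exact invocation):
```bash
cd finetuned/checkpoint-22
# the helper script is saved alongside the checkpoint by deepspeed
python zero_to_fp32.py . pytorch_model.bin
```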
The config in the docs is sort of a generic one that fits many cases and requires a finetuning in some cases to fit a specific use case.
I know this can appear complex, so please don't hesitate to ask questions and I'm sure we will figure out how to fit your model finetuning into 200GB CPU RAM and 96GB GPU RAM.
<|||||>@stas00 So I ran it again, just as it was and this time mapped the gpu and the cpu memory. And you are right (again), the gpu sits at 93% free while the checkpoint is loaded completely in to cpu ram which when the checkpoint load starts is 76% used. Is it weird that this happens only in restarting from a checkpoint and not during training? Uninterrupted the model trains without ever exceeding the cpu memory max.
Here's what I tried in response to your helpful comments:
The real answer was adding a 200gb swap file. That got it over the hump.
Changing offload_param to "none" meant that 60% of the gpu was used. But it still maxed out cpu memory.
I changed the weights_on_model_save setting to false, and I looked at the documentation on the stage3_max parameters, but it was unclear what would count as a significant change. I increased them to 2e9 and then 3e9, but it still maxed out the CPU memory. It seemed like with 3e9 one of the GPUs increased its use slightly (60% -> 62%). Still, none of that prevented maxing out the CPU memory.
<|||||>I'm glad you found a workaround, @randywreed
It's odd that you see a different pattern during initial training vs. same but loaded from a checkpoint - perhaps a model is leaked somewhere - or may b e a bug in deepspeed where it allocates everything on CPU even when it shouldn't?
Let me see if I can try to analyze the memory usage during different stages. At the very least we will have a map of what we should expect.<|||||>OK, I was able to investigate this some more and found the explanation for the problem you're experiencing.
tldr; the problem is the overhead of `torch.load` for the optim states at resume which don't exist when you finetune the first time. the file is huge and thus requires a ton of additional CPU peak memory.
The full analysis:
I'm going to offload only optimizer states, that is:
```
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "none",
"pin_memory": true
},
[...]
```
Let's take a smaller gpt2-large model so it's faster to run:
```
# 1. create checkpoint
deepspeed --num_gpus=1 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2-large --train_file tests/fixtures/sample_text.txt \
--do_train --fp16 --evaluation_strategy=steps --output_dir xxx \
--num_train_epochs 1 --eval_steps 1 --gradient_accumulation_steps 1 \
--per_device_train_batch_size 2 --use_fast_tokenizer False --learning_rate \
5e-06 --warmup_steps 10 --save_steps 1 --save_strategy steps --tokenizer_name \
gpt2 --max_train_samples 2 --max_eval_samples 2 --deepspeed \
tests/deepspeed/ds_config_zero3.json --skip_memory_metrics 0 \
--overwrite_output_dir
***** train metrics *****
before_init_mem_cpu = 3745MB
before_init_mem_gpu = 1786MB
epoch = 1.0
init_mem_cpu_alloc_delta = 0MB
init_mem_cpu_peaked_delta = 0MB
init_mem_gpu_alloc_delta = 0MB
init_mem_gpu_peaked_delta = 0MB
train_loss = 3.793
train_mem_cpu_alloc_delta = 17578MB
train_mem_cpu_peaked_delta = 2952MB
train_mem_gpu_alloc_delta = -148MB
train_mem_gpu_peaked_delta = 6372MB
train_runtime = 0:00:13.66
train_samples = 1
train_samples_per_second = 0.073
train_steps_per_second = 0.073
***** eval metrics *****
epoch = 1.0
eval_accuracy = 0.2344
eval_loss = 4.0664
eval_mem_cpu_alloc_delta = 0MB
eval_mem_cpu_peaked_delta = 0MB
eval_mem_gpu_alloc_delta = 0MB
eval_mem_gpu_peaked_delta = 1377MB
eval_runtime = 0:00:01.58
eval_samples = 1
eval_samples_per_second = 0.632
eval_steps_per_second = 0.632
perplexity = 58.3469
# 2. resume from checkpoint
deepspeed --num_gpus=1 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2-large --train_file tests/fixtures/sample_text.txt \
--do_train --fp16 --evaluation_strategy=steps --output_dir xxx \
--num_train_epochs 1 --eval_steps 1 --gradient_accumulation_steps 1 \
--per_device_train_batch_size 2 --use_fast_tokenizer False --learning_rate \
5e-06 --warmup_steps 10 --save_steps 1 --save_strategy steps --tokenizer_name \
gpt2 --max_train_samples 2 --max_eval_samples 2 --deepspeed \
tests/deepspeed/ds_config_zero3.json --skip_memory_metrics 0 \
--resume_from_checkpoint xxx/checkpoint-1
***** train metrics *****
before_init_mem_cpu = 4618MB
before_init_mem_gpu = 1786MB
epoch = 1.0
init_mem_cpu_alloc_delta = 0MB
init_mem_cpu_peaked_delta = 0MB
init_mem_gpu_alloc_delta = 0MB
init_mem_gpu_peaked_delta = 0MB
train_loss = 0.0
train_mem_cpu_alloc_delta = 16071MB
train_mem_cpu_peaked_delta = 14762MB
train_mem_gpu_alloc_delta = -148MB
train_mem_gpu_peaked_delta = 0MB
train_runtime = 0:00:00.00
train_samples = 1
train_samples_per_second = 135.318
train_steps_per_second = 135.318
***** eval metrics *****
epoch = 1.0
eval_accuracy = 0.2344
eval_loss = 4.0664
eval_mem_cpu_alloc_delta = 9MB
eval_mem_cpu_peaked_delta = 0MB
eval_mem_gpu_alloc_delta = -2MB
eval_mem_gpu_peaked_delta = 146MB
eval_runtime = 0:00:00.41
eval_samples = 1
eval_samples_per_second = 2.399
eval_steps_per_second = 2.399
perplexity = 58.3469
```
As you can see in `train_mem_cpu_alloc_delta+train_mem_cpu_peaked_delta` numbers - the resuming one took more than 10GB extra of CPU peak memory.
I was able to reproduce your issue with GPT-J-6 on a somewhat similar setup, except using one 80GB gpu.
And I forced max 100GB CPU and 50GB RAM with:
```
systemd-run --user --scope -p MemoryHigh=100G -p MemoryMax=100G -p MemorySwapMax=50G bash
```
Now the same commands but with `EleutherAI/gpt-j-6B`:
```
deepspeed --num_gpus=1 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path EleutherAI/gpt-j-6B --train_file \
tests/fixtures/sample_text.txt --do_train --fp16 --evaluation_strategy=steps \
--output_dir xxx --num_train_epochs 1 --eval_steps 1 \
--gradient_accumulation_steps 1 --per_device_train_batch_size 2 \
--use_fast_tokenizer False --learning_rate 5e-06 --warmup_steps 10 \
--save_steps 1 --save_strategy steps --tokenizer_name gpt2 --max_train_samples \
2 --max_eval_samples 2 --deepspeed tests/deepspeed/ds_config_zero3.json \
--skip_memory_metrics 0 --overwrite_output_dir
***** train metrics *****
before_init_mem_cpu = 3714MB
before_init_mem_gpu = 12438MB
epoch = 1.0
init_mem_cpu_alloc_delta = 0MB
init_mem_cpu_peaked_delta = 0MB
init_mem_gpu_alloc_delta = 0MB
init_mem_gpu_peaked_delta = 0MB
train_loss = 2.6777
train_mem_cpu_alloc_delta = 66213MB
train_mem_cpu_peaked_delta = 32635MB
train_mem_gpu_alloc_delta = 33MB
train_mem_gpu_peaked_delta = 11879MB
train_runtime = 0:07:44.12
train_samples = 1
train_samples_per_second = 0.002
train_steps_per_second = 0.002
***** eval metrics *****
epoch = 1.0
eval_accuracy = 0.4531
eval_loss = 2.5234
eval_mem_cpu_alloc_delta = 25MB
eval_mem_cpu_peaked_delta = 0MB
eval_mem_gpu_alloc_delta = 0MB
eval_mem_gpu_peaked_delta = 2319MB
eval_runtime = 0:00:00.58
eval_samples = 1
eval_samples_per_second = 1.708
eval_steps_per_second = 1.708
perplexity = 12.4714
note the huge size of the checkpoint it needs to load into cpu:
$ ls -l xxx/checkpoint-1/global_step1/
total 68G
-rw-rw-r-- 1 stas stas 113M May 20 17:51 zero_pp_rank_0_mp_rank_00_model_states.pt
-rw-rw-r-- 1 stas stas 68G May 20 17:56 zero_pp_rank_0_mp_rank_00_optim_states.pt
# 2. resume from checkpoint
deepspeed --num_gpus=1 examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path EleutherAI/gpt-j-6B --train_file \
tests/fixtures/sample_text.txt --do_train --fp16 --evaluation_strategy=steps \
--output_dir xxx --num_train_epochs 1 --eval_steps 1 \
--gradient_accumulation_steps 1 --per_device_train_batch_size 2 \
--use_fast_tokenizer False --learning_rate 5e-06 --warmup_steps 10 \
--save_steps 1 --save_strategy steps --tokenizer_name gpt2 --max_train_samples \
2 --max_eval_samples 2 --deepspeed tests/deepspeed/ds_config_zero3.json \
--skip_memory_metrics 0 --resume_from_checkpoint xxx/checkpoint-1
[....]
[INFO|deepspeed.py:449] 2022-05-20 17:58:56,691 >> Attempting to resume from xxx/checkpoint-1
[2022-05-20 18:00:08,066] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 121940
```
The problem is that `torch.save` saves one param at a time, but `torch.load` loads the whole thing to CPU memory at once. That's why the first finetuning works, but resuming doesn't.
(edit: this is an incorrect statement as I show in the next comment)
The second stage needs some additional 70GB of CPU RAM.
In the past Deepspeed devs and Deepspeed users all used huge expensive DGX servers which had TBs of CPU RAM so nobody was worried about "normal" users with little CPU RAM. Things are shifting and slowly slowly the Deepspeed team is moving towards embracing the low resource stage.
Tunji and I are working on an universal checkpoint format where each param and optim states are saved as separate files and thus can be loaded on a tiny amount of CPU memory. Currently we are working on the Megatron-Deepspeed checkpoints since we need it for manipulating the 176B checkpoint, which is much bigger than 6B of GPT-J-6. If all goes well this work will eventually end up in normal ZeRO stages as well. The current `torch.load()` to cpu is simply not an option we can continue with.
So for now please use the swap memory workaround, it shouldn't impact anything other than making the startup a bit slower.
--------------
The other issue this Issue has shined light to is not having much flexibility about how much is offloaded to CPU - same historical observation applies here. Currently one can only offload 12x or 14x params (8+4 for optim states and 2 for half precision params) and the GPU remains mainly empty, which is far from good utilization of resources. I have passed to Tunji a request to support more flexible offloading in the future. Let's see what comes out of it.
--------------
Both issues are Deepspeed's core issues so there is not much we can do at the HF side to make thing better.
If you have any other questions and want me to explain anything please don't hesitate to ask. and if all is clear please feel free to close this issue.
Thank you for your patience, @randywreed
Also cc: @tjruwase for awareness.
<|||||>I was digging some more into this and noticed that `torch.load` is symmetrical to `torch.save` when it comes to a model on gpu <=> disc if `map_location="cuda"` is used - it doesn't copy it fully to CPU memory first - it will use CPU peak memory of the size of the largest entry in the state_dict.
We can see that empirically through the following test:
Here is a 12GB checkpoint.
```
$ ls -l xxx/checkpoint-1/pytorch_model.bin
-rw-rw-r-- 1 stas stas 12G May 20 17:51 xxx/checkpoint-1/pytorch_model.bin
```
Let's load it to gpu:
```
$ /usr/bin/time -f %M python -c 'import torch; _=torch.load
("xxx/checkpoint-1/pytorch_model.bin", map_location="cuda")
3279196
```
It used only 3GB of CPU RAM in total. (Largest key)
and of course, let's check the baseline of loading to cpu:
```
$ /usr/bin/time -f %M python -c 'import torch; _=torch.load
("xxx/checkpoint-1/pytorch_model.bin", map_location="cpu")'
12167976
```
It used 12GB of CPU RAM in total as the size of the checkpoint.
Looking deeper it appears that the issue is on the deepspeed side. It loads the checkpoint into cpu first:
https://github.com/microsoft/DeepSpeed/blob/5208eb73da5269034ded69c4dd7c4bff81df81e7/deepspeed/runtime/engine.py#L2748
and hence the huge additional peak memory usage.
I filed an issue https://github.com/microsoft/DeepSpeed/issues/1971<|||||>Thanks for the help on this. I appreciate it. |
transformers | 17,257 | closed | Improve mismatched sizes management when loading a pretrained model | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Examples currently fail when the loaded model head has different dimensions from the expected ones. For instance, in the image classification example, if a pretrained classification head has different dimensions from the classification head to fine-tune, the current implementation will lead to this error:
```
Traceback (most recent call last):
File "run_image_classification.py", line 377, in <module>
main()
File "run_image_classification.py", line 267, in main
model = AutoModelForImageClassification.from_pretrained(
File "/home/regis/HuggingFace/dev/transformers/venv/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/home/regis/HuggingFace/dev/transformers/venv/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2067, in from_pretrained
model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_pretrained_model(
File "/home/regis/HuggingFace/dev/transformers/venv/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2276, in _load_pretrained_model
raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
RuntimeError: Error(s) in loading state_dict for SwinForImageClassification:
size mismatch for classifier.weight: copying a param with shape torch.Size([1000, 1024]) from checkpoint, the shape in current model is torch.Size([3, 1024]).
size mismatch for classifier.bias: copying a param with shape torch.Size([1000]) from checkpoint, the shape in current model is torch.Size([3]).
```
To reproduce this, you can run the image classification example with Swin, such as:
```
python run_image_classification.py \
--dataset_name beans \
--output_dir /tmp/beans_outputs/ \
--remove_unused_columns False \
--do_train \
--per_device_train_batch_size 8 \
--model_name_or_path microsoft/swin-base-patch4-window7-224
```
The solution is to add the argument `ignore_mismatched_sizes=True` to the `AutoModelForXXX.from_pretrained` method. Thus, this PR does the following:
- expand the error message and suggest a solution when the error is raised
- for all classification examples, an argument `--ignore_mismatched_sizes` can now be given to adapt the dimensions of the classification head when they are different from the expected ones
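For context, here is roughly how the two options look from a user's point of view (the model and dataset are just the reproduction case above):
```bash
# Python-level escape hatch that the improved error message now points to:
#   model = AutoModelForImageClassification.from_pretrained(checkpoint, ignore_mismatched_sizes=True)

# New example-script flag added in this PR:
python run_image_classification.py \
    --dataset_name beans \
    --output_dir /tmp/beans_outputs/ \
    --remove_unused_columns False \
    --do_train \
    --per_device_train_batch_size 8 \
    --model_name_or_path microsoft/swin-base-patch4-window7-224 \
    --ignore_mismatched_sizes
```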
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-14-2022 21:48:44 | 05-14-2022 21:48:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I see that all tests corresponding to mismatched sizes failed. Looking at [test_modeling_common.py](https://github.com/huggingface/transformers/blob/ee393c009a243bbb86fa11d2efa771a1704d85cf/tests/test_modeling_common.py#L2191), I guess the current behaviour is expected.
Shall I close this PR and open a new issue regarding the fact that some models will fail in examples (such as Swin for image classification as explained in the first message)?<|||||>I'm not sure why you would open a new issue: this is not a bug and there is an option to load a model with a pretrained head that has different shapes from the checkpoint. Or do you mean the examples should have a flag to activate that option? <|||||>> I'm not sure why you would open a new issue: this is not a bug and there is an option to load a model with a pretrained head that has different shapes from the checkpoint. Or do you mean the examples should have a flag to activate that option?
After using it a bit more I realized it's not a bug indeed, sorry for the confusing wording. The problem is that just changing the model used in the example may break it, and I couldn't find anywhere in the doc or in the READMEs how to solve this; I had to take a look at the code. I suggest one of the following to make it more user-friendly:
- the error message suggests to add the argument `ignore_mismatched_sizes=True` to `AutoModelForXXX.from_pretrained`
- adding a flag to activate that option as you propose, with a mention in the README
- changing the default value of `ignore_mismatched_sizes` to `True` since a warning is displayed when sizes are different, but I guess I'm lacking of context here and I'm just considering this example use case
Again I'm certainly lacking of context here but I would be happy to modify this PR so that it makes using a different model in examples less tedious when the pretrained head has different dimensions :)<|||||>> * the error message suggests to add the argument `ignore_mismatched_sizes=True` to `AutoModelForXXX.from_pretrained`
This is definitely something we can add and would help the user!
> * adding a flag to activate that option as you propose, with a mention in the README
Yes, another welcome improvement!
> * changing the default value of `ignore_mismatched_sizes` to `True` since a warning is displayed when sizes are different, but I guess I'm lacking of context here and I'm just considering this example use case
This can't be done for backward compatibility reasons. In further work in `from_pretrained`, we might have a default that does this, but only for the head of the model. It's dangerous to have it enabled by default on the whole body.<|||||>Great, I'm going to modify this PR accordingly!<|||||>I just modified the *classification* examples because I'm not sure about the types of head used in other scenarios.
Also, I noticed that VSCode automatically trimmed extra whitespaces (because I configured it this way). Let me know if this is an issue and I'll revert that. |
transformers | 17,256 | closed | RAG - ValueError: Columns ['embeddings'] not in the dataset. Current columns in the dataset: ['title', 'text'] | ### System Info
```shell
Hi,
INFO:__main__:Step 1 - Create the dataset
WARNING:datasets.builder:Using custom data configuration default-3b4ec65e3c3d818f
Downloading and preparing dataset csv/default to /local/data/daa2182/.cache/huggingface/modules/datasets_modules/datasets/csv/default-3b4ec65e3c3d818f/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519...
Downloading data files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:00<00:00, 5322.72it/s]
Extracting data files: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:00<00:00, 915.59it/s]
Dataset csv downloaded and prepared to /local/data/daa2182/.cache/huggingface/modules/datasets_modules/datasets/csv/default-3b4ec65e3c3d818f/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data.
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:00<00:00, 1262.20ba/s]
Some weights of the model checkpoint at facebook/dpr-ctx_encoder-multiset-base were not used when initializing DPRContextEncoder: ['ctx_encoder.bert_model.pooler.dense.weight', 'ctx_encoder.bert_model.pooler.dense.bias']
- This IS expected if you are initializing DPRContextEncoder from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DPRContextEncoder from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
/local/data/daa2182/anaconda/lib/python3.9/site-packages/torch/cuda/__init__.py:145: UserWarning:
NVIDIA RTX A4000 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA RTX A4000 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'DPRQuestionEncoderTokenizer'.
The class this function is called from is 'DPRContextEncoderTokenizerFast'.
INFO:__main__:Step 2 - Index the dataset
Traceback (most recent call last):
File "/local/data/daa2182/13MAy/transformers/examples/research_projects/rag/use_own_knowledge_dataset.py", line 209, in <module>
main(rag_example_args, processing_args, index_hnsw_args)
File "/local/data/daa2182/13MAy/transformers/examples/research_projects/rag/use_own_knowledge_dataset.py", line 107, in main
dataset.add_faiss_index("embeddings", custom_index=index)
File "/local/data/daa2182/anaconda/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 4197, in add_faiss_index
with self.formatted_as(type="numpy", columns=[column], dtype=dtype):
File "/local/data/daa2182/anaconda/lib/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/local/data/daa2182/anaconda/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1809, in formatted_as
self.set_format(type, columns, output_all_columns, **format_kwargs)
File "/local/data/daa2182/anaconda/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/local/data/daa2182/anaconda/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1868, in set_format
raise ValueError(
ValueError: Columns ['embeddings'] not in the dataset. Current columns in the dataset: ['title', 'text']
I got this message every time I created a new CSV.
@patrickvonplaten
@lhoestq
thanx!
```
### Who can help?
@patrickvonplaten
@lhoestq
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
python examples/research_projects/rag/use_own_knowledge_dataset.py \
--csv_path path/to/my_csv \
--output_dir path/to/my_knowledge_dataset \
### Expected behavior
```shell
It should create a new KB
```
| 05-14-2022 19:31:35 | 05-14-2022 19:31:35 | Hi ! The "embeddings" column is computed at the line:
https://github.com/huggingface/transformers/blob/95b6bef624bd9dfdfcdfdedd86bb2173f7fb4bfe/examples/research_projects/rag/use_own_knowledge_dataset.py#L88-L93
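A quick sanity check right after that step would be something like (sketch):
```python
print(dataset.column_names)  # should include "embeddings" once the map step above has run
```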
Can you make sure this line is run and check that "embeddings" is in `dataset.column_names` ?<|||||>@deema-A,
Also note that we don't officially maintain code under `research_projects`<|||||>The csv files you are creating are not in the format expected by the code.
This is the line in the code that reads the csv file:
```python
dataset = load_dataset(
    "csv", data_files=[rag_example_args.csv_path], split="train", delimiter="\t", column_names=["title", "text"]
)
```
Note the delimiter="\t".
You can use something like this to create the csv:
```python
import csv

row_list = [
    ["title", "text"],
]
with open('my_knowledge_dataset.csv', 'w', newline='') as file:
    writer = csv.writer(file, delimiter='\t')
    writer.writerows(row_list)
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 17,255 | closed | Added es version of bertology.mdx doc | Fixes #15947
Added spanish version of language_modeling.mdx documentation file.
@omarespejel @sgugger | 05-14-2022 19:09:15 | 05-14-2022 19:09:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Muchas gracias @jQuinRivero for the PR! 🤗 Please let me know if you wish to translate another one.
@sgugger LGTM :) |