repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 19,161 | closed | Can't load too big text file for dataset (RAM exhausts), HELP PLEASE !!!!! | Hello All,
I am trying to load a big text file for pretraining a BERT model from scratch. The size of the txt file is about 11GB, and trying to load it for pretraining exhausts all the RAM on the system.
Is it possible to load the data in batches and then perform training?
I am a bit new to the Hugging Face ecosystem, so I would really appreciate any help if you have a clue about this.
I am using Google Colab for the purpose.
Please share code snippets, if possible.
Cheers !
```python
# construct dataset
from transformers import LineByLineTextDataset

file_path = "/content/drive/MyDrive/full_text_data.txt"
dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path=file_path,
    block_size=32,
)
```
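For anyone hitting the same limit, one possible direction (just a sketch, not an official answer; the file path is the same placeholder as above and `tokenizer` is assumed to be defined already) is to stream the file with the `datasets` library instead of loading everything into RAM:
```python
from datasets import load_dataset

# lazily stream the text file instead of materializing ~11GB in memory
raw_dataset = load_dataset(
    "text",
    data_files={"train": "/content/drive/MyDrive/full_text_data.txt"},
    streaming=True,
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=32)

tokenized_train = raw_dataset["train"].map(tokenize, batched=True)
```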
| 09-22-2022 17:57:34 | 09-22-2022 17:57:34 | (duplicate of #19199) |
transformers | 19,159 | closed | Fix ckpt paths in ViT MSN | @sgugger FYI.
PR w.r.t https://github.com/huggingface/transformers/pull/18815 | 09-22-2022 14:33:08 | 09-22-2022 14:33:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,158 | closed | [WIP] Trainer supporting evaluation on multiple datasets | # What does this PR do?
With this PR `Trainer` and `Seq2SeqTrainer` support evaluating on multiple datasets. For this, the `eval_dataset` and `compute_metrics` parameters have been updated. In order to evaluate on multiple datasets, `eval_dataset` should be a dict mapping a dataset name to a Dataset. In `_maybe_log_save_evaluate` we then loop over the dict, calling `evaluate` with each Dataset. The metric prefix is also updated to contain the dataset name. Furthermore, each eval dataset can optionally have its own `compute_metrics` function. For this, `compute_metrics` should be a dict where the keys match with `eval_dataset`.
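For illustration, usage of the proposed interface would look roughly like this (the dataset names and metric functions below are made up):
```python
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    # one entry per evaluation set; the key becomes part of the metric prefix
    eval_dataset={"squad": squad_eval_dataset, "boolq": boolq_eval_dataset},
    # optional: one compute_metrics per key (a single callable is also accepted)
    compute_metrics={"squad": compute_squad_metrics, "boolq": compute_boolq_metrics},
)
```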
Fixes #15857
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger | 09-22-2022 13:59:02 | 09-22-2022 13:59:02 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @sgugger, I mostly followed your suggestion in #15857, except instead of having a list of eval_datasets and another training arg, I solved it via passing a dict of eval_datasets. I thought a dict would work better because we also need multiple compute_metric functions. This way it is all lined up and less error-prone. However, let me know if you think otherwise.
Also, could you suggest what tests to write for this PR? I am not really sure, since the major change is in `_maybe_log_save_evaluate` and I didn't find a test for that.<|||||>Thanks for checking it so quickly!
In my case, I am training a seq2seq QA model and evaluating it on multiple datasets. However, they have different formats (eg extractive qa like SQuAD, or multiple-choice qa like commonsese QA). Using a seq2seq model for multiple formats has been for example proposed in the [UnifiedQA paper](https://arxiv.org/abs/2005.00700). Having multiple trainers has the limitation that I could only train on a single dataset at a time, but not train on multiple ones at the same time. However, note that if you pass multiple eval_datasets as a dict, but only a single compute_metric callable, the same compute_metrics function will be called on all the eval_datasets. That's what [this if statement](https://github.com/huggingface/transformers/pull/19158/files#diff-ed55888e6665791fe92cc8fc0c499da54f4ace6738551cd9a2591881cda076deR2048) is doing. So the original scenario described in the Issue is also solved.<|||||>It's too niche of a use-case to allow for support, especially when we have other tools that easily let you more customizable training/evaluation loops like Accelerate.<|||||>Alright, I have reverted the change. Let me know in case of anything else:)<|||||>I'm trying to take advantage of the feature to include multiple eval_datasets in the trainer. Maybe I'm misreading the documentation, I've tried several ways to present the eval_dataset, but keep getting a KeyError when I include a DatasetDict / dict with datasets for the eval_dataset parameter. Am I doing something wrong? Do I need to specify the compute_metrics differently? Couldn't find anything on that.
Here's an example notebook resulting in the ValueError: https://colab.research.google.com/drive/1yLo9iqY4Cz9_h8BtAvcYRCtK5O_xa5jP?usp=sharing<|||||>> Thanks for your PR! Having the multiple datasets as a dict solves the problem of distinguishing a single dataset that is a list or a list of datasets. So I like this part.
>
> However I didn't see anything in the issue regarding using several `compute_metrics` function. If there is a need for different metrics, it probably means different Trainer should be built as it represents different tasks/problems. That change should be reverted, as the part where `compute_metrics` can be passed along to the `evaluate`/`predict` function.
@sgugger passing multiple `compute_metrics` functions for evaluation purposes can actually be more general than stated by @timbmg. For example, suppose we are doing multi-task training and we wish to evaluate on the same or held-out tasks as we train. This is common in recent research publications (eg FLAN-T5). Would you accept to support the multiple compute metrics functions? Or would your advice be to not use the trainer altogether and look towards using `accelerate`? I was worried that `accelerate` for training is a big step back towards writing a lot of boilerplate and code that the `Trainer` saves us.<|||||>I'd recommend using Accelerate instead of the Trainer for this use case.<|||||>@sgugger Any examples out there of using Accelerate for this? I would also like to evaluate on multiple datasets while training. Thanks! |
transformers | 19,157 | closed | Can't use decoder_inputs_embeds argument on MBartForConditionalGeneration | ### System Info
- `transformers` version: 4.22.0
- Platform: Linux-4.4.0-210-generic-x86_64-with-glibc2.23
- Python version: 3.9.12
- Huggingface_hub version: 0.7.0
- PyTorch version (GPU?): 1.12.0 (True)
- Tensorflow version (GPU?): 2.9.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
maybe not necessary
### Expected behavior
https://github.com/huggingface/transformers/pull/13800
Looks like the same problem exists in `MBartForConditionalGeneration.forward`.
It is written as:
```python
if decoder_input_ids is None:
decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id)
```
So it looks like it needs to be edited to:
```python
if decoder_input_ids is None and decoder_inputs_embeds is None:
decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id)
``` | 09-22-2022 12:08:10 | 09-22-2022 12:08:10 | Maybe of interest to @ArthurZucker :)<|||||>Having a look right now! Thanks for finding this ๐ค |
transformers | 19,156 | closed | Reduce LR for TF MLM example test | The TF MLM example test was a little flakey, depending on the exact shuffling order of the dataset. I reduced the LR to 1e-4 and it seems to be consistently around 36 final perplexity now (compared to the test threshold of 42).
Cc @sgugger | 09-22-2022 11:43:31 | 09-22-2022 11:43:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,155 | closed | load_in_8bit=True crashes GPT-J when running model.generate() | ### System Info
Colab Pro, NVIDIA P100
Transformers: 4.22.1
Accelerate: 0.12.0
- `transformers` version: 4.22.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.8.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: N
### Who can help?
@patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
This works fine, loads the model using 6GB GPU memory, 10GB free
```
from transformers import GPTJForCausalLM, AutoTokenizer
import torch
model = GPTJForCausalLM.from_pretrained(
"EleutherAI/gpt-j-6B",
revision="float16",
torch_dtype=torch.float16,
load_in_8bit=True,
device_map='auto',
low_cpu_mem_usage=True
).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
```
However when I run:
```
prompt = "I "
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=5,)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
```
The colab environment crashes with an unknown error, during model.generate()
`max_length=5` and a short prompt were tested to lower mem requirements, but this happens with any length and prompt.
Presumably it's not a memory issue, as I can just about use inference without `load_in_8bit` on the same arch.
### Expected behavior
Model generates exciting tokens, decodes, and prints! | 09-22-2022 09:51:33 | 09-22-2022 09:51:33 | Hey @petertjmills -- can you share with us the error you're seeing or, better yet, a link to a colab with the error?<|||||>Thank you for the reply! I threw together a quick notebook to send and it worked ๐
I've found the source of the issue. In the bitsandbytes readme they specify:
```
Hardware requirements:
LLM.int8(): NVIDIA Turing (RTX 20xx; T4) or Ampere GPU (RTX 30xx; A4-A100); (a GPU from 2018 or older).
8-bit optimizers and quantization: NVIDIA Maxwell GPU or newer (>=GTX 9XX).
```
The first time I got a P100 (Pascal arch, 2016)
The second time I got a T4, go figure.
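For context, a rough way to check this programmatically (purely illustrative; the 7.5 cutoff corresponds to Turing, per the note above):
```python
import torch

major, minor = torch.cuda.get_device_capability()
if (major, minor) < (7, 5):
    print(f"Compute capability {major}.{minor}: LLM.int8() is likely unsupported on this GPU")
```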
May be worth documenting?<|||||>> May be worth documenting?
Definitely! Would you like to open a PR with a more informative exception? :D |
transformers | 19,154 | closed | [Conditional DETR] Add doc tests | # What does this PR do?
This PR improves the docs of Conditional DETR, by adding doc tests + a figure summarizing the paper.
cc @DeppMeng | 09-22-2022 08:05:41 | 09-22-2022 08:05:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you! It is great! |
transformers | 19,153 | closed | Fixed typo: "dictionnary" to "dictionary". | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-22-2022 07:07:09 | 09-22-2022 07:07:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,152 | closed | [TensorFlow] Adding LeViT | # Adding TensorFlow version of LeViT
This PR adds the TensorFlow version of [LeViT](https://arxiv.org/abs/2104.01136).
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Issue linked: #19123
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 09-22-2022 04:36:41 | 09-22-2022 04:36:41 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19152). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Still working on it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Still working.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Working!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,151 | closed | update perf_train_cpu_many doc | Signed-off-by: Wang, Yi A <[email protected]>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @sgugger
| 09-22-2022 03:04:43 | 09-22-2022 03:04:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,150 | closed | Fixed type hint for pipelines/check_task | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-22-2022 02:22:32 | 09-22-2022 02:22:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,149 | closed | Fix `m2m_100.mdx` doc example missing `labels` | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The `labels` variable is not defined; the `model_inputs` already contain this information:
```python
model_inputs.keys() # dict_keys(['input_ids', 'attention_mask', 'labels'])
```
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-21-2022 20:39:15 | 09-21-2022 20:39:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,148 | closed | Luke Finetuning Error | ### System Info
- `transformers` version: 4.21.2
- Platform: Linux-4.15.0-192-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger @jplu
Hi, I was running the finetuning script for LUKE: [Link](https://github.com/huggingface/transformers/blob/main/examples/research_projects/luke/run_luke_ner_no_trainer.py).
I ran into this weird issue:
forward() got an unexpected keyword argument 'ner_tags'
Also, any idea what the F1 score is for the NER task when using this script?
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`CUDA_VISIBLE_DEVICES=0 python run_luke_ner_no_trainer.py --model_name_or_path studio-ousia/luke-base --dataset_name conll2003 --task_name ner --max_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir NER`
### Expected behavior
Starts the training process. | 09-21-2022 20:17:35 | 09-21-2022 20:17:35 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,147 | closed | Is BertWordPieceTokenizer behaviour deterministic on the same data? | So, I accidentally lost BertWordPieceTokenizer checkpoint, what will happen If I retrain it from the same data? Will the result be the same? | 09-21-2022 20:06:15 | 09-21-2022 20:06:15 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,334 | closed | `truncation='do_not_truncate'` is not working equivalently to `truncation=False` | Hi,
`truncation='do_not_truncate'` is not working equivalently to `truncation=False`.
When using `truncation=False` and providing `max_length`, it defaults to `'longest_first'` truncation strategy.
Whether this default behavior is natural or not, isn't `False` supposed to be identical to `'do_not_truncate'`?
This leads to a situation where the user explicitly specifies `truncation=False` but the text **is truncated**.
This manual: https://huggingface.co/docs/transformers/pad_truncation and this doc https://huggingface.co/docs/transformers/main_classes/tokenizer
say that:
>`False` or `'do_not_truncate'`: no truncation is applied. This is the default behavior.
Which means that they are supposed to be equivalent (regardless of what they do, they should behave the same).
I suggest that `False` should just mean "no truncation", regardless of `max_length` was supplied or not.
Here is a short example:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
sent = 'The quick brown fox jumps over the lazy dog'
len(tokenizer.encode(sent, max_length=5, truncation='do_not_truncate'))
```
prints: `11`
```python
len(tokenizer.encode(sent, max_length=5, truncation=False))
```
prints: `5`
Thanks,
Uri | 09-21-2022 19:38:05 | 09-21-2022 19:38:05 | Hi,
This issue belongs in `transformers` afaik. All the logic should be handled there as `truncation=False` does not mean anything for this library (IIRC).
If you can reproduce the bug just using `tokenizers` and not `transformers` then it's probably that the bug in this library.<|||||>Thanks, closing and re-creating in the `transformers` format |
transformers | 19,146 | closed | Add some tests for check_dummies | # What does this PR do?
This PR adds a new subfolder in the `tests` folder for the tests of the quality scripts used in the CI (so not inside the Transformers lib). As seen recently with the `check_dummies` script, when people apply too many modifications to our repo and update those scripts, there might be some breaking changes that slip through the cracks. Hopefully such new tests will help catch the failures early. | 09-21-2022 18:47:35 | 09-21-2022 18:47:35 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19146). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,145 | closed | Fixed typo in generation_utils.py | Changed "unfeasable" to "unfeasible"
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-21-2022 16:42:24 | 09-21-2022 16:42:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,144 | closed | Fix dummy creation for multi-frameworks objects | # What does this PR do?
Follow-up from #17304, this finishes fixing the dummy creation script when there is more than one framework on which the object depends.
The problem is that it does not recognize the lines in the init like:
```
if not (is_sentencepiece_available() and is_tokenizers_available()):
```
To check the change, after the PR the following passes:
```py
import sys
# Adapt to where the repo is
sys.path.append("../git/transformers/utils")
from check_dummies import find_backend
assert find_backend(" if not is_tokenizers_available():") == "tokenizers"
assert find_backend(" if not (is_sentencepiece_available() and is_tokenizers_available()):") == "sentencepiece_and_tokenizers"
```
Before this PR only the first test passes.
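For the record, the gist of the change is to collect every `is_xxx_available()` on a line instead of only the first one, roughly along these lines (a sketch of the idea, not the exact diff):
```python
import re

_re_backend = re.compile(r"is_([a-z_]*)_available\(\)")

def find_backend(line):
    """Find one (or multiple) backend(s) mentioned in a line of the init."""
    backends = _re_backend.findall(line)
    if len(backends) == 0:
        return None
    # join multiple backends alphabetically, e.g. "sentencepiece_and_tokenizers"
    return "_and_".join(sorted(backends))
```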
I will work on adding unit tests such as the one above for all of our quality scripts. | 09-21-2022 15:35:14 | 09-21-2022 15:35:14 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19144). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,143 | closed | fix a bug in beam_search (moving log_softmax after logits_processor) | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fix a bug in the `beam_search` function. If we use `log_softmax` before `logits_processor`, the processed logits may no longer represent log-probabilities correctly, because some logits may be set to `-inf` while the other logits remain unchanged (i.e., `sum(exp(logits)) != 1`). This can significantly influence the generation results in my case.
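To make the issue concrete, here is a tiny illustration (the logits are made up; banning token 2 stands in for any processor that sets scores to `-inf`):
```python
import torch

logits = torch.tensor([2.0, 1.0, 0.0])

# current order: normalize first, then mask
scores = torch.log_softmax(logits, dim=-1)
scores[2] = float("-inf")
print(scores.exp().sum())  # ~0.91, no longer a valid distribution

# proposed order: mask first, then normalize
masked = logits.clone()
masked[2] = float("-inf")
print(torch.log_softmax(masked, dim=-1).exp().sum())  # 1.0
```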
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-21-2022 14:06:11 | 09-21-2022 14:06:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Maybe of interest to @gante ?<|||||>Hey @nonstopfor ๐
Yes, your point is correct -- the normalization should happen after the logits processors. We have a flag to do it -- if you call `generate` with `renormalize_logits=True`, the last logit processor renormalizes your logits :)
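For example (a sketch; model and inputs as in any standard generation setup):
```python
outputs = model.generate(input_ids, num_beams=4, renormalize_logits=True)
```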
This means this PR should not be needed -- let me know if it doesn't address your problem.<|||||>Ok. Thanks. |
transformers | 19,142 | closed | use `@unittest.skipIf` decorators inside tokenizer's tests instead of `if ...: return` | ### Feature request
Currently in tokenizer testing, many tests coded in `test_tokenization_common.py` are not relevant for all tokenizers. In most cases, when the test is not relevant, it is still run but no verification is done in it. See the snippet below for example:
https://github.com/huggingface/transformers/blob/114295c010dd9c94d48add7a0f091ba6ebdf482b/tests/test_tokenization_common.py#L384-L396
I would like to propose to replace these if tests in the test methods by `@unittest.skipIf` decorators. On the previous example it would give:
```python
@unittest.skipIf(not test_sentencepiece, "Not testing sentencepiece")
def test_subword_regularization_tokenizer(self) -> None:
# Subword regularization is only available for the slow tokenizer.
sp_model_kwargs = {"enable_sampling": True, "alpha": 0.1, "nbest_size": -1}
tokenizer = self.get_tokenizer(sp_model_kwargs=sp_model_kwargs)
self.assertTrue(hasattr(tokenizer, "sp_model_kwargs"))
self.assertIsNotNone(tokenizer.sp_model_kwargs)
self.assertTrue(isinstance(tokenizer.sp_model_kwargs, dict))
self.assertEqual(tokenizer.sp_model_kwargs, sp_model_kwargs)
self.check_subword_sampling(tokenizer)
```
### Motivation
The problem with the current method is that we don't have a view of the number of tests actually performed on each type of tokenizer. If errors are made in the configuration of the test classes, we can have a green check for all the tests while in reality nothing has been checked.
### Your contribution
If you ever find it relevant, I can make the changes or let someone else who would be available to do it before me. | 09-21-2022 14:02:35 | 09-21-2022 14:02:35 | Let me ping you @ydshieh as it seems to me that you have a great knowledge of testing and your opinion on it could be really appreciated! (and cc @LysandreJik and @sgugger for visibility)<|||||>Would `test_sentencepiece` be an attribute of the common tester? Not sure where it comes from in your code sample ;-)<|||||>Oh yes you're right, I should have mentioned that!
Indeed `test_sentencepiece` is an attribute of the test class (if you want to see how this decorator works on a very simple test class, you can see an example [here](https://www.tutorialspoint.com/unittest_framework/unittest_framework_skip_test.htm)).
In `TokenizerTesterMixin` it is set to `False` by default but is sometimes overridden to `True` by the test class of a particular tokenizer.<|||||>I love the usage of `@unittest.skipIf`!
However, I see a problem with `@unittest.skipIf(not test_sentencepiece, "Not testing sentencepiece")`, as `test_sentencepiece` wouldn't be defined at the time this decoration (and the condition) is evaluated.
We will need to think of a solution :-)<|||||>FYI, the same exists in `transformers`, for example
```
def test_tie_model_weights(self):
if not self.test_torchscript:
return
```<|||||>> I love the usage of @unittest.skipIf!
Yeah :raised_hands:!
> However, I see a problem with @unittest.skipIf(not test_sentencepiece, "Not testing sentencepiece"), as test_sentencepiece wouldn't be defined at the time this decoration (and the condition) is evaluated.
For the case of `test_sentencepiece` I think they will be defined before as they are defined here:
https://github.com/huggingface/transformers/blob/114295c010dd9c94d48add7a0f091ba6ebdf482b/tests/test_tokenization_common.py#L142
https://github.com/huggingface/transformers/blob/19420fd99e1f08a052a1d0d267f3496002d03618/tests/models/xlm_prophetnet/test_tokenization_xlm_prophetnet.py#L33<|||||>You are right, @SaulLu! I didn't know that the decoration will be evaluated with the class attributes :-) You know better than me ๐ So It makes the change much easier!
This works!
```python
import unittest
class DummyTest(unittest.TestCase):
test_dummy = False
@unittest.skipIf(not test_dummy, "not test dummy")
def test_me(self):
assert 1 == 2
```<|||||>Well, I am quite cautious, and found we have something more to deal with. The following will test both method. It looks like it only uses the value in `DummyTestMixin`, not the one overridden in the subclasses.
(it will work if we override `test_me` methods too with the skipIf)
```
import unittest
class DummyTestMixin:
test_dummy = True
@unittest.skipIf(not test_dummy, "not test dummy")
def test_me(self):
assert 1 == 2
class DummyTest(DummyTestMixin, unittest.TestCase):
test_dummy = True
class DummyNotTest(DummyTestMixin, unittest.TestCase):
test_dummy = False
```<|||||>Oh no! It would have been so great to have! But cool that you saw it quickly @ydshieh <|||||>We can do some research though. I can find some time to see if there is any approach.<|||||>From a brief look, it looks like the solution adopted in the model tester (testing the class variable at the beginning of the test and exiting early) is the easiest one.<|||||>I have one last suggestion: it seems to me that it is possible to skip a test from inside the test using `self.skipTest` ([doc](https://docs.python.org/3/library/unittest.html?highlight=skiptest#unittest.TestCase.skipTest) - it was introduced in python 3.1).
On the toy example it seems to work well:
```python
import unittest
class DummyTestMixin:
test_dummy = True
a = 2
def test_me(self):
if not self.test_dummy:
self.skipTest("not test dummy")
assert self.a == 2
class DummyTestPass(DummyTestMixin, unittest.TestCase):
test_dummy = True
class DummyNotTestPass(DummyTestMixin, unittest.TestCase):
test_dummy = False
class DummyTestNotPass(DummyTestMixin, unittest.TestCase):
test_dummy = True
a = 1
class DummyNotTestNotPass(DummyTestMixin, unittest.TestCase):
test_dummy = False
a = 1
if __name__ == "__main__":
unittest.main()
```
What do you think about this?<|||||>This is awesome, @SaulLu ! Thank you :-). I would love this new approach to skip. Leave @sgugger and @LysandreJik for a final confirmation.<|||||>That works for me!<|||||>So cool! I'll try to take care of these changes today or tomorrow<|||||>Oops, it completely slipped my mind. I'll try to do this next week. |
transformers | 19,141 | closed | Support depencencies from github | Updated the `deps` regex to support dependencies from github.
For example I often want to run the `transformers` CI but with the `main` branch of `datasets` using
```
"datasets @ git+https://github.com/huggingface/datasets@main#egg=datasets",
``` | 09-21-2022 13:30:11 | 09-21-2022 13:30:11 | Tests failure are not related to this PR (but to `datasets`, sorry)
<s>May I merge anyway ?</s>
EDIT: actually the CI has been fixed on main, let me just update this PR<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,140 | closed | [fix] Add DeformableDetrFeatureExtractor | # What does this PR do?
As pointed out by @Deppmeng who is adding Conditional DETR in #18948, the postprocessing of Deformable DETR is actually different compared to regular DETR. Namely, a sigmoid activation function is used rather than softmax, and the no-object class is included, whereas DETR discards this class.
Hence, we'll need a new `DeformableDetrFeatureExtractor` which includes this custom postprocessing logic. As only the postprocessing of object detection is different, I'm using `Copied from` statements wherever possible.
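For context, the difference in postprocessing boils down to something like this (a simplified sketch, not the exact implementation; `logits` stands for the model's class logits):
```python
import torch

logits = torch.randn(2, 300, 91)  # hypothetical (batch_size, num_queries, num_classes) class logits

# DETR-style: softmax over classes, then drop the last "no-object" class
probs = logits.softmax(-1)
scores, labels = probs[..., :-1].max(-1)

# Deformable-DETR-style: per-class sigmoid, keep all classes, take top-k over (query, class) pairs
probs = logits.sigmoid()
scores, topk_indexes = probs.flatten(1).topk(100, dim=1)
labels = topk_indexes % logits.shape[2]
```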
To do:
- [x] update `preprocessor_config.json` of all repos on the hub
- [x] use `from_pretrained` in the code snippets for the feature extractor | 09-21-2022 12:21:25 | 09-21-2022 12:21:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,139 | closed | Allowing users to use the latest `tokenizers` release ! | # What does this PR do?
- Allow users to use the most recent `tokenizers` version.
- Should be 100% backward compatible, but there were quite large changes to the actual codebase.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 09-21-2022 11:45:03 | 09-21-2022 11:45:03 | You need to run `make style` when changing the setup.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Failure seem unrelated. Yet rebasing on main to remove the failure linked to the Datasets release and re-laucnhing spurious tests would be helpful to ease everyone's mind :-)<|||||>@sgugger Should I wait for a second core maintainer's opinion on this ?<|||||>Nope, you can go ahead and merge :-) <|||||>I couldn't immediately find the release process for this repository - when will this make it into a release? `tokenizers` (for versions earlier than 0.13.0) had no wheel available for Apple silicon, so I believe until this PR is released we're stuck with source builds for that dependency.<|||||>The next release of Transformers will be in a month roughly. In the meantime, you can install it from source. |
transformers | 19,138 | closed | `is_torch_tpu_available() got an unexpected keyword argument 'check_device'` | ### System Info
transformers==4.18.0
pytorch==1.12.0
pytorch-lightning==1.6.0
### Who can help?
@patrickvonplaten @sgugger @SaulLu
When I run `examples/pytorch/question-answering/run_qa.py`, it reports a bug that `is_torch_tpu_available() got an unexpected keyword argument 'check_device'`. I don't know why this problem happens. Could you help me solve this problem?

### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python run_qa.py \
--model_name_or_path xlm-roberta-base \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--version_2_with_negative \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
### Expected behavior
Could you help me solve this problem? | 09-21-2022 10:58:24 | 09-21-2022 10:58:24 | You are trying to run the examples of the main branch along with an older version of Transformers. Either upgrade your Transformers or use the [examples matching v4.18.0](https://github.com/huggingface/transformers/tree/v4.18.0/examples).<|||||>Thanks for your response. |
transformers | 19,137 | closed | how to load multiple text files in LineByLineTextDataset ? | null | 09-21-2022 09:52:54 | 09-21-2022 09:52:54 | Hi everyone,
I am a bit new to the Hugging Face environment. I was trying to pretrain a model from scratch, taking some inspiration from this post:
[tutorial link](https://ireneli.eu/2021/03/28/deep-learning-19-training-mlm-on-any-pre-trained-bert-models/)
Question: can I pass all the text files to construct the dataset?
```python
dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path='MyData.tsv',
    block_size=128,
)
```
Also, could someone explain to me what the block size means?
Does it mean it will load 128 lines at a time to construct a batch of the dataset?<|||||>[LineByLineTextDataset](https://github.com/huggingface/transformers/blob/main/src/transformers/data/datasets/language_modeling.py#L115) does not seem to provide such functionality; I think you can either combine those tsvs yourself or extend a class similar to this:
```python
import logging
import os
from typing import Dict, List, Union

import torch
from torch.utils.data import Dataset

from transformers import PreTrainedTokenizer

logger = logging.getLogger(__name__)


class LineByLineTextDataset(Dataset):
    """
    Variant of `LineByLineTextDataset` that accepts either a single file path or a list of file paths.
    """

    def __init__(self, tokenizer: PreTrainedTokenizer, file_paths: Union[str, List[str]], block_size: int):
        # Accept a single path as well as a list of paths
        if isinstance(file_paths, str):
            file_paths = [file_paths]
        for file in file_paths:
            if os.path.isfile(file) is False:
                raise ValueError(f"Input file path {file} not found")

        # Here, we do not cache the features, operating under the assumption
        # that we will soon use fast multithreaded tokenizers from the
        # `tokenizers` repo everywhere =)
        logger.info(f"Creating features from dataset files at {file_paths}")

        # Collect the non-empty lines of every file
        all_lines = []
        for file in file_paths:
            with open(file, encoding="utf-8") as f:
                lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())]
            all_lines.extend(lines)

        # `block_size` caps the number of tokens kept per line after tokenization
        batch_encoding = tokenizer(all_lines, add_special_tokens=True, truncation=True, max_length=block_size)
        self.examples = [{"input_ids": torch.tensor(e, dtype=torch.long)} for e in batch_encoding["input_ids"]]

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, i) -> Dict[str, torch.Tensor]:
        return self.examples[i]
```
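Usage would then look something like this (the file names are just placeholders):
```python
dataset = LineByLineTextDataset(
    tokenizer=tokenizer,  # e.g. a tokenizer loaded with AutoTokenizer.from_pretrained(...)
    file_paths=["part1.tsv", "part2.tsv"],
    block_size=128,
)
```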
`batch_encoding = tokenizer(lines, add_special_tokens=True, truncation=True, max_length=block_size)` here it seems like `block_size` controls the maximum number of tokens kept per line after encoding, so if a line is 'too long' it will just be truncated.<|||||>Hi @ZurabDz,
Thanks for the help.
So I will combine them to get a single tsv file.
<|||||>I think you should close an issue if it's resolved. |
transformers | 19,136 | closed | `if not something` is ambiguous | ### System Info
- `transformers` version: 4.22.0
- Platform: Linux-4.19.157-1.20201118.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.3
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.10.2+cu111 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (gpu)
- Jax version: 0.3.17
- JaxLib version: 0.3.15
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
GPU device: A100 x 1
### Who can help?
@SaulLu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I run the following code
https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_mlm_flax.py#L296
and an error occurs here:
https://github.com/huggingface/transformers/blob/2c8b508ccabea6638aa463a137852ff3b64be036/src/transformers/tokenization_utils_base.py#L2907
My `required_input` is a tensor (in JAX), and sometimes I get the error `ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()`.
### Expected behavior
https://github.com/huggingface/transformers/blob/2c8b508ccabea6638aa463a137852ff3b64be036/src/transformers/tokenization_utils_base.py#L2907
I suggest to use `if required_input is None` instead of `if not required_input` to avoid error, as the latter one is ambiguous. | 09-21-2022 09:10:47 | 09-21-2022 09:10:47 | Thank you for the issue! It's good to know the problems encountered. :hugs:
On the other hand, I'm not sure the case we want to catch is if `required_input` is equal to None (rather an empty list and maybe something else..). I don't have a typical case in mind but that would be the first thing we would need to find to solve your problem.
Would you happen to know what input was given to the pad method when you got this error?<|||||>I examined what happened here with the following method:
I add two lines
```python
print(f"{required_input=}")
print(f"{self.model_input_names[0]=}")
```
between line 2095 and 2097 to see the values of these two variables, and get the following output
https://github.com/huggingface/transformers/blob/2c8b508ccabea6638aa463a137852ff3b64be036/src/transformers/tokenization_utils_base.py#L2905-L2910
OUTPUT:
```python
required_input=DeviceArray([[[ 1, 3091, 459, ..., 3, 3, 3],
[ 1, 1175, 60, ..., 3, 3, 3],
[ 1, 1191, 90, ..., 3, 3, 3],
...,
[ 1, 1433, 60, ..., 3, 3, 3],
[ 1, 511, 292, ..., 3, 3, 3],
[ 1, 442, 318, ..., 3, 3, 3]]], dtype=int32)
self.model_input_names[0]='input_ids'
```
Therefore, I believe `required_input` is typically a list or a tensor. If we use `if not required_input`, it may be coerced to a `bool`, which is ambiguous for arrays.
As you said, if we want to catch the case where `required_input` is an empty list, why don't we consider judging by its shape?
In the following verifications I would like to show that this line (2907) might not work exactly as we want.
list
```python
In [1]: empty = [[], []] # tokenizer([""] * 2, add_special_tokens=False)
In [2]: not_empty = [[1, 2, 3], [4, 5, 6]]
In [3]: not empty, not not_empty
Out[3]: (False, False)
```
numpy
```python
In [4]: import numpy as np
In [5]: not np.array(empty)
<ipython-input-5-b0dbaf8aec3d>:1: DeprecationWarning: The truth value of an empty array is ambiguous. Returning False, but in future this will result in an error. Use `array.size > 0` to check that an array is not empty.
not np.array(empty)
Out[5]: True
In [6]: not np.array(not_empty)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 not np.array(not_empty)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
pt
```python
In [8]: import torch
In [12]: not torch.Tensor(empty)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [12], in <cell line: 1>()
----> 1 not torch.Tensor(empty)
RuntimeError: Boolean value of Tensor with no values is ambiguous
In [13]: not torch.Tensor(not_empty)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [13], in <cell line: 1>()
----> 1 not torch.Tensor(not_empty)
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
```
jax
```python
In [14]: import jax.numpy as jnp
In [15]: not jnp.array(empty)
Out[15]: True
In [16]: not jnp.array(not_empty)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [16], in <cell line: 1>()
----> 1 not jnp.array(not_empty)
File /usr/local/lib/python3.8/functools.py:399, in partialmethod._make_unbound_method.<locals>._method(cls_or_self, *args, **keywords)
397 def _method(cls_or_self, /, *args, **keywords):
398 keywords = {**self.keywords, **keywords}
--> 399 return self.func(cls_or_self, *self.args, *args, **keywords)
File ~/env/xxx/lib/python3.8/site-packages/jax/_src/device_array.py:43, in _forward_method(attrname, self, fun, *args)
42 def _forward_method(attrname, self, fun, *args):
---> 43 return fun(getattr(self, attrname), *args)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
As a conclusion, if the goal is to determine whether `required_input` is a list/array/tensor containing nothing, there is a risk of raising an error. In this case, I suggest judging by the shape if it is an array/tensor (if it's a list, it's fine, so it might be necessary to get its type first).
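To make the shape-based suggestion concrete, here is a minimal sketch of a type-aware emptiness check (the helper name and exact branching are hypothetical, not the actual fix in `tokenization_utils_base.py`):
```python
def _is_empty(required_input):
    # Hypothetical helper: avoid calling bool() on array-like inputs,
    # which is ambiguous for multi-element numpy/torch/jax arrays.
    if required_input is None:
        return True
    if hasattr(required_input, "shape"):  # np.ndarray, torch.Tensor, jax arrays
        return 0 in tuple(required_input.shape)
    return len(required_input) == 0  # plain Python lists / tuples
```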
If the goal is just to determine whether it's `None` (as `tokenizer(something)` seems not to return `None` in most cases, I think this line doesn't mean to do this), would `if something is not None` be better?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry this slipped through the cracks. Very much in agreement here @lsz05; this is one of the reasons we usually avoid relying on Python's magic bool conversion and instead test for explicit values. Would you mind making a PR with a fix?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I almost forgot it.
I'll do something asap.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,135 | closed | Use metrics that consider the input as well as the (predicted, reference) tuple in the Trainer | ### Feature request
Allow the **compute_metrics()** in the **Trainer** to take into account the **original input** in addition to the predictions and labels.
### Motivation
It is currently possible to pass a custom compute_metrics() to the Trainer for evaluation. An example is
```
def compute_metrics(eval_preds):
metric = evaluate.load("glue", "mrpc")
logits, labels = eval_preds
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
```
However, the compute_metrics seems to be constrained to receive only a (logits, labels) tuple.
This is insufficient for some metrics that also depend on the original sentence. An example is [SARI](https://huggingface.co/spaces/evaluate-metric/sari), which is currently implemented in the evaluate library.
Being unable to use the original input in the evaluation makes it impossible to use the Trainer for some seq2seq tasks, e.g. simplification.
### Your contribution
If the request is accepted, I will try to contribute with a PR. | 09-21-2022 08:53:03 | 09-21-2022 08:53:03 | I just saw that the feature is actually already added.
One can add the trainer argument --include_inputs_for_metrics
and the compute_metrics will receive the inputs as the third element of the tuple. |
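For anyone landing here later, a minimal sketch of how an input-dependent metric such as SARI could consume those inputs (this assumes the inputs are exposed as `eval_pred.inputs` and that predictions are already token ids, e.g. `Seq2SeqTrainer` with `predict_with_generate=True`; the checkpoint name is a placeholder):
```python
import evaluate
import numpy as np
from transformers import AutoTokenizer

sari = evaluate.load("sari")
tokenizer = AutoTokenizer.from_pretrained("t5-small")  # placeholder checkpoint

def compute_metrics(eval_pred):
    # With include_inputs_for_metrics=True the Trainer also forwards the model inputs.
    preds, labels, inputs = eval_pred.predictions, eval_pred.label_ids, eval_pred.inputs
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(np.where(labels != -100, labels, tokenizer.pad_token_id), skip_special_tokens=True)
    decoded_sources = tokenizer.batch_decode(inputs, skip_special_tokens=True)
    # SARI expects one or more references per source sentence.
    return sari.compute(sources=decoded_sources, predictions=decoded_preds, references=[[label] for label in decoded_labels])
```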
transformers | 19,134 | closed | Added the option to specify "use_one_hot=True" in the forward pass/mo… | …del call. Allows to have optimizable inputs in the vector space of WordPiece.
# What does this PR do?
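For context, a generic sketch of the idea this describes, not this PR's actual code: multiplying one-hot (or relaxed) token vectors by the embedding matrix yields differentiable `inputs_embeds`, so the input can be optimized in embedding space.
```python
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

input_ids = tokenizer("an input we want to optimize", return_tensors="pt").input_ids
one_hot = F.one_hot(input_ids, num_classes=model.config.vocab_size).float()
one_hot.requires_grad_()  # leaf tensor we can take gradients with respect to

# Equivalent to an embedding lookup, but differentiable with respect to `one_hot`.
inputs_embeds = one_hot @ model.get_input_embeddings().weight

outputs = model(inputs_embeds=inputs_embeds)
outputs.logits.sum().backward()  # gradients flow back into `one_hot`
```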
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-21-2022 08:39:27 | 09-21-2022 08:39:27 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19134). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,133 | closed | Fix FlaxPretTrainedModel pt weights check | # What does this PR do?
[files tab](https://github.com/huggingface/transformers/pull/19133/files) should make it clear what's being changed.
```py
if os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME)
# will always evaluate to True as long as `pretrained_model_name_or_path` is a non-empty str
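# An existence check would avoid the always-truthy branch; a hypothetical sketch
# (not necessarily this PR's exact change) would be e.g.:
# if os.path.isfile(os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME)):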
``` | 09-21-2022 07:56:34 | 09-21-2022 07:56:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,132 | closed | oneccl_bindings_for_pytorch 1.12.0 prebuilt wheel does not work with … | …PyTorch 1.12.1
Raise an error in this condition and update the doc.
Signed-off-by: Wang, Yi A <[email protected]>
Fixes # (issue)
torch 1.12.1 does not work with intel ccl 1.12.0
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 09-21-2022 03:26:27 | 09-21-2022 03:26:27 | @yao-matrix @sgugger please notice the issue and help review<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19132). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger I agree with your point, so I upload another PR (https://github.com/huggingface/transformers/pull/19151) to only update the doc.<|||||>we will release a new oneCCL 1.12.1 to work with torch 1.12.1 |
transformers | 19,131 | closed | [BugFix] Fix fsdp option on shard_grad_op. | # What does this PR do?
Fix fsdp option on shard_grad_op.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-21-2022 03:22:21 | 09-21-2022 03:22:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Is there someone can review and merge this pr, thanks a lot! cc: @sgugger @ydshieh |
transformers | 19,130 | closed | Remove duplicate parameters in run_clip.py | # What does this PR do?
The overwrite_cache parameter in this file is declared twice. Remove one of the two.
https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/run_clip.py
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-21-2022 03:16:49 | 09-21-2022 03:16:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,129 | closed | Add doctests to Perceiver examples | # What does this PR do?
Related to #16292
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Taken from #16292 : @patrickvonplaten @ydshieh @patil-suraj
| 09-20-2022 22:43:06 | 09-20-2022 22:43:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @stevenmanton
Thank you a lot for the PR! Currently the test will fail for the following
```bash
FAILED src/transformers/models/perceiver/modeling_perceiver.py::transformers.models.perceiver.modeling_perceiver.PerceiverForImageClassificationConvProcessing.forward
FAILED src/transformers/models/perceiver/modeling_perceiver.py::transformers.models.perceiver.modeling_perceiver.PerceiverForImageClassificationFourier.forward
FAILED src/transformers/models/perceiver/modeling_perceiver.py::transformers.models.perceiver.modeling_perceiver.PerceiverForImageClassificationLearned.forward
```
We should add the expected outputs in the doc examples.
For other doc examples in this model, the tests pass, but they don't have outputs to test.
For example, `PerceiverForOpticalFlow` has
```
>>> logits = outputs.logits
```
without output and the expected value. We can have something like
```python
>>> list(logits.shape)
expected shapes
```
Would you like to fix and enhance those doc examples?
Once you have a change (staged or committed), you can run the test like
```
python utils/prepare_for_doc_test.py src/transformers/models/perceiver/modeling_perceiver.py
pytest --doctest-modules src/transformers/models/perceiver/modeling_perceiver.py -sv --doctest-continue-on-failure
```
Once the test is run, you can clean up the git status before further changes or push.
Thanks!<|||||>@ydshieh thanks for the quick feedback! Yes, I noticed those tests were failing, but I thought it might be something about my local environment and the extra newline stuff. I just pushed a fix, which passes locally.
By the way, how did you know those tests were failing? The CI pipelines all seemed to be passing. Did you have to checkout my branch and run it locally?<|||||>@stevenmanton Thanks for the push. However, instead of changing to
```python
>>> predicted_class = model.config.id2label[predicted_class_idx]
```
we should change it to
```python
>>> print("Predicted class:", model.config.id2label[predicted_class_idx])
add some output here - so the doctest will test against it
```
You can find similar work done here:
https://github.com/huggingface/transformers/blob/d5848a574a3990c95f20512673ecef9f57e0fe81/src/transformers/models/deit/modeling_deit.py#L735-L736
Otherwise, the example has nothing to be tested.
> By the way, how did you know those tests were failing? The CI pipelines all seemed to be passing. Did you have to checkout my branch and run it locally?
Yes. The PR CI on CircleCI does not run doctest :-). It is run after the PR being merged.<|||||>@ydshieh Thanks for your feedback and patience. I believe I've corrected it. The extra newline stuff is confusing, but if you run `prepare_for_doc_test.py` (which I think just adds a newline to all the docstrings) on the last commit, all tests pass for me locally. <|||||>Thanks! As we are going to run the doctest for this model, would you like to add some expected outputs at the following places?
https://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1921
https://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1695
https://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1131
https://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1033
https://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1021
And a few places in this doc example (for `logits` and `loss`)
https://github.com/huggingface/transformers/blob/bebcb950c7ad4dc1ef676806a6ac4283df7f5885/src/transformers/models/perceiver/modeling_perceiver.py#L1695
For `logits`, it would look like adding (the values should be collected from the run)
```
>>> list(logits.shape)
[1, 196, 8192]
```<|||||>When running `prepare_for_doc_test.py`, it will add some empty lines - to make doctest pass. That is why we should stage our change, run that script, run doctest, and discard the change before commit or further changes :-)<|||||>@ydshieh Ok, I added some more checks for the sizes of logits. They all pass for me locally. |
transformers | 19,128 | closed | Document and validate typical_p in generation | # What does this PR do?
Throws a `ValueError` when `typical_p` argument is provided to text-generation, but its value or `do_sample=False` prevent typical decoding from happening as intended. Adds a line documenting typical decoding.
Most arguments to generate were previously covered in #18261 , but not `typical_p`.
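For illustration, a minimal usage sketch of typical decoding with this argument (the checkpoint and hyperparameter values are placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The meaning of life is", return_tensors="pt")
# typical_p only takes effect when sampling; with do_sample=False (or typical_p=1.0)
# the validation described above raises a ValueError instead of silently ignoring it.
outputs = model.generate(**inputs, do_sample=True, typical_p=0.9, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```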
| 09-20-2022 22:12:07 | 09-20-2022 22:12:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,127 | closed | document-question-answering pipeline does not work with some models | ### System Info
Colab, latest release
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
!apt install tesseract-ocr
!apt install libtesseract-dev
!pip install Pillow
!pip install pytesseract
# You can use a http link, a local path or a PIL.Image object
img_path = "https://huggingface.co/spaces/impira/docquery/resolve/main/invoice.png"
from transformers import pipeline
# This works
pipe = pipeline("document-question-answering", model="impira/layoutlm-document-qa")
# This breaks with strange error
pipe = pipeline("document-question-answering", model="impira/layoutlm-invoices")
# Error: KeyError: 'layoutlm-tc'
```
### Expected behavior
This would work with both models | 09-20-2022 21:05:10 | 09-20-2022 21:05:10 | The `model_type` in the config.json of this specific model seems to be wrong. The types currently supported that would work with LayoutLM are:
- `layoutlm`
- `layoutlmv2`
- `layoutlmv3`
- `layoutxlm`
The specified type is `layoutlm-tc`.<|||||>Cc @ankrgyl <|||||>From the `transformers` side, I think the error could be a bit more descriptive/informative than having a `KeyError`.<|||||>I had a bit of discussion with @NielsRogge about this. The model type here is different because this model actually has a slightly different architecture than standard LayoutLM (it has an additional token classifier head). @NielsRogge was kind enough to submit a PR (https://huggingface.co/impira/layoutlm-invoices/discussions/1) which changes it to `layoutlm`.
With this change (now merged), your code above should run just fine. However, you will likely get suboptimal results, because the model has learned to depend on the token classifier to produce accurate results. I'd recommend running it through DocQuery (https://github.com/impira/docquery) which has a patched version of the model ([here](https://github.com/impira/docquery/blob/main/src/docquery/ext/model.py#L152)) that makes use of it.
You can do that via something like:
```
!apt install tesseract-ocr
!apt install libtesseract-dev
!pip install Pillow
!pip install pytesseract
!pip install docquery
# You can use a http link, a local path or a PIL.Image object
img_path = "https://huggingface.co/spaces/impira/docquery/resolve/main/invoice.png"
# This is a patched version of the pipeline that knows how to use the token classifier
from docquery import pipeline
# This works
pipe = pipeline("document-question-answering", model="impira/layoutlm-document-qa")
# This should work
pipe = pipeline("document-question-answering", model="impira/layoutlm-invoices")
```
In the meantime, I'll explore a few alternatives, e.g. packaging up the model directly in the repo or patching it a different way, so that it uses the token classifier.<|||||>@NielsRogge and @osanseviero just following up on this, we made the necessary changes in https://github.com/impira/docquery to keep the model working both in transformers directly and DocQuery, so at least from our side, we could close this issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@osanseviero I believe this issue should be closable now (your original repro should now succeed). But please let me know if you see otherwise.<|||||>Sounds good! Thanks a lot for this! |
transformers | 19,126 | closed | Fix None loss in docstring for Wav2Vec2ForPretraining | - [ ] This PR Fix None loss in docstring for Wav2Vec2ForPreTraining | 09-20-2022 16:55:13 | 09-20-2022 16:55:13 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,125 | closed | [Wav2Vec2] Fix None loss in docstring for Wav2Vec2ForPreTraining | - [ ] This PR fix None loss in docstring for Wav2Vec2ForPretraining | 09-20-2022 16:22:18 | 09-20-2022 16:22:18 | |
transformers | 19,124 | closed | Sharding fails in TF when absolute scope was modified if `.` in layer name | # What does this PR do?
Fixes #18776, by taking care of the particular case of absolute scope modifications
## Who can review?
| 09-20-2022 16:08:17 | 09-20-2022 16:08:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Do you think we should add a special test for that @sgugger ? <|||||>On it! ๐ค<|||||>Just added a test, RAG works, I think I'll try to test most of our models just to be sure that there is no other strange pattern. <|||||>Tested with `"openai/clip-vit-large-patch14", "xlm-roberta-base"` in addition, works nicely. |
transformers | 19,123 | open | Adding TensorFlow port of LeViT | ### Feature request
To add the TensorFlow port of the [LeViT](https://arxiv.org/abs/2104.01136) architecture. The architecture is currently present in the Transformers library in [PyTorch](https://github.com/huggingface/transformers/blob/main/src/transformers/models/levit/modeling_levit.py).
### Motivation
[LeViT](https://arxiv.org/abs/2104.01136) is a family of architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. The TensorFlow port would be an addition to the hybrid architecture families.
### Your contribution
I would like to make the contribution by building out the TensorFlow port.
Tagging: @amyeroberts who could assign me to the task of adding the TensorFlow port of the model. | 09-20-2022 14:34:20 | 09-20-2022 14:34:20 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am still working on this. |
transformers | 19,122 | closed | Skip `test_export_to_onnx` for `LongT5` if `torch` < 1.11 | # What does this PR do?
With torch `1.10`, we get an exception from a C++ file.
```bash
Exception raised from index_select_out_cpu_ at ../aten/src/ATen/native/TensorAdvancedIndexing.cpp:887 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7f60b7f4dd62 in /home/yih_dar_huggingface_co/miniconda3/envs/py39/lib/python3.9/site-packages/torch/lib/libc10.so)
```
Skip this test for torch < 1.11: **Make past CI clean**
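A minimal sketch of how such a version-gated skip could look (the class name is a placeholder and the repo's own test utilities may be used instead of a plain `unittest.skipIf`):
```python
import unittest

import torch
from packaging import version


class LongT5ModelTest(unittest.TestCase):
    @unittest.skipIf(
        version.parse(torch.__version__) < version.parse("1.11"),
        "ONNX export for LongT5 hits a C++ indexing error on torch < 1.11",
    )
    def test_export_to_onnx(self):
        ...  # the existing export test body would go here
```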
P.S. this test is only defined for 3 models (T5, LongT5 and FSMT), but skipped (without any condition) for T5 and FSMT.
It should be fine to remove this test, and rely on `tests/onnx`. | 09-20-2022 13:35:17 | 09-20-2022 13:35:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,121 | closed | german processing | continues https://github.com/huggingface/transformers/issues/18564 @sgugger | 09-20-2022 13:02:01 | 09-20-2022 13:02:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,120 | closed | Add LayoutLMv2ForRelationExtraction | # What does this PR do?
This PR adds the relation extraction head of LayoutLMv2, which was a highly requested feature as seen in #14330 #15451 #18091 | 09-20-2022 09:19:50 | 09-20-2022 09:19:50 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19120). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger I'm getting the following error from `make fixup`:
```
Checking all objects are properly documented.
Traceback (most recent call last):
File "/home/niels/python_projects/transformers/utils/check_repo.py", line 788, in <module>
check_repo_quality()
File "/home/niels/python_projects/transformers/utils/check_repo.py", line 782, in check_repo_quality
check_all_objects_are_documented()
File "/home/niels/python_projects/transformers/utils/check_repo.py", line 693, in check_all_objects_are_documented
raise Exception(
Exception: The following objects are in the public init so should be documented:
- LayoutLMv2ForRelationExtraction
```
However, this model is added to layoutlmv2.mdx, so not sure why this error occurs.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @NielsRogge , any chance that this will ever be implemented?
Looking around the history of the PR and issues, it seems that there was a fair bit of interest.<|||||>Hi @lamaeldo,
The reason the PR wasn't merged is that models need to output fixed-size tensors, to make sure things like distributed training and ONNX export work. However, LayoutLMv2ForRelationExtraction outputs lists of tensors in its current implementation, due to each example in the batch having a different number of relations. So we would need to pad them up to a fixed size such that the model outputs fixed-size tensors.
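For anyone who wants to pick this up, a rough sketch of the kind of padding that would be needed (the shapes, the `max_relations` cap and the pad value are hypothetical):
```python
import torch


def pad_relations(per_example_relations, max_relations, pad_value=-100):
    # per_example_relations: list of (num_relations_i, 2) tensors of (head, tail) index pairs,
    # where num_relations_i differs per example. Pad/truncate to a fixed (batch, max_relations, 2).
    batch = []
    for rel in per_example_relations:
        rel = rel[:max_relations]
        padding = rel.new_full((max_relations - rel.shape[0], rel.shape[1]), pad_value)
        batch.append(torch.cat([rel, padding], dim=0))
    return torch.stack(batch, dim=0)
```
The model could then also return a mask (or reuse the pad value) so downstream code knows which rows are real relations.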
Haven't looked into that yet but if you're willing to contribute, let me know!
Btw I do have a notebook on fine-tuning this model [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutXLM).
|
transformers | 19,119 | closed | Fix BeitFeatureExtractor postprocessing | # What does this PR do?
- Fixes a `BeitFeatureExtractor.post_process_semantic_segmentation()` assertion error when no `target_sizes` argument is provided
- Ensures post_process_semantic_segmentation returns a list of int64 PyTorch tensors
- Adds a test to ensure correct post-processing
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X ] Did you write any new necessary tests?
| 09-20-2022 09:01:05 | 09-20-2022 09:01:05 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @alaradirik, please also ping a core maintainer for review before merging PRs. |
transformers | 19,118 | closed | CLIPTokenizer behaves inconsistently depending on whether ftfy is installed or not | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.22.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@patil-suraj
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Run following code without ftfy installed.
```py
from transformers import CLIPTokenizer
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
tokenizer("rรฉsumรฉ") # {'input_ids': [49406, 15077, 49407], 'attention_mask': [1, 1, 1]}
```
2. Run following code with ftfy installed.
```py
from transformers import CLIPTokenizer
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
tokenizer("rรฉsumรฉ") # {'input_ids': [49406, 29106, 7054, 4166, 49407], 'attention_mask': [1, 1, 1, 1, 1]}
```
### Expected behavior
They should work consistently. | 09-20-2022 08:21:11 | 09-20-2022 08:21:11 | This happens because `BasicTokenizer`, which is [used as](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/tokenization_clip.py#L159) fallback text fix function, [strips accents if `do_lower_case=True`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/tokenization_bert.py#L174-L176).
We may fix this by explicitly setting `strip_accents` to `False`; the [ViT/L-14 tokenizer](https://huggingface.co/openai/clip-vit-large-patch14/raw/main/tokenizer.json) includes vocabs with accents, so I think stripping accents should not be done.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
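For reference, a small repro sketch of the accent stripping described above and the suggested `strip_accents=False` workaround (illustrative only, not a committed fix):
```python
from transformers.models.bert.tokenization_bert import BasicTokenizer

text = "résumé"
print(BasicTokenizer(do_lower_case=True).tokenize(text))  # accents stripped
print(BasicTokenizer(do_lower_case=True, strip_accents=False).tokenize(text))  # accents kept
```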
transformers | 19,117 | closed | Fix the wrong schedule in runner check CI | # What does this PR do?
The current (wrong) schedule in `check_runner_status.yml`:
`* */1 * * *` -> "At every minute past every hour."
But we want
`0 */1 * * *` -> "At minute 0 past every hour."
| 09-20-2022 08:06:19 | 09-20-2022 08:06:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,116 | closed | HfArgumentParser support yaml parser | ### Feature request
HfArgumentParser now supports parsing dicts and JSON files; would it be possible to also support parsing the widely used YAML format?
### Motivation
I think using yaml is a good way to record arguments.
### Your contribution
Not yet. | 09-20-2022 07:53:49 | 09-20-2022 07:53:49 | cc @sgugger
If you want to open a PR, please go ahead!<|||||>You can just use
`parser.parse_dict(yaml.safe_load(f))`<|||||>Which could all go in a `parse_yaml_file` method :-) Doing this and also refactoring the `parse_json_file` to use `parse_dict`, as well as adding small tests would be nice additions that shouldn't be too hard, so putting the "Good first issue" label here.
To summarize:
- [ ] adding a `parse_yaml_file` method to `HfArgumentParser` with the code above
- [ ] refactor the dupe code between `parse_json_file` and `parse_dict` similar to the code above
- [ ] add a small test of `parse_yaml_file`
- [ ] add a small test of `parse_json_file`
This could be done in a single PR or separate ones :-)<|||||>
Hi, I would like to work on it
<|||||>How can I write tests for `parse_yaml_file` and `parse_json_file`? They will require an external JSON and YAML file for testing.<|||||>No, you can create it during the test by saving some dictionary (look at the `parse_dict` tests) into a temporary file.<|||||>Hey, @sgugger I have written the tests for `parse_yaml_file` and `parse_json_file` using tempfile; is it acceptable? Also, the tests pass.

<|||||>You can also use the context manager for a temp dir.
```
with tempfile.TemporaryDirectory() as tmp_dir:
# Save file in tmp_dir as usual
# do the tests
```
The plus for this is that it's automatically cleaned up when you exit the with block (whereas the temp file will stay until the next restart).<|||||>Okay I will change that! |
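For reference, a rough sketch of what the new method and a `TemporaryDirectory`-based test could look like (written as standalone functions here; in the actual PR `parse_yaml_file` would be a method on `HfArgumentParser` and the test a proper test case):
```python
import dataclasses
import os
import tempfile

import yaml

from transformers import HfArgumentParser


def parse_yaml_file(parser: HfArgumentParser, yaml_file: str):
    # Thin wrapper over parse_dict, mirroring the existing parse_json_file.
    with open(yaml_file, "r", encoding="utf-8") as f:
        return parser.parse_dict(yaml.safe_load(f))


@dataclasses.dataclass
class ExampleArguments:
    foo: int = 1
    bar: str = "hello"


def test_parse_yaml_file():
    parser = HfArgumentParser(ExampleArguments)
    with tempfile.TemporaryDirectory() as tmp_dir:
        path = os.path.join(tmp_dir, "args.yaml")
        with open(path, "w", encoding="utf-8") as f:
            yaml.safe_dump({"foo": 3, "bar": "world"}, f)
        (args,) = parse_yaml_file(parser, path)
        assert args == ExampleArguments(foo=3, bar="world")
```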
transformers | 19,115 | closed | [WIP] support auto-compress for glue task | # What does this PR do?
support auto-compress for glue task | 09-20-2022 06:21:01 | 09-20-2022 06:21:01 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19115). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,114 | closed | GPT Neox Japanese is in the release notes for v4.22.X but does not appear to be in the v4.22.X package. | ### System Info
I've checked in Colab that GPT Neox Japanese is not in v4.22.1
```
- `transformers` version: 4.22.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Open colab
2. `!pip install transformers` to install the latest version (currently v4.22.1)
3. `from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseTokenizer`
then the import error below is raised.
```---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
[<ipython-input-7-401722e84f23>](https://localhost:8080/#) in <module>
----> 1 from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseTokenizer
ImportError: cannot import name 'GPTNeoXJapaneseForCausalLM' from 'transformers' (/usr/local/lib/python3.7/dist-packages/transformers/__init__.py)
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
```
From [this release note](https://github.com/huggingface/transformers/releases/tag/v4.22.0), I thought we could use `GPT NeoX Japanese` starting from the v4.22.0 release. Sure enough, though, the model was not included in the zip file of the v4.22.0 release...
I would be glad to know the situation:smiley:
FYI
We can use GPT NeoX Japanese from `pip install git+https://github.com/huggingface/transformers` because it's in the main branch.
### Expected behavior
`GPT Neox Japanese` must be able to import correctly. | 09-20-2022 02:19:52 | 09-20-2022 02:19:52 | Hey @SO0529, sorry about that! That's on me, I missed this in the release notes. I just removed it. For now you can install the repo from source as you have noted in order to use it.
Thanks for letting me know!<|||||>Thank you for the quick response and for updating the release note!
We are looking forward to the next release when GPT NeoX Japanese will be available.
Let's close this issue. Thank you! |
transformers | 19,113 | closed | Add a missing space in a script arg documentation | null | 09-19-2022 22:50:48 | 09-19-2022 22:50:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,112 | closed | Problem trying to migrate cache | ### System Info
Using MacOS Big Sur v11.6.4 and jupyter lab v3.4.7.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Installed transformers in a new venv with pip
2. Imported autocast from torch and diffusers in jupyter lab, it automatically attempted to migrate the cache and I received this error asking me to open an issue.
code:
```
from torch import autocast
from diffusers import StableDiffusionPipeline
```
<img width="918" alt="differror" src="https://user-images.githubusercontent.com/30514239/191115866-5e239ee8-23b5-4f7d-b55f-0646a63d86a7.png">
### Expected behavior
I opened this because the error messaged asked me to. I imagine the expected behavior is migrating the cache without incident. | 09-19-2022 21:03:29 | 09-19-2022 21:03:29 | cc @sgugger <|||||>I'm not too sure what the problem is, maybe it is due to the intermediate subfolder. This error should only happen once in any case, and you might have lost some cached files, but they will jsut be re-downloaded.<|||||>Thanks for the clarity. Didn't cause a problem otherwise.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,111 | closed | DPR pooler weights not loading correctly | ### System Info
tested on multiple versions
- `transformers` version: 4.12.3
- Platform: Linux-4.14.281-212.502.amzn2.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:no
another environment
- `transformers` version: 4.16.2
- Platform: macOS-12.6-x86_64-i386-64bit
- Python version: 3.9.7
- PyTorch version (GPU?): 1.11.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: no
### Who can help?
@patrickvonplaten @lhoestq
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from transformers import DPRContextEncoder, DPRQuestionEncoder
question_encoder_path = "facebook/dpr-question_encoder-single-nq-base" # can also be a custom checkpoint
answer_encoder_path = "facebook/dpr-ctx_encoder-single-nq-base"
DPRQuestionEncoder.from_pretrained(question_encoder_path)
DPRContextEncoder.from_pretrained(answer_encoder_path)
```
results in the following message
```
Some weights of the model checkpoint at facebook/dpr-question_encoder-single-nq-base were not used when initializing DPRQuestionEncoder: ['question_encoder.bert_model.pooler.dense.weight', 'question_encoder.bert_model.pooler.dense.bias']
- This IS expected if you are initializing DPRQuestionEncoder from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DPRQuestionEncoder from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of the model checkpoint at facebook/dpr-ctx_encoder-single-nq-base were not used when initializing DPRContextEncoder: ['ctx_encoder.bert_model.pooler.dense.weight', 'ctx_encoder.bert_model.pooler.dense.bias']
- This IS expected if you are initializing DPRContextEncoder from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DPRContextEncoder from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
```
### Expected behavior
Model loads successfully without re-intitializing weights | 09-19-2022 18:33:07 | 09-19-2022 18:33:07 | @ArthurZucker could you take a look here (happy to answer questions about the model)<|||||>on it! <|||||>Hi @ArthurZucker , do you have any updates on this by chance? I'm getting the same issue, but the results I get when benchmarking do not suggest random initialization of the weights.<|||||>Hey, it seems that the issue comes from the following [line](https://github.com/huggingface/transformers/blob/main/src/transformers/models/dpr/modeling_dpr.py#L178), where it is hardcoded that `add_pooling_layer=False`. Setting it to `True` fixes the issue. Now I am not very familiar with the model, but a few test seems to be link to that checkpoint. Let me open a PR for a fix ! <|||||>Hey! So as mentioned here in a previous [PR](https://github.com/huggingface/transformers/pull/15068/commits/95eaf44ce93bbf46b552c606957f98b354dae73a), the optional pooling layer was removed as no checkpoints use it.
My first question would then be: do you need to have the `BERTPoolerLayer`? It is indeed a bit confusing that the pooling output does not come from the `BertPoolerLayer`. Have a look at #14486, I think it explains pretty well what's going on here.
We have two ways to go about this:
1. We add an argument in the config of DPR, and take care of updating the online config to have no breaking changes.
2. If you don't need it, then we just add a warning/update the online weights doing `from_pretrained` then `push_to_hub` and the checkpoints will then not include the `pooler` weights<|||||>Hi Arthur, I think I understand. Thanks for getting back so quickly! Its
performance suggests that the model is loading correctly so that must be
it! Thanks!
|
transformers | 19,110 | closed | Add documentation of Trainer.create_model_card | # What does this PR do?
This PR adds some documentation for `Trainer.create_model_card` and fixes the type annotations. | 09-19-2022 17:33:39 | 09-19-2022 17:33:39 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,109 | closed | Don't warn of move if cache is empty | # What does this PR do?
This PR makes sure the warning(s) about moving cache are only issued when there is a cache (so not on a fresh install). | 09-19-2022 17:24:49 | 09-19-2022 17:24:49 | |
transformers | 19,108 | closed | Flax vs torch benchmark on Wav2vec2 | So my question is, should FlaxWav2Vec2ForCTC generally be faster than Wav2Vec2ForCTC?
1.14 s ± 138 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) -> FlaxWav2Vec2ForCTC
37.7 ms ± 10.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) -> Wav2Vec2ForCTC
so the question is, should flax be faster than the default torch model?
P.S: benchmarks are done on GPU, it seems like VRAM usage is drastically larger on flax for some reason as well. | 09-19-2022 14:53:55 | 09-19-2022 14:53:55 | maybe of interest to @sanchit-gandhi <|||||>Hey @ZurabDz! `FlaxWav2Vec2ForCTC` should be faster than `Wav2Vec2ForCTC` **if** the `__call__` method is just in time (JIT) compiled (_c.f._ https://jax.readthedocs.io/en/latest/jax-101/02-jitting.html). Could you share your code for running this benchmark? We can then go through and make sure the Flax model is appropriately set-up to get max performance!
Also of interest: this notebook which JIT compiles the `__call__` method for BLOOM https://github.com/sanchit-gandhi/codesnippets/blob/main/check_flax_bloom_jit_small_testing.ipynb
You can see the speed up you get by JIT'ing the fprop! We can do something similar for your benchmark, comparing the iteration time of PyTorch to Flax (rather than the accuracy).<|||||>@sanchit-gandhi
So the code I use was something like this:
```python3
'''
Trying inference with JAX. Note: it errors out without modifying the source code currently.
I was only concerned with speed, so I just silenced the errors:
assigned self.config.do_stable_layer_norm = True in modeling_flax_wav2vec2.py
assigned self.config.feat_extract_norm = "layer"
'''
from transformers import Wav2Vec2Processor, FlaxWav2Vec2ForCTC
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = FlaxWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h", from_pt=True)
sig, sr = torchaudio.load('out.mp3') # Make sure you have linux and ffmpeg 4 or use wav/mp3 format + soundfile/librosa
# preprocess, this is computed in prefetch don't care what time will it take...(in my pipeline)
input_values = processor(sig[0], sampling_rate=16_000, return_tensors="pt").input_values
%%timeit # jupyter magic or you could use time
logits = model(input_values).logits
```
```python3
'''
Just standard inference nothing fancy
'''
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
sig, sr = torchaudio.load('out.mp3') # Make sure you have linux and ffmpeg 4 or use wav format + soundfile/librosa
# preprocess, this is computed in prefetch don't care what time will it take...(in my pipeline)
input_values = processor(sig[0], sampling_rate=16_000, return_tensors="pt").input_values
%%timeit # jupyter magic or you could use time
logits = model(input_values).logits
```
Now, what's interesting is that with Flax inference the GPU utilisation is jumpy, from 0-20%; there might be some problem
in memory allocation on CUDA, I don't know...
Tried this:
```python3
@jax.jit
def flax_model_jitted(input_values):
return model(input_values).logits
```
It seems like JIT expects a known type for Flax, so I also added something like `input_values = numpy.array(input_values)`;
in this case the GPU was not used. On CPU a speed-up is definitely present.
I installed cuda, cudnn, flax and jax with following way:
```bash
conda install -c conda-forge cudatoolkit-dev=11.2 cudnn=8.2.0
pip install -U jax[cuda11_cudnn82] -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```
P.S.: What do you guys think about [custom forward written in CUDA](https://pytorch.org/tutorials/advanced/cpp_extension.html#custom-c-and-cuda-extensions) vs Flax, assuming Flax performs at its peak?<|||||>Oops, accidentally closed the issue, sorry guys <|||||>> in this case GPU was not used
You can verify that you're running on an accelerator device by checking the number of JAX devices:
```python
print(jax.device_count())
```
This will tell you if you're on CPU or GPU!
> On CPU speed up definitely is present.
Did you make sure to use `.block_until_ready()` on the output logits? https://jax.readthedocs.io/en/latest/async_dispatch.html
Perhaps you could post your full code snippet for the JIT benchmark!
I'd do something as follows:
```python
@jax.jit
def flax_model_jitted(input_values):
return model(input_values).logits
input_values = jnp.array(input_values)
```
```python
# Compilation time (should be ~s)
%time logits = flax_model_jitted(input_values=input_values).block_until_ready()
```
```python
# Compiled time (should be ~ms)
%time logits = flax_model_jitted(input_values=input_values).block_until_ready()
```
You can refer to the ipynb for a template on how to set up a performance test: https://github.com/sanchit-gandhi/codesnippets/blob/main/check_flax_bloom_jit_small_testing.ipynb
<|||||>```python3
import jax
# This prints 1
print(jax.device_count(backend='gpu'))
```
Unfortunately, GPU utilisation is still 0%, which means inference is still done on the CPU. Memory is definitely allocated when the model is loaded, but after that nothing really happens on it.
Currently flax benchmark looks like this:
```python3
from transformers import Wav2Vec2Processor, FlaxWav2Vec2ForCTC
import torch
import torchaudio
import jax
from jax import numpy
print(jax.device_count(backend='gpu')) # this prints 1
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = FlaxWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h", from_pt=True)
sig, sr = torchaudio.load('out.mp3')
input_values = processor(sig[0], sampling_rate=16_000, return_tensors="pt").input_values
input_values = numpy.array(input_values)
@jax.jit
def flax_model_jitted(input_values):
return model(input_values).logits
%%timeit
logits = flax_model_jitted(input_values=input_values).block_until_ready()
# 90.8 ms ± 41 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%%timeit
logits = flax_model_jitted(input_values=input_values).block_until_ready()
# 62.8 ms ± 2.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
Is something wrong with `jax.numpy.array`? Do I need to somehow force things onto the GPU?
`print(jax.device_count(backend='gpu')) # this prints 1` - is this what I should be expecting for GPU usage?
<|||||>@sanchit-gandhi sorry for pinging, but any thoughts on what could be the reasoning for such weird results?<|||||>Hey @ZurabDz! Sorry for the late reply. It looks like JAX is recognising your GPU which is good! The problem likely lies in your preparation of the inputs. First, what I'd try is returning the input values as np arrays:
```python
import jax.numpy as jnp
sig, sr = torchaudio.load("out.mp3")
input_values = processor(sig[0], sampling_rate=16_000, return_tensors="np").input_values
input_values_jnp = jnp.array(input_values)
```
and then pass these to the model.
If that does not help, then you can try using `device_put()` as explained in [multiplying-matrices](https://jax.readthedocs.io/en/latest/notebooks/quickstart.html#multiplying-matrices).<|||||>Sorry, currently I am unable to test ```device_put``` I am occupied with a different problem. Maybe we should close an issue and open it later if the problem persists.<|||||>Hey @ZurabDz! Sure, let's close it for now and re-open if you continue to encounter this problem. Feel free to open a new issue for the different problem you are facing and tag me! |
transformers | 19,107 | closed | Add post_process_semantic_segmentation method to DPTFeatureExtractor | # What does this PR do?
Adds post_process_semantic_segmentation method to DPTFeatureExtractor.
I will open an issue and separate PRs to make sure that:
- Segmentation models (DETR, MaskFormer, SegFormer, etc.) have consistently named post-processing methods, arguments and outputs
- ImageSegmentationPipeline works with all available segmentation models
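For reference, a rough usage sketch of the new method; the checkpoint name is just an example and the call mirrors the existing SegFormer post-processing, so the final signature may differ slightly:
```python
import requests
from PIL import Image
from transformers import DPTFeatureExtractor, DPTForSemanticSegmentation

feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-large-ade")
model = DPTForSemanticSegmentation.from_pretrained("Intel/dpt-large-ade")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# resizes the logits to the original image size and takes the per-pixel argmax
segmentation_map = feature_extractor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
```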
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 09-19-2022 14:51:46 | 09-19-2022 14:51:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,106 | closed | Michael branch | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-19-2022 14:37:43 | 09-19-2022 14:37:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19106). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,105 | closed | Add semantic segmentation post-processing method to MobileViT | # What does this PR do?
Adds post_process_semantic_segmentation method to `MobileViTFeatureExtractor`.
I will open an issue and separate PRs to make sure that
- Segmentation models (DETR, MaskFormer, SegFormer, etc.) have consistently named post-processing methods, arguments and outputs
- ImageSegmentationPipeline works with all available segmentation models
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 09-19-2022 14:25:35 | 09-19-2022 14:25:35 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,104 | closed | [wip: test doc-builder] | Testing https://github.com/huggingface/doc-builder/pull/296 | 09-19-2022 12:21:13 | 09-19-2022 12:21:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,103 | closed | Improve vision models docs | # What does this PR do?
Improves the docs of several vision models:
* for `xxxForMaskedImageModeling` models, add a link to the `run_mim.py` script
* for `ViTMAEForPreTraining`, add a link to the `run_mae.py` script
* for ViT, add a tip about interpolation of pre-trained position embeddings (in order to fine-tune on higher resolution images)
* add figures for ViT and BEiT | 09-19-2022 11:38:35 | 09-19-2022 11:38:35 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,102 | closed | TF: check embeddings range | # What does this PR do?
Adds the same [check that was recently added to TFBart](https://github.com/huggingface/transformers/blob/ba7f2173cc578fe6d9f1cdb900d5af609f195cf6/src/transformers/models/bart/modeling_tf_bart.py#L751), which asserts that the inputs are within the embedding input range, in all models with token embeddings. As a reminder: TF doesn't enforce this check by default on `tf.gather`-dependent operations on GPU, returning a vector of `0.0` when out of bounds.
After this change, all `test_embeddings_out_of_bounds_raise_exception` tests pass (36 failures in the previous scheduled CI).
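For context, a minimal sketch of the kind of assertion involved (the helper name is only illustrative; in the models the check is inlined the same way as the linked TFBart one):
```python
import tensorflow as tf

def check_input_ids_within_bounds(input_ids: tf.Tensor, vocab_size: int) -> None:
    # tf.gather on GPU silently returns zeros for out-of-range indices,
    # so assert explicitly before the embedding lookup.
    tf.debugging.assert_less(
        input_ids,
        tf.cast(vocab_size, dtype=input_ids.dtype),
        message=f"input_ids must be smaller than the embedding layer's input dimension (vocab_size={vocab_size})",
    )
```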
To simplify the review, there are 3 models you should check. All others are copy/paste from these.
1. Bert (Encoder)
2. GPT2 (Decoder)
3. Pegasus (Encoder-Decoder with `TFSharedEmbeddings` or `TFWrappedEmbeddings`. Encoder-Decoder models that only use the embeddings at the decoder, like Speech2Text, also follow the same code pattern) | 09-19-2022 11:00:34 | 09-19-2022 11:00:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>(cc @ydshieh -- this fixes a large number of scheduled CI failures)<|||||>@gante Is this PR ready to merge? I guess so, but would like to wait your confirmation (or better for you to merge).<|||||>> @gante Is this PR ready to merge? I guess so, but would like to wait your confirmation (or better for you to merge).
@ydshieh It was ready -- merged now :D |
transformers | 19,101 | closed | Fix push ci workflow file | # What does this PR do?
#19054 breaks push CI due to a missing working dir in CI workflow file. Sorry. | 09-19-2022 10:49:31 | 09-19-2022 10:49:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,100 | closed | Revert "Check self-hosted runners are online" | Reverts huggingface/transformers#19054
Sorry, but I merged a PR that breaks push CI. Will try to fix it. | 09-19-2022 10:35:43 | 09-19-2022 10:35:43 | |
transformers | 19,099 | closed | Beit postprocessing | # What does this PR do?
Adds a post-processing method to BeiTFeatureExtractor for semantic segmentation.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 09-19-2022 10:34:00 | 09-19-2022 10:34:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,098 | closed | Some feature requests for the Trainer | ### Feature request
2 feature requests for the HuggingFace Trainer:
* seems like the Trainer is currently using `huggingface_hub.Repository` when pushing a model to the hub. Would be great to update this to leverage the new HTTP methods, as currently I'm getting errors like the following:
```
OSError Traceback (most recent call last)
[<ipython-input-37-be354f2ba166>](https://localhost:8080/#) in <module>
----> 1 trainer.push_to_hub("nielsr/layoutxlm-xfund-fr-relation-extraction")
3 frames
[/usr/local/lib/python3.7/dist-packages/transformers/trainer.py](https://localhost:8080/#) in push_to_hub(self, commit_message, blocking, **kwargs)
3373 # it might fail.
3374 if not hasattr(self, "repo"):
-> 3375 self.init_git_repo()
3376
3377 if self.args.should_save:
[/usr/local/lib/python3.7/dist-packages/transformers/trainer.py](https://localhost:8080/#) in init_git_repo(self, at_init)
3255 clone_from=repo_name,
3256 use_auth_token=use_auth_token,
-> 3257 private=self.args.hub_private_repo,
3258 )
3259 except EnvironmentError:
[/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py](https://localhost:8080/#) in __init__(self, local_dir, clone_from, repo_type, use_auth_token, git_user, git_email, revision, private, skip_lfs_files, client)
496
497 if clone_from is not None:
--> 498 self.clone_from(repo_url=clone_from)
499 else:
500 if is_git_repo(self.local_dir):
[/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py](https://localhost:8080/#) in clone_from(self, repo_url, use_auth_token)
725 if not in_repository:
726 raise EnvironmentError(
--> 727 "Tried to clone a repository in a non-empty folder that isn't a"
728 " git repository. If you really want to do this, do it"
729 " manually:\ngit init && git remote add origin && git pull"
OSError: Tried to clone a repository in a non-empty folder that isn't a git repository. If you really want to do this, do it manually:
git init && git remote add origin && git pull origin main
or clone repo to a new folder and move your existing files there afterwards.
````
* would be great to have an argument `push_to_hub_frequency`, to indicate at which steps to push the model to the hub (seems like it's pushing to the hub every epoch at the moment by default).
### Motivation
Improving the push to hub functionalities of the Trainer.
### Your contribution
I hope @sgugger has the bandwidth to work on this :D | 09-19-2022 10:03:58 | 09-19-2022 10:03:58 | For 1, just overwrite the output dir with `--overwrite_output_dir` to have a clean output dir for the beginning of training (when resuming there shouldn't be any error since the repo will be synced with the local folder). If we stop leveraging `Repository`, we lose the async pushes so basically training will be interrupted each time there is a push, until the push is finished.
For 2, pushes are synced with saves, so change your `save_strategy` to `"steps"` and set the `save_steps` to the value of your liking.<|||||>Thanks for clarifying, makes sense! |
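Putting the two suggestions above together, a minimal sketch of the relevant `TrainingArguments` (values are illustrative):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="my-model",
    overwrite_output_dir=True,  # start from a clean output dir / repo clone
    push_to_hub=True,
    save_strategy="steps",      # pushes happen on every save...
    save_steps=500,             # ...so this also controls push frequency
)
```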
transformers | 19,097 | closed | fix position bias related logic after prune heads in T5 model | # What does this PR do?
A follow up pr after #17968
If the attention layer has `self.has_relative_attention_bias == False`, then the position bias shape will be wrong: the head count should be the original model's head count (before pruning heads), i.e. `self.n_heads + len(self.pruned_heads)`.
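A rough sketch of the shape issue, with simplified shapes that only loosely follow the real T5 attention code:
```python
import torch

n_heads = 10           # heads remaining in this layer after pruning
pruned_heads = {3, 7}  # heads that were pruned away
batch_size, seq_length, key_length = 2, 16, 16

# wrong: a placeholder bias built with the post-pruning head count
position_bias_wrong = torch.zeros(1, n_heads, seq_length, key_length)

# right: the bias shared across layers keeps the original head count
position_bias = torch.zeros(1, n_heads + len(pruned_heads), seq_length, key_length)
```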
## Who can review?
- t5: @patrickvonplaten @patil-suraj | 09-19-2022 08:47:56 | 09-19-2022 08:47:56 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19097). All of your documentation changes will be reflected on that endpoint.<|||||>Maybe of interest to @ArthurZucker :)<|||||>Hey, before diving a bit deeper, sorry for the long delay, and thanks for the PR.
Would you mind adding a test? I can take care of it otherwise! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing in favor of #20106. Thanks for your contribution |
transformers | 19,096 | closed | HPO: keep the original logic if there's only one process, pass the tr… | …ial to trainer
Need to find a solution for the following cases:
* if we need to use the trial in model_init, how do we do it for non-main ranks - sync the model with rank 0 in the app?
* how do we use the Optuna pruning feature with DDP - if we do it on rank 0, how do the other ranks know about it?
Signed-off-by: Wang, Yi A <[email protected]>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- trainer: @sgugger
| 09-19-2022 02:05:02 | 09-19-2022 02:05:02 | @yao-matrix @sgugger please review the patch.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||> need to find out solution for following cases
*if we need to use trial in model_init, how to do it for non-main rank, sync the model with rank0 in app?
*how to use optuna prune feature for DDP, if we do it in rank0, how does other rank know it. |
transformers | 19,095 | closed | Activation checkpointing for TFGPT2DoubleHeadsModel | ### Feature request
Activation checkpointing is implemented for the PyTorch GPT-2 model (and its different head variants); however, this is not the case for the TensorFlow implementation of GPT-2.
### Motivation
A lot of GPU memory is required to finetune GPT-2. This is especially the case for TFGPT2DoubleHeadsModel, because different choices (represented by different sequences) are combined in one sample. I think [`tf.recompute_grad`](https://www.tensorflow.org/api_docs/python/tf/recompute_grad) can play a role here.
### Your contribution
I have more experience with PyTorch than TensorFlow, but I could investigate possible solution directions. If it turns out to be easy I could spend time to create a PR; however, help from others is appreciated.
Bear in mind I don't know if this will work - I don't know the exact semantics of `recompute_grad` or if it plays nicely with Graph mode, but if you discover anything or you have any questions feel free to post them here!
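For what it's worth, a rough sketch of the general `tf.recompute_grad` pattern; this is untested against the actual GPT-2 layers, which take more arguments than shown here:
```python
import tensorflow as tf

def checkpoint_block(block: tf.keras.layers.Layer):
    """Wrap a layer's forward pass so its activations are recomputed on the backward pass."""

    @tf.recompute_grad
    def inner(hidden_states):
        return block(hidden_states)

    return inner
```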
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,094 | closed | Allow custom signature while saving TF models | ### Feature request
Currently, when we use the [save_pretrained](https://github.com/huggingface/transformers/blob/ca485e562b675341409e3e27724072fb11e10af7/src/transformers/modeling_tf_utils.py#L2085) function from this library, the model signature used to save the model is the default one that only calls the model on the inputs. I would like to be able to provide a custom signature while using the `save_pretrained` function.
### Motivation
Persisting models with custom signatures is quite important for models that target production setups, especially if they are going to be served with TF Serving.
I might be wrong, but it seems that currently the only way to save a `Transformer` model with a custom signature is by saving it using functions from the TF library. It would be very nice if the HF ecosystem could also support this feature.
### Your contribution
I think this might be simple to implement and I would be happy to draft a PR if you think this could be a helpful feature. | 09-18-2022 18:48:38 | 09-18-2022 18:48:38 | cc @Rocketknight1 @gante<|||||>Hi @dimitreOliveira, that sounds like a great feature, and we'd be happy to accept that PR! We've been working on making our default signatures more general and usable, but this sounds like a good idea too. Are you planning to add a `signatures` argument that's passed through to `model.save()` when `saved_model=True`?<|||||>Hey @Rocketknight1 I am glad you liked the feature, I am happy to collaborate with the TF side of the lib.
Yes, my idea is to just add a `signatures` parameter to the [`save_pretrained`](https://github.com/huggingface/transformers/blob/ca485e562b675341409e3e27724072fb11e10af7/src/transformers/modeling_tf_utils.py#L2085) function. That parameter would default to `None`, and in that case we would just use `self.serving` as we already do, so there would not be any relevant side effects, and users could just create their custom signatures and pass them while saving. Looking at the code design, it seems that this change would be compatible with all TF transformers models ; )
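For illustration, a minimal sketch of the kind of custom signature this would enable (the model, task and tensor specs are just examples; the last line uses plain Keras saving, which already accepts `signatures` today):
```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

@tf.function(
    input_signature=[
        {
            "input_ids": tf.TensorSpec((None, None), tf.int32, name="input_ids"),
            "attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"),
        }
    ]
)
def serving_fn(inputs):
    logits = model(inputs).logits
    return {"probabilities": tf.nn.softmax(logits, axis=-1)}

model.save("saved_model/1", signatures={"serving_default": serving_fn})
```
With the proposed change, the same `signatures` dict could simply be forwarded through `save_pretrained(..., saved_model=True)`.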
I have not looked yet to see if that would generate any issues with the tests, but if the plan is good I will work on the code during the weekend.
For context, the idea for this feature came to me while I was working on [this repository](https://github.com/dimitreOliveira/hf_tf_serving_examples), which also has a collection of custom signatures that range from text classification to text generation.
Maybe this feature also works for the vision and speech models but I do not have a lot of experience with those, maybe later I could also take a look there.<|||||>@Rocketknight1 @gante you can find the draft PR above, let me know if it looks good, then I can finish the work, if needed, I can provide some examples of cool use cases using custom signatures with the models. |
transformers | 19,093 | closed | Add BPE Wav2Vec2CTCTokenizer | ### Feature request
Hi there!
Is there scope for a BPE (SentencePiece) CTC tokenizer? Using a trained SentencePiece vocabulary in a CTC model is pretty straightforward - all we need now is a tokenizer that can bridge the missing step between grouping CTC ids and decoding them with SentencePiece.
### Motivation
Grouping characters that occur commonly together is a cool way of sometimes helping with spelling mistakes (like a mini LM) and introduces dependence between characters which can help hold the model's hand.
As far as I know this is fully supported by pyctcdecode so no pipelines/LM processors, etc. need to change.
### Your contribution
The few changes would be something like
```python
def _tokenize(self, text):
return self.sp_model.encode(text, out_type=str)
```
```python
def _convert_token_to_id(self, token):
spm_id = self.sp_model.PieceToId(token)
return spm_id
```
Additional SentenciePiece args would be similar to [other SentencePiece tokenizers](https://huggingface.co/docs/transformers/model_doc/xlm-roberta#transformers.XLMRobertaTokenizer), as would be the [saving of the tokenizer](https://github.com/huggingface/transformers/blob/2c8b508ccabea6638aa463a137852ff3b64be036/src/transformers/models/deberta_v2/tokenization_deberta_v2.py#L474). | 09-18-2022 17:23:36 | 09-18-2022 17:23:36 | cc @SaulLu @ArthurZucker <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,092 | closed | correct spelling in README | fixes to typos / spellings | 09-18-2022 11:07:53 | 09-18-2022 11:07:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,091 | closed | TypeError: __init__() got an unexpected keyword argument 'has_model_config' | ### System Info
- transformers version: 4.18.1
- Platform: Linux Jupyter Notebook, TF2.3 Python 3.6, 2 GPU
- Python version: '1.7.1+cu101'
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@mf
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
model_checkpoint = "xlm-roberta-large-finetuned-conll03-english"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint,add_prefix_space=True)
```
`train_examples ={'texts':[x[0] for x in train_set],'tag_names':[x[1] for x in train_set]}`
```
def isin(a, b):
return a[1] > b[0] and a[0] < b[1]
```
```
def tokenize_and_align_labels(examples, label2id, max_length=256):
tokenized_inputs = tokenizer(examples["texts"], truncation=True, padding='max_length', max_length=max_length,
return_offsets_mapping=True)
labels = []
for i, label_idx_for_single_input in enumerate(tqdm.tqdm(examples["tag_names"])):
labels_for_single_input = ['O' for _ in range(max_length)]
text_offsets = tokenized_inputs['offset_mapping'][i]
for entity in label_idx_for_single_input:
tag = entity['tag']
tag_offset = [entity['start'], entity['end']]
affected_token_ids = [j for j in range(max_length) if isin(tag_offset, text_offsets[j])]
if len(affected_token_ids) < 1:
continue
if any(labels_for_single_input[j] != 'O' for j in affected_token_ids):
continue
for j in affected_token_ids:
labels_for_single_input[j] = 'I_' + tag
labels_for_single_input[affected_token_ids[-1]] = 'L_' + tag
labels_for_single_input[affected_token_ids[0]] = 'B_' + tag
label_ids = [label2id[x] for x in labels_for_single_input]
labels.append(label_ids)
tokenized_inputs["labels"] = labels
print(tokenized_inputs.keys())
return tokenized_inputs
```
```
class MyDataset(torch.utils.data.Dataset):
def __init__(self, examples):
self.encodings = examples
self.labels = examples['labels']
def __getitem__(self, idx):
item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
item["labels"] = torch.tensor([self.labels[idx]])
return item
def __len__(self):
return len(self.labels)
train_data=MyDataset(train_data)
```
```
model = AutoModelForTokenClassification.from_pretrained(model_checkpoint,id2label=id2label,label2id=label2id,ignore_mismatched_sizes=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
```
```
args = TrainingArguments(
"xlmroberta-finetuned-ner",
# evaluation_strategy="epoch",
save_strategy="epoch",
learning_rate=2e-5,
num_train_epochs=2,
weight_decay=0.01,
per_device_train_batch_size=4,
# per_device_eval_batch_size=32
fp16=True
# bf16=True #Ampere GPU
)
```
```
trainer = Trainer(
model=model,
args=args,
train_dataset=train_data,
# eval_dataset=train_data,
# data_collator=data_collator,
# compute_metrics=compute_metrics,
tokenizer=tokenizer)
trainer.train()
```
```
Using amp half precision backend
FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
FutureWarning,
***** Running training *****
Num examples = 141648
Num Epochs = 2
Instantaneous batch size per device = 4
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 35412
MLflow's log_param() only accepts values no longer than 250 characters so we dropped this attribute.
TypeError: __init__() got an unexpected keyword argument 'has_model_config'
```
### Expected behavior
To train NER model | 09-18-2022 05:51:50 | 09-18-2022 05:51:50 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,090 | closed | [Tracker] [bnb] Supporting `device_map` containing GPU and CPU devices | ### Feature request
We should be able to provide a custom `device_map` when using 8-bit models with `bitsandbytes`. This would enable users to have more control over the modules they want to quantize.
Linked issue: https://github.com/TimDettmers/bitsandbytes/issues/40
### Motivation
Users should be able to pass their own custom `device_map` and choose which modules should be quantized and which should not
### Your contribution
Try coding this enhancement! | 09-17-2022 20:12:26 | 09-17-2022 20:12:26 | UPDATE (for future readers): the title was changed.
---
I think that the title of this issue is a little bit misleading. Technically, a custom `device_map` is already supported for `bitsandbytes`, as long as all the layers are on GPU.
For example, in the linked issue, this `device_map` works correctly:
```python
device_map = {
"transformer.wte": 0,
"transformer.wpe": 0,
"transformer.ln_f": 0,
"lm_head": 0,
"transformer.h.0": 0,
"transformer.h.1": 0,
"transformer.h.2": 0,
"transformer.h.3": 0,
"transformer.h.4": 0,
"transformer.h.5": 0,
"transformer.h.6": 0,
"transformer.h.7": 0,
"transformer.h.8": 0,
"transformer.h.9": 0,
"transformer.h.10": 0,
"transformer.h.11": 0
}
```
And I believe that there will be no problem in using `1` instead of `0` for any `transformer.*` layer if you have more than one GPU (but I may be mistaken, I didn't find any specific info in any docs about using `bitsandbytes` with multiple GPUs). And I suppose that replacing all `0` with `1` will also work. So, I think that users already can customize the device map, as long as it doesn't put anything on CPU.
The original issue was not about a custom map. It was about supporting the `load_in_8bit` flag for models that are shared between CPU and GPU.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> If you think this still needs to be addressed please comment on this thread.
unstale
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> If you think this still needs to be addressed please comment on this thread.
unstale
I guess this will be my monthly routine...<|||||>Hi
The PR #20281 will not be merged until a fix is found on the `bitsandbytes` side.
Could you please check out this PR if you want to use this feature for now? Thanks.<|||||>I've just tested that PR and it works. Thank you!
I tested it with a 13B model on GTX 3060. Without `load_in_8bit` only 10 layers are able to fit into the GPU. With that patch and `load_in_8bit=True` now 19 layers are able to fit into the GPU. Which gives a 30% speedup of the inference in my case.
For some reason when I test it on my initial example, it gives this warning:
```
/home/user/test/bnb-test/transformers/src/transformers/generation/utils.py:1470: UserWarning: You are calling .generate() with the `input_ids` being on a device type different than your model's device. `input_ids` is on cpu, whereas the model is on cuda. You may experience unexpected behaviors or slower generation. Please make sure that you have put `input_ids` to the correct device by calling for example input_ids = input_ids.to('cuda') before running `.generate()`.
warnings.warn(
```
However, I was not able to reproduce it in my other more complex program.
In the PR's discussion it was said:
> this will result in weights offloaded on the CPU to not be converted in int8 at all
I expected this much, but I think it's still better than nothing.
Though, are there some gotchas in the fact that CPU layers are not converted to 8bit?
Also, not sure how to proceed next. You said:
> we should probably wait until bitsandbytes supports weights offloading in 8-bit to add this feature
So I suppose this issue should remain open? I will then add more info to my initial issue at the `bitsandbytes` repo.<|||||>Thank you very much for your feedback and happy that it worked for your usecase!
> For some reason when I test it on my initial example, it gives this warning:
This is because you have set your `input_ids` on the `cpu` before running your inference! Make sure to set `input_ids` to the device of the first layers (so I guess here, your GPU) before running `generate`.
> Though, are there some gotchas in the fact that CPU layers are not converted to 8bit?
I did not quite get your question here, but CPU layers are kept in their native `dtype` here indeed, which can be quite confusing. For example you could provide a device_map that contains only `cpu` layers and still load your model with `load_in_8bit` - users will think that they're loading their model in 8-bit on their CPU when actually it's not the case.
> So I suppose this issue should remain open? I will then add more info to my initial issue at the bitsandbytes repo.
Yes, it can remain open. But feel free also to jump in the PR #20281 to give your opinion on the question and stress about the fact that you think this feature is useful. You can also add more information on the `bitsandbytes` repo also!
<|||||>> This is because you have set your `input_ids` on the `cpu` before running your inference! Make sure to set `input_ids` to the device of the first layers (so I guess here, your GPU) before running `generate`.
I use the following code:
```python
pipe = pipeline(
model="EleutherAI/gpt-neo-125M",
max_length=32,
model_kwargs={
"device_map": device_map,
"load_in_8bit": load_in_8bit
}
)
print("\n", pipe("It was")[0]["generated_text"])
```
Not sure where I am supposed to set `input_ids` here.
> I did not quite get your question here
I mean, purely from a technical standpoint, are there some downsides to mixing 8bit and 16/32bit layers?
<|||||>> Not sure where I am supposed to set input_ids here.
Thanks for sharing the code! It's clearer for me now, can you try to add `device=0` as follows:
```
pipe = pipeline(
model="EleutherAI/gpt-neo-125M",
max_length=32,
device=0,
model_kwargs={
"device_map": device_map,
"load_in_8bit": load_in_8bit
}
)
```
> I mean, purely from a technical standpoint, are there some downsides to mixing 8bit and 16/32bit layers?
Indeed, from a technical standpoint I don't see any downside
<|||||>When I add `device=0` I get this:
```
Traceback (most recent call last):
File "/home/user/test/bnb-test/main.py", line 28, in <module>
pipe = pipeline(
File "/home/user/test/bnb-test/transformers/src/transformers/pipelines/__init__.py", line 870, in pipeline
return pipeline_class(model=model, framework=framework, task=task, **kwargs)
File "/home/user/test/bnb-test/transformers/src/transformers/pipelines/text_generation.py", line 64, in __init__
super().__init__(*args, **kwargs)
File "/home/user/test/bnb-test/transformers/src/transformers/pipelines/base.py", line 778, in __init__
self.model = self.model.to(self.device)
File "/home/user/test/bnb-test/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 987, in to
return self._apply(convert)
File "/home/user/test/bnb-test/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 639, in _apply
module._apply(fn)
File "/home/user/test/bnb-test/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 639, in _apply
module._apply(fn)
File "/home/user/test/bnb-test/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 639, in _apply
module._apply(fn)
[Previous line repeated 1 more time]
File "/home/user/test/bnb-test/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 662, in _apply
param_applied = fn(param)
File "/home/user/test/bnb-test/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 985, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
```
The full code for clarity:
```python
from transformers import pipeline
auto_map = False
load_in_8bit = True
if auto_map:
device_map = "auto"
else:
device_map = {
"transformer.wte": 0,
"transformer.wpe": 0,
"transformer.ln_f": "cpu",
"lm_head": 0,
"transformer.h.0": 0,
"transformer.h.1": "cpu",
"transformer.h.2": "cpu",
"transformer.h.3": "cpu",
"transformer.h.4": "cpu",
"transformer.h.5": "cpu",
"transformer.h.6": "cpu",
"transformer.h.7": "cpu",
"transformer.h.8": "cpu",
"transformer.h.9": "cpu",
"transformer.h.10": "cpu",
"transformer.h.11": "cpu"
}
pipe = pipeline(
model="EleutherAI/gpt-neo-125M",
device=0,
max_length=32,
model_kwargs={
"device_map": device_map,
"load_in_8bit": load_in_8bit
}
)
print("\n", pipe("It was")[0]["generated_text"])
```
The error occurs even when `load_in_8bit = False`.
Also, in any case, the original error is pretty confusing. It says `You are calling .generate() with the input_ids`, but I don't do such a thing.<|||||>Thanks for sharing, I think it is fine, for now I would say that you can leave the pipeline without `device=0`. I expect a small speedup since `accelerate` copies the `input_ids` that is created on the `cpu` to the device of the model at the beginning, and copies back the result on `cpu`. Let me get back to you on this to see if I can find a solution
the reason it says `generate()` is because `pipeline` calls `.generate()` under the hood here<|||||>> the reason it says `generate()` is because `pipeline` calls `.generate()` under the hood here
I know, but to an end user it still will not be immediately clear what the problem is just by reading that error message. It also says how to fix it:
```
Please make sure that you have put input_ids to the correct device
by calling for example input_ids = input_ids.to('cuda') before running .generate()
```
But it's absolutely not applicable in this situation, adding even more confusion. Maybe the call to `pipeline` should have a different error message?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>unstale
Also, I added some comments in the PR discussion:
https://github.com/huggingface/transformers/pull/20281#issuecomment-1328092770
https://github.com/huggingface/transformers/pull/20281#issuecomment-1345605654<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>unstale
Technically, I personally don't need this fix anymore, since in my project I applied the hack described in the PR.
Though it would be nice to have it properly integrated into the `transformers`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This should be solved by the introduction of `BitsAndBytesConfig` in #21579 <|||||>Yes, indeed it works. Thank you, @younesbelkada!
For completeness sake, here's the final working version:
```python
import torch
from transformers import BitsAndBytesConfig, pipeline
device_map = {
"transformer.wte": 0,
"transformer.wpe": 0,
"transformer.ln_f": "cpu",
"lm_head": 0,
"transformer.h.0": 0,
"transformer.h.1": "cpu",
"transformer.h.2": "cpu",
"transformer.h.3": "cpu",
"transformer.h.4": "cpu",
"transformer.h.5": "cpu",
"transformer.h.6": "cpu",
"transformer.h.7": "cpu",
"transformer.h.8": "cpu",
"transformer.h.9": "cpu",
"transformer.h.10": "cpu",
"transformer.h.11": "cpu"
}
quantization_config = BitsAndBytesConfig(
load_in_8bit=True,
llm_int8_enable_fp32_cpu_offload=True,
llm_int8_skip_modules=["lm_head"]
)
pipe = pipeline(
model="EleutherAI/gpt-neo-125M",
max_length=32,
torch_dtype=torch.float16,
model_kwargs={
"device_map": device_map,
"quantization_config": quantization_config
}
)
print("\n", pipe("It was")[0]["generated_text"])
``` |
transformers | 19,089 | closed | Add type hints for TF MPNet models | Based on Issue https://github.com/huggingface/transformers/issues/16059
I have added type hints for the all the Tensorflow MPNet models.
@Rocketknight1 Could you kindly check if this is fine?
Thanks in advance. | 09-17-2022 18:19:28 | 09-17-2022 18:19:28 | Thanks @Rocketknight1 ๐.<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,088 | closed | Added type hints for TFConvBertModel | Based on Issue #16059
I have added type hints for the [TFConvBertModel](https://huggingface.co/docs/transformers/model_doc/convbert#transformers.TFConvBertModel).
@Rocketknight1 Could you kindly check if this is fine?
Thanks in advance. | 09-17-2022 17:28:49 | 09-17-2022 17:28:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @Rocketknight1. |
transformers | 19,087 | closed | v4.22.1 ErnieForMaskedLM Bug | ### System Info

@LysandreJik
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import ErnieForMaskedLM
### Expected behavior
Failed to import transformers.models.ernie | 09-17-2022 15:39:25 | 09-17-2022 15:39:25 | Hi @wzjj98, could you please share a bit more information about your PC setup and the script you're calling?
I can confirm it shouldn't be a problem to import `ErnieForMaskedLM` with `transformers==4.22.1`<|||||>My GPU NVIDIA GeForce GTX 3090
Processor:Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50GHz
<|||||>


<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,086 | closed | Added type hints for YolosForObjectDetection | Based on Issue #16059
I have added the type hints for `YolosForObjectDetection` model.
@Rocketknight1 Could you kindly check if this is fine?
Thanks in advance. | 09-17-2022 14:47:53 | 09-17-2022 14:47:53 | Thanks @Rocketknight1. |
transformers | 19,085 | closed | Added Type hints for VIT MAE | Based on Issue #16059
While looking through the codebase, I found that the ViTMAE model didn't have the type hints as suggested in Issue #16059. Added type hints for the [ViTMAEModel](https://huggingface.co/docs/transformers/model_doc/vit_mae#transformers.ViTMAEModel) and [ViTMAEForPreTraining](https://huggingface.co/docs/transformers/model_doc/vit_mae#transformers.ViTMAEForPreTraining) models.
@Rocketknight1 Could you kindly check if this is fine?
Thanks in advance. | 09-17-2022 14:28:54 | 09-17-2022 14:28:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @Rocketknight1. |
transformers | 19,084 | closed | Added type hints to ResNetForImageClassification | Based on Issue #16059.
While looking into Issue #16059, I found that the `ResNetForImageClassification` model's type hints are inconsistent with the [docs](https://huggingface.co/docs/transformers/model_doc/resnet#transformers.ResNetForImageClassification.forward). Modified it to be consistent with the docs.
@Rocketknight1 Could you kindly check if this is fine?
This is my First PR for the Transformers library. Thanks in advance.
| 09-17-2022 13:24:21 | 09-17-2022 13:24:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @Rocketknight1. |
transformers | 19,083 | closed | some data is dropped when encoding by LayoutLMv3Processor | ### System Info
I'm using LayoutLMv3Processor for encoding.
When I input (1, 82, 4) as boxes, the processor extends the boxes to (1, 512, 4), but some of the input boxes are dropped by the processor, as I can't find them in the encoding;
it seems the last n boxes are dropped.
**_tokenizer = LayoutLMv3TokenizerFast.from_pretrained('microsoft/layoutlmv3-base')
processor = LayoutLMv3Processor(LayoutLMv3FeatureExtractor(apply_ocr=False), tokenizer)_**
**boxes of before encoding (512, 4)
boxes of after encoding (82, 4)
boxes of before encoding without duplicated (58, 4)
boxes of after encoding without duplicated (82, 4)**
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import numpy as np
from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3TokenizerFast, LayoutLMv3FeatureExtractor, \
    LayoutLMv3ForTokenClassification, AutoModelForTokenClassification, AutoConfig
from inference_util import prepare_annotation, load_original_dataset

image_paths, bboxes, ner_tags = load_original_dataset("cro_vl_fr/", "test")
tokenizer = LayoutLMv3TokenizerFast.from_pretrained('microsoft/layoutlmv3-base')
processor = LayoutLMv3Processor(LayoutLMv3FeatureExtractor(apply_ocr=False), tokenizer)

item = image_paths[0]
image = Image.open(item).convert("RGB")
# get word-level annotations
image, words, boxes = prepare_annotation(image, bboxes[0])
boxes_2_points = np.hstack((np.array(boxes)[:, 0:2], np.array(boxes)[:, 4:6])).astype(int)

encoding = processor(image, words, boxes=boxes_2_points,
                     padding="max_length", truncation=True,
                     return_tensors="pt")
for k, v in encoding.items():
    encoding[k] = v.squeeze()

token_boxes = encoding['bbox'].numpy()
print("boxes of before encoding", np.shape(token_boxes))
print("boxes of after encoding", np.shape(boxes_2_points))

token_boxes = [tuple(a) for a in token_boxes]
token_boxes = np.array(list(set(token_boxes)))
boxes_2_points = [tuple(a) for a in boxes_2_points]
boxes_2_points = np.array(list(set(boxes_2_points)))
print("boxes of before encoding without duplicated ", np.shape(token_boxes))
print("boxes of after encoding without duplicated", np.shape(boxes_2_points))
```
### Expected behavior
Original boxes should not be dropped. | 09-17-2022 06:13:04 | 09-17-2022 06:13:04 | 
transformers | 19,082 | closed | `--with_tracking` doesn't seem to work | ### System Info
Hi,
When I enable --with_tracking within run_glue_no_trainer.py, nothing seems to happen. After the training is over, I don't find any log files in output_dir. How do I save these training results (e.g., accuracy, training_loss, ...) as shown in the following figure?

Thanks in advance!
### Who can help?
@sgugger @muellerzr
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Execute the script:
```bash
python run_glue_no_trainer.py \
  --model_name_or_path bert-large-uncased \
  --task_name mrpc \
  --output_dir ./output_dir \
  --per_device_train_batch_size 32 \
  --per_device_eval_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 100 \
  --seed 42 \
  --with_tracking
```
### Expected behavior
I want to save the results of the training process in a log file by enabling `--with_tracking`. | 09-17-2022 04:41:25 | 09-17-2022 04:41:25 | @muellerzr will correct me if I'm wrong, but this API is for external trackers (TensorBoard, WandB etc.). To save results on disk, just use regular python code like `json.dump`.<|||||>@sgugger exactly. @Ericmututu do you have any tracking libraries installed on your system?
(I can probably raise an error in the scripts if it's tried and none are available)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,081 | open | add Unified-IO | ### Model description
I'd like to request the addition of the Unified-IO model. It is a multimodal model capable of visual question answering, image generation and more...
the repo is this: https://github.com/allenai/unified-io-inference
the paper: [Unified-IO: Sequential Modeling for Generally Applicable Vision Models](https://arxiv.org/abs/2206.08916)
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://github.com/allenai/unified-io-inference | 09-17-2022 02:39:28 | 09-17-2022 02:39:28 | Hi, have you started working on the issue? Do you plan to integrate it yourself?<|||||>I'd like to work on this issue, is there any documentation on adding new models that I should follow?<|||||>I would like to work on this one.<|||||>@NielsRogge @alaradirik If no one else is currently working on adding this model, I would like to work on it.<|||||>Hi @kumar-devesh , I'm working on it (made some progress toward getting a working version of the Discrete VAE in Torch) but @osanseviero told me that it would be better to verify if there's interest from the development team. If they're ok with it then we could work on it together.<|||||>cc @sgugger @amyeroberts <|||||>Hi @ChanBong @kumar-devesh @alceballosa, Unified-IO would be a great addition to the library.
If you are not familiar with contributing to transformers, you can refer to the [guidelines](https://huggingface.co/docs/transformers/add_new_model) to get started. I'd recommend checking if you can run the original repo without any issues and get the expected results first.
Here are some summarised points that might help with model addition:
- Each model, including different checkpoints of the same model, has its own repo on the Hub (see [DETR-ResNet-50 repo](https://huggingface.co/facebook/detr-resnet-50) as an example). This is basically a git repo that stores the checkpoint-specific configuration, preprocessing configuration and the model weights.
- The code added to transformers acts as a boilerplate to initialise the model and load different checkpoints - Unified-IO trained on different datasets and/or with different resolution and/or larger / smaller architecture.
- configuration_unifiedio.py should contain all the hyperparameters, the input image size and architectural details (e.g. number of hidden layers) to initialize the model.
- Multi-modal models (e.g. CLIP, ALIGN) have a `Processor` class that capsulates `Tokenizer` and `ImageProcessor` classes that preprocesses the text and image inputs.
- image_processing_unifiedio.py should contain the ImageProcessor class that takes in the raw input image and preprocesses it to the format expected as input to the model (resizing to a fixed input size, normalization, cropping, etc.)
- tokenizer_unifiedio.py should contain the Tokenizer class that preprocesses the raw input text.
- processor_unifiedio.py combines the two to preprocess image-text pair inputs.
- modeling_unifiedio.py should contain the model definition.
- The conversion script (a minimal sketch is given right after this list):
- Loads the pretrained original model and randomly initializes the HF implementation with the corresponding configuration
- Copies the pretrained parameters (weights and biases) of the original model to the corresponding parameters of the randomly initialized HF model (the conversion step)
- Forward propagates an arbitrary input (text + image in this case) through both the original model and converted HF model and checks if the outputs match
- Uploads the converted HF model to the hub
- Each model, tokenizer, image processor and processor class is tested with scripts under `tests/models/<MODEL_NAME>/ `, you can refer to other test files to see what tests to add.
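To make the conversion-script bullet above more concrete, here is a minimal sketch of what such a script usually looks like. Every `UnifiedIO*` name and helper below (`load_original_checkpoint`, `rename_key`) is a placeholder that would only exist once the port is written; this is not working code from the library.

```python
# Hypothetical conversion-script skeleton -- all UnifiedIO* names and helpers are placeholders.
import torch


def convert_checkpoint(original_checkpoint_path, pytorch_dump_path):
    # 1. randomly initialise the HF implementation with the matching configuration
    config = UnifiedIOConfig()          # placeholder config class
    hf_model = UnifiedIOModel(config)   # placeholder model class
    hf_model.eval()

    # 2. load the original weights and copy them into the HF model, renaming keys as needed
    original_state_dict = load_original_checkpoint(original_checkpoint_path)  # placeholder loader
    hf_state_dict = {rename_key(k): torch.tensor(v) for k, v in original_state_dict.items()}
    hf_model.load_state_dict(hf_state_dict)

    # 3. forward an arbitrary text + image input through both models and check the outputs match,
    #    e.g. with torch.allclose(original_logits, hf_logits, atol=1e-4)

    # 4. save the converted model locally so it can be uploaded to the Hub
    hf_model.save_pretrained(pytorch_dump_path)
```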
Once you are done, you would need to run the following commands to check the PR passes all CI tests:
```
make style
make quality
make repo-consistency
RUN_SLOW=TRUE pytest tests/models/unifiedio/test_modeling_unifiedio.py
RUN_SLOW=TRUE pytest tests/models/unifiedio/test_image_processor_unifiedio.py
RUN_SLOW=TRUE pytest tests/models/unifiedio/test_tokenizer_unifiedio.py
RUN_SLOW=TRUE pytest tests/models/unifiedio/test_processor_unifiedio.py
```
We can do an in-depth review or create a Slack channel to address questions and issues once there is a draft PR.
Hope this helps! |
transformers | 19,080 | closed | Bump oauthlib from 3.2.0 to 3.2.1 in /examples/research_projects/decision_transformer | Bumps [oauthlib](https://github.com/oauthlib/oauthlib) from 3.2.0 to 3.2.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/oauthlib/oauthlib/releases">oauthlib's releases</a>.</em></p>
<blockquote>
<h2>3.2.1</h2>
<h2>In short</h2>
<p>OAuth2.0 Provider:</p>
<ul>
<li><a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/803">#803</a> : Metadata endpoint support of non-HTTPS</li>
<li>CVE-2022-36087</li>
</ul>
<p>OAuth1.0:</p>
<ul>
<li><a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/818">#818</a> : Allow IPv6 being parsed by signature</li>
</ul>
<p>General:</p>
<ul>
<li>Improved and fixed documentation warnings.</li>
<li>Cosmetic changes based on isort</li>
</ul>
<h2>What's Changed</h2>
<ul>
<li>add missing slots to TokenBase by <a href="https://github.com/ariebovenberg"><code>@โariebovenberg</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/804">oauthlib/oauthlib#804</a></li>
<li>Add CORS support for Refresh Token Grant. by <a href="https://github.com/luhn"><code>@โluhn</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/806">oauthlib/oauthlib#806</a></li>
<li>GitHub Action to lint Python code by <a href="https://github.com/cclauss"><code>@โcclauss</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/797">oauthlib/oauthlib#797</a></li>
<li>Docs: fix Sphinx warnings for better ReadTheDocs generation by <a href="https://github.com/JonathanHuot"><code>@โJonathanHuot</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/807">oauthlib/oauthlib#807</a></li>
<li>Allow non-HTTPS issuer when OAUTHLIB_INSECURE_TRANSPORT. by <a href="https://github.com/luhn"><code>@โluhn</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/803">oauthlib/oauthlib#803</a></li>
<li>chore: fix typo in test by <a href="https://github.com/tamanobi"><code>@โtamanobi</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/816">oauthlib/oauthlib#816</a></li>
<li>Fix typo in server.rst by <a href="https://github.com/NemanjaT"><code>@โNemanjaT</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/819">oauthlib/oauthlib#819</a></li>
<li>Fixed isort imports by <a href="https://github.com/dasm"><code>@โdasm</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/820">oauthlib/oauthlib#820</a></li>
<li>docs: Fix a few typos by <a href="https://github.com/timgates42"><code>@โtimgates42</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/822">oauthlib/oauthlib#822</a></li>
<li>docs: fix typos by <a href="https://github.com/kianmeng"><code>@โkianmeng</code></a> in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/823">oauthlib/oauthlib#823</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/ariebovenberg"><code>@โariebovenberg</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/804">oauthlib/oauthlib#804</a></li>
<li><a href="https://github.com/tamanobi"><code>@โtamanobi</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/816">oauthlib/oauthlib#816</a></li>
<li><a href="https://github.com/NemanjaT"><code>@โNemanjaT</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/819">oauthlib/oauthlib#819</a></li>
<li><a href="https://github.com/kianmeng"><code>@โkianmeng</code></a> made their first contribution in <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/pull/823">oauthlib/oauthlib#823</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/oauthlib/oauthlib/compare/v3.2.0...v3.2.1">https://github.com/oauthlib/oauthlib/compare/v3.2.0...v3.2.1</a></p>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/oauthlib/oauthlib/blob/master/CHANGELOG.rst">oauthlib's changelog</a>.</em></p>
<blockquote>
<h2>3.2.1 (2022-09-09)</h2>
<p>OAuth2.0 Provider:</p>
<ul>
<li><a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/803">#803</a>: Metadata endpoint support of non-HTTPS</li>
<li>CVE-2022-36087</li>
</ul>
<p>OAuth1.0:</p>
<ul>
<li><a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/818">#818</a>: Allow IPv6 being parsed by signature</li>
</ul>
<p>General:</p>
<ul>
<li>Improved and fixed documentation warnings.</li>
<li>Cosmetic changes based on isort</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/oauthlib/oauthlib/commit/88bb1562930a9bd9368bf26120655794d90d9585"><code>88bb156</code></a> Updated date and authors</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/1a45d9790543673208e603e13a7be4aa4cba7339"><code>1a45d97</code></a> Prepare 3.2.1 release</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/0adbbe10ed8ef822d1c780987fffc56670ce3f9f"><code>0adbbe1</code></a> docs: fix typos</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/6569ec3c062be7268f4a17f5a371aa29f1bcfa4a"><code>6569ec3</code></a> docs: Fix a few typos</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/bdc486e2bc3a188027a4ebec3a3013e64023ce62"><code>bdc486e</code></a> Fixed isort imports</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/7db45bda96ea6f5fde1186e8fd43d75ce6b95ab5"><code>7db45bd</code></a> Fix typo in server.rst</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/b14ad85921db2406ecaf5927a8be08a7566c236e"><code>b14ad85</code></a> chore: s/bode_code_verifier/body_code_verifier/g</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/b123283ba3d41acb3e787fdf68bd5907972b4bad"><code>b123283</code></a> Allow non-HTTPS issuer when OAUTHLIB_INSECURE_TRANSPORT. (<a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/803">#803</a>)</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/2f887b5a070bf617a471c573ad52fb58251c61af"><code>2f887b5</code></a> Docs: fix Sphinx warnings for better ReadTheDocs generation (<a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/807">#807</a>)</li>
<li><a href="https://github.com/oauthlib/oauthlib/commit/d4bafd9f1d0eba3766e933b1ac598cbbf37b8914"><code>d4bafd9</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/oauthlib/oauthlib/issues/797">#797</a> from cclauss/patch-2</li>
<li>Additional commits viewable in <a href="https://github.com/oauthlib/oauthlib/compare/v3.2.0...v3.2.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 09-16-2022 22:47:25 | 09-16-2022 22:47:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,079 | closed | Small Typo in Docs GenerationMixin for use_cache parameter | ### System Info
In the text_generation docs (https://huggingface.co/docs/transformers/main_classes/text_generation), `use_cache` does not show up as its own line in the list of parameters.
<img width="973" alt="Screen Shot 2022-09-16 at 6 09 30 PM" src="https://user-images.githubusercontent.com/565363/190812185-35c6eb4d-fbbf-4d17-ad86-6c1d2083c0e0.png">
I think this is a small typo due to an extra `:` in the code. Happy to fix.
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The website link
### Expected behavior
`use_cache` should be on its own line | 09-16-2022 22:10:30 | 09-16-2022 22:10:30 | @ankrgyl that's a good finding ๐ A PR contribution would be deeply appreciated (for TF and FLAX as well, if the typo also exists there), but I will pick it up otherwise :) |
transformers | 19,078 | closed | Add tests for legacy load by url and fix bugs | # What does this PR do?
This PR adds tests that we can load objects from the single URL to the relevant file, which is a deprecated behavior (kept until v5) that we unintentionally broke early because it was not tested. The tests added are marked to be removed at v5 (when they test deprecated behavior). | 09-16-2022 20:52:16 | 09-16-2022 20:52:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,077 | closed | Bump mako from 1.2.0 to 1.2.2 in /examples/research_projects/decision_transformer | Bumps [mako](https://github.com/sqlalchemy/mako) from 1.2.0 to 1.2.2.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/sqlalchemy/mako/releases">mako's releases</a>.</em></p>
<blockquote>
<h1>1.2.2</h1>
<p>Released: Mon Aug 29 2022</p>
<h2>bug</h2>
<ul>
<li>
<p><strong>[bug] [lexer]</strong> Fixed issue in lexer where the regexp used to match tags would not
correctly interpret quoted sections individually. While this parsing issue
still produced the same expected tag structure later on, the mis-handling
of quoted sections was also subject to a regexp crash if a tag had a large
number of quotes within its quoted sections.</p>
<p>References: <a href="https://github-redirect.dependabot.com/sqlalchemy/mako/issues/366">#366</a></p>
</li>
</ul>
<h1>1.2.1</h1>
<p>Released: Thu Jun 30 2022</p>
<h2>bug</h2>
<ul>
<li>
<p><strong>[bug] [tests]</strong> Various fixes to the test suite in the area of exception message rendering
to accommodate for variability in Python versions as well as Pygments.</p>
<p>References: <a href="https://github-redirect.dependabot.com/sqlalchemy/mako/issues/360">#360</a></p>
</li>
</ul>
<h2>misc</h2>
<ul>
<li>
<p><strong>[performance]</strong> Optimized some codepaths within the lexer/Python code generation process,
improving performance for generation of templates prior to their being
cached. Pull request courtesy Takuto Ikuta.</p>
<p>References: <a href="https://github-redirect.dependabot.com/sqlalchemy/mako/issues/361">#361</a></p>
</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li>See full diff in <a href="https://github.com/sqlalchemy/mako/commits">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 09-16-2022 19:06:47 | 09-16-2022 19:06:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,076 | closed | Add type hints for PyTorch SEWD | Based on the issue https://github.com/huggingface/transformers/issues/16059
@Rocketknight1 could you please take a look at it?
Thanks :) | 09-16-2022 18:29:11 | 09-16-2022 18:29:11 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,075 | closed | Note about developer mode | Adds a note about developer mode being required on Windows + overdue update of the READMEs | 09-16-2022 18:23:14 | 09-16-2022 18:23:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,073 | closed | Fix tokenizer load from one file | # What does this PR do?
#18438 broke the (deprecated) API allowing a user to load a tokenizer from the path to a given file when said tokenizer only needs one file. This PR should fix it.
Fixes #19057 | 09-16-2022 17:48:35 | 09-16-2022 17:48:35 | _The documentation is not available anymore as the PR was closed or merged._ |
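As a reminder, the (deprecated) usage this restores looks roughly like the pattern below; the path is illustrative and passing a directory or Hub id remains the recommended input:

```python
# Deprecated-but-supported-until-v5 pattern: loading a single-file tokenizer from one file.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("path/to/vocab.txt")  # instead of a directory / Hub id
```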
transformers | 19,072 | closed | Add post_process_semantic_segmentation method to SegFormer | # What does this PR do?
Adds a post_process_semantic_segmentation method to `SegFormerFeatureExtractor` with optional resizing. This model doesn't support instance or panoptic segmentation.
I will open separate PRs to make sure the naming and outputs of post_process methods of segmentation models are consistent.
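A hedged usage sketch of the new method (the checkpoint name and argument names are assumptions based on this PR's description, not a confirmed final signature):

```python
# Usage sketch -- argument names are assumptions based on the PR description.
import requests
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

checkpoint = "nvidia/segformer-b0-finetuned-ade-512-512"
feature_extractor = SegformerFeatureExtractor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# turn the low-resolution logits into one per-pixel class map per image,
# optionally resized back to the original (height, width)
segmentation_maps = feature_extractor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)
```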
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 09-16-2022 16:20:41 | 09-16-2022 16:20:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge FYI, I will also need to open a PR to edit `ImageSegmentationPipeline` after making sure all post-processing methods across segmentation models are consistent in terms of naming and functionality.<|||||>Hey @alaradirik, please also ping a core maintainer for review before merging PRs. |
transformers | 19,071 | closed | Change document question answering pipeline to always return an array | # What does this PR do?
Updates the DocumentQuestionAnsweringPipeline to always return an array, to fix the inference widget, and also be easier to use in general.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@Narsil @mishig25
| 09-16-2022 15:50:22 | 09-16-2022 15:50:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>i think this will fix this issue:
<img width="686" alt="image" src="https://user-images.githubusercontent.com/326577/191121642-1fd004ea-4111-439b-923c-020acf05c5b7.png">
<|||||>Yes that is exactly its intent! |
transformers | 19,070 | open | PhraseConstraints appearing only directly after input or at the end of the generated sentence | ### System Info
- `transformers` version: 4.22.0
- Platform: Linux-3.10.0-1160.25.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.12
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@patrickvonplaten @Narsil @cwkeam
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
## Overview
In the [PR](https://github.com/huggingface/transformers/pull/15761) that introduced word constraints to the generation function we have an example script --> Example 2: A Mix of Strong Constraint and a Disjunctive Constraint.
Below you see it slightly modified, but the modifications should not have an impact on the output:
- I added the import for `GPT2LMHeadModel` and `GPT2Tokenizer`
- I removed the `.to(torch_device)` for me to run the script
- I redid the assertions, so we can run the script on its own --> removing `self.....`
```py
from transformers import GPT2LMHeadModel, GPT2Tokenizer
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
force_word = "scared"
force_flexible = ["scream", "screams", "screaming", "screamed"]
force_words_ids = [
tokenizer([force_word], add_prefix_space=True, add_special_tokens=False).input_ids,
tokenizer(force_flexible, add_prefix_space=True, add_special_tokens=False).input_ids,
]
starting_text = ["The soldiers", "The child"]
input_ids = tokenizer(starting_text, return_tensors="pt").input_ids
outputs = model.generate(
input_ids,
force_words_ids=force_words_ids,
num_beams=10,
num_return_sequences=1,
no_repeat_ngram_size=1,
remove_invalid_values=True,
)
generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
assert generated_text[0] == "The soldiers, who were all scared and screaming at each other as they tried to get out of the"
assert generated_text[1] == "The child was taken to a local hospital where she screamed and scared for her life, police said."
```
## ToDo
- [ ] run the script on `transformers==4.20.1`: it works perfectly well
- [ ] run the script on a version above `4.20.1`: it will not pass the assertions
### Expected behavior
## Problem
The constraining algorithm seems to be somewhat broken in versions above `4.20.1`.
For example, on version `4.22` the script generates the following outputs:
> _The soldiers, who had been stationed at the base for more than a year before being evacuated **screaming scared**_
> _The child was taken to a local hospital where he died.\n 'I don't think **screaming scared**_
You can see that the constraints just get added to the end of the generated sentence. In fact, when experimenting with constraints, I found that they are either placed right after the input:
--> example is made up to show what happens...
> _The soldiers **screaming scared**, who had been stationed at the base for more than a year before being evacuated _
> _The child **screaming scared** was taken to a local hospital where he died.\n 'I don't think_
or at the end of the generated sentence:
> _The soldiers, who had been stationed at the base for more than a year before being evacuated **screaming scared**_
> _The child was taken to a local hospital where he died.\n 'I don't think **screaming scared**_
---
- [ ] I expect the constraints to appear naturally within the generated sentence (like in the testing script). On versions above `4.20.1` they are just appended in a senseless manner.
---
- hope that helps
- please ask me if you have further questions, though I am a beginner myself
| 09-16-2022 14:07:08 | 09-16-2022 14:07:08 | cc @gante as well :)<|||||>Hi @JoWohlen ๐ to confirm that I got the problem correctly -- the `example 2` of the PR that introduced the feature, modified to be self-contained, no longer works on `v4.22`. However, up to `v4.20.1`, it worked fine. Is this correct?<|||||>> Hi @JoWohlen ๐ to confirm that I got the problem correctly -- the example 2 of the PR that introduced the feature, modified to be self-contained, no longer works on v4.22. However, up to v4.20.1, it worked fine. Is this correct?
Yes that is correct<|||||>Awesome, thank you for the clarification @JoWohlen ๐ It helps to pinpoint the issue.
I've added this issue to the list of `.generate()` related issues -- I will let you know when we start looking into it!<|||||>You are welcome, and thanks for the great library!<|||||>By accident I stumbled over what probably is the cause of all this. In https://github.com/huggingface/transformers/pull/17814 a change was made to the constraint-beam-search. This change became active after v4.20.1 . Linked in the PR you can find another PR that adapts the tests to expect the faulty results (as in the issue description)<|||||>Also @boy2000-007man, maybe you have a solution to this? <|||||>@gante more generally should we maybe mark the disjunctive decoding as experimental and state that we don't actively maintain them? It's simply too time-consuming to look into this at the moment IMO<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Would it be possible to keep this issue open? We are trying to improve output using constrained decoding and this issue prevents that.<|||||>I am also interested to use this constrained text generation functionality which currently doesn't work anymore.<|||||>Reopened (it's still on my generate task queue, which sadly is quite long) :)<|||||>Looking forward to the solution. Btw, even using the version 4.20.1, which doesnโt have this issue, it also has the problem to use two or more words in force_word.
for example:
force_word = "very scared because"<|||||>@gante I would like to pick up this. Any pointers on where to start ? <|||||>Hey @raghavanone ๐ Thank you for pitching in!
I suggest opening two debugging sessions, one using v4.20 (where the output for this mode is correct) and the other using `main`. Check the internals of `.generate()` until the variables on the two sides start diverging -- after pinpointing exactly where they start diverging, the problem (and the fix) should become clear :)
This is my go-to strategy for numerical problems in `.generate()`, btw<|||||>Hello everyone,
Is there an update on this?
> Looking forward to the solution. Btw, even using the version 4.20.1, which doesnโt have this issue, it also has the problem to use two or more words in force_word.
> for example:
> force_word = "very scared because"
Weirdly, it works well when forcing chunks of two words, but fails when forcing chunks of more than two words. Here is an example to play around with:
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
words = [["scared of"], ["fire"]]
words = [["scared for their lives"], ["fire"]]
force_words_ids = [tokenizer(w, add_prefix_space=True, add_special_tokens=False).input_ids[0] for w in words]
force_flexible = ["scream", "screams", "screaming", "screamed"]
force_words_ids = [
force_words_ids,
tokenizer(force_flexible, add_prefix_space=True, add_special_tokens=False).input_ids,
]
starting_text = ["The soldiers", "The child"]
input_ids = tokenizer(starting_text, return_tensors="pt").input_ids
outputs = model.generate(
input_ids,
force_words_ids=force_words_ids,
num_beams=32,
num_return_sequences=1,
no_repeat_ngram_size=1,
max_length=60,
remove_invalid_values=True,
)
generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(generated_text)
```
when using `words = [["scared of"], ["fire"]]`, the output is very okay. When using `words = [["scared for their lives"], ["fire"]]`, there is this annoying repetition:
`'The soldiers are scared for their life scared for their lives," he said. "They\'re screaming,`
I think that if this could be fixed for 4.20.1, that would be an awesome next step.
Additionally, it would be great to have the ability to define different constraints for each hypothesis in the input_ids.
I suppose this line: https://github.com/huggingface/transformers/blob/v4.20.1/src/transformers/generation_beam_search.py#L477 should be changed so that the right constraint is returned according to the beam_idx `n`.
<|||||>Hi,I would love to work on this issue and fix the issue.<|||||>@SHUBHAPRIYA95 feel free to open a PR and tag me :)<|||||>I don't feel this is a bug. The 4.20.1 works because it inappropriately rewards constraints with `token_score` instead of `beam_score` and causes incomplete constraints repetition.
The constraints appear at EOS because the model constantly prefers `topk_beam + constraint_token` over `topk_beam2append_constraint_token + constraint_token + top1_token`.
I guess the model treats adding constraint_token as a mistake and puts it at EOS to only suffer one low `token_score` instead of at least two otherwise.
One potential solution deserves a try is to set up [`push_progress`](https://github.com/huggingface/transformers/blob/33aafc26ee68df65c7d9457259fc3d59f79eef4f/src/transformers/generation/beam_search.py#L715). |
transformers | 19,069 | closed | Fix `LeViT` checkpoint | # What does this PR do?
Fix `LeViT` checkpoint -> can't find `https://huggingface.co/facebook/levit-base-192` | 09-16-2022 13:52:25 | 09-16-2022 13:52:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,068 | closed | Replace logger.warn by logger.warning | These print unwanted warning messages, as per https://docs.python.org/3/library/logging.html#logging.warning | 09-16-2022 13:32:55 | 09-16-2022 13:32:55 | _The documentation is not available anymore as the PR was closed or merged._ |
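For context, the change amounts to the following one-line substitution (logger setup shown for completeness):

```python
import logging

logger = logging.getLogger(__name__)

logger.warn("some message")     # before -- `warn` is a deprecated alias of `warning`
logger.warning("some message")  # after  -- the spelling now used throughout the codebase
```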
transformers | 19,067 | closed | Generate: add warning when left padding should be used | # What does this PR do?
As the title describes: add a warning when left padding should be used. Incorrect use of right padding is detected when (a small sketch of this check follows below):
1. the model is decoder-only;
2. there is a padding token in the last member of the sequence. | 09-16-2022 12:37:03 | 09-16-2022 12:37:03 | _The documentation is not available anymore as the PR was closed or merged._ |
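A rough sketch of the kind of check described in the two conditions above; the function and variable names are illustrative and this is not the actual implementation inside `generate()`:

```python
# Illustrative sketch of the described check -- not the actual transformers implementation.
import torch


def warn_if_right_padded(input_ids: torch.Tensor, pad_token_id, is_decoder_only: bool) -> None:
    # condition 1: only decoder-only architectures are affected
    # condition 2: a pad token in the last position of any sequence suggests right padding was used
    if is_decoder_only and pad_token_id is not None and (input_ids[:, -1] == pad_token_id).any():
        print(
            "A decoder-only architecture is being used with right-padded inputs; "
            "use left padding (tokenizer.padding_side = 'left') for correct generation results."
        )
```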
transformers | 19,066 | closed | [FutureWarning] Add future warning for LEDForSequenceClassification | # What does this PR do?
Fixes #19019 by replacing the construction of the `eos_mask` in the `SequenceClassification`.
Also adds a test to make sure that long sequences are properly processed.
| 09-16-2022 11:44:16 | 09-16-2022 11:44:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I think deleting the class would be less confusing, we discussed this with Patrick, the original paper does not use the encoder decoder model for sequence classification. WDYT @LysandreJik <|||||>Let's please do a deprecation cycle :pray: <|||||>Would love to learn about, what can I do? <|||||>I think there is still work to do here?<|||||>@ArthurZucker, instead of deleting the class, you would first start by adding a `FutureWarning` when it is instantiated or called, mentioning that it is deprecated and what is recommended instead. You would mention that the class will be deleted in version 5, and that such a code will error out then.<|||||>Oh it ! Thanks for the pointers |
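The deprecation cycle suggested in the comments above follows a common pattern; here is a minimal illustration (the class name and message are placeholders, not the actual LED code):

```python
# Illustrative deprecation pattern -- the class name and message are placeholders.
import warnings


class SoonToBeRemovedHead:
    def __init__(self, *args, **kwargs):
        warnings.warn(
            "`SoonToBeRemovedHead` is deprecated and will be removed in version 5 of Transformers. "
            "Please use the recommended alternative instead.",
            FutureWarning,
        )
        # ... normal initialisation continues here ...
```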
transformers | 19,065 | closed | [doc] Fix link in `PreTrainedModel.save_pretrained` documentation | # What does this PR do?
Prevent a hyperlink from displaying as the full markdown representation near the bottom of the [PreTrainedModel.save_pretrained](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.save_pretrained) documentation. Currently, the additional backticks cause the body to be deemed a code block, preventing it from becoming a link like it should be.
## The current situation

## The new situation
See [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19065/en/main_classes/model#transformers.PreTrainedModel.save_pretrained) for the documentation after this PR.

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger
- Tom Aarsen | 09-16-2022 09:20:54 | 09-16-2022 09:20:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,064 | closed | Automatically tag CLIP repos as zero-shot-image-classification | # What does this PR do?
This PR changes the mapping to map CLIP to zero-shot-image-classification in the Hub automatically. (internal [context](https://huggingface.slack.com/archives/C02EK7C3SHW/p1663315010068709))
| 09-16-2022 08:47:15 | 09-16-2022 08:47:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Merging as it's all green now! Thanks for the review! :hugs: |
transformers | 19,063 | closed | Zero-Shot Classification - Pipeline - Batch Size | ### System Info
- `transformers` version: 4.21.3
- Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from torch.utils.data import Dataset
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset


class TextDataset(Dataset):
    def __init__(self, list_of_text):
        self.news = list_of_text

    def __len__(self):
        return len(self.news)

    def __getitem__(self, idx):
        sample = {'text': self.news[idx]}
        return sample


classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli", device=0, framework='pt')
candidate_labels = ['advertisement', 'politics']
dataset = TextDataset(news_list)  # news_list: the list of news texts to classify
for i in classifier(KeyDataset(dataset, 'text'), candidate_labels=candidate_labels, batch_size=32):
    print(i)
    break
```
Out : {'sequence': 'Love is Where It All Begins: adidas X Thebe Magugu Launch .... Herzogenaurach, Aug 15 2022 โ Today, adidas launches its latest Tennis collection, created in partnership with contemporary South African...', 'labels': ['advertisement', 'global warming'], 'scores': [0.9311832189559937, 0.0002945002052001655]}
### Expected behavior
As I am using batch_size 32, I do expect my output to be a sequence of dicts of length 32. However, it only returns the first element each and every time. | 09-16-2022 07:41:03 | 09-16-2022 07:41:03 | > Out : {'sequence': 'Love is Where It All Begins: adidas X Thebe Magugu Launch .... Herzogenaurach, Aug 15 2022 โ Today, adidas launches its latest Tennis collection, created in partnership with contemporary South African...', 'labels': ['advertisement', 'global warming'], 'scores': [0.9311832189559937, 0.0002945002052001655]}
> Expected behavior
>
> As I am using batch_size 32, I do expect my output to be a sequence of dicts of length 32. However, it only returns the first element each and every time.
Actually everything is working as intended. The model is indeed seeing 32 items at a time; however, 32 texts is not what makes up a batch of 32 here.
In order to work on this data, one text pair is constructed for each text + candidate label, so in your case each text generates 2 items to be processed by the model.
This pipeline is actually quite smart, and starts by outputting all the items one by one in a generating fashion; these are automatically batched (regardless of whether they come from the same text or not) into a batch of 32 (so here 16 texts x 2 candidate labels, but it would work the same with any number of candidate labels).
It then proceeds to run the model on this batch.
Then the output is iteratively debatched and processed one text + its candidate_labels at a time, yielding the exact same output as if it wasn't batched (but it was indeed batched, yielding performance speedups on an appropriate GPU, for instance); the pair expansion is sketched below.
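Here is a small sketch of the (text, label) expansion described above; the variable names and hypothesis template are illustrative of the idea, not the actual pipeline internals:

```python
# Illustrative sketch of the (text, label) pair expansion, not the pipeline internals.
texts = ["first news item", "second news item"]      # what the dataset yields
candidate_labels = ["advertisement", "politics"]

# one premise/hypothesis pair per (text, candidate label) combination
pairs = [(text, f"This example is {label}.") for text in texts for label in candidate_labels]
print(len(pairs))  # 4 model inputs, grouped into batches of up to `batch_size` for the forward pass
# afterwards the scores are regrouped per text, so the caller still gets one dict per input text
```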
Does that answer your question?
More info here:
https://huggingface.co/docs/transformers/v4.22.2/en/main_classes/pipelines#pipeline-chunk-batching
https://github.com/huggingface/transformers/pull/14225<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,062 | closed | Possibility to access initial indices of the data during the Training | ### Feature request
Scenario 1:
For specific logging tasks, we need the indices of the data, therefore we save them in our dataset.
Therefore we want to disable remove_unused_columns before we log our data with indices.
### Motivation
Right now this needs to be set in the trainer arguments beforehand.
What is the best practice for logging indices of the dataset during the forward step?
### Your contribution
I can submit an integration of a logger with more data centric focus then just logging training performance metrics. | 09-16-2022 02:16:05 | 09-16-2022 02:16:05 | Hi there! The datasets attribute of the `Trainer` are never modified, so they always retain all their columns. You therefore don't need to change the training arguments (which is something the `Trainer` is not allowed to do by the way, otherwise its logs are not accurate and we can't reproduce the same results easily).<|||||>Thanks Sylvain @sgugger for your quick reply.
To clarify the issue when writing a callback integration for data monitoring:
For some monitoring, the embeddings and original row indices (to identify the text) need to be logged during on_step_end.
An example implementation in PyTorch (see the bottom of the forward function):
```
def forward(self, x, attention_mask, idxs):
"""Model forward function."""
embedding = self.feature_extractor(
input_ids=x, attention_mask=attention_mask
).last_hidden_state[:, 0]
emb = self.pre_step(embedding)
emb = self.relu(emb)
emb = self.dropout(emb)
logits = self.classifier(emb)
# The logging function that is moved to a callback.
logging_function(
embs= embedding, logits=logits, indices=idxs
)
```
According to the Trainer, if remove_unused_columns is enabled, it would overwrite the training dataset in the class:
https://github.com/huggingface/transformers/blob/16242e1bf07450c5dc39fe64fbc810c877455519/src/transformers/trainer.py#L844
So the feature I'm trying to specify is to give integrations the logging capabilities and the user the least parameter change overhead. I couldn't find a lot of discussions about preserving or accessing the initial indices during the forward step except this:
https://discuss.pytorch.org/t/how-does-one-obtain-indicies-from-a-dataloader/16847/7
The embeddings or logits are straightforward to capture with the register_forward_hook API though, for example:
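A small sketch of capturing the embedding with a forward hook (illustrative only, not a full logging integration):

```python
# Sketch: capture the [CLS] embedding with a forward hook for later logging.
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

captured = {}


def save_embedding_hook(module, inputs, output):
    # keep the [CLS] embedding of the last hidden state
    captured["embedding"] = output.last_hidden_state[:, 0].detach()


handle = model.register_forward_hook(save_embedding_hook)
with torch.no_grad():
    model(**tokenizer("a short example", return_tensors="pt"))
handle.remove()
print(captured["embedding"].shape)  # torch.Size([1, 768])
```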
<|||||>I am very confused as to how letting the callback change the training arguments would help in this instance. By the time you arrive at the model, the dataloader has been built. So extra args have been removed (or not) and changing the training arguments won't do anything.<|||||>Yes, I see. The initial thought was the user needs to just populate the report_to flag. But as you have said access wise and order wise, when dealing with adding/accessing indices to/of the data, it's not possible at the callback level without previous modifications of the dataset itself. It's necessary to set remove_unused_columns to false and use the data collater fn to deal with the indices. Am I correct? I hope this clears some confusion :D <|||||>Most likely the model itself, from what you shared. Normally data collators collate what they get (as long as it's in a "collatable" type). As you also pointed out, you can use a forward hook without needing to rewrite the model class.<|||||>Though if modify the dataset to add the idices:
```python
row_len = len(ds["train"])
ds["train"] = ds["train"].add_column("idx", list(range(row_len)))
```
the unmodified forward function will throw an error.
As the input then is:
` ['text', 'label', 'idx', 'input_ids', 'attention_mask']`
instead of
`['label', 'input_ids', 'attention_mask']`
So to get back to your reply I will double check if I can add a further parameter with the forward hook to the forward function.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,061 | closed | [Wav2Vec2] Fix None loss in docstring for Wav2Vec2ForPretraining | # What does this PR do?
- [ ] This PR fix None loss in docstring for Wav2Vec2ForPretraining | 09-15-2022 22:49:25 | 09-15-2022 22:49:25 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19061). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @abdouaziz! Looks like you need to rebase onto main: https://github.com/huggingface/transformers/pull/18960#issuecomment-1248291349
That should fix the file changes! |