repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 15,346 | closed | Fix deepspeed docs | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes typos and misspellings in deepspeed docs.
## Who can review?
@sgugger @stas00 @LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-26-2022 10:25:52 | 01-26-2022 10:25:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'm sorry I overlooked mistakes in those two sentences. It's true those are not fixes, though the first one is not grammatically correct anyway.
Do as you wish to fix those, I'll make sure to stay very far away from the deepspeed file in the future.<|||||>I tried a revert, but I thought it'd magically resume this PR and it didn't, so it'd be a bad workflow. So if I have to make a new PR, it's much simpler to just do a new PR with just the fixes: https://github.com/huggingface/transformers/pull/15355
|
transformers | 15,345 | closed | DebertaV2 For run_qa.py | ## Environment info
- `transformers` version: 4.15.0
- Platform: Linux-4.4.0-142-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.3.5 (cpu)
- Jax version: 0.2.22
- JaxLib version: 0.1.72
## Models:
- [microsoft/deberta-v2-xlarge](https://huggingface.co/microsoft/deberta-v2-xlarge)
## Error:
```
Traceback (most recent call last):
File "run_qa.py", line 642, in <module>
main()
File "run_qa.py", line 308, in main
"This example script only works for models that have a fast tokenizer. Checkout the big table of models "
ValueError: This example script only works for models that have a fast tokenizer. Checkout the big table of models at https://huggingface.co/transformers/index.html#supported-frameworks to find the model types that meet this requirement
```
## Script
```
CUDA_VISIBLE_DEVICES=0 python run_qa.py \
--model_name_or_path microsoft/deberta-v2-xlarge \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
```
[last run_qa.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py)
## My Problem
DebertaV2 has significant capabilities on NLI tasks. I want to use DebertaV2 for the QA task. However, DebertaV2 doesn't have a DebertaV2TokenizerFast. How can I use DebertaV2 for the QA task? Is it possible to get offset_mapping without a fast tokenizer? Thank you very much!
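For illustration, once a fast (Rust-backed) tokenizer for DeBERTa-v2 is available (e.g. via the PR linked in the reply below), offsets can be requested directly. This is only a sketch and assumes your installed version ships that tokenizer:

```python
from transformers import AutoTokenizer

# use_fast=True only works once a fast tokenizer implementation exists for the checkpoint
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge", use_fast=True)

encoded = tokenizer(
    "Who wrote the paper?",          # question (illustrative)
    "The paper was written by ...",  # context (illustrative)
    return_offsets_mapping=True,
    truncation="only_second",
)
print(encoded["offset_mapping"][:10])  # character spans needed by run_qa.py postprocessing
```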
| 01-26-2022 10:12:46 | 01-26-2022 10:12:46 | DebertaV2TokenizerFast is close to being merged. If you look at the code in the PR, you can use it now. https://github.com/huggingface/transformers/pull/14928<|||||>OK! thank you~ |
transformers | 15,344 | closed | wav2vec with LM leads to CPU OOM | ## Environment info
I used the code described in the blog post for [wav2vec with LMs](https://huggingface.co/blog/wav2vec2-with-ngram). Also posted this [on the forum](https://discuss.huggingface.co/t/wav2vec-with-new-lm-causing-cpu-oom/14069) and will update both.
### Who can help
@patrickvonplaten, @anton-l and whoever can help.
Models:
Wav2Vec2ProcessorWithLM with pyctcdecode
## To reproduce
Steps to reproduce the CPU OOM via GPU (maybe increase range):
```python
from datasets import load_dataset
from transformers import Wav2Vec2ProcessorWithLM, Wav2Vec2ForCTC
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm").to("cuda")
print("* * * * * * loaded models")
for i in range(200):
audio_sample = dataset[i]
print(" * * * * Sample: ", i)
inputs = processor(audio_sample["audio"]["array"], sampling_rate=audio_sample["audio"]["sampling_rate"], return_tensors="pt").to("cuda")
with torch.no_grad():
logits = model(**inputs).logits
transcription = processor.batch_decode(logits.cpu().numpy()).text
print(transcription[0].lower())
```
and the same for just CPU:
```python
from datasets import load_dataset
from transformers import Wav2Vec2ProcessorWithLM, Wav2Vec2ForCTC
import torch
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
print("* * * * * * loaded models")
for i in range(200):
audio_sample = dataset[i]
print(" * * * * Sample: ", i)
inputs = processor(audio_sample["audio"]["array"], sampling_rate=audio_sample["audio"]["sampling_rate"], return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
transcription = processor.batch_decode(logits.numpy()).text
print(transcription[0].lower())
```
No problems with the classic processor. Just the new processor seems to leak some memory ... | 01-26-2022 10:11:09 | 01-26-2022 10:11:09 | https://github.com/huggingface/transformers/blob/v4.15.0/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L307
I believe closing the pool process here should solve the issue !
(`pool.close()`).
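For illustration only, the kind of cleanup being suggested looks roughly like this. It is not the actual transformers code, and it assumes pyctcdecode's `decode_batch(pool, ...)` signature:

```python
from multiprocessing import get_context

def decode_with_lm(decoder, logits_list, num_processes=4):
    # decoder is assumed to be a pyctcdecode BeamSearchDecoderCTC; the important part is
    # that the pool is closed and joined after decoding so worker processes (and their
    # memory) do not accumulate across calls.
    pool = get_context("fork").Pool(num_processes)
    try:
        return decoder.decode_batch(pool, logits_list)
    finally:
        pool.close()
        pool.join()
```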
For maintainer info, I also ran into this issue and it seems a few memory leaks were reported in ctcdecode/kenlm on this type of problem.<|||||>Kudos to you, Manuel. Closing the pool solves the problem, feel free to submit a PR or @patrickvonplaten?<|||||>I believe the issue should still be open until the problem is fixed but will let @patrickvonplaten decide whether it's the cleanest way around the problem<|||||>Good point, will leave this open<|||||>Turns out issue was solved 6 days ago actually (on the master branch). Until the patch is issued, the library can thus be built from source.<|||||>You are right, I did clone the master branch, but 7 days ago ... should have updated. Closing again before I forget to do so.<|||||>Ah yeah the pool close() one. We indeed need to install transformers from scratch for this one! |
transformers | 15,343 | closed | Fix `bad_words_ids` not working with sentencepiece-based tokenizers | # What does this PR do?
This fixes the problem where models using sentencepiece-based tokenizers cannot prevent bad words when decoding.
For sentencepiece-based tokenizers like T5Tokenizer, when creating `bad_words_ids` from `bad_words`, `add_special_tokens` must be set to False.
## Code to reproduce
```python
from transformers import T5Tokenizer, AutoModelForCausalLM, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("t5-small", use_fast=False)
model = T5ForConditionalGeneration.from_pretrained("t5-small")
bad_words = ["my", "will", "My", "you", "are", "I", "You", "it"] # words should not be generated
bad_words_ids = tokenizer(bad_words, add_prefix_space=True, add_special_tokens=False).input_ids # get bad words ids
input_context = "You are my friend"
# encode input context
input_ids = tokenizer(input_context, return_tensors="pt").input_ids
outputs = model.generate(input_ids=input_ids, max_length=20, do_sample=True, bad_words_ids=bad_words_ids, num_return_sequences=3)
gen_texts = tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
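To see why `add_special_tokens=False` matters here, compare the ids produced with and without it. A small illustration (the exact ids depend on the vocabulary):

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small", use_fast=False)

with_special = tokenizer(["my"]).input_ids                               # word id plus the trailing </s> id
without_special = tokenizer(["my"], add_special_tokens=False).input_ids  # just the word id
print(with_special, without_special)

# With the extra </s> id each entry becomes a two-token sequence, so the ban is treated
# as a phrase constraint and the single word on its own is never actually blocked.
```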
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Who can review?
@patrickvonplaten @LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-26-2022 10:10:43 | 01-26-2022 10:10:43 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @ngoquanghuy99,
That's a great fix - thanks a lot for diving into this :-)
Could you run `make style` once so that the `check_code_quality` test goes green? |
transformers | 15,342 | closed | random word masking index should be greater than 0 | ## Information
https://github.com/huggingface/transformers/blob/05fa1a7ac17bb7aa07b9e0c1e138ecb31a28bbfe/src/transformers/data/data_collator.py#L771
This line should be:
```python
random_words = torch.randint(1, len(self.tokenizer), labels.shape, dtype=torch.long)
```
because the random word masking index should be greater than 0 (0 means [PAD] in BERT's vocabulary).
I found the bug when I wanted to convert masked input_ids back to tokens using tokenizer.convert_ids_to_tokens. If a masked id is replaced by 0 and then passed to convert_ids_to_tokens, all tokens after the 0 will be discarded.
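For anyone who wants this behavior without patching the library, here is a rough standalone sketch of MLM masking that never samples id 0 (function and argument names are illustrative, not part of transformers):

```python
import torch

def mask_tokens(inputs, tokenizer, mlm_probability=0.15):
    """inputs: 2D LongTensor of token ids; returns (masked_inputs, labels)."""
    labels = inputs.clone()
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens_mask = torch.tensor(
        [tokenizer.get_special_tokens_mask(val, already_has_special_tokens=True) for val in labels.tolist()],
        dtype=torch.bool,
    )
    probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # only compute loss on masked tokens

    # 80%: replace with [MASK]
    indices_replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    inputs[indices_replaced] = tokenizer.mask_token_id

    # 10%: replace with a random token, sampled from 1..vocab_size-1 so 0 ([PAD]) is never drawn
    indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced
    random_words = torch.randint(1, len(tokenizer), labels.shape, dtype=torch.long)
    inputs[indices_random] = random_words[indices_random]

    # remaining 10%: keep the original token
    return inputs, labels
```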
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.15.0
- Platform: Linux-5.4.0-62-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.2
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <False>
- Using distributed or parallel set-up in script?: <False>
| 01-26-2022 07:51:42 | 01-26-2022 07:51:42 | cc @sgugger <|||||>I don't think there is anything in the research article or the original pretraining code that excludes the pad token from the random words, so I would leave this as is. You can use your own data collator with any modification you want :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,341 | closed | ValueError: No valid checkpoint found in output directory |
```python
import sys
sys.path.append("./NeZha_Chinese_PyTorch-main/")
from transformers import BertTokenizer, WEIGHTS_NAME,TrainingArguments
from model.modeling_nezha import NeZhaForMaskedLM
from model.configuration_nezha import NeZhaConfig
from transformers import (
DataCollatorForLanguageModeling,
Trainer,
TrainingArguments,
LineByLineTextDataset
)
from transformers import BertTokenizer
# tokenizer = BertTokenizer(vocab_file='./vocab.txt',do_lower_case=False,do_basic_tokenize=False)
tokenizer = BertTokenizer.from_pretrained('./vocab.txt',do_lower_case=False,do_basic_tokenize=False)
model_path='./nezha-cn-base/'
config=NeZhaConfig.from_pretrained(model_path)
model=NeZhaForMaskedLM.from_pretrained(model_path, config=config)#
model.resize_token_embeddings(len(tokenizer))
train_dataset=LineByLineTextDataset(tokenizer=tokenizer,file_path='../data/bert_data/mlm_data/train.txt',block_size=128)
# Data collator for the MLM model
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
# Training arguments
pretrain_batch_size=64
num_train_epochs=300
training_args = TrainingArguments(
output_dir='./outputs/', overwrite_output_dir=True, num_train_epochs=num_train_epochs, learning_rate=6e-5,
per_device_train_batch_size=pretrain_batch_size, save_steps=10000,save_total_limit=10)#
# Train the model through the Trainer interface
trainer = Trainer(
model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset)
trainer.train(True)
```

```
ValueError Traceback (most recent call last)
<ipython-input-9-9d3121db9099> in <module>
29 trainer = Trainer(
30 model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset)
---> 31 trainer.train(True)
D:\Program Files (x86)\Anconda3\lib\site-packages\transformers\trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1088 resume_from_checkpoint = get_last_checkpoint(args.output_dir)
1089 if resume_from_checkpoint is None:
-> 1090 raise ValueError(f"No valid checkpoint found in output directory ({args.output_dir})")
1091
1092 if resume_from_checkpoint is not None:
ValueError: No valid checkpoint found in output directory (./outputs/)
```
| 01-26-2022 06:11:14 | 01-26-2022 06:11:14 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> import sys sys.path.append("./NeZha_Chinese_PyTorch-main/") from transformers import BertTokenizer, WEIGHTS_NAME,TrainingArguments from model.modeling_nezha import NeZhaForMaskedLM from model.configuration_nezha import NeZhaConfig from transformers import ( DataCollatorForLanguageModeling, Trainer, TrainingArguments, LineByLineTextDataset ) from transformers import BertTokenizer
>
> # tokenizer = BertTokenizer(vocab_file='./vocab.txt',do_lower_case=False,do_basic_tokenize=False)
> tokenizer = BertTokenizer.from_pretrained('./vocab.txt',do_lower_case=False,do_basic_tokenize=False) model_path='./nezha-cn-base/' config=NeZhaConfig.from_pretrained(model_path) model=NeZhaForMaskedLM.from_pretrained(model_path, config=config)# model.resize_token_embeddings(len(tokenizer)) train_dataset=LineByLineTextDataset(tokenizer=tokenizer,file_path='../data/bert_data/mlm_data/train.txt',block_size=128)
>
> # MLM模型的数据DataCollator
> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
>
> # 训练参数
> pretrain_batch_size=64 num_train_epochs=300 training_args = TrainingArguments( output_dir='./outputs/', overwrite_output_dir=True, num_train_epochs=num_train_epochs, learning_rate=6e-5, per_device_train_batch_size=pretrain_batch_size, save_steps=10000,save_total_limit=10)#
>
> # 通过Trainer接口训练模型
> trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset) trainer.train(True)
>
> ValueError Traceback (most recent call last) in 29 trainer = Trainer( 30 model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset) ---> 31 trainer.train(True)
>
> D:\Program Files (x86)\Anconda3\lib\site-packages\transformers\trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1088 resume_from_checkpoint = get_last_checkpoint(args.output_dir) 1089 if resume_from_checkpoint is None: -> 1090 raise ValueError(f"No valid checkpoint found in output directory ({args.output_dir})") 1091 1092 if resume_from_checkpoint is not None:
>
> ValueError: No valid checkpoint found in output directory (./outputs/)
I got the same error. But I do not know how to fix it.<|||||>> > import sys sys.path.append("./NeZha_Chinese_PyTorch-main/") from transformers import BertTokenizer, WEIGHTS_NAME,TrainingArguments from model.modeling_nezha import NeZhaForMaskedLM from model.configuration_nezha import NeZhaConfig from transformers import ( DataCollatorForLanguageModeling, Trainer, TrainingArguments, LineByLineTextDataset ) from transformers import BertTokenizer
> > # tokenizer = BertTokenizer(vocab_file='./vocab.txt',do_lower_case=False,do_basic_tokenize=False)
> > tokenizer = BertTokenizer.from_pretrained('./vocab.txt',do_lower_case=False,do_basic_tokenize=False) model_path='./nezha-cn-base/' config=NeZhaConfig.from_pretrained(model_path) model=NeZhaForMaskedLM.from_pretrained(model_path, config=config)# model.resize_token_embeddings(len(tokenizer)) train_dataset=LineByLineTextDataset(tokenizer=tokenizer,file_path='../data/bert_data/mlm_data/train.txt',block_size=128)
> > # MLM模型的数据DataCollator
> > data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
> > # 训练参数
> > pretrain_batch_size=64 num_train_epochs=300 training_args = TrainingArguments( output_dir='./outputs/', overwrite_output_dir=True, num_train_epochs=num_train_epochs, learning_rate=6e-5, per_device_train_batch_size=pretrain_batch_size, save_steps=10000,save_total_limit=10)#
> > # 通过Trainer接口训练模型
> > trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset) trainer.train(True)
> > ValueError Traceback (most recent call last) in 29 trainer = Trainer( 30 model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset) ---> 31 trainer.train(True)
> > D:\Program Files (x86)\Anconda3\lib\site-packages\transformers\trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1088 resume_from_checkpoint = get_last_checkpoint(args.output_dir) 1089 if resume_from_checkpoint is None: -> 1090 raise ValueError(f"No valid checkpoint found in output directory ({args.output_dir})") 1091 1092 if resume_from_checkpoint is not None:
> > ValueError: No valid checkpoint found in output directory (./outputs/)
>
> I got the same error. But I do not know how to fix it.
trainer.train(True) >>> trainer.train() (change this line to the latter and it will work)
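For context, a short sketch of what the argument means, continuing from the `trainer` object defined above (the checkpoint path is illustrative):

```python
# trainer.train(True) is read as trainer.train(resume_from_checkpoint=True), which tells the
# Trainer to look for an existing checkpoint inside output_dir; on a fresh run there is none,
# hence the ValueError. A fresh training run is simply:
trainer.train()

# Resuming later from a previously saved checkpoint:
trainer.train(resume_from_checkpoint="./outputs/checkpoint-10000")
```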
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,340 | closed | CPU OOM when using IterableDataset with dataloader_num_workers > 0 | ## Environment info
- `transformers` version: 4.15.0
- Platform: Linux-3.10.107-1-tlinux2_kvm_guest-0049-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@sgugger
## Information
I want to pre-train a large language model. So the train-dataset is huge (~100G), I need to use IterableDataset. and the dev dataset is small (~10M) so I use Dataset.
In order to speed up the training process, the `dataloader_num_workers` of `TrainingArguments` is set to 10. But when I train the model, the CPU memory keeps increasing and gets OOM finally.

`dataloader_num_workers` of `TrainingArguments` is set to 0 and the CPU memory is stable but the training is too slow!
May be It is the same problem of this https://pytorch.org/docs/stable/data.html#multi-process-data-loading
I want to know how to correctly use IterableDataset with huggingface's trainer (train-dataset is IterableDataset, and dev-dataset is Dataset, and dataloader_num_workers > 0)? Thanks a lot!
## To reproduce
I write a sample code to reproduce:
demo_train.py
```Python
import argparse
from itertools import cycle
from typing import Optional
from dataclasses import dataclass
from torch.nn import CrossEntropyLoss
from torch.utils.data import Dataset, IterableDataset
from transformers import BertTokenizer, Trainer, TrainingArguments, BertConfig, PreTrainedModel
import torch
import os
import numpy as np
from transformers.file_utils import ModelOutput
class DemoEvalDataset(Dataset):
def __init__(self, data_file):
self.items = []
with open(data_file, 'rt', encoding='utf-8', buffering=10000) as f:
for line in f:
self.items.append(line.strip())
def __len__(self):
return len(self.items)
def __getitem__(self, index):
return self.items[index]
class DemoTrainDataset(IterableDataset):
def __init__(self, data_file):
super().__init__()
self.data_file = data_file
def get_stream(self):
return cycle(self.parse_file_by_line())
def __iter__(self):
return self.get_stream()
def parse_file_by_line(self):
with open(self.data_file, 'rt', encoding='utf-8', buffering=10000) as file_obj:
for line in file_obj:
yield line.strip()
@dataclass
class DemoModelOutput(ModelOutput):
loss: Optional[torch.FloatTensor] = None
logits: torch.FloatTensor = None
class DemoModel(PreTrainedModel):
"""
token level binary classification
"""
def __init__(self, config):
super().__init__(config)
config.hidden_size = 100
config.label_num = 2
self.config = config
self.embedding = torch.nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=0)
self.mlp = torch.nn.Linear(config.hidden_size, config.label_num)
self.loss_fct = CrossEntropyLoss()
def forward(self, input_ids, labels=None):
logits = self.mlp(self.embedding(input_ids))
loss = None
if labels is not None:
loss = self.loss_fct(logits.view(-1, self.config.label_num), labels.view(-1))
return DemoModelOutput(
loss=loss,
logits=logits
)
def data_collator(tokenizer, features):
texts = features
texts_tokenized = tokenizer(texts, padding=True, return_tensors='np')
input_ids = texts_tokenized['input_ids']
batch_size, max_seq_len = input_ids.shape
labels = np.random.randint(2, size=(batch_size, max_seq_len))
return {
'input_ids': torch.tensor(input_ids, dtype=torch.long),
'labels': torch.tensor(labels, dtype=torch.long)
}
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)
parser.add_argument('--pretrained_model_path', required=True)
parser.add_argument('--output_dir', required=True)
parser.add_argument('--train_data_file', required=True)
parser.add_argument('--eval_data_file', required=True)
parser.add_argument('--train_batch_size', type=int, default=128)
parser.add_argument('--eval_batch_size', type=int, default=128)
parser.add_argument('--logging_steps', type=int, default=100)
parser.add_argument('--eval_steps', type=int, default=1000)
parser.add_argument('--save_steps', type=int, default=1000)
parser.add_argument('--learning_rate', type=float, default=5e-5)
parser.add_argument('--max_steps', type=int, default=100000)
args = parser.parse_args()
model_config = BertConfig.from_pretrained(args.pretrained_model_path)
model = DemoModel(model_config)
train_dataset = DemoTrainDataset(args.train_data_file)
eval_dataset = DemoEvalDataset(args.eval_data_file)
train_args = TrainingArguments(
output_dir=args.output_dir,
per_device_train_batch_size=args.train_batch_size,
per_device_eval_batch_size=args.eval_batch_size,
do_train=True,
do_eval=True,
evaluation_strategy="steps",
logging_steps=args.logging_steps,
eval_steps=args.eval_steps,
save_steps=args.save_steps,
overwrite_output_dir=True,
save_total_limit=1,
local_rank=int(os.environ.get('LOCAL_RANK', -1)),
learning_rate=args.learning_rate,
metric_for_best_model='eval_loss',
fp16=True,
max_steps=args.max_steps,
dataloader_num_workers=10 # here !!!
)
bert_tokenizer = BertTokenizer.from_pretrained(args.pretrained_model_path)
trainer = Trainer(
model=model,
args=train_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
data_collator=lambda features: data_collator(bert_tokenizer, features),
)
trainer.train()
```
run.sh
```bash
python -m torch.distributed.launch \
--nproc_per_node=8 \
demo_train.py \
--pretrained_model_path $PRETRAINED_MODELS_DIR/roberta \
--output_dir $CHECKPOINT_DIR \
--train_data_file $DATA_DIR/large_file.txt \
--eval_data_file $DATA_DIR/dev_file.txt \
--learning_rate 5e-5 \
--max_step 300000
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
| 01-26-2022 05:24:24 | 01-26-2022 05:24:24 | DO NOT USE `from itertools import cycle`<|||||>@Ethan-yt It works👍
```
class DemoTrainDataset(IterableDataset):
def __init__(self, data_file):
super().__init__()
self.data_file = data_file
def __iter__(self):
while True:
with open(self.data_file, 'rt', encoding='utf-8', buffering=10000) as file_obj:
for line in file_obj:
yield line.strip()
```
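For reference, when `dataloader_num_workers > 0` each worker process runs its own copy of `__iter__`, so a worker-aware variant of the dataset above would additionally shard lines by worker id. A rough sketch following the PyTorch multi-process data loading docs (the class name is made up here):

```python
from torch.utils.data import IterableDataset, get_worker_info

class ShardedLineDataset(IterableDataset):
    def __init__(self, data_file):
        super().__init__()
        self.data_file = data_file

    def __iter__(self):
        info = get_worker_info()
        num_workers = info.num_workers if info is not None else 1
        worker_id = info.id if info is not None else 0
        while True:
            with open(self.data_file, "rt", encoding="utf-8", buffering=10000) as f:
                for i, line in enumerate(f):
                    # each worker only yields its own slice, so samples are not duplicated
                    if i % num_workers == worker_id:
                        yield line.strip()
```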

<|||||>@sgugger
When I use IterableDataset and dataloader_num_workers > 0, should I manually handle multiprocessing in the __iter__ follow the https://pytorch.org/docs/stable/data.html#multi-process-data-loading

<|||||>Yes, that's your responsibility, as the PyTorch doc highlights.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,339 | closed | how can i write tensorboard when i train model with the script "run_speech_recognition_ctc.py" | I found that there was no "outputdir/runs/" directory created during training. Are there any params to write tensorboard?
@sgugger @patrickvonplaten @anton-l
## Environment info
transformers 4.16.0.dev0
| 01-26-2022 03:42:15 | 01-26-2022 03:42:15 | Hey @JucyCherry,
Thanks for the issue! I think all you need to do is the following:
1. Install `tensorboard`: https://pypi.org/project/tensorboard/
2. Run training. The Trainer in the `run_speech_recognition_ctc.py` script then automatically logs to tensorboard (see the sketch below for making this explicit).
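For completeness, the logging can also be requested explicitly through `TrainingArguments`. A sketch with the standard argument names of that era (double-check them against your installed version):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs",
    logging_dir="./outputs/runs",   # where the tensorboard event files are written
    logging_steps=100,
    report_to=["tensorboard"],      # requires `pip install tensorboard`
)
```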
You could try out the explanation to check if it works: https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#how-to-finetune-an-acoustic-model<|||||>thanks a lot ! |
transformers | 15,338 | closed | preprocessing_num_workers missing in run_summarization_no_trainer | Hello, I find that in script: https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization_no_trainer.py
The `preprocessing_num_workers` does not exist. It just appears in the arguments, but does not take effect in dataset map operations.
So is it intentionally missing ?
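For reference, the missing wiring presumably just needs `num_proc` forwarded to the `map` calls, along these lines (a sketch; the variable names mirror the script but should be checked against the actual file):

```python
processed_datasets = raw_datasets.map(
    preprocess_function,
    batched=True,
    num_proc=args.preprocessing_num_workers,
    remove_columns=column_names,
    load_from_cache_file=not args.overwrite_cache,
    desc="Running tokenizer on dataset",
)
```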
| 01-26-2022 01:46:32 | 01-26-2022 01:46:32 | Seems like an oversight. Do you want to make a PR to fix this?<|||||>> Seems like an oversight. Do you want to make a PR to fix this?
Sorry, I don't know how to make a PR. |
transformers | 15,337 | closed | Fix table formatting in SegFormer docs | # What does this PR do?
This PR fixes the Markdown table on the main SegFormer doc page. It was missing the line below the header that makes it a valid Markdown table.
Fixes #15334
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger, @NielsRogge | 01-26-2022 01:45:36 | 01-26-2022 01:45:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,336 | closed | Perplexity VERY high but generated text coherent | I have trained GPT2 from scratch on a minority language which I tokenized with BPE.
When I finished training, the perplexity on the eval set was `678.80`, so I thought the model was really bad.
I however tried to sample from the model using very different sampling procedures, and finally got very coherent texts with the following parameters:
`model.generate(input_ids, do_sample=True, top_k=950, repetition_penalty=1.2, eos_token_id=0)`
- Is the model generating good texts because my top_k is larger than the perplexity?
- Is the perplexity perhaps so high because my vocab size is 50256?
Is there perhaps a bug in how the perplexity is computed after training? This is the final value I got from the last epoch and also stored in `eval_results.json`.
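If the model was trained with the `run_clm` example script (which writes `eval_results.json`), the reported perplexity is just the exponential of the evaluation cross-entropy loss, so a value like 678.8 corresponds to roughly 6.5 nats per token:

```python
import math

eval_loss = 6.52           # illustrative value, back-computed from the reported perplexity
perplexity = math.exp(eval_loss)
print(perplexity)          # ~678
```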
| 01-26-2022 00:48:45 | 01-26-2022 00:48:45 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,335 | closed | Fix code format for Accelerate doc | This PR fixes the format of externally linked code objects so they don't render like this:

Also replaces the side-by-side image with a code block because the image isn't easily visible to users. | 01-25-2022 23:23:02 | 01-25-2022 23:23:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,334 | closed | Documentation for SegFormer includes improperly-formatted table | ## Environment info
- `transformers` version: 4.13.0 - present (4.15.0)
@sgugger , this is a documentation issue
## Information
The [SegFormer doc page](https://huggingface.co/docs/transformers/model_doc/segformer) has an improperly-formatted table for the different model variants.
I think the fix should be to add a dividing line between the header and the body of the table.
```markdown
| ------------- | ------ | ------------ | ------------------- | ---------- | ----------------- |
```
The final version would be something like this:
```markdown
| Model variant | Depths | Hidden sizes | Decoder hidden size | Params (M) | ImageNet-1k Top 1 |
| ------------- | ------ | ------------ | ------------------- | ---------- | ----------------- |
| MiT-b0 | [2, 2, 2, 2] | [32, 64, 160, 256] | 256 | 3.7 | 70.5 |
| MiT-b1 | [2, 2, 2, 2] | [64, 128, 320, 512] | 256 | 14.0 | 78.7 |
| MiT-b2 | [3, 4, 6, 3] | [64, 128, 320, 512] | 768 | 25.4 | 81.6 |
| MiT-b3 | [3, 4, 18, 3] | [64, 128, 320, 512] | 768 | 45.2 | 83.1 |
| MiT-b4 | [3, 8, 27, 3] | [64, 128, 320, 512] | 768 | 62.6 | 83.6 |
| MiT-b5 | [3, 6, 40, 3] | [64, 128, 320, 512] | 768 | 82.0 | 83.8 |
```
This should render as:
| Model variant | Depths | Hidden sizes | Decoder hidden size | Params (M) | ImageNet-1k Top 1 |
| ------------- | ------ | ------------ | ------------------- | ---------- | ----------------- |
| MiT-b0 | [2, 2, 2, 2] | [32, 64, 160, 256] | 256 | 3.7 | 70.5 |
| MiT-b1 | [2, 2, 2, 2] | [64, 128, 320, 512] | 256 | 14.0 | 78.7 |
| MiT-b2 | [3, 4, 6, 3] | [64, 128, 320, 512] | 768 | 25.4 | 81.6 |
| MiT-b3 | [3, 4, 18, 3] | [64, 128, 320, 512] | 768 | 45.2 | 83.1 |
| MiT-b4 | [3, 8, 27, 3] | [64, 128, 320, 512] | 768 | 62.6 | 83.6 |
| MiT-b5 | [3, 6, 40, 3] | [64, 128, 320, 512] | 768 | 82.0 | 83.8 |
A slightly nicer-formatted version would be
```markdown
| Model variant | Depths | Hidden sizes | Decoder hidden size | Params (M) | ImageNet-1k Top 1 |
| :-----------: | ------ | ------------ | :-----------------: | :--------: | :---------------: |
| MiT-b0 | [2, 2, 2, 2] | [32, 64, 160, 256] | 256 | 3.7 | 70.5 |
| MiT-b1 | [2, 2, 2, 2] | [64, 128, 320, 512] | 256 | 14.0 | 78.7 |
| MiT-b2 | [3, 4, 6, 3] | [64, 128, 320, 512] | 768 | 25.4 | 81.6 |
| MiT-b3 | [3, 4, 18, 3] | [64, 128, 320, 512] | 768 | 45.2 | 83.1 |
| MiT-b4 | [3, 8, 27, 3] | [64, 128, 320, 512] | 768 | 62.6 | 83.6 |
| MiT-b5 | [3, 6, 40, 3] | [64, 128, 320, 512] | 768 | 82.0 | 83.8 |
```
which looks like this:
| Model variant | Depths | Hidden sizes | Decoder hidden size | Params (M) | ImageNet-1k Top 1 |
| :-----------: | ------ | ------------ | :-----------------: | :--------: | :---------------: |
| MiT-b0 | [2, 2, 2, 2] | [32, 64, 160, 256] | 256 | 3.7 | 70.5 |
| MiT-b1 | [2, 2, 2, 2] | [64, 128, 320, 512] | 256 | 14.0 | 78.7 |
| MiT-b2 | [3, 4, 6, 3] | [64, 128, 320, 512] | 768 | 25.4 | 81.6 |
| MiT-b3 | [3, 4, 18, 3] | [64, 128, 320, 512] | 768 | 45.2 | 83.1 |
| MiT-b4 | [3, 8, 27, 3] | [64, 128, 320, 512] | 768 | 62.6 | 83.6 |
| MiT-b5 | [3, 6, 40, 3] | [64, 128, 320, 512] | 768 | 82.0 | 83.8 |
I would be happy to submit a PR for this. Let me know. | 01-25-2022 20:57:49 | 01-25-2022 20:57:49 | cc @NielsRogge What do you think?<|||||>Hi, I indeed saw that the table needs an update as we updated all docs from rst to markdown.
Would be great if you could open a PR for this! Thanks. |
transformers | 15,333 | closed | Fine-tune wav2vec trained checkpoint | The tutorial of wav2vec says that the encoder model was originally trained on the Common Voice dataset, but all the fine-tuning tasks were trained on datasets from Common Voice. When fine-tuning a wav2vec2 model, we fine-tune only the final linear classification layer, right?
If this is the case, why don't you release the lm.weights and lm.biases for each language?
I am asking because I have a fine-tuned wav2vec2 model and it is doing very well. I want to fine-tune only the last linear layer, without initializing it with random values. How can I load the lm.weights and lm.biases from a checkpoint?
I tried to load the pre-trained checkpoint like this
```
import torch
state_dict = torch.load('./eng_asr/pytorch_model.bin', map_location='cpu')
```
Then initialize a new pre-trained model
```
import torch
model = Wav2Vec2ForCTC.from_pretrained(f'facebook/wav2vec2-large-xlsr-53',
vocab_size=processor.tokenizer.vocab_size,
pad_token_id=processor.tokenizer.pad_token_id)
model.freeze_feature_extractor()
model.load_state_dict(state_dict)
```
At this stage, I'm loading the weights of my checkpoint to facebook/wav2vec2-large-xlsr-53.
Then, I set the training arguments
```
training_args = TrainingArguments(
output_dir=repo_name,
group_by_length=True,
length_column_name = 'input_length',
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
gradient_accumulation_steps=1,
evaluation_strategy="steps",
num_train_epochs=10,
fp16=True,
save_steps=500,
eval_steps=500,
logging_steps=500,
learning_rate=5e-4,
warmup_steps=500,
save_total_limit=4,
)
```
then, when I initialize Trainer, it takes like 3G of the GPU memory
```
trainer = Trainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_data,
eval_dataset=test_data,
tokenizer=processor.feature_extractor
)
```
When I hit train, regardless of the batch size, I get out of memory. I have 24G GPU RAM. I don't understand why the memory gone in one second! | 01-25-2022 20:21:58 | 01-25-2022 20:21:58 | cc @anton-l <|||||>Hi @Omarnabk! The model is actually pretrained with an unsupervised objective (it never saw the target transcriptions) which is why there are no official pretrained LM heads available :slightly_smiling_face:
As far as I can see, you're doing everything correctly, but the audio clips in your dataset might be too long for the model to process on a 24G GPU. Try chunking your data into sentences, or filtering speech samples that are larger than about 20sec.
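For illustration, the suggested filtering could look roughly like this with 🤗 Datasets, continuing from the `train_data`/`test_data` variables above (a sketch; the column layout depends on how the dataset was built):

```python
MAX_SECONDS = 20

def is_short_enough(example):
    # duration in seconds = number of samples / sampling rate
    return len(example["audio"]["array"]) / example["audio"]["sampling_rate"] < MAX_SECONDS

train_data = train_data.filter(is_short_enough)
test_data = test_data.filter(is_short_enough)
```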
Let me know if I understood your case correctly :slightly_smiling_face: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,332 | closed | Fix missing eps arg for LayerNorm in ElectraGeneratorPredictions | # What does this PR do?
`ElectraGeneratorPredictions` doesn't specify `eps`
https://github.com/huggingface/transformers/blob/0501beb84601651bf6a44c5a16ab0e5b98948f78/src/transformers/models/electra/modeling_electra.py#L650
but `TFElectraGeneratorPredictions` does
https://github.com/huggingface/transformers/blob/0501beb84601651bf6a44c5a16ab0e5b98948f78/src/transformers/models/electra/modeling_tf_electra.py#L572
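In other words, the PyTorch side presumably just needs the same epsilon passed through. A sketch, not a verbatim diff (the exact attribute names should be checked against the module):

```python
# inside ElectraGeneratorPredictions.__init__
# before: self.LayerNorm = nn.LayerNorm(config.embedding_size)
# after:  self.LayerNorm = nn.LayerNorm(config.embedding_size, eps=config.layer_norm_eps)
```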
This causes differences in the logits/loss between the PT and TF `ElectraForMaskedLM` as high as `1e-3`.
With this PR, the difference is in the range `5e-7 ~ 2e-6`.
This PR also makes the newly added test introduced in #15256 pass without error.
----
It's unclear to me why PyTorch `ElectraGeneratorPredictions` doesn't specify `eps` though. I assume it is a mistake.
I saw many other PyTorch models use this arg for `nn.LayerNorm`
## Question:
Should I add some deprecation warning in this case?
| 01-25-2022 19:56:13 | 01-25-2022 19:56:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for checking all these issues @ydshieh 💯
I've seen a few examples where the TF LayerNorm eps is specified and the PT eps is not, merely because PT's default value is assumed (e.g. in BART: [PT example](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_bart.py#L280); [TF example](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_tf_bart.py#L288)).
But if it is to be loaded from a config file and produces different results, then it seems like a different story -- i.e. this PR makes total sense. @LysandreJik can you confirm (or point to someone who can confirm)? :) |
transformers | 15,331 | closed | Using HuggingFace Models for Text Translation of sensitive data | Hello folks,
Wish you all a belated happy new year! 🎉
First of all, thanks a ton for creating such transformers which have "transformed" the entire NLP landscape. 🤗
### Objective 🎯
I am working on developing a web application that performs multilingual translation, using mBART and transformers, of textual information found in documents - PDFs, Word files, etc. A basic prototype has been implemented [here.](https://github.com/prateekralhan/Multilingual-Translator)
### Question/Doubt ⁉
I am planning to scale this up and use it for handling/processing extremely sensitive data at the enterprise level. My question is about understanding how the APIs store the responses returned by the models hosted on the HF servers.
1. Is the data copied to the backend servers via the browser, and contained in a BytesIO buffer in Python memory (i.e. RAM, not disk) and the data will persist in RAM until my app re-runs from top-to-bottom??
2. Is the data erased from memory when the user uploads another file, replacing the original one or when the user closes the browser tab/exits the web app?
3. Or is there any time period for which the data gets stored in the backend servers and then gets erased post that duration?
Would love to get complete idea on this.
### Who can help
@patrickvonplaten
Sincere apologies from my end should I have unknowingly followed the wrong protocol for raising such a doubt. Please let me know if you need any further information from my end.
Cheers,
Prateek | 01-25-2022 18:20:14 | 01-25-2022 18:20:14 | You're using the inference API, right? cc @Narsil <|||||>Hi @prateekralhan ,
Your demo code doesn't use the API but seems to be using Streamlit, so I guess you envision using Spaces ?
If that's the case, I suggest you look a bit more into the API https://api-inference.huggingface.co/docs/python/html/index.html since Spaces currently has no commercial offering.
For the API.
Currently, all requests are logged in our secure servers.
Access is only granted to API maintainers, to help debug actual bugs (somes bugs are caused by specific requests, so we need to know what they are to trigger them again and fix them). It's also used as a backup for accounting so we can always recount the API use of a given user should the need arise (didn't so far to my knowledge).
The logs are never used for any other reason than that, not sold, not examined, not used as a dataset source, and access is limited to a handful of employees.
That being said, if you plan to use the API extensively, being on a custom plan, you can also asks for logs to not be written on your account. You can reach out to [email protected] (This isn't something you can do self served).
As for the sensitivity of your data, you seem to want extra security, this is something that should better discussed during a call with our teams I think.<|||||>@Narsil and @LysandreJik , thank you for the clear explanation. This answers my doubt! 😄 |
transformers | 15,330 | closed | Deepspeed Wav2vec xlsr bug | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: master
- Platform: ubuntu
- Python version: 3.9
- PyTorch version (GPU?): yes
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: deepspeed
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART: @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@stas00 @patrickvonplaten @anton-l
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below) speech recognition
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) commonvoice 7
* [ ] my own task or dataset: (give details below)
## To reproduce
the deepspeed config is the same as used in the tests in this repo
```bash
0%| | 1/23536 [00:02<19:09:26, 2.93s/it][2022-01-25 18:45:36,939] [INFO] [stage_1_and_2.py:1644:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 65536
0%| | 2/23536 [00:06<19:47:41, 3.03s/it][2022-01-25 18:45:40,036] [INFO] [stage_1_and_2.py:1644:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 65536, reducing to 32768.0
0%| | 5/23536 [00:18<25:50:42, 3.95s/it][2022-01-25 18:45:55,473] [INFO] [stage_1_and_2.py:1644:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 32768.0, reducing to 16384.0
0%| | 7/23536 [00:24<22:18:14, 3.41s/it][2022-01-25 18:45:58,619] [INFO] [stage_1_and_2.py:1644:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 16384.0, reducing to 8192.0
0%| | 8/23536 [00:27<20:39:24, 3.16s/it][2022-01-25 18:46:01,240] [INFO] [stage_1_and_2.py:1644:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 8192.0, reducing to 4096.0
0%| | 10/23536 [00:35<24:11:12, 3.70s/it]{'loss': 0.0, 'learning_rate': 3e-05, 'epoch': 0.0}
0%| | 11/23536 [00:39<25:07:36, 3.85s/it]Traceback (most recent call last):
File "/home/aware/projects/asr/run_speech_recognition_ctc.py", line 742, in <module>
main()
File "/home/aware/projects/asr/run_speech_recognition_ctc.py", line 696, in main
train_result = trainer.train()
File "/home/aware/anaconda3/envs/asr/lib/python3.9/site-packages/transformers/trainer.py", line 1365, in train
tr_loss_step = self.training_step(model, inputs)
File "/home/aware/anaconda3/envs/asr/lib/python3.9/site-packages/transformers/trainer.py", line 1940, in training_step
loss = self.compute_loss(model, inputs)
File "/home/aware/anaconda3/envs/asr/lib/python3.9/site-packages/transformers/trainer.py", line 1972, in compute_loss
outputs = model(**inputs)
File "/home/aware/anaconda3/envs/asr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/aware/anaconda3/envs/asr/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 1588, in forward
loss = self.module(*inputs, **kwargs)
File "/home/aware/anaconda3/envs/asr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/aware/anaconda3/envs/asr/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1755, in forward
loss = nn.functional.ctc_loss(
File "/home/aware/anaconda3/envs/asr/lib/python3.9/site-packages/torch/nn/functional.py", line 2460, in ctc_loss
return torch.ctc_loss(
RuntimeError: CUDA error: an illegal memory access was encountered
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 01-25-2022 17:51:48 | 01-25-2022 17:51:48 | It's very possible the problem is not related to deepspeed as it fails inside `modeling_wav2vec2.py`, but it could be related just as well.
I don't think any of these newly added scripts were ever tested with Deepspeed, so I have no idea whether it's supposed to work or not. I don't know why the tests weren't ported out of `research_projects`, so they never run.
The test that I wrote: `examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py` tests `examples/research_projects/wav2vec2/run_asr.py`, so none of the new `examples/pytorch/speech-*/run*` are being tested with Deepspeed.
It should be very easy to create new tests to exercise the new functionalities created in the the wav2vec2 domain based on the test I wrote by just swapping in the new example scripts to replace `run_asr.py`,and adjusting the cmd line args. At this moment I have zero free time to do that, but if someone tries and runs into problems please ping me and I will try to help. But it should be a trivial task, since the test just verifies that it can train/validate and doesn't do anything fancy. So it's literally changing the example script name and adjusting the cmd line args to adapt for the new scripts.
Remember that anything under `examples/research_projects` is ignored under CI. So you want deepspeed tests outside of `examples/research_projects`.
the only thing I can vouch for is `examples/research_projects/wav2vec2/run_asr.py` since the tests all pass at least on my machine as of this writing with transformers@master.
```
$ RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pyt examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_distributed_zero2_base
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_distributed_zero2_robust
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_distributed_zero3_base
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_distributed_zero3_robust
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_non_distributed_zero2_base
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_non_distributed_zero2_robust
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_non_distributed_zero3_base
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_non_distributed_zero3_robust
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_distributed_zero2_base
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_distributed_zero2_robust
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_distributed_zero3_base
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_distributed_zero3_robust
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_non_distributed_zero2_base
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_non_distributed_zero2_robust
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_non_distributed_zero3_base
PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_non_distributed_zero3_robust
SKIPPED [2] ../../../../../home/stas/anaconda3/envs/py38-pt110/lib/python3.8/unittest/case.py:118: test requires multiple GPUs
```<|||||>I am not using the research folder, the script is from the pytorch/speech-recognition<|||||>That's exactly what I was trying to say. When I ported wav2vec2 to work with Deepspeed I wrote a set of tests to validate it continues working.
When continued work on wav2vec2 was done, those tests weren't adopted to the new scripts. So I have no idea whether the new functionality requires some changes in the model or the error you have encountered has nothing to do with using Deepspeed itself.
Bottom line: let's wait for @anton-l or @patrickvonplaten to follow up since they are the maintainers of this "domain" and perhaps they have encountered this issue outside of Deepspeed.
If not then the new examples need to be tested first under Deepspeed to ensure that the model works. <|||||>It would indeed by very nice to add tests for DeepSpeed and the official speech recognition examples. I think I've kinda dropped the ball here. Thanks a lot for opening the PR - I'll help you through it @flozi00 :-)<|||||>I have found the error.
When I removed apex as fp16 backend everything worked again<|||||>great to hear that you found a solution, @flozi00
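For posterity, switching the backend amounts to something like the following in the training arguments (a sketch; `fp16_backend` is the flag name of that era and newer releases rename it, so verify against your installed version):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./out",
    fp16=True,
    fp16_backend="amp",                 # use native torch.cuda.amp instead of apex
    deepspeed="ds_config_zero2.json",   # path is illustrative
)
```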
perhaps if you could share the failing sd_config and cmd line for posterity? you said staple ds config so I wonder how it was getting activated. Thank you.
Also I'm not even testing apex/deepspeed as it's kind of pointless since amp is better, but perhaps someone with an old pytorch will want it... Perhaps I could test that. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,329 | closed | [WIP][Doctests] Fix rst -> mdx | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
In this PR, the already existing doc tests are enabled. The changes to `quickstart.mdx` show the necessary changes to make the doc tests work.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 01-25-2022 16:26:30 | 01-25-2022 16:26:30 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15329). All of your documentation changes will be reflected on that endpoint.<|||||>@LysandreJik @sgugger, It'd be great if you could take a first look here.
There are essentially two things that I'd like to verify/discuss here:
1. The separation line `===PT-TF-SPLIT===` inside a ```python code block is interpreted as expected output, which then leads the doctest to fail with:
```
Expected: ===PT-TF-SPLIT===
Got: Nothing
```
In this PR I replaced `===PT-TF-SPLIT===` with `>>> # ===PT-TF-SPLIT===` so that the split is considered as a comment by doctests and they pass. I don't really see another way here. @sgugger - could we adapt the doc-builder to correctly parse `>>> # ===PT-TF-SPLIT===` instead of `===PT-TF-SPLIT===`?
2. The second problem is that if the code snippet ends with ``` directly after the last python command, the doctests also interpret this as an expected output. This is why I added a newline before every final ```, which is not really a good idea IMO. I think it might make sense to just do a quick preprocessing pass over all files that we run the doc tests on to add an empty line before the final ``` and then run the doc tests. I think this is better than adding a new line before every ``` in the source, which also doesn't go well with @sgugger's style_doc.py file. Also see the discussion here: https://stackoverflow.com/questions/61163110/python-doctests-embedded-in-readme-md-expected-got-nothing<|||||>Closing this one as it's more or less a duplicate of the one we merged last week and requires the `PT-TF` switch to be solved first. Will open a new one then :-) |
transformers | 15,328 | closed | improve saving strategy of sentencepiece tokenizer | # What does this PR do?
Until now, the slow tokenizers based on sentencepiece needed to access the original files (like `xxx/spiece.model`) that had been used to initialize them whenever we wanted to save them.
Since version `0.1.91` of sentencepiece, there is a new method `serialized_model_proto` which makes it possible to create the sentencepiece model file directly from the python object used by our tokenizer.
This PR proposes to modify all the tokenizers based on sentencepiece to use this new method if the original file(s) are not accessible anymore. A new test also covers this new capability.
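A minimal sketch of the fallback this enables (illustrative only; the helper name and structure are not the actual implementation):

```python
import os
import shutil


def save_sentencepiece_model(sp_model, original_file, save_directory, filename="spiece.model"):
    out_path = os.path.join(save_directory, filename)
    if original_file is not None and os.path.isfile(original_file):
        # the file used at init time still exists: keep the old behaviour and copy it
        if os.path.abspath(original_file) != os.path.abspath(out_path):
            shutil.copyfile(original_file, out_path)
    else:
        # original file gone: rebuild spiece.model from the in-memory sentencepiece processor
        with open(out_path, "wb") as f:
            f.write(sp_model.serialized_model_proto())
    return out_path
```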
## Additional comments
In this PR I also modified:
- `BartphoTokenizer` so that some special tokens are not hardcoded anymore
- `M2M100Tokenizer` and `MarianTokenizer`, so that their saving method looks more like the other tokenizers
## Motivation
I think that, when possible, it is good to be able to save our object even if some files used during initialization do not exist anymore.
Moreover, this addition makes it easier to create other tests (like the test of [this PR](https://github.com/huggingface/transformers/pull/15319)).
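Concretely, the behaviour this enables looks like this (sketch):

```python
from transformers import T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
# even if the spiece.model file that initialized the tokenizer has since been deleted,
# saving should now regenerate it from the serialized in-memory model
tok.save_pretrained("./my-t5-tokenizer")
```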
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. I would in particular love to read your thoughts @LysandreJik or @sgugger
| 01-25-2022 15:39:59 | 01-25-2022 15:39:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,327 | closed | Push to hub save | # What does this PR do?
This PR changes the documentation of the `push_to_hub` training argument to be in sync with the actual behavior, and also makes sure that when `push_to_hub=True`, a push is done every time the model is saved.
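A minimal usage sketch of the documented behaviour (argument values are just examples):

```python
from transformers import TrainingArguments

# with push_to_hub=True, every checkpoint save (here: once per epoch) also triggers a push
args = TrainingArguments(
    output_dir="my-model",
    push_to_hub=True,
    save_strategy="epoch",
)
```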
Fixes #15313 | 01-25-2022 15:02:35 | 01-25-2022 15:02:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I think you're missing the fact that, apart from a push asked explicitly with `trainer.push_to_hub()` (or now `trainer.save_model()`) all other pushes are only done if the previous one is completely finished (see [here](https://github.com/huggingface/transformers/blob/637e81752aad01738d36ef816fadee21fa392317/src/transformers/trainer.py#L2712)). So, there are no frequent pushes (except if the user asks for it by putting `trainer.save_model()` in a loop but I don't think that will happen).<|||||>Then that's perfect :) <|||||>Ugh, sorry @osanseviero I was convinced I had accepted your suggestions before merging (must have skipped one click), will make a new commit with you as Co-author... |
transformers | 15,326 | closed | Improve DistilBert attn mask | # What does this PR do?
The current (PT) `DistilBert` uses `-float("inf")` in its self-attention layer:
```
scores = scores.masked_fill(mask, -float("inf"))  # (bs, n_heads, q_length, k_length)
```
When a sequence contains all 0s as its attention mask, this gives `-inf` and `nan` for `scores` and `weights` respectively (for that sequence) in the following block. This will cause the loss to be `nan` if labels are passed.
_(It's very unlikely that a user will have examples with all 0s as attention mask in a sequence. But we have these cases generated in some tests.)_
https://github.com/huggingface/transformers/blob/637e81752aad01738d36ef816fadee21fa392317/src/transformers/models/distilbert/modeling_distilbert.py#L207-L209
`TFDistilBert` uses a large but finite `-1e30`, which prevents `nan`.
PyTorch `BERT` uses `-1e9` or `-1e4`:
https://github.com/huggingface/transformers/blob/637e81752aad01738d36ef816fadee21fa392317/src/transformers/modeling_utils.py#L239-L242
TF `BERT` uses `-1e4`:
https://github.com/huggingface/transformers/blob/637e81752aad01738d36ef816fadee21fa392317/src/transformers/models/bert/modeling_tf_bert.py#L854
This PR adopts `BertModel`'s way of applying the attention mask to avoid the inconsistency and the failing test (introduced in #15256).
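For illustration, a minimal sketch of the additive-mask approach used by `BertModel` (not the exact patch):

```python
import torch


def additive_attention_mask(attention_mask: torch.Tensor, dtype=torch.float32) -> torch.Tensor:
    # attention_mask: (batch, seq_len) with 1 for real tokens and 0 for padding
    mask = attention_mask[:, None, None, :].to(dtype)  # (batch, 1, 1, seq_len)
    return (1.0 - mask) * -1e4  # large-but-finite negative bias where masked


# scores = scores + additive_attention_mask(attention_mask)  # instead of masked_fill(..., -inf)
```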
## Code snippet
```
import torch
from transformers import DistilBertModel
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
device = "cpu"
input_ids = torch.tensor([[1, 1, 1]]).to(device)
# attention_mask = torch.tensor([[1, 1, 1]]).to(device)
# Use all `0` for `attention_mask`
attention_mask = torch.tensor([[0, 0, 0]]).to(device)
inputs = {"input_ids": input_ids, "attention_mask": attention_mask}
outputs = model(**inputs)
print(outputs)
```
### Output with master
```
BaseModelOutput(last_hidden_state=tensor([[[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]]],
grad_fn=<NativeLayerNormBackward0>),
```
### Output with this PR
```
BaseModelOutput(last_hidden_state=tensor([[[ 0.2162, -0.0174, 0.2730, ..., 0.1114, 0.2780, -0.1252],
[ 0.2946, 0.0555, 0.2918, ..., 0.1332, 0.3355, -0.2091],
[ 0.3039, 0.0415, 0.2762, ..., 0.1108, 0.3250, -0.1843]]], ...)
```
As shown above, current master gives all `nan` values, while this PR produces finite hidden states.
## Future Improvement
It might be a good idea to handle all these `-1e4`/`-1e9`/`-1e30`/`-inf` things in a unified way across the models & frameworks (in a future PR). See #14859
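One possible shape for such a unified helper (sketch only):

```python
import torch


def mask_fill_value(dtype: torch.dtype) -> float:
    # smallest representable value of the computation dtype, instead of a hard-coded
    # -1e4 / -1e9 / -1e30 / -inf, so fp16, bf16 and fp32 behave consistently
    return torch.finfo(dtype).min
```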
| 01-25-2022 15:01:26 | 01-25-2022 15:01:26 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15326). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This could be treated in a future PR that will work with all models/frameworks. |
transformers | 15,325 | closed | OSError: You seem to have cloned a repository without having git-lfs installed. Please install git-lfs and run `git lfs install` followed by `git lfs pull` in the folder you cloned. | Based on [SO post](https://stackoverflow.com/q/70850015/17840900).
I'm using Jupyter Labs on AWS SageMaker.
Kernel: `conda_pytorch_p36` and did Restart & Run All.
I `git cloned` this [repo](https://huggingface.co/textattack/albert-base-v2-MRPC/tree/main).
Attempt at installing `git-lfs`:
```
!curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.rpm.sh | sudo bash
!sudo yum install git-lfs -y
!git lfs install
```
Running `git lfs fetch` or `git lfs pull` afterwards doesn't change the Traceback.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained('albert-base-v2-MRPC')
```
Traceback:
```
---------------------------------------------------------------------------
UnpicklingError Traceback (most recent call last)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1363 try:
-> 1364 state_dict = torch.load(resolved_archive_file, map_location="cpu")
1365 except Exception as e:
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
592 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
--> 593 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
594
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/serialization.py in _legacy_load(f, map_location, pickle_module, **pickle_load_args)
761
--> 762 magic_number = pickle_module.load(f, **pickle_load_args)
763 if magic_number != MAGIC_NUMBER:
UnpicklingError: invalid load key, 'v'.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-15-34a92ef6f41b> in <module>
2
3 # load model
----> 4 model = AutoModelForSequenceClassification.from_pretrained(configs.output_dir) # "textattack/albert-base-v2-MRPC"
5 #model = AlbertForSequenceClassification.from_pretrained(configs.output_dir)
6 model.to(configs.device)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
439 elif type(config) in cls._model_mapping.keys():
440 model_class = _get_model_class(config, cls._model_mapping)
--> 441 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
442 raise ValueError(
443 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1368 if f.read().startswith("version"):
1369 raise OSError(
-> 1370 "You seem to have cloned a repository without having git-lfs installed. Please install "
1371 "git-lfs and run `git lfs install` followed by `git lfs pull` in the folder "
1372 "you cloned."
OSError: You seem to have cloned a repository without having git-lfs installed. Please install git-lfs and run `git lfs install` followed by `git lfs pull` in the folder you cloned.
---------------------------------------------------------------------------
UnpicklingError Traceback (most recent call last)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1363 try:
-> 1364 state_dict = torch.load(resolved_archive_file, map_location="cpu")
1365 except Exception as e:
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
592 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
--> 593 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
594
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/serialization.py in _legacy_load(f, map_location, pickle_module, **pickle_load_args)
761
--> 762 magic_number = pickle_module.load(f, **pickle_load_args)
763 if magic_number != MAGIC_NUMBER:
UnpicklingError: invalid load key, 'v'.
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-15-34a92ef6f41b> in <module>
2
3 # load model
----> 4 model = AutoModelForSequenceClassification.from_pretrained(configs.output_dir) # "textattack/albert-base-v2-MRPC"
5 #model = AlbertForSequenceClassification.from_pretrained(configs.output_dir)
6 model.to(configs.device)
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
439 elif type(config) in cls._model_mapping.keys():
440 model_class = _get_model_class(config, cls._model_mapping)
--> 441 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
442 raise ValueError(
443 f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1368 if f.read().startswith("version"):
1369 raise OSError(
-> 1370 "You seem to have cloned a repository without having git-lfs installed. Please install "
1371 "git-lfs and run `git lfs install` followed by `git lfs pull` in the folder "
1372 "you cloned."
OSError: You seem to have cloned a repository without having git-lfs installed. Please install git-lfs and run `git lfs install` followed by `git lfs pull` in the folder you cloned.
```
**albert-base-v2-MRPC/**
```
config.json log.txt pytorch_model.bin README.md special_tokens_map.json spiece.model tokenizer_config.json train_args.json
```
Please let me know if there's anything else I can add to post. | 01-25-2022 14:01:00 | 01-25-2022 14:01:00 | I've now **installed and initialised GIT LFS in cloned folder**.
Terminal:
```
sh-4.2$ git lfs install
Git LFS initialized.
sh-4.2$ git clone https://huggingface.co/textattack/albert-base-v2-MRPC
Cloning into 'albert-base-v2-MRPC'...
remote: Enumerating objects: 27, done.
remote: Counting objects: 100% (27/27), done.
remote: Compressing objects: 100% (25/25), done.
remote: Total 27 (delta 7), reused 0 (delta 0)
Unpacking objects: 100% (27/27), done.
sh-4.2$ cd albert-base-v2-MRPC/
sh-4.2$ git lfs install
Updated git hooks.
Git LFS initialized.
sh-4.2$
```
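One quick way to confirm the weights were actually pulled is to peek at the first bytes of the checkpoint (sketch; the path is assumed from the clone above):

```python
# a repo cloned without git-lfs leaves pytorch_model.bin as a tiny text pointer file
with open("albert-base-v2-MRPC/pytorch_model.bin", "rb") as f:
    head = f.read(60)
print(head)
# pointer files start with b"version https://git-lfs.github.com/spec/v1";
# after `git lfs pull` this should instead be binary data and the file several hundred MB
```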
|
transformers | 15,324 | closed | [Tests] Fix test | # What does this PR do?
This PR fixes the failing SwinModelIntegrationTest.test_inference_image_classification_head.
It also makes sure the appropriate device is used in one of ViLT's integration tests, although this test still fails due to the non-deterministic behaviour of the model. Setting torch.manual_seed(2) didn't help (it passes locally for me, but not on another machine).
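For reference, a sketch of the usual way to pin down randomness in such a test (this may not be enough if the model itself is non-deterministic):

```python
from transformers import set_seed

set_seed(2)  # seeds Python's random, NumPy and torch (including CUDA) in one call
```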
Also removes a print statement in ViTMAE's test file. | 01-25-2022 13:49:30 | 01-25-2022 13:49:30 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,323 | closed | MarianMT models translating valid Chinese sentences to empty string | ## Environment info
- `transformers` version: 4.15.0
- Platform: Linux-3.10.0-1160.53.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyTorch version (GPU?): 1.10.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
cc @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Helsinki-NLP/opus-mt-zh-en
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run script
2. Observe equivalent English sentences are empty
3. Profit
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from tqdm import tqdm
def chunk(it, batch_size=32):
for i in range(0, len(it), batch_size):
yield it[i:i+batch_size]
if __name__ == "__main__":
# A few of these sentences are "weird" (one Japanese, one pinyin, several have emoji), but I think the model should be robust to OOV...
comments = """ ☆☆☆☆☆——《大圣归来》,五星好评,本来剧情方面较弱,情感没有打动我的片子并不值得五星,但是是国产动画,居然是国产动画,足以与皮克斯迪斯尼分庭抗礼的国产动画,妥妥的。小女孩挺可爱,可惜片子的最后并没有后续的发展。
✺◟(∗❛ัᴗ❛ั∗)◞✺喜欢科幻片✺◟(∗❛ัᴗ❛ั∗)◞✺喜欢喜欢喜欢
ストーリー:★★★★、オリジナリティ:★★★★、作画:★★★★★、演出:★★★★☆、キャラクター:★★★★、声優:★★★★、音楽:★★★☆、歌:★★★★☆。
狐兔大法好( • ̀ω•́ )✧
力赞国产动画良心出品。
和平时代音乐剧版卡萨布兰卡(ง •̀_•́)ง
恭喜彭于晏终于成长为能够与张涵予相媲美的台湾第一MAN!
wojiushibuxiangkandaonaocanshuijunshuachulaidefen
盾冬嘤嘤嘤~spider boy ant-man都好可愛~😍😎😁我就静静地看着teamiron还有teamcap打架。。
一部合格的主旋律正能量公安部电影,票房已经要破十亿了也是棒棒的~PS:为啥要杀了我的哮天犬啊呜呜呜...再另,有木有人跟我一样,觉得这部戏中彭于晏的改装扮相怪怪的...跟前几部林超贤片子中荷尔蒙爆棚的感觉差多了
☆☆☆☆——《小时代3》,当一部电影已经成为一个现象,它的分数总会比较奇葩。是绚丽的画面、华丽的衣服和水嫩的妹子们让我在惊艳的同时觉得自己原来还是这么肤浅的人啊。给这么高的分数一部分是为了平衡那些一星党,另一方面是给郭碧婷妹子,黑长直,太漂亮了!其他的全都黯然失色了!!
⌒/º●這素硪看過旳最棒旳★慶春★電影,↗仿佛回到了那個▲肥豬流▼時代,☆狠美好☆狠懷念,♀我悶的慶春你們卟動,■何老師卟是為了錢,※是情懷你們不懂毬你們不要瞰不要侮辱牠了✔
☆☆☆——《后会无期》,本来只有三星,为了韩寒加一星。偶有佳句,未有佳篇。看电影跟看韩寒的小说一模一样的。女演员们都很漂亮,尤其是王珞丹么么哒。最后的结局依然不明白是真是假。钟汉良的角色真是神来之笔,虽然我一直在期待他会把车还给主角。
imax效果好到爆……陈坤黄渤都是演技派!""".splitlines()
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en").to('cuda:0')
translations = []
for batch in tqdm(chunk(comments, batch_size=32)):
comments_tokenized = tokenizer(batch, return_tensors='pt', padding=True).to('cuda:0')
en_comments = model.generate(**comments_tokenized)
for comment in en_comments:
translations.append(tokenizer.decode(comment, skip_special_tokens=True))
for original, translation in zip(comments, translations):
print(original, translation)
```
## Expected behavior
Sentences should be translated | 01-25-2022 11:51:11 | 01-25-2022 11:51:11 | Hey @erip,
I can reproduce the problem! To me it seems to be a modeling issue. I've adapted the code snippet a bit so that we can clearly see the actual tokens that are generated:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from tqdm import tqdm
def chunk(it, batch_size=32):
for i in range(0, len(it), batch_size):
yield it[i:i+batch_size]
if __name__ == "__main__":
# A few of these sentences are "weird" (one Japanese, one pinyin, several have emoji), but I think the model should be robust to OOV...
comments = """ ☆☆☆☆☆——《大圣归来》,五星好评,本来剧情方面较弱,情感没有打动我的片子并不值得五星,但是是国产动画,居然是国产动画,足以与皮克斯迪斯尼分庭抗礼的国产动画,妥妥的。小女孩挺可
爱,可惜片子的最后并没有后续的发展。
✺◟(∗❛ัᴗ❛ั∗)◞✺喜欢科幻片✺◟(∗❛ัᴗ❛ั∗)◞✺喜欢喜欢喜欢
ストーリー:★★★★、オリジナリティ:★★★★、作画:★★★★★、演出:★★★★☆、キャラクター:★★★★、声優:★★★★、音楽:★★★☆、歌:★★★★☆。
狐兔大法好( • ̀ω•́ )✧
力赞国产动画良心出品。
和平时代音乐剧版卡萨布兰卡(ง •̀_•́)ง
恭喜彭于晏终于成长为能够与张涵予相媲美的台湾第一MAN!
wojiushibuxiangkandaonaocanshuijunshuachulaidefen
盾冬嘤嘤嘤~spider boy ant-man都好可愛~😍😎😁我就静静地看着teamiron还有teamcap打架。。
一部合格的主旋律正能量公安部电影,票房已经要破十亿了也是棒棒的~PS:为啥要杀了我的哮天犬啊呜呜呜...再另,有木有人跟我一样,觉得这部戏中彭于晏的改装扮相怪怪的...跟前几部林超贤片子中荷尔蒙爆棚
的感觉差多了
☆☆☆☆——《小时代3》,当一部电影已经成为一个现象,它的分数总会比较奇葩。是绚丽的画面、华丽的衣服和水嫩的妹子们让我在惊艳的同时觉得自己原来还是这么肤浅的人啊。给这么高的分数一部分是为了平衡那些
一星党,另一方面是给郭碧婷妹子,黑长直,太漂亮了!其他的全都黯然失色了!!
⌒/º●這素硪看過旳最棒旳★慶春★電影,↗仿佛回到了那個▲肥豬流▼時代,☆狠美好☆狠懷念,♀我悶的慶春你們卟動,■何老師卟是為了錢,※是情懷你們不懂毬你們不要瞰不要侮辱牠了✔
☆☆☆——《后会无期》,本来只有三星,为了韩寒加一星。偶有佳句,未有佳篇。看电影跟看韩寒的小说一模一样的。女演员们都很漂亮,尤其是王珞丹么么哒。最后的结局依然不明白是真是假。钟汉良的角色真是神来
之笔,虽然我一直在期待他会把车还给主角。
imax效果好到爆……陈坤黄渤都是演技派!""".splitlines()
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en").to('cuda:0')
translations = []
translations_raw = []
for batch in tqdm(chunk(comments, batch_size=32)):
comments_tokenized = tokenizer(batch, return_tensors='pt', padding=True).to('cuda:0')
en_comments = model.generate(**comments_tokenized)
for comment in en_comments:
translations.append(tokenizer.decode(comment))
for original, translation in zip(comments, translations):
print(100 * "=")
print("Original", original)
print("Translation", translation)
```
Running this script gives:
<details>
<summary>Output</summary>
```
====================================================================================================
Original ☆☆☆☆☆——《大圣归来》,五星好评,本来剧情方面较弱,情感没有打动我的片子并不值得五星,但是是国产动画,居然是国产动画,足以与皮克斯迪斯尼分庭抗礼的国产动画,妥妥的。小女孩挺可爱,可惜片子的最后并没有后续的发展。
Translation <pad> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇
⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇
⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇
====================================================================================================
Original ✺◟(∗❛ัᴗ❛ั∗)◞✺喜欢科幻片✺◟(∗❛ัᴗ❛ั∗)◞✺喜欢喜欢喜欢
Translation <pad> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ <pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><p
ad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pa
d><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad
><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
<pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
====================================================================================================
Original ストーリー:★★★★、オリジナリティ:★★★★、作画:★★★★★、演出:★★★★☆、キャラクター:★★★★、声優:★★★★、音楽:★★★☆、歌:★★★★☆。
Translation <pad> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ <pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
<pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><
pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><p
ad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pa
d><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
====================================================================================================
Original 狐兔大法好( • ̀ω•́ )✧
Translation <pad> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ <pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pa
d><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
<pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><
pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
====================================================================================================
Original 力赞国产动画良心出品。
Translation <pad> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ <pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><
pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pa
d><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
====================================================================================================
Original 和平时代音乐剧版卡萨布兰卡(ง •̀_•́)ง
Translation <pad> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ <pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
<pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><
pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
====================================================================================================
Original 恭喜彭于晏终于成长为能够与张涵予相媲美的台湾第一MAN!
Translation <pad> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ <pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
<pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><
pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pa
d><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
====================================================================================================
Original wojiushibuxiangkandaonaocanshuijunshuachulaidefen
Translation <pad> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ <pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad
><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
<pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><
pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><p
ad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
====================================================================================================
Original 盾冬嘤嘤嘤~spider boy ant-man都好可愛~😍😎😁我就静静地看着teamiron还有teamcap打架。。
Translation <pad> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ <pad><pad><pad><pad><pad><pad><
pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><p
ad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pa
d><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad
><pad><pad><pad><pad>
====================================================================================================
Original 一部合格的主旋律正能量公安部电影,票房已经要破十亿了也是棒棒的~PS:为啥要杀了我的哮天犬啊呜呜呜...再另,有木有人跟我一样,觉得这部戏中彭于晏的改装扮相怪怪的...跟前几部林超贤片子中荷尔蒙爆棚的感觉差多了
Translation <pad> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ <pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pa
d><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad
><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
<pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><
pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
====================================================================================================
Original ☆☆☆☆——《小时代3》,当一部电影已经成为一个现象,它的分数总会比较奇葩。是绚丽的画面、华丽的衣服和水嫩的妹子们让我在惊艳的同时觉得自己原来还是这么肤浅的人啊。给这么高的分数一部分是为了平衡那些一星党,另一方面是给郭碧婷妹子,黑长直,太漂亮了!其他的全都黯然失色了!!
Translation <pad> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ <pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><
pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><p
ad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pa
d><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad
><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
====================================================================================================
Original ⌒/º●這素硪看過旳最棒旳★慶春★電影,↗仿佛回到了那個▲肥豬流▼時代,☆狠美好☆狠懷念,♀我悶的慶春你們卟動,■何老師卟是為了錢,※是情懷你們不懂毬你們不要瞰不要侮辱牠了✔
Translation <pad> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ <pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad
><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
<pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><
pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><p
ad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
====================================================================================================
Original ☆☆☆——《后会无期》,本来只有三星,为了韩寒加一星。偶有佳句,未有佳篇。看电影跟看韩寒的小说一模一样的。女演员们都很漂亮,尤其是王珞丹么么哒。最后的结局依然不明白是真是假。钟汉良的角色真是神来之笔,虽然我一直在期待他会把车还给主角。
Translation <pad> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇
⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇
⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇
====================================================================================================
Original imax效果好到爆……陈坤黄渤都是演技派!
Translation <pad> ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ <pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad
><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
```
</details>
So we can see that essentially a lot of `??` are generated which seems to be the problem<|||||>`??` corresponds to the `unk_token_id` which seems to be the problem here. I'm not sure if we can do too much about this as it seems to be a modeling problem. We could try to use a different generation scheme (other beam_size, ...). Also gently pinging the author of the MarianMT models here @jorgtied to check if he has any ideas :-)<|||||>Yes, _everything_ being unk is somewhat surprising. It makes me wonder if there's some pretokenization that might better match the way the model was trained originally (e.g., with jieba)?
Thanks for digging, @patrickvonplaten!<|||||>Yeah not 100% sure either - hopefully @jorgtied has some pointers we could further look at<|||||>Good question and hard to say what really happens. One thing is that the model is trained to translate individual sentences and not arbitrary text snippets. You would need at least to divide the input into sentence-like units. Possibly some punctuation also influences the model quite a lot.
Another thing is that the training data is not perfect and the model might have some weird behavior with certain kind of input. Example translations are here and I cannot really judge how well it works even with those simple short sentences (https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.test.txt).
About preprocessing: There is no tokenization before training but some smaller generic cleanup is done to avoid problems. To be honest, I probably have to double-check those procedures a bit more in order to make sure that they don't break anything for specific languages. You could try to preprocess your data with some simple regexes like those ones:
```
sed -e 's/,/,/g' \
-e 's/。 */. /g' \
-e 's/、/,/g' \
-e 's/”/"/g' \
-e 's/“/"/g' \
-e 's/∶/:/g' \
-e 's/:/:/g' \
-e 's/?/\?/g' \
-e 's/《/"/g' \
-e 's/》/"/g' \
-e 's/)/\)/g' \
-e 's/!/\!/g' \
-e 's/(/\(/g' \
-e 's/;/;/g' \
-e 's/1/"/g' \
-e 's/」/"/g' \
-e 's/「/"/g' \
-e 's/0/0/g' \
-e 's/3/3/g' \
-e 's/2/2/g' \
-e 's/5/5/g' \
-e 's/6/6/g' \
-e 's/9/9/g' \
-e 's/7/7/g' \
-e 's/8/8/g' \
-e 's/4/4/g' \
-e 's/. */. /g' \
-e 's/~/\~/g' \
-e "s/’/\'/g" \
-e 's/…/\.\.\./g' \
-e 's/━/\-/g' \
-e 's/〈/\</g' \
-e 's/〉/\>/g' \
-e 's/【/\[/g' \
-e 's/】/\]/g' \
-e 's/%/\%/g' |
perl -C -pe 's/(?!\n)\p{C}/ /g;' |
perl -CIOE -pe 's/[\x{2060}\x{200B}\x{feff}]//g' |\
sed 's/ */ /g;s/^ *//g;s/ *$//g'
```
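For anyone preprocessing in Python rather than a shell pipeline, a rough equivalent of a few of the rules above (untested against the full training setup):

```python
import re

PUNCT_MAP = {",": ",", "。": ". ", "、": ",", """: '"', """: '"', ":": ":",
             "?": "?", "《": '"', "》": '"', "(": "(", ")": ")", "!": "!",
             ";": ";", "~": "~", "…": "..."}


def normalize_punct(text: str) -> str:
    for src, tgt in PUNCT_MAP.items():
        text = text.replace(src, tgt)
    text = re.sub(r"[\u2060\u200b\ufeff]", "", text)  # zero-width characters, as in the perl step
    return re.sub(r"\s+", " ", text).strip()
```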
I would be interested in hearing whether any of this helps.<|||||>To quickly follow-up on my own comment above: The generic preprocessing pipeline (with scripts from moses also available at https://github.com/marian-nmt/moses-scripts/tree/master/scripts/tokenizer) that I apply is:
```
replace-unicode-punctuation.perl |\
remove-non-printing-char.perl |\
deescape-special-chars.perl |\
perl -CS -pe 'tr[\x{9}\x{A}\x{D}\x{20}-\x{D7FF}\x{E000}-\x{FFFD}\x{10000}-\x{10FFFF}][]cd;' |\
perl -CIOE -pe 's/[\x{2060}\x{200B}\x{feff}]//g' |\
perl -CS -pe 's/\&\s*\#\s*160\s*\;/ /g'
```
Some of this might be unnecessary. I need to re-check all of this at some point ...<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Steps to reproduce:
```python
from transformers import MarianTokenizer
tokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
dst_texts = ['我的朋友有策床胡人']
dst_inputs = tokenizer(dst_texts, return_tensors='np')
print(tokenizer.convert_ids_to_tokens(dst_inputs['input_ids'][0], skip_special_tokens=True))
# output: ['▁']
```
Solution:
```sh
wget https://huggingface.co/Helsinki-NLP/opus-mt-en-zh/resolve/7b05ad2cd70ad11863b0fffc3327c13a26ab4f8a/source.spm
wget https://huggingface.co/Helsinki-NLP/opus-mt-en-zh/resolve/7b05ad2cd70ad11863b0fffc3327c13a26ab4f8a/target.spm
```
```python
from transformers import MarianTokenizer
tokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-zh', source_spm='target.spm')
dst_texts = ['我的朋友有策床胡人']
dst_inputs = tokenizer(dst_texts, return_tensors='np')
print(tokenizer.convert_ids_to_tokens(dst_inputs['input_ids'][0], skip_special_tokens=True))
# output: ['▁', '我的朋友', '有', '策', '床', '胡', '人']
```
For English, change to `source_spm='source.spm'`. |
transformers | 15,322 | closed | modeling_visual_bert in case of self.bypass_transformer=True | https://github.com/huggingface/transformers/blob/05fa1a7ac17bb7aa07b9e0c1e138ecb31a28bbfe/src/transformers/models/visual_bert/modeling_visual_bert.py#L830 The previous line should be changed from encoded_output --> encoder_outputs so that the return statement on line 861 works. Also, the following line https://github.com/huggingface/transformers/blob/05fa1a7ac17bb7aa07b9e0c1e138ecb31a28bbfe/src/transformers/models/visual_bert/modeling_visual_bert.py#L828 has an issue with indexing in dimension 2 (this will require more testing).
| 01-25-2022 09:29:41 | 01-25-2022 09:29:41 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,321 | closed | [Do not merge] Chase more pt tf inconsistency | # What does this PR do?
| 01-25-2022 09:17:31 | 01-25-2022 09:17:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15321). All of your documentation changes will be reflected on that endpoint.<|||||>In terms of running time
- Circle CI
- current : [56.11s](https://circleci.com/api/v1.1/project/github/huggingface/transformers/384755/output/112/0?file=true&allocation-id=62238c1d4f148c1110a1efa8-0-build%2F1610CB69)
- this PR : [61.21s](https://circleci.com/api/v1.1/project/github/huggingface/transformers/384731/output/112/0?file=true&allocation-id=622385347ccc3f3894ec7697-0-build%2F351FF724) |
transformers | 15,320 | closed | Fix `bad_word_ids` not working with sentencepiece-based tokenizers | # What does this PR do?
This fixes the problem that models using sentencepiece-based tokenizers cannot prevent bad words when generating.
For sentencepiece-based tokenizers like T5Tokenizer, when creating `bad_words_ids` from `bad_words`, `add_special_tokens` must be set to False.
## Code to reproduce
```python
from transformers import T5Tokenizer, AutoModelForCausalLM, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("t5-small", use_fast=False)
model = T5ForConditionalGeneration.from_pretrained("t5-small")
bad_words = ["my", "will", "My", "you", "are", "I", "You", "it"] # words should not be generated
bad_words_ids = tokenizer(bad_words, add_prefix_space=True, add_special_tokens=False).input_ids # get bad words ids
input_context = "You are my friend"
# encode input context
input_ids = tokenizer(input_context, return_tensors="pt").input_ids
outputs = model.generate(input_ids=input_ids, max_length=20, do_sample=True, bad_words_ids=bad_words_ids, num_return_sequences=3)
gen_texts = tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Who can review?
@patrickvonplaten @LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-25-2022 08:22:34 | 01-25-2022 08:22:34 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15320). All of your documentation changes will be reflected on that endpoint. |
transformers | 15,319 | closed | fix the `tokenizer_config.json` file for the slow tokenizer when a fast version is available | # What does this PR do?
Following the diagnosis discussed and validated in the issue #15283, this PR proposes to modify `PreTrainedTokenizerBase` so that the `tokenizer_file` is no longer retrieved if the calling tokenizer class is of a slow type.
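As a rough illustration of the intended behaviour (a sketch only, not the actual diff; identifying slow classes by the absence of a `Fast` suffix is an assumption for this example):
```python
# Sketch: drop the fast-tokenizer file from what a slow tokenizer tries to fetch.
def files_to_resolve(tokenizer_cls, vocab_files_names):
    files = dict(vocab_files_names)
    if not tokenizer_cls.__name__.endswith("Fast"):   # assumed naming convention
        files.pop("tokenizer_file", None)             # slow tokenizers never need tokenizer.json
    return files


from transformers import BertTokenizer, BertTokenizerFast

names = {"vocab_file": "vocab.txt", "tokenizer_file": "tokenizer.json"}
print(files_to_resolve(BertTokenizer, names))      # {'vocab_file': 'vocab.txt'}
print(files_to_resolve(BertTokenizerFast, names))  # keeps tokenizer.json
```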
This PR also contains different changes:
- remove the key `"tokenizer_file"` from the global variables such as `VOCAB_FILES_NAMES` when it is a slow version or add it to the fast version when it was missing
- remove the `tokenizer_file` argument from the `__init__` of some slow tokenizers
- adapt the `test_tokenizer_mismatch_warning` test because now when someone tries to load files with the wrong tokenizer an error can be returned before the warning is run
- add a new test
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Would love to have your feedbacks @LysandreJik and @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-24-2022 18:36:43 | 01-24-2022 18:36:43 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you so much for your review @sgugger !
Could you tell me more about why "I'm not entirely sure we should remove the filed from the global variables XXX_VOCAB_FILES_MAP as it would be breaking", I'm afraid I'm missing something. (Note, I'm only proposing to remove it from the global variables of the slow version of DPR - which also has a fast version - and from one of the possible slow tokenizer template)
About the signature of the slow tokenizer, this change was mostly about standardizing the code between the different slow tokenizer classes. This change concerns:
- `mbart`: if we leave `tokenizer_file` in the signature of mbart, potentially if this argument is given at the time of the initialization of the object, the info could be saved in the `tokenizer_config.json` (and result in the same problem as the one pointed out in the issue).
- `herbert`: it's only for standardization. Here, as the argument isn't passed to the `__init__` of the super class (`PreTrainedTokenizer`), it can't be saved in the `tokenizer_config.json`.
- all the other slow tokenizers don't have `tokenizer_file` in their signature<|||||>You are removing content from a public constant, that is a breaking change. Same for changing the signature of tokenizers. I understand that for the second part, it could lead to bugs, so ok to break if it fixes something, but for the first change that is purely cosmetic, maybe we should avoid breaking?
cc @LysandreJik let us know what you think.<|||||>I understand your point! I still have a little trouble knowing where to draw the line between a bugfix and a breaking change.<|||||>Agreed with @sgugger, but otherwise this looks like a very welcome change.<|||||>@sgugger, @LysandreJik , as adviced I have reverted my changes concerning global variables in slow files and changing signatures of the 2 slow tokeniers. :slightly_smiling_face: |
transformers | 15,318 | closed | Fixing support `batch_size` and `num_return_Sequences` in `text-generation` pipeline | # What does this PR do?
And `text2text-generation` too.
The bug was caused by the batch_size containing both the incoming batch
**and** the generated `num_sequences`.
The fix simply consists of splitting both of these back into
different dimensions.
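As an illustration of the reshape described above (a sketch with dummy tensors, not the actual pipeline code):
```python
import torch

batch_size, num_return_sequences, seq_len = 2, 3, 5
# generate() hands back a flat (batch_size * num_return_sequences, seq_len) tensor ...
generated = torch.zeros(batch_size * num_return_sequences, seq_len, dtype=torch.long)
# ... so the pipeline has to split the two dimensions apart again before post-processing
per_input = generated.reshape(batch_size, num_return_sequences, *generated.shape[1:])
assert per_input.shape == (batch_size, num_return_sequences, seq_len)
```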
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #15316
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 01-24-2022 18:28:10 | 01-24-2022 18:28:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Friendly ping @LysandreJik |
transformers | 15,316 | closed | GPT-Neo batch inferencing with sampling results unexpected output | ## Environment info
- `transformers` version: 4.13.0
- Platform: Linux-5.4.0-96-generic-x86_64-with-glibc2.17
- Python version: 3.8.11
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@cccntu @patil-suraj
Models:
GPT-Neo
Library:
- Pipelines: @Narsil
## Information
I want to speed up the text generation work by using batching, and at the same time generate more text by using sampling.
But the results are abnormal.
## To reproduce
Steps to reproduce the behavior:
```python
texts = [
'Have a line of communication. You have two lines of communication.',
'Wanting this is bad. Tell me to go ahead.',
"I found a colony of bats in the steeple of St. Olaf's church while you were dating my brother.",
"Fight so you don't have to do it again"
]
model_name = 'EleutherAI/gpt-neo-1.3B'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, pad_token_id=tokenizer.eos_token_id)
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device=0)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = 'left'
def test_batch(texts, batch_size=1):
results = pipe(
texts, batch_size=batch_size,
max_length=50,
pad_token_id=tokenizer.eos_token_id,
repetition_penalty=2.0,
do_sample = True,
num_return_sequences = 8,
)
results = [ri['generated_text'] for r in results for ri in r]
return results
test_batch(texts, 4)
'''
['Have a line of communication. You have two lines of communication. One being when somebody is on the way to you or your home and another is over there, waiting for an answer. You want',
'Wanting this is bad. Tell me to go ahead.o lines of communication. The first is the person who gave birth to you. The other is the person who has been listening to you and observing you for',
"I found a colony of bats in the steeple of St. Olaf's church while you were dating my brother.can give you the results in your report. Second, you need to understand the results or we are done",
'Fight so you don\'t have to do it again two lines of communication. The first is the "official"\nsocial media account and the other is a personal one. Most organizations, government entities,\n']
'''
```
## Expected behavior
Generate normal text.
| 01-24-2022 15:22:48 | 01-24-2022 15:22:48 | Hi @callzhang you are perfectly correct.
This issue is related to both arguments not working nicely with each other; I created a PR to add 2 tests for both `text-generation` and `text2text-generation` pipelines (both were affected).
Thanks for reporting this ! <|||||>@Narsil Thanks for the prompt reply and fix. Really appreciate that!<|||||>@callzhang Did this fix your issue?
I'm trying to understand how the `text-generation` pipeline works with inputting batches (as you properly set `tokenizer.padding_side = 'left'`) but the pipeline itself has `padding=False`:
https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/pipelines/text_generation.py#L175
Maybe @Narsil can provide further insight?
Thanks in advance.
BTW: I'm currently trying to do it with GPT-J6B without luck, `batch_size=1` works great, for larger batches I'm getting very weird output (it seems like inputs are mixing together).
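For reference, a minimal sketch of batched sampling outside the pipeline, using the same left-padding setup as in the report above (gpt2 is used here only to keep the example small):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"   # causal LMs must be padded on the left when batching
model = AutoModelForCausalLM.from_pretrained("gpt2")

batch = tokenizer(["Hello there", "The weather today is"], return_tensors="pt", padding=True)
outputs = model.generate(
    **batch,
    max_length=40,
    do_sample=True,
    num_return_sequences=2,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```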
transformers | 15,315 | closed | [WIP] Add Maskformer to the library | This WIP PR adds [MaskFormer](https://arxiv.org/abs/2107.06278), a new model for any segmentation task. This model ranks [in the top 10 on almost every task](https://paperswithcode.com/paper/per-pixel-classification-is-not-all-you-need).
A total of 8 pre-trained checkpoints will be available, which are the checkpoints discussed in the official MaskFormer paper. They are not yet available on the hub but tested locally. The weights are all combinations of the 4 Swin Transformer variants (tiny, small, base and large) and two datasets ([ade20k-150](https://groups.csail.mit.edu/vision/datasets/ADE20K/) and [coco-panoptic](https://cocodataset.org/#home)).
TODOs
- [x] converting script should be self-contained
- currently, the job of downloading the weights and configuration files is outsourced to the end-user
- [x] feature extractor
- all the parameters needed are there, but the class is missing
- the resizing needed is tricky, originally [ResizeShortestEdge](https://detectron2.readthedocs.io/en/latest/modules/data_transforms.html#detectron2.data.transforms.ResizeShortestEdge) from detectron is used
- all the post processing should be inside it
- [x] backbone
- recently we added [Swin Transformer](https://github.com/huggingface/transformers/pull/15085), the backbone should depend on that implementation
- ported inside maskformer with the required changes
- [x] loss
- originally, the loss is computed by passing a list of dictionaries representing targets. However, this approach is inefficient since different targets may have different sizes, making it hard to process the batch in a single go.
- [x] padding
- padding is handled in the forward pass using `NestedTensor`, this is a well know class used in a lot of implementations. This should be handled by the `FeatureExtractor` following [Niels implementation](https://github.com/huggingface/transformers/blob/c15bb3fe19b0b6c69a727812cdd3cd5597014667/src/transformers/models/detr/feature_extraction_detr.py#L633)
- [x] auxiliary loss is not yet implemented
- [x] output_hidden_states
- [x] doc
- [ ] tests:
- [x] FeatureExtractor
- [ ] MaskFormer
Currently, the model can be used as follows
```python
import torch
from transformers import (
MaskFormerModel,
MaskFormerForInstanceSegmentation,
MaskFormerConfig,
MaskFormerFeatureExtractor,
)
import numpy as np
feature_extractor = MaskFormerFeatureExtractor(do_resize=True)
inputs = feature_extractor(
[np.zeros((3, 400, 1200)), np.zeros((3, 750, 384))],
return_tensors="pt",
pad_and_return_pixel_mask=True,
)
config = MaskFormerConfig()
mask_former = MaskFormerModel(config=config)
out = mask_former(**inputs)
# out contains the hidden states of each submodule
mask_former = MaskFormerForInstanceSegmentation(config=config)
out = mask_former(**inputs)
# out contains the logits
seg = feature_extractor.post_process_segmentation(out)
# get the instance panoptic mask + segments
seg = feature_extractor.post_process_panoptic_segmentation(out)
| 01-24-2022 14:45:10 | 01-24-2022 14:45:10 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,314 | closed | [LayoutLMV2 Tests] Make sure input is on GPU | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Make sure input image is on GPU for testing.
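A minimal sketch of the kind of change (names here are illustrative, not the actual test code):
```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
# the test input must live on the same device as the model under test
pixel_values = torch.randn(1, 3, 224, 224).to(device)
```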
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-24-2022 14:12:44 | 01-24-2022 14:12:44 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,313 | closed | Push to hub training argument not pushing | ## Environment info
- `transformers` version: 4.15.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@sgugger @LysandreJik @lewtun
## Information
Using `push_to_hub=True` in `TrainingArguments` is not pushing to the Hub. Based on the documentation, if it's `True`, it should push the trained model after training ("Whether or not to upload the trained model to the hub after training").
## To reproduce
Here is a self contained [example notebook](https://colab.research.google.com/drive/1dVUGEAc8JAfuGbGqpOdXBHPsL7rG8bLS#scrollTo=pf8Oz8yx169g).
Most relevant part of code
```python
from transformers import Trainer, TrainingArguments
batch_size = 64
logging_steps = len(emotions_encoded["train"]) // batch_size
model_name = "debug-example"
training_args = TrainingArguments(output_dir=model_name,
num_train_epochs=2,
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
weight_decay=0.01,
evaluation_strategy="epoch",
disable_tqdm=False,
logging_steps=logging_steps,
push_to_hub=True,
log_level="error")
trainer = Trainer(model=model, args=training_args,
compute_metrics=compute_metrics,
train_dataset=emotions_encoded["train"],
eval_dataset=emotions_encoded["validation"],
tokenizer=tokenizer)
trainer.train()
```
Output
```
Cloning https://huggingface.co/osanseviero/debug-example into local empty directory.
[500/500 07:28, Epoch 2/2]
Epoch | Training Loss | Validation Loss | Accuracy | F1
-- | -- | -- | -- | --
1 | 0.832700 | 0.306814 | 0.908500 | 0.905449
2 | 0.247700 | 0.219120 | 0.924500 | 0.924758
TrainOutput(global_step=500, training_loss=0.5401657867431641, metrics={'train_runtime': 449.3645, 'train_samples_per_second': 71.212, 'train_steps_per_second': 1.113, 'total_flos': 720342861696000.0, 'train_loss': 0.5401657867431641, 'epoch': 2.0})
```
Doing `cd debug-example && git log` shows there is a local commit that was not pushed, so the local repo is one commit ahead. Doing `trainer.push_to_hub()` afterwards works correctly.
## Expected behavior
The trained model would be pushed to the Hub automatically at the end of training.
| 01-24-2022 13:49:26 | 01-24-2022 13:49:26 | That's not the API that was decided. `push_to_hub=True` pushes to the Hub at every save, and you did not pick a `saving_strategy` that triggered a save (it defaults to `"steps"` and there were not enough steps in your training), but it does not push to the Hub at the end of training unless you explicitly asks for it with `trainer.push_to_hub()`<|||||>There was a save. If I do `cd debug-example && git log`, there was a save at step 500. So the model was indeed saved but this did not kicked off a push.
```
commit 9a12f873da1c9c84c2ada3a25a3d1baf917d9e43 (HEAD -> main)
Author: Omar Sanseviero <[email protected]>
Date: Mon Jan 24 13:36:00 2022 +0000
Training in progress, step 500
commit 54780a2baf1f8ad714362d890dd1f9e471f0f403 (origin/main, origin/HEAD)
Author: system <[email protected]>
Date: Mon Jan 24 11:56:59 2022 +0000
initial commit
```
I don't think [the docstring](https://huggingface.co/docs/transformers/v4.15.0/en/main_classes/trainer#transformers.TrainingArguments) reflects what you're saying about the behaviour. It says it will push after training. Is this something that should be changed?
* push_to_hub (bool, optional, defaults to False) — Whether or not to upload the trained model to the hub after training. If this is activated, and output_dir exists, it needs to be a local clone of the repository to which the Trainer will be pushed.
<|||||>Ok, so there is definitely something wrong going on with the save not being pushed. I don't get why the model was committed but not pushed since it's all done in the same line inside the `Trainer` by calling `Repository.push_to_hub`. The bug is thus probably inside `huggingface_hub`.
For the choice of API, this was mainly because people (for instance all examples) usually do one last evaluation before pushing to the Hub, and if pushing before it, the auto-generated model card does not have that evaluation info (and then the push ends up being rejected because of missing metadata).
We can either:
1. fix the documentation to reflect the reality
2. change the behavior to push at the end of training (with maybe some models not having model cards because the push ends up being rejected if there is no evaluation results)
In all cases, the example scripts will still push a version with the evaluation results afterward.
Think I'm leaning towards 2, but would love to have @LysandreJik and @patrickvonplaten thoughts as well.<|||||>I think in any way we should probably update the docstring because `push_to_hub` pushes the model depending on the `saving_strategy` and I think most people use `"steps"` meaning no? So I think we should state that `push_to_hub` uploads all saved checkpoints during the training to the hub.
In my opinion `push_to_hub` should be very closely tied to saving the model during training in the sense that if `push_to_hub=True` then every time the Trainer saves a file, those updates files should be uploaded, so that the repo online and the local dir are always in sync. <|||||>So, following that logic, the model should not be pushed at the end of training unless the user does `trainer.save_model()` (or `trainer.push_to_hub`) since there is no save at the end of the train method. That works for me too.<|||||>I'm fine with 2), but I wonder if there are use-cases where users don't want to save the model at the end of their training? Fine with me to clarify that `push_to_hub` pushes to hub everytime the model is saved, with an emphasis on the `saving_strategy` and the possibility to call `trainer.save_model()` at any point to push to the hub.<|||||>I currently have the following code:
training arguments:
```py
train_args = Seq2SeqTrainingArguments(
overwrite_output_dir=True,
output_dir="saves/models",
evaluation_strategy="epoch",
save_strategy="epoch",
learning_rate=2e-5,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
weight_decay=0.01,
save_total_limit=2,
num_train_epochs=1,
predict_with_generate=True,
logging_steps=len(tokenized_datasets["train"]) // 4,
hub_strategy="every_save",
push_to_hub=True,
hub_model_id=config["MODEL_NAME"],
hub_private_repo=True,
hub_token=config["HUGGINGFACE_API_KEY"],
)
```
trainer:
```py
trainer = Seq2SeqTrainer(
model=model,
tokenizer=tokenizer,
args=train_args,
data_collator=data_collator,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
compute_metrics=compute_metrics
)
```
I then run:
```py
trainer.create_optimizer()
trainer.train()
```
The previous code alone does not save as suggested by the (2.) solution.
Only when I run the following, it saves to the hub:
```py
trainer.push_to_hub()
```
Any idea why it's not saving at the end of the epoch? @osanseviero
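For reference, a minimal sketch of the behaviour discussed earlier in this thread: pushes happen on each save according to `save_strategy`, and an explicit `trainer.push_to_hub()` uploads the end-of-training model. The tiny random model and dummy dataset below are placeholders, and `huggingface-cli login` is assumed:
```python
from datasets import Dataset
from transformers import (AutoTokenizer, BertConfig, BertForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification(BertConfig(hidden_size=32, num_hidden_layers=2,
                                                 num_attention_heads=2, intermediate_size=64))

ds = Dataset.from_dict({"text": ["good", "bad"] * 8, "label": [1, 0] * 8})
ds = ds.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="push-to-hub-debug",   # placeholder repo name
    save_strategy="epoch",            # every save triggers a push when push_to_hub=True
    push_to_hub=True,
    num_train_epochs=1,
)
trainer = Trainer(model=model, args=args, train_dataset=ds, tokenizer=tokenizer)
trainer.train()
trainer.push_to_hub()                 # explicit final push, including the model card
```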
transformers | 15,312 | closed | Replace NystromformerTokenizer with AutoTokenizer | # What does this PR do?
This PR replaces `NystromformerTokenizer` with `AutoTokenizer` in `modeling_nystromformer.py` since `NystromformerTokenizer` does not exist.
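A hedged sketch of what the docstring usage looks like after the change (the checkpoint name is an assumption based on the public Nyströmformer checkpoint):
```python
from transformers import AutoTokenizer, NystromformerModel

tokenizer = AutoTokenizer.from_pretrained("uw-madison/nystromformer-512")
model = NystromformerModel.from_pretrained("uw-madison/nystromformer-512")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```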
## Who can review?
@NielsRogge | 01-24-2022 13:00:03 | 01-24-2022 13:00:03 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,311 | closed | Decoding with Wav2Vec2 with Language Model | Dear All,
I am trying to decode a model with a Language Model:
I made a LM using KenLM from the method provided at : https://huggingface.co/blog/wav2vec2-with-ngram
When decoding, I am getting the following error:
ValueError: Input logits of size 586, but vocabulary is size 44
A brief introduction:
I am fine-tuning wav2vec2_base in the Urdu language.
I trained it and tested it without using Language-Model and it was giving results.
Now I made a new processor using the following code:
```
from transformers import Wav2Vec2ProcessorWithLM
decoder = build_ctcdecoder(
labels=list(sorted_vocab_dict.keys()),
kenlm_model_path="5gram_correct_1.arpa",
)
processor_lm = Wav2Vec2ProcessorWithLM(feature_extractor=feature_extractor, tokenizer=tokenizer , decoder=decoder)
```
The LM is made from the same Textual data which was used in training the model.
When I try to decode the model it is giving me the following error:
`ValueError: Input logits of size 586, but vocabulary is size 44`
Any idea why am I getting this error? | 01-24-2022 11:21:13 | 01-24-2022 11:21:13 | Have you checked that you are using the same vocabulary for both and what code are you using for inference?<|||||>> Have you checked that you are using the same vocabulary for both and what code are you using for inference?
Hi olafthiele,
I am making my arpa file from the same text I am using for training the model and generating the processor.
for inference, the code is the following:
```python
import soundfile as sf
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import os
from transformers import AutoProcessor, AutoModelForCTC

processor = AutoProcessor.from_pretrained("yapak1994/Urdu_ASR_wav2vec2_base_Model")
model = AutoModelForCTC.from_pretrained("yapak1994/Urdu_ASR_wav2vec2_base_Model")

path = "urdu/"
# print(os.listdir(path))
for file in os.listdir(path):
    # load audio
    audio_input, sample_rate = sf.read(path + file)
    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # INFERENCE
    # retrieve logits & take argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # transcribe
    transcription = processor.decode(predicted_ids[0])
    # print(transcription)
    print(transcription + " (" + file + ")")
```
This code is working correctly and giving me output.
Now
When I merge this model with my LM.arpa:
```python
from transformers import AutoProcessor, AutoModelForCTC

processor = AutoProcessor.from_pretrained("yapak1994/Urdu_ASR_wav2vec2_base_Model")
model = AutoModelForCTC.from_pretrained("yapak1994/Urdu_ASR_wav2vec2_base_Model")
```
```python
from transformers import Wav2Vec2ProcessorWithLM

decoder = build_ctcdecoder(
    labels=list(sorted_vocab_dict.keys()),
    kenlm_model_path="yapak1994/Urdu_ASR_wav2vec2_base_Model/5gram_correct_1.arpa ",
)
processor_lm = Wav2Vec2ProcessorWithLM(feature_extractor=feature_extractor, tokenizer=tokenizer, decoder=decoder)
```
Now when I try to repeat the same code for the previous inference, I get an error:
```
ValueError: Input logits of size 586, but vocabulary is size 44
```
```python
for file in os.listdir(path):
    # load audio
    audio_input, sample_rate = sf.read(path + file)
    # pad input values and return pt tensor
    input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
    # INFERENCE
    # retrieve logits & take argmax
    logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    # transcribe
    transcription = processor_lm.decode(predicted_ids[0])
    # print(transcription)
    print(transcription + " (" + file + ")")
```
<|||||>You have some markdown problems, but have you tried building the arpa according to [the blog](https://huggingface.co/blog/wav2vec2-with-ngram)?
And then do inferencing [like I did here](https://github.com/huggingface/transformers/issues/15344). Works correctly now with the pool closing.<|||||>Thanks a lot,
I think there was an error while building the file. Now it is working correctly.
Thanks a lot to @olafthiele and @patrickvonplaten for their wonderful help. |
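For reference, a minimal sketch of LM-boosted decoding along the lines of the n-gram blog post; note that `Wav2Vec2ProcessorWithLM.batch_decode` works on the raw logits rather than on argmax ids. The checkpoint name and the silent dummy audio are placeholders, and `pyctcdecode`/`kenlm` must be installed:
```python
import numpy as np
import torch
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM

model_id = "patrickvonplaten/wav2vec2-base-100h-with-lm"  # example checkpoint that ships an n-gram LM
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

audio = np.zeros(16_000, dtype=np.float32)  # one second of silence as a stand-in for real audio
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
transcription = processor.batch_decode(logits.numpy()).text[0]
print(transcription)
```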
transformers | 15,310 | closed | Update eval.py | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #15307
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-24-2022 10:37:34 | 01-24-2022 10:37:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,309 | closed | Add ASR CTC streaming example | # What does this PR do?
This enables the use of streamable datasets to fine-tune CTC models.
Example args:
```
--dataset_name="common_voice"
--model_name_or_path="ntu-spml/distilhubert"
--tokenizer_name_or_path="infinitejoy/wav2vec2-large-xls-r-300m-abkhaz"
--dataset_config_name="ab"
--output_dir="./dummy_ctc"
--overwrite_output_dir
--per_device_train_batch_size="4"
--gradient_accumulation_steps="1"
--learning_rate="5e-5"
--max_steps="3000"
--warmup_steps="500"
--evaluation_strategy="steps"
--text_column_name="sentence"
--save_steps="500"
--eval_steps="5"
--logging_steps="1"
--layerdrop="0.0"
--save_total_limit="1"
--mask_time_prob="0.3"
--mask_time_length="10"
--mask_feature_prob="0.1"
--mask_feature_length="64"
--freeze_feature_encoder
--chars_to_ignore=", ? . ! - \; \: \" % ‘ �"
--fp16
--do_train
--do_eval
--gradient_checkpointing
``` | 01-24-2022 10:19:35 | 01-24-2022 10:19:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>One thing left to add: support for infinite streaming if we reach the end of the dataset, so that `Trainer` doesn't stop until `max_steps` is reached.
Reference: https://github.com/huggingface/transformers/blob/48bf7e47a01f310ca8e76bd90be14e06dbd08329/examples/research_projects/codeparrot/scripts/codeparrot_training.py#L20
Otherwise this example yields:
```
There seems to be not a single sample in your epoch_iterator, stopping training at step 6! This is expected if you're using an IterableDataset and set num_steps (3000) higher than the number of available samples.
```<|||||>Maybe run the script for just 5 epochs both with streaming and without so that we have some exact numbers and can compare perf as well<|||||>Feel free to merge and announce this @anton-l once there have been some testing :-)<|||||>The ASR README now includes a streaming example+benchmark :)
A couple of last notes:
1. streaming will run in distributed mode only with the Trainer fixes from this PR (to support `IterableDatasetShard`)
2. https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 is very flaky (connection drops after a couple of epochs, possibly due to a temporary IP ban), so the benchmark only succeeded with `common_voice`<|||||>TODO: Merge this after https://github.com/huggingface/transformers/pull/15539 |
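A minimal sketch of one way to address the "infinite streaming" point above, i.e. restart the stream when it runs out so the `Trainer` only stops at `max_steps` (the actual example may solve this differently):
```python
from torch.utils.data import IterableDataset


class CyclingStream(IterableDataset):
    """Wrap a streaming dataset and restart it whenever it is exhausted."""

    def __init__(self, dataset):
        self.dataset = dataset

    def __iter__(self):
        while True:                 # the Trainer decides when to stop via max_steps
            yield from self.dataset
```
Wrapping the vectorized training split in such a class keeps the epoch iterator non-empty until `max_steps` is reached.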
transformers | 15,308 | closed | Minimal model for writing tests. | # 🚀 Feature request
Add minimal models that satisfy different transformers interfaces.
## Motivation
I'd like to write some tests for a project that uses the transformers package in the background. To test my functionality I somehow need to use a transformers model (or a mock). However, the models that are typically available are pretty huge, which often causes Bitbucket pipelines (or whatever CI pipeline) to fail due to memory limitations of the Docker container.
Thus it would be really cool if you could provide minimal models that satisfy the transformers interfaces such that the `transformers.Trainer` object can use it for example.
If something like that is already available I couldn't really find it, but let me know if there is something.
Cheers :beer:
| 01-24-2022 09:58:14 | 01-24-2022 09:58:14 | Do the models available on https://huggingface.co/hf-internal-testing fit your use-case? That's what we use internally for our testing with dummy models.
These are randomly instantiated, however. They should only be used for mock testing.<|||||>Thanks, this is exactly what I was looking for! Are these not meant to be found, or did I just not search good enough? :smile: <|||||>They're used for our internal testing and not really intended for public usage, so we don't really communicate about it. Now that we're aware that some users may find them useful, we should do a better job of advertising them :)<|||||>Alright! Thanks for letting me know! :beer: <|||||>This is extremely useful, thank you! I was looking for a way to ensure that connecting a transformer's encoding to our downstream models was working without downloading a giant model, even if it's not giving useful results.
However, as of March 2023, the models do not show up in searches, even if you specifically search for `hf-internal-testing`.
This page is empty, for example:
https://huggingface.co/models?search=hf-internal-testing
As is this one:
https://huggingface.co/models?search=hf-internal-testing%20roberta
Is that something you could fix? I understand you probably don't want to show the models all of the time, such as when someone is searching for a bert they actually want to use. Perhaps it would work to change it so that if a search shows nothing at all, but some internal testing models matched, then the internal testing models show up |
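For reference, a minimal sketch of how such a tiny checkpoint can be used in a CI smoke test (the repo name is taken from the `hf-internal-testing` organisation mentioned above and is assumed to ship a matching tokenizer):
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-bert")
model = AutoModel.from_pretrained("hf-internal-testing/tiny-random-bert")

outputs = model(**tokenizer("just a smoke test", return_tensors="pt"))
assert outputs.last_hidden_state.shape[0] == 1  # weights are random, only shapes are meaningful
```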
transformers | 15,307 | closed | Robust Speech Challenge Evaluation File does not use entire dataset for metric calculation | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.15.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Using GPU in script?: Yes
### Who can help
@patrickvonplaten, @anton-l
## To reproduce
Steps to reproduce the behavior:
1. ~/transformers/examples/research_projects/robust-speech-event/eval.py --model_id ./ --dataset mozilla-foundation/common_voice_7_0 --config sv-SE --split test --log_outputs
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
1. Evaluation is done only on 10 examples ([https://github.com/huggingface/transformers/blob/master/examples/research_projects/robust-speech-event/eval.py#L71](https://github.com/huggingface/transformers/blob/master/examples/research_projects/robust-speech-event/eval.py#L71)). It should be done on the entire dataset provided.
2. To avoid any ambiguity WER and CER are generally reported in percentages but [https://github.com/huggingface/transformers/blob/master/examples/research_projects/robust-speech-event/eval.py#L26](https://github.com/huggingface/transformers/blob/master/examples/research_projects/robust-speech-event/eval.py#L26) will print the results in range of 0-1 | 01-24-2022 09:56:28 | 01-24-2022 09:56:28 | Thanks a lot for opening this issue @anuragshas! Indeed one should comment out this line when evaluating the model actually on the full dataset! I've left it here so that one can easily test the script. Will comment it out on master now :-) |
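As an illustration of the reporting point above, a small sketch with toy strings (the WER/CER metrics need the `jiwer` backend):
```python
from datasets import load_metric

wer_metric = load_metric("wer")
cer_metric = load_metric("cer")

predictions = ["det var en gång"]          # toy transcriptions
references = ["det var en gång en katt"]   # toy ground truth

# multiply by 100 so the scores are unambiguous percentages
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%  CER: {cer:.2f}%")
```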
transformers | 15,306 | closed | Remove old debug code leftover. | # What does this PR do?
Remove old debug code leftover.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
@LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 01-24-2022 09:27:03 | 01-24-2022 09:27:03 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,303 | closed | Nan when training LayoutLM_V2 Model | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.13.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU) : 1.10.0+cu111
- Tensorflow version (GPU): 2.7.0
- Flax version: not installed
- Jax version: not installed
- JaxLib version: not installed
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people. -->
@NielsRogge
## Information
The model used is LayoutLMv2:
The problem arises when using:
* [x] my own modified scripts:
The tasks I am working on is:
* [x] Document streaming segmentation
In my script, I try to determine when a new document starts, with the objective of dividing a stream of folders into segments, so that each segment can be interpreted as an independent document.
## To reproduce
Steps to reproduce the behavior:
1. Access to the colab notebook created to train LayoutLM_V2 (https://colab.research.google.com/drive/1MsEkj_WlGYDOs3vFcm1JxmMNLWj_Se78?usp=sharing)
2. Execute every cell in order
3. In the training loop, accuracy, loss, and output will be printed, and at some point the output, accuracy, and loss will become NaN.
## Expected behavior
The model trains and, regardless of whether it accomplishes its task, the training loop ends without any NaN.
| 01-23-2022 19:20:08 | 01-23-2022 19:20:08 | Hi @Asocsar,
For training-related questions, can you use the [forum](https://discuss.huggingface.co/)?
I'm suspecting this has to do with the custom loss function being defined, rather than the model itself.
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,302 | closed | RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding) | ## Environment info
- `transformers` version: 4.12.5
- Platform: Linux-5.10.90+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyTorch version (GPU?): 1.9.1 (True)
- Tensorflow version (GPU?): 2.6.2 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik @vanpelt @arfon
## Information
I am using Captum for interpreting the attributions of the tokens in each layer using Layer-Conductance
`lc = LayerConductance(predict, model.bert.encoder.layer[i])`
Now, in the line
`layer_attributions = lc.attribute(inputs=input_ids, baselines=ref_input_ids, additional_forward_args=(attention_mask,))`
a RuntimeError is raised.
A helper function to perform forward pass of the model and make predictions.
```python
def predict(input_ids, attention_mask=None):
    outputs, attention_weights = model(input_ids=input_ids, attention_mask=attention_mask)
    preds = torch.softmax(outputs, dim=1)[0][1].unsqueeze(0)
    return preds
```
Model I am using: "google/muril-base-cased"
#2952


## Expected behavior
Code is working fine during training and prediction but raising errors while interpreting the layers with captum.
Any help would be greatly appreciated.
| 01-23-2022 17:39:29 | 01-23-2022 17:39:29 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@neeraj1909 I am facing the same issue, did you manage to figure this out by any chance? |
transformers | 15,301 | closed | Getting error while saving model | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: linux
- Python version : 3.6
- PyTorch version (GPU?): gpu
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
-
- @sgugger Need help with trainer module
Models:
- BERT
- I am using a BERT model. The problem arises in the trainer module
#15300 File "~/lib/python3.6/site-packages/transformers/trainer.py", line 1608, in save_model
ShardedDDPOption.ZERO_DP_2 in self.args.sharded_ddp or ShardedDDPOption.ZERO_DP_3 in self.args.sharded_ddp
TypeError: 'in <string>' requires string as left operand, not ShardedDDPOption
I am training a Bert Model for a multi-class classification task
## To reproduce
Steps to reproduce the behavior:
1. Code
```python
import logging
import os
from statistics import mean, stdev
import sys
from typing import Callable, Dict
import pandas as pd
import numpy as np
from pprint import pformat
from scipy.special import softmax
import tensorboard
import torch
from transformers import (
AutoTokenizer,
AutoConfig,
HfArgumentParser,
Trainer,
EvalPrediction,
set_seed
)
from multimodal_args import ModelArguments, MultiModalDataArguments, MultiModalTrainingArguments
from evaluation import calc_classification_metrics, calc_regression_metrics
from load_dataset import load_datadir
from config import TabularConfig
from auto_fusion_model import AutoModelFusion
from utils import create_dir_if_not_exists
os.environ['COMET_MODE'] = 'DISABLED'
logger = logging.getLogger(__name__)
def main():
#Define text and tabular features
text_cols = ['keywords',"browse_node_name","pod","ORDERING_GL_PRODUCT_GROUP","gl_product_group_desc"]
label_col = 'label'
cat_features = []
non_num_col = text_cols + ["shipping_address_id","postal_code","browse_node_id","label","asin","customer_id","order_day"]
#features = pd.read_csv("/efs/avimodi/static_model/feature_importance_static_rhm.csv")
#features_list = features.head(50)["Feature"].to_list()
logger.info("Reading sample File")
sample = pd.read_csv("/efs/avimodi/.MultiModal_Model/input_sample/val.csv")
features_list = sample.columns.to_list()
num_features = [col for col in features_list if col not in non_num_col]
logger.info(len(num_features))
label_list = ["0","1","2"] # what each label class represents
column_info_dict = {
'text_cols': text_cols,
'num_cols': num_features,
'cat_cols': cat_features,
'label_col': 'label',
'label_list': ["0","1","2"]
}
model_args = ModelArguments(
model_name_or_path='bert-base-uncased'
)
data_args = MultiModalDataArguments(
data_path='/efs/avimodi/.MultiModal_Model/input_sample',
fusion_method='attention',
features_info=column_info_dict,
task='classification',
numerical_encoding='min-max',
categorical_encoding = 'none'
)
training_args = MultiModalTrainingArguments(
output_dir="/efs/avimodi/unified_model/run_sample/output",
logging_dir="/efs/avimodi/unified_model/run_sample/logs",
overwrite_output_dir=True,
do_train=True,
do_eval=True,
per_device_train_batch_size=256,
per_device_eval_batch_size=256,
num_train_epochs=10,
evaluate_during_training=True,
logging_steps=25,
eval_steps=500,
save_steps=500,
debug_dataset=True,
report_to = ["tensorboard"],
)
set_seed(training_args.seed)
# Setup logging
create_dir_if_not_exists(training_args.output_dir)
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
level=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,
datefmt="%m/%d/%Y %H:%M:%S",
filename = os.path.join(training_args.output_dir,'train_log.txt'),
filemode = 'w+'
)
logger.info(f"======== Model Args ========\n{(model_args)}\n")
logger.info(f"======== Data Args ========\n{(data_args)}\n")
logger.info(f"======== Training Args ========\n{(training_args)}\n")
tokenizer = AutoTokenizer.from_pretrained(
model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
)
train_dataset, val_dataset, test_dataset = load_datadir(
data_args.data_path,
data_args.features_info['text_cols'],
tokenizer,
label_col=data_args.features_info['label_col'],
label_list=data_args.features_info['label_list'],
categorical_cols=data_args.features_info['cat_cols'],
numerical_cols=data_args.features_info['num_cols'],
categorical_encoding=data_args.categorical_encoding,
numerical_encoding=data_args.numerical_encoding,
sep_text_token_str=tokenizer.sep_token,
max_token_length=training_args.max_token_length,
debug=training_args.debug_dataset
)
train_datasets = [train_dataset]
val_datasets = [val_dataset]
test_datasets = [test_dataset]
train_dataset = train_datasets[0]
num_labels = len(np.unique(train_dataset.labels)) if data_args.num_classes == -1 else data_args.num_classes
def compute_metrics_fn(p: EvalPrediction):
if data_args.task == "classification":
preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
preds_labels = np.argmax(preds, axis=1)
if p.predictions.shape[-1] == 2:
pred_scores = softmax(preds, axis=1)[:, 1]
else:
pred_scores = softmax(preds, axis=1)
return calc_classification_metrics(pred_scores, preds_labels,
p.label_ids)
elif data_args.task == "regression":
preds = np.squeeze(p.predictions)
return calc_regression_metrics(preds, p.label_ids)
else:
return {}
total_results = []
for i, (train_dataset, val_dataset, test_dataset) in enumerate(zip(train_datasets, val_datasets, test_datasets)):
logger.info(f'======== Fold {i+1} ========')
config = AutoConfig.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
cache_dir=model_args.cache_dir,
)
tabular_config = TabularConfig(
num_labels=num_labels,
cat_feat_dim=train_dataset.cat_feats.shape[1] if train_dataset.cat_feats is not None else 0,
numerical_feat_dim=train_dataset.numerical_feats.shape[1] if train_dataset.numerical_feats is not None else 0,
**vars(data_args)
)
config.tabular_config = tabular_config
model = AutoModelFusion.from_pretrained(
model_args.config_name if model_args.config_name else model_args.model_name_or_path,
config=config,
cache_dir=model_args.cache_dir
)
if i == 0:
logger.info(tabular_config)
logger.info(model)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=val_dataset,
compute_metrics=compute_metrics_fn
)
if training_args.do_train:
train_result = trainer.train(
resume_from_checkpoint=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
)
metrics = train_result.metrics
# max_train_samples = (
# data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)
# )
metrics["train_samples"] = 500 if training_args.debug_dataset else len(train_dataset)
trainer.save_model() # Saves the tokenizer too for easy upload
trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)
trainer.save_state()
# Evaluation
eval_results = {}
if training_args.do_eval:
logger.info("*** Evaluate ***")
eval_result = trainer.evaluate(eval_dataset=val_dataset)
logger.info(pformat(eval_result, indent=4))
output_eval_file = os.path.join(
training_args.output_dir, f"eval_metric_results_{task}_fold_{i+1}.txt"
)
if trainer.is_world_master():
with open(output_eval_file, "w") as writer:
logger.info("***** Eval results {} *****".format(task))
for key, value in eval_result.items():
logger.info(" %s = %s", key, value)
writer.write("%s = %s\n" % (key, value))
eval_results.update(eval_result)
if training_args.do_predict:
logging.info("*** Test ***")
predictions = trainer.predict(test_dataset=test_dataset).predictions
output_test_file = os.path.join(
training_args.output_dir, f"test_results_{task}_fold_{i+1}.txt"
)
eval_result = trainer.evaluate(eval_dataset=test_dataset)
logger.info(pformat(eval_result, indent=4))
if trainer.is_world_master():
with open(output_test_file, "w") as writer:
logger.info("***** Test results {} *****".format(task))
writer.write("index\tprediction\n")
if task == "classification":
predictions = np.argmax(predictions, axis=1)
for index, item in enumerate(predictions):
if task == "regression":
writer.write("%d\t%3.3f\t%d\n" % (index, item, test_dataset.labels[index]))
else:
item = test_dataset.get_labels()[item]
writer.write("%d\t%s\n" % (index, item))
output_test_file = os.path.join(
training_args.output_dir, f"test_metric_results_{task}_fold_{i+1}.txt"
)
with open(output_test_file, "w") as writer:
logger.info("***** Test results {} *****".format(task))
for key, value in eval_result.items():
logger.info(" %s = %s", key, value)
writer.write("%s = %s\n" % (key, value))
eval_results.update(eval_result)
del model
del config
del tabular_config
del trainer
torch.cuda.empty_cache()
total_results.append(eval_results)
aggr_res = aggregate_results(total_results)
logger.info('========= Aggr Results ========')
logger.info(pformat(aggr_res, indent=4))
output_aggre_test_file = os.path.join(
training_args.output_dir, f"all_test_metric_results_{task}.txt"
)
with open(output_aggre_test_file, "w") as writer:
logger.info("***** Aggr results {} *****".format(task))
for key, value in aggr_res.items():
logger.info(" %s = %s", key, value)
writer.write("%s = %s\n" % (key, value))
def aggregate_results(total_test_results):
metric_keys = list(total_test_results[0].keys())
aggr_results = dict()
for metric_name in metric_keys:
if type(total_test_results[0][metric_name]) is str:
continue
res_list = []
for results in total_test_results:
res_list.append(results[metric_name])
if len(res_list) == 1:
metric_avg = res_list[0]
metric_stdev = 0
else:
metric_avg = mean(res_list)
metric_stdev = stdev(res_list)
aggr_results[metric_name + '_mean'] = metric_avg
aggr_results[metric_name + '_stdev'] = metric_stdev
return aggr_results
if __name__ == '__main__':
main()
```
2. Error
```
Traceback (most recent call last):
File "run.py", line 289, in <module>
main()
File "run.py", line 191, in main
trainer.save_model() # Saves the tokenizer too for easy upload
File "/home/avimodi/anaconda3/envs/chakanik_transformer/lib/python3.6/site-packages/transformers/trainer.py", line 1608, in save_model
ShardedDDPOption.ZERO_DP_2 in self.args.sharded_ddp or ShardedDDPOption.ZERO_DP_3 in self.args.sharded_ddp
TypeError: 'in <string>' requires string as left operand, not ShardedDDPOption
Killing subprocess 122966
Traceback (most recent call last):
File "/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File /lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/avimodi/anaconda3/envs/chakanik_transformer/lib/python3.6/site-packages/torch/distributed/launch.py", line 340, in <module>
main()
File "/home/avimodi/anaconda3/envs/chakanik_transformer/lib/python3.6/site-packages/torch/distributed/launch.py", line 326, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/lib/python3.6/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 01-23-2022 17:22:50 | 01-23-2022 17:22:50 | The error comes from the value you have in `args.sharded_ddp` where `args` is your `MultiModalTrainingArguments` object. Since you did not share the code of that class, there is little we can do to help fix the issue.
Also, please use the [forums](https://discuss.huggingface.co/) to debug your code as we keep the issues for bugs and feature requests only :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,300 | closed | Added missing code in exemplary notebook - custom datasets fine-tuning | # What does this PR do?
Added missing code in tokenize_and_align_labels function in the exemplary notebook on custom datasets - token classification.
The missing code concerns how labels are assigned to all but the first token of a single word.
The added code was taken directly from the official Hugging Face example, this [colab notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb).
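For reference, this is roughly what the function looks like with the `else` clause included (following the colab notebook; the `label_all_tokens` flag and the `tokens`/`ner_tags` column names are assumptions taken from that example):
```python
def tokenize_and_align_labels(examples, label_all_tokens=True):
    tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)

    labels = []
    for i, label in enumerate(examples["ner_tags"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:
            # Special tokens ([CLS], [SEP], padding) have no word id -> ignore them in the loss.
            if word_idx is None:
                label_ids.append(-100)
            # The first token of each word keeps the word's label.
            elif word_idx != previous_word_idx:
                label_ids.append(label[word_idx])
            # Remaining sub-tokens of the same word: either repeat the label or ignore them (-100).
            else:
                label_ids.append(label[word_idx] if label_all_tokens else -100)
            previous_word_idx = word_idx
        labels.append(label_ids)

    tokenized_inputs["labels"] = labels
    return tokenized_inputs
```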
Fixes # (issue)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
| 01-23-2022 16:46:35 | 01-23-2022 16:46:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I don't understand what you mean, the notebook has the same code as the example: they are automatically synced at each merge in the Transformers repo.<|||||>You're right. I pasted the wrong link so the comparison made no sense. The link should have been to the colab notebook:
https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb
In the colab notebook `tokenize_and_align_labels` has the `else` clause which is missing in the notebook on GitHub and hence is missing here: https://huggingface.co/docs/transformers/custom_datasets#token-classification-with-wnut-emerging-entities<|||||>Those are two different tutorials, it's normal they have different code. The one in the main documentation is left as simple as possible on purpose.<|||||>In the main documentation, there is a mention of "Only labeling the first token of a given word. Assign -100 to the other subtokens from the same word.". However, this is not the case in the code below it, the `tokenize_and_align_labels` function, where not only are the other subtokens assigned the true labels, but `previous_word_idx` is also not updated. This contradiction was confusing to me. Only after digging deeper (into the colab notebook) did I understand that this part of the code was missing from the official documentation. I do not think these few lines were omitted on purpose. If you don't think it makes much of a difference, close this PR (or let me know if I should be the one to close it).<|||||>I adjusted your comments. I guess I could have been more precise from the beginning. Thanks. |
transformers | 15,299 | closed | [WIP] Positive Constraint Decoding PR #1 | # Disjunctive Positive Constraint Decoding
@patrickvonplaten @LysandreJik @sgugger @patil-suraj @yjernite @thomwolf
Fixes #14081.
I apologize if this isn't the proper way to deal with feature contributions, but this is an **incomplete PR**. I simply thought this was a good place to check in on the progress & direction of the implementation. We can just keep adding commits to this PR until it's ready for the final merge, right?
Steps left:
- [ ] Applying positive constraints **disjunctively**.
- [ ] Writing tests
Here is an example of how one could use this functionality:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
from transformers.generation_beam_constraints import (
PhrasalConstraint
)
device = "cuda"
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
force_text = " big monsters"
force_text_2 = " crazy"
force_tokens = tokenizer.encode(force_text, return_tensors="pt").to(device)[0]
force_tokens_2 = tokenizer.encode(force_text_2, return_tensors="pt").to(device)[0]
constraints = [
PhrasalConstraint(force_tokens),
PhrasalConstraint(force_tokens_2)
]
input_text = ["The baby is crying because"]
model_inputs = tokenizer(input_text, return_tensors="pt")
for key, value in model_inputs.items():
model_inputs[key] = value.to(device)
k = model.generate(
**model_inputs,
constraints=constraints,
num_beams=7,
num_return_sequences=7,
no_repeat_ngram_size=2
)
for out in k:
print(tokenizer.decode(out))
```
For some example outputs:
```
The baby is crying because she's been told crazy big monsters are going to come and kill her.
The baby is crying because she's been told crazy big monsters are coming for her.
The baby is crying because she's been told crazy big monsters are going to come after her.
```
# 1. General Constraint Framework
Users can define their own constraints by inheriting the `Constraint` interface class and this framework is ensured to work as desired, because the `Constraint` class is quite strictly defined. If an implementation passes the `self.test()` function of this interface then it necessarily works as desired. An incorrect implementation will lead to an error.
```python
# https://github.com/cwkeam/transformers/blob/master/src/transformers/generation_beam_constraints.py#L16
class Constraint(ABC):
r"""Abstract base class for all constraints that can be applied during generation.
It must define how the constraint can be satisfied.
All classes that inherit Constraint must follow the requirement that
```
completed = False
while(not completed):
_, completed = constraint.update(constraint.advance())
```
will always terminate (halt).
"""
def __init__(self):
# test for the above condition
self.test()
def test(self):
'''
Tests whether this constraint has been properly defined.
'''
counter = 0
completed = False
while not completed:
if counter == 1:
self.reset()
advance = self.advance()
assert self.does_advance(advance)
stepped, completed, reset = self.update(advance)
counter += 1
if counter > 10000:
raise Exception("update() does not fulfill the constraint.")
assert self.remaining() == 0
def advance(self):
'''
When called, returns the token that would take this constraint
one step closer to being fulfilled.
returns:
token_ids(`torch.tensor`): Must be a tensor of a list of indexable tokens, not some integer.
'''
raise NotImplementedError(
f"{self.__class__} is an abstract class. Only classes inheriting this class can be called."
)
def does_advance(self, token_id: int):
"""
Reads in a token and returns whether it creates progress.
"""
raise NotImplementedError(
f"{self.__class__} is an abstract class. Only classes inheriting this class can be called."
)
def update(self, token_id: int):
"""
Reads in a token and returns booleans that indicate the progress made by it.
        This function will update the state of this object, unlike `does_advance(self, token_id: int)`.
This isn't to test whether a certain token will advance the progress; it's to update its state
as if it has been generated. This becomes important if token_id != desired token
(refer to else statement in PhrasalConstraint)
Args:
token_id(`int`):
The id of a newly generated token in the beam search.
returns:
stepped(`boolean`):
                Whether this constraint has become one step closer to being fulfilled.
completed(`boolean`):
Whether this constraint has been completely fulfilled by this token being generated.
reset (`boolean`):
Whether this constraint has reset its progress by this token being generated.
"""
raise NotImplementedError(
f"{self.__class__} is an abstract class. Only classes inheriting this class can be called."
)
def reset(self):
"""
Resets the state of this constraint to its initialization.
        We would call this in cases where the fulfillment of a constraint is interrupted by an unwanted token.
"""
raise NotImplementedError(
f"{self.__class__} is an abstract class. Only classes inheriting this class can be called."
)
def remaining(self):
'''
Returns the number of remaining steps of `advance()` in order to complete this constraint.
'''
raise NotImplementedError(
f"{self.__class__} is an abstract class. Only classes inheriting this class can be called."
)
def copy(self, stateful=False):
'''
Creates a new instance of this constraint.
Args:
stateful(`boolean`): Whether to not only copy the constraint for new instance, but also its state.
Returns:
constraint(`Constraint`): The same constraint as the one being called from.
'''
raise NotImplementedError(
f"{self.__class__} is an abstract class. Only classes inheriting this class can be called."
)
```
For now, I've defined `TokenConstraint` for forcing the generation of a specific token and `PhrasalConstraint` for forcing the generation of a sequence of tokens that is not broken up in the output. An example use of the latter is shown in the code above.
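As an illustration of how the interface composes (a hypothetical sketch, not the actual `TokenConstraint`/`PhrasalConstraint` implementation), a minimal single-token constraint could look like this:
```python
import torch

from transformers.generation_beam_constraints import Constraint


class SingleTokenConstraint(Constraint):
    """Hypothetical sketch: force `token_id` to appear somewhere in the generated sequence."""

    def __init__(self, token_id: int):
        self.token_id = token_id
        self.completed = False
        super().__init__()  # runs self.test() to validate the implementation
        self.reset()        # clear any state left over from the self-test

    def advance(self):
        # The single token that would move this constraint forward.
        return torch.tensor([self.token_id])

    def does_advance(self, token_id):
        return bool((torch.as_tensor(token_id) == self.token_id).any())

    def update(self, token_id):
        stepped = self.does_advance(token_id)
        if stepped:
            self.completed = True
        # A one-token constraint has no partial progress to lose, so reset is always False.
        return stepped, self.completed, False

    def reset(self):
        self.completed = False

    def remaining(self):
        return 0 if self.completed else 1

    def copy(self, stateful=False):
        new = SingleTokenConstraint(self.token_id)
        if stateful:
            new.completed = self.completed
        return new
```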
# 2. `model.generate()` Mixin
```python
# https://github.com/cwkeam/transformers/blob/master/src/transformers/generation_utils.py#L780
def generate(
self,
inputs: Optional[torch.Tensor] = None,
max_length: Optional[int] = None,
...
stopping_criteria: Optional[StoppingCriteriaList] = StoppingCriteriaList(),
constraints: Optional[List[Constraint]] = None,
output_attentions: Optional[bool] = None,
...
**model_kwargs,
)
```
Leads to:
```python
#https://github.com/cwkeam/transformers/blob/master/src/transformers/generation_utils.py#L1077
# 6. determine generation mode
is_constraint_gen_mode = constraints is not None
is_greedy_gen_mode = (num_beams == 1) and (num_beam_groups == 1) and do_sample is False and constraints is None
is_sample_gen_mode = (num_beams == 1) and (num_beam_groups == 1) and do_sample is True and constraints is None
is_beam_gen_mode = (num_beams > 1) and (num_beam_groups == 1) and do_sample is False and constraints is None
is_beam_sample_gen_mode = (num_beams > 1) and (num_beam_groups == 1) and do_sample is True and constraints is None
is_group_beam_gen_mode = (num_beams > 1) and (num_beam_groups > 1) and constraints is None
```
Which ends up defining a `ConstrainedBeamSearchScorer` and initiates the beam search:
```python
elif is_constraint_gen_mode:
if num_return_sequences > num_beams:
raise ValueError("`num_return_sequences` has to be smaller or equal to `num_beams`.")
if stopping_criteria.max_length is None:
raise ValueError("`max_length` needs to be a stopping_criteria for now.")
# 10. prepare beam search scorer
constrained_beam_scorer = ConstrainedBeamSearchScorer(
constraints=constraints,
batch_size=batch_size,
...,
)
# 11. interleave input_ids with `num_beams` additional sequences per batch
input_ids, model_kwargs = self._expand_inputs_for_generation(
input_ids, expand_size=num_beams, is_encoder_decoder=self.config.is_encoder_decoder, **model_kwargs
)
# 12. run beam search
return self.constrained_beam_search(
input_ids,
constrained_beam_scorer=constrained_beam_scorer,
...
)
```
# 3. Future Steps
## 1. Disjunctive Constraints
This doesn't yet do the *disjunctive* decoding explained in Issue #14081, but it can easily be implemented by defining a new `Constraint` subclass, as sketched below. I will follow up on this in another commit.
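A rough sketch of the idea, reduced to single tokens (i.e. "generate any one of these token ids"); the eventual implementation will differ:
```python
import torch

from transformers.generation_beam_constraints import Constraint


class AnyOfTokensConstraint(Constraint):
    """Hypothetical sketch: fulfilled as soon as any one of `token_ids` is generated."""

    def __init__(self, token_ids):
        self.token_ids = [int(t) for t in token_ids]
        self.completed = False
        super().__init__()
        self.reset()

    def advance(self):
        # Any of these tokens would advance (and immediately complete) the constraint.
        return torch.tensor(self.token_ids)

    def does_advance(self, token_id):
        candidates = torch.as_tensor(token_id).view(-1).tolist()
        return any(t in self.token_ids for t in candidates)

    def update(self, token_id):
        stepped = self.does_advance(token_id)
        if stepped:
            self.completed = True
        return stepped, self.completed, False

    def reset(self):
        self.completed = False

    def remaining(self):
        return 0 if self.completed else 1

    def copy(self, stateful=False):
        new = AnyOfTokensConstraint(self.token_ids)
        if stateful:
            new.completed = self.completed
        return new
```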
## 2. Tests
I was unsure how to approach testing this generation function, especially since it's almost identical to the existing approaches, with just another step included that guides the generation.
| 01-23-2022 11:46:30 | 01-23-2022 11:46:30 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten Sorry for the confusion but I've closed this one and opened a new PR from a new branch with a lot more updates here #15416. Safe to leave this closed. |
transformers | 15,298 | closed | Fix the inconsistency of loss calculation between PT/TF XLNetLMHeadModel | # What does this PR do?
The loss calculation in `XLNetLMHeadModel` doesn't cut the logits/labels (unlike other models, say `BertLMHeadModel`).
However, `TFXLNetLMHeadModel` works like other TF causal LM models and cuts the logits.
This sometimes causes a loss difference higher than `4e-2`.
I believe `XLNet` works somewhat differently from the usual causal LM models, and the provided labels and computed logits shouldn't be cut, which is what `XLNetLMHeadModel` already does.
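To make the two conventions concrete (a schematic sketch with dummy tensors, not the actual modeling code):
```python
import torch
from torch.nn import CrossEntropyLoss

logits = torch.randn(2, 5, 100)          # (batch, seq_len, vocab_size)
labels = torch.randint(0, 100, (2, 5))
loss_fct = CrossEntropyLoss()

# Usual causal-LM convention: predict token t+1 from position t, so logits/labels are "cut" by one.
shift_logits = logits[:, :-1, :].contiguous()
shift_labels = labels[:, 1:].contiguous()
cut_loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))

# XLNet convention: the provided labels already line up with the computed logits, so nothing is cut.
uncut_loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
```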
This PR fixes this inconsistency. | 01-23-2022 10:26:09 | 01-23-2022 10:26:09 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Could @Rocketknight1 or @gante give this a quick look?<|||||>This looks good! Should it be folded into PR #15256 though, or do you think it's better separate?<|||||>> This looks good! Should it be folded into PR #15256 though, or do you think it's better separate?
Hi, @Rocketknight1
#15256 we still cut the logits to compute the loss, but we will return the complete logits
But for this PR, we want to **use the complete logits to compute** the loss (and then return the complete one).
(that's my understanding)
They are different, and would be better to be separate PRs. There will be a conflict once one version is merged, but I can take care of resolving it.
<|||||>Nice finding @ydshieh!
I agree with @ydshieh, these are two separate issues -- the other one is general to causal models, this one pertains to the TF/PT mismatch in XLNet. <|||||>> 🚀 (please wait for @Rocketknight1's approval before merging)
Sure, I can't merge anyway 😅<|||||>Sorry for the delay! I looked at this earlier in the week and thought it looked good, and didn't realize you were waiting on me now. I'm happy to merge at this point - @ydshieh is there anything else you want to add, or should I go ahead?<|||||>> Sorry for the delay! I looked at this earlier in the week and thought it looked good, and didn't realize you were waiting on me now. I'm happy to merge at this point - @ydshieh is there anything else you want to add, or should I go ahead?
You can go ahead :-) Thank you!<|||||>Merging then 👍
Thank you for the PR, @ydshieh! |
transformers | 15,297 | closed | Fix and improve REALM fine-tuning | # What does this PR do?
This PR
1. adds `block_embedding_to` function to `RealmForOpenQA`, which allows users to send `block_emb` variable to a specific device.
2. adds an input argument `block_mask` for the reader that replaces the previously used `token_type_ids` as the candidate mask. As a result, the reader logits at the second `[SEP]` position of the sequence will be masked out, which prevents `[SEP]` from occurring in the predicted answer spans and matches the original TF implementation.
Their difference:
```
sentence: "[CLS] Hello How are you? [SEP] Good Job [SEP]"
token_type_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1] # the last 1 is the second [SEP] token
block_mask = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]
```
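One possible way to derive `block_mask` from `token_type_ids` (a sketch that reproduces the example above; the `special_tokens_mask` values are assumptions, not the exact code in this PR):
```python
import torch

token_type_ids = torch.tensor([[0, 0, 0, 0, 0, 0, 0, 1, 1, 1]])
special_tokens_mask = torch.tensor([[1, 0, 0, 0, 0, 0, 1, 0, 0, 1]])  # [CLS] and the two [SEP]s

# Keep candidate-block tokens but drop special tokens such as the trailing [SEP].
block_mask = token_type_ids * (1 - special_tokens_mask)
print(block_mask)  # tensor([[0, 0, 0, 0, 0, 0, 0, 1, 1, 0]])
```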
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
| 01-23-2022 09:52:43 | 01-23-2022 09:52:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15297). All of your documentation changes will be reflected on that endpoint.<|||||>@patrickvonplaten - The inference time of NQ evaluation dataset is `8:25` when `block_emb` is on GPU, while the inference time is `21:07` when `block_emb` is on CPU, which is roughly 3 times slower than on GPU. The test was performed on single RTX 2080Ti GPU.
Do you think we should provide an option to let users decide whether to send `block_emb` to GPU? Considering that much slower inference speed...<|||||>> evaluation
That is indeed a pretty big slow down...given such inference times I think we should have `block_emb` by default on the same device than the other parameters (so we should not overwrite the `.to(...)` method I think). I would be happy with adding a `def block_embedding_to(device)` method that lets someone easily move only `block_emb` to CPU/GPU - what do you think?<|||||>Considering most users would expect the model can be fine-tuned on single gpu with 12GiB memory as the paper states, I'm a bit worried that if we by default have `block_emb` on the same device (let's say a 2080Ti) as other parameters, when users call `block_embedding_to` to send it back to CPU, the cuda memory occupied by `block_emb` might not be able to be released as expected.
To my knowledge, `torch.Tensor.to` will have the original tensor untouched and create a new tensor on the new device (so I think it's more like a copy rather than a transfer), and the way I know to release the memory occupied by the original tensor is delete the tensor until it is cleared by Python GC or by manually calling `torch.cuda.empty_cache()`. However, because I'm not very familiar with how PyTorch cuda context works, and I've seen in some circumstances that even though the tensor object was deleted and manually cleared, occupied spaces are still not be freed up, I don't have 100% confidence that the model can be trained on the GPU with 12GiB memory normally.
So to sum up, I think the most assured approach would be overriding `model.to` and adding a `def block_embedding_to(device)` as you suggest, this way users can choose whether they want to move `block_emb` to the GPU if they have one with big enough memory.
Does this make sense to you?<|||||>@patrickvonplaten - I noticed that the `block_records.npy` file has not been uploaded to model repos having an `openqa` suffix. Could you help with this so that people are able to fine-tune the model? Thanks.<|||||>>
Hey @qqaatw,
Is it ok if we first move the repos to the official Google org and then add the indices? This should be a bit easier :-)
<|||||>@patrickvonplaten Sure, but can I keep these checkpoints also on my namespace for testing (I can set private for them)? I will not be able to update checkpoints within Google org namespace right?<|||||>Hi @patrickvonplaten, do the above comments make sense to you? or I can just follow your guide. Sorry for the late update. |
transformers | 15,296 | closed | Question: how to evaluate word similarity with context? | ## ❓ Questions & Help
Hello,
I am trying to evaluate **word similarity** (particularly named entities).
Example:
`I flew to Paris last month.`
`I met a famous celebrity, Paris Hilton.`
Most named entity recognition models (e.g. the default `pipeline('ner')`) will return `Paris` and `Paris Hilton` as named entities.
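For example, something like this (using `aggregation_strategy="simple"` to group word pieces; the exact model and scores will vary):
```python
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")

for sentence in ["I flew to Paris last month.", "I met a famous celebrity, Paris Hilton."]:
    entities = ner(sentence)
    print(sentence, "->", [(e["word"], e["entity_group"]) for e in entities])
```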
They look similar, but they represent different entities. My question is:
### How can I determine that two hyponymic words/phrases are different (from context)?
I tried this solution:
https://discuss.huggingface.co/t/generate-raw-word-embeddings-using-transformer-models-like-bert-for-downstream-process/2958/2
and then compared with cosine similarity, but as mentioned in this issue https://github.com/huggingface/transformers/issues/2298, cosine similarity does not work well with BERT (and similar models) embeddings.
I also looked into the topic of WSD (Word Sense Disambiguation), but these solutions work mostly for "standard" ambiguous words (like `baseball bat` vs. `cave bat`), while named entities are usually _special_ and are NOT present in a dictionary.
Any help or suggestion will be more than welcome! 🤗 | 01-23-2022 09:14:31 | 01-23-2022 09:14:31 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) as well?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,295 | open | feat(flax): leave restored weights on CPU | # What does this PR do?
When restoring a flax model with `.from_pretrained()`, leave the weights on CPU.
The removed section was linked to issue https://github.com/google/flax/issues/1261 which is now closed.
When calling `jnp.array()`, the tensors are converted from numpy arrays and placed on default device (typically GPU or TPU), which can cause issues when loading very large models that don't fit on one single instance.
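For illustration (assuming a GPU/TPU backend is present; this is not the PR's code, just the device-placement difference being described):
```python
import jax
import jax.numpy as jnp
import numpy as np

weights = np.zeros((1024, 1024), dtype=np.float32)

# jnp.array commits the data to the default backend (GPU/TPU when available)...
on_default_device = jnp.array(weights)

# ...whereas an explicit device_put keeps it in host memory.
on_cpu = jax.device_put(weights, jax.devices("cpu")[0])
```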
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patil-suraj @patrickvonplaten
| 01-23-2022 01:55:05 | 01-23-2022 01:55:05 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15295). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks @borisdayma,
This looks good to me! @patil-suraj - could you take a look? :-)<|||||>Yeah maybe we can have a `load_on_cpu` argument somewhere.
Actually for training large models my approach is currently the following:
* force loading the model on CPU (whether pre-trained or not)
* overwrite `model._params` with their frozen version so they are compatible with `pjit` (and 2 copies would be excessive)
* create `params` from `model._params` by moving them to the correct devices (potentially with model parallelism)
* remove `model._params` to save space
I do something similar when restoring my optimizer state.
Maybe I could create a separate issue to discuss the proper strategy.
I'm thinking that loading a model should return both model and weights separately and follow JAX stateless approach…<|||||>What do you think of this approach: https://github.com/huggingface/transformers/compare/master...borisdayma:flax-cpu2
I could add this in this PR if you like it.
Basically following @patil-suraj experimental branch:
* we have an argument `abstract_init` so we don't initialize the parameters (mainly for the case where we will load a pretrained model)
* we have an argument `load_on_cpu` so we can pjit the model later if desired
This is the approach I'm currently using.<|||||>A few notes on `load_on_cpu`:
* it is based on [this comment](https://github.com/google/flax/discussions/1690#discussioncomment-1715766)
* it works but is a bit slow due to compilation<|||||>I took some time to think about it and I think we kind of have two problems here:
1) User should be able to force weights to be initialized on CPU. I fully agree that we should allow this especially since large model training is one of the main goals of Flax in Transformers. This applies to both **pretraining from scratch** and **fine-tuning**
2) We should not create allocate memory for weights that are loaded anyways when doing fine-tuning. This only applies to **fine-tuning**.
To tackle 1) I think the best solution is actually to add not initialize the weights at all with a flag `do_init_weights` which we'll default to `True`, but that users can disable. I think in general the current design is not really in the spirit of JAX anyways as usually the user has full control over how to initialize the weights and allocating memory is normally not done when just "configuring the model" in JAX. So it's not really intuitive for JAX users that `bert = FlaxBertModel(config)` allocates memory for random weights anyways I think. However we had to also align Flax with PyTorch in the library which is why the current design makes sense from a Transformers point of view. I think the best solution here would be to allow the following:
```python
bert = FlaxBertModel(config, init_weights=False) # it should be possible to infer the expected shapes here without allocating any memory no?
random_weights = jax.jit(bert.init_weights, static_argnums=(1,), backend="cpu")(input_shape)
bert.params = random_weights # this setter method should verify that the params have the correct shape
```
=> this design should work no? Also for `from_pretrained(...)` no? Also there is a function in JAX that allows one to infer the expected shapes with 0 Flops and without allocating memory no so that we can get the expected shapes.
2) For 2) I think all Flax models would benefit from a mechanism that only inits the weights we actually need to init. I think we already talked about this, @patil-suraj, no? Not sure how feasible / easy this is, but I'm sure there is a way (worst case we add `init_head(...)` functions to all heads that are usually not included in the pretrained weights).
Keen to hear your opinions here @borisdayma @patil-suraj <|||||>> there is a function in JAX that allows one to infer the expected shapes with 0 Flops and without allocating memory no so that we can get the expected shapes.
Yes, `jax.eval_shape(fn, *fn_args…)` which will return the output with only shape and dtype.
What is cool is the inputs only need shape and dtype as well.
When using it we may not need to use `jax.jit(…, backend='cpu')` which I believe may add extra time through compilation (To be confirmed) so it's really just when training from scratch.
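For reference, a tiny example (the init function here is just a stand-in, not an actual model):
```python
import jax
import jax.numpy as jnp


def init_params(rng):
    # stand-in for a model's init function
    return {"dense": {"kernel": jax.random.normal(rng, (768, 3072))}}


# The inputs only need a shape and a dtype; no FLOPs are spent and nothing is allocated.
rng_spec = jax.ShapeDtypeStruct((2,), jnp.uint32)
param_shapes = jax.eval_shape(init_params, rng_spec)
print(param_shapes)  # {'dense': {'kernel': ShapeDtypeStruct(shape=(768, 3072), dtype=float32)}}
```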
> We should not allocate memory for weights that are loaded anyway when doing fine-tuning. This only applies to **fine-tuning**.
Solved with `jax.eval_shape()` which returns you the required structure to use `from_bytes()` later.
> I think in general the current design is not really in the spirit of JAX anyways as usually the user has full control over how to initialize the weights and allocating memory is normally not done when just "configuring the model" in JAX.
I was thinking that we could have the params separated from the model (in JAX spirit). I would for example do this:
```python
# load model & params
model, params = bert.from_pretrained(model_name, load_on_cpu=True/False, init_weigths=True/False)
# create state (loaded on TPU)
state = TrainState.create(model, params, optimizer)
# create state (sharded)
pjit_create = pjit(TrainState.create, in_axis_resources=…, out_axis_resources=…, donate_argnums=(…))
state = pjit_create(model, params, optimizer)
```
We could even directly load a sharded model and not go through the CPU loading with something like:
```
pjit_from_pretrained = pjit(bert.from_pretrained…, static_argnums=...)
```
I would need to test but the most efficient way is probably to pjit as a single function model loading + state creation and have params only within the state.
Separating params and model gives more control because sometimes we need `freeze(params)` with sharded spec so then you could have 2 copies of the params and I often end up doing hacky things like:
```
# use frozen dict instead of regular dict
model._params = freeze(model.params)
# create state
state = TrainState.create(model._params)
# save RAM
del model._params
```
I think the current approach (using a non-frozen dict + adding params to the model) is due to making it more accessible to PyTorch users, but as you scale up, it may add complexity to the JAX logic. |
transformers | 15,294 | closed | Fix loss calculation in TFXXXForTokenClassification models | # What does this PR do?
The current loss calculation in `TFFunnelForTokenClassification` (and other TF token classification models) doesn't align with the loss in `FunnelForTokenClassification`
TF
https://github.com/huggingface/transformers/blob/6ac77534bfe97c00e0127bb4fc846ae0faf1c9c5/src/transformers/models/funnel/modeling_tf_funnel.py#L1709
PT
https://github.com/huggingface/transformers/blob/6ac77534bfe97c00e0127bb4fc846ae0faf1c9c5/src/transformers/models/funnel/modeling_funnel.py#L1470-L1481
(which further uses the `attention_mask`).
This PR aims to fix this. Currently only `TFFunnelForTokenClassification` is fixed. I would like to have some feedback first before fixing all involved models.
## More information 1
Check the loss difference between PT/TF version 300 times,
For `(TF)BertForTokenClassification`
```
without this PR:
max_diff: 0.022978603839874268
mean_diff: 0.006492746472358704
with this PR (applied locally):
max_diff: 1.7881393432617188e-07
mean_diff: 3.3974647521972653e-08
```
For `(TF)FunnelForTokenClassification`
```
without this PR:
max_diff: 0.2821081280708313
mean_diff: 0.05486685564120611
with this PR:
max_diff: 3.5762786865234375e-07
mean_diff: 6.020069122314453e-08
```
## More information 2
The current transformers version doesn't have a PT-TF test that checks equivalence in the cases where `labels` is passed to the models. In PR #15256, such a test is introduced (to ensure more PT/TF equivalence). The inconsistency of the loss calculation in the PT/TF `XXXForTokenClassification` models requires an error tolerance no smaller than `0.2` in order to pass the test (and even that doesn't guarantee passing).
https://github.com/huggingface/transformers/blob/6134cc69527aa9df9a4f2bc9e545222456a34524/tests/test_modeling_tf_common.py#L413-L441
It would be better to address this inconsistency, and lower that threshold once this PR is merged. | 01-22-2022 15:35:59 | 01-22-2022 15:35:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> I think in this instance it's the PyTorch model that is incorrect, and it should use the -100s in the labels instead of the attention mask to determine the logits/labels to ignore.
I have the same thought on the role/responsibility of `labels`, but was too afraid to change the PyTorch code.
I can do it the other way, but the same situation happens for `BertForTokenClassification`.
It would be great if HF can confirm this change (for PyTorch models) is welcomed.
<|||||>This is a breaking change in case the attention mask does not align with the `-100` labels, but I think that's okay.
The token classification models are the only models that have this feature, which (AFAICT) isn't mentioned in the docs anywhere, so imo it isn't a fully supported feature and it's fine to remove it.
Would also appreciate @patrickvonplaten's comment.<|||||>There are also `FunnelForPreTraining` and `ElectraForPreTraining` which use attention_mask
```
if attention_mask is not None:
active_loss = attention_mask.view(-1, discriminator_sequence_output.shape[1]) == 1
active_logits = logits.view(-1, discriminator_sequence_output.shape[1])[active_loss]
active_labels = labels[active_loss]
loss = loss_fct(active_logits, active_labels.float())
```
(I saw it during the search for the last commit)
`BertForPreTraining` doesn't use it for loss in the code
```
total_loss = None
if labels is not None and next_sentence_label is not None:
loss_fct = CrossEntropyLoss()
masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
next_sentence_loss = loss_fct(seq_relationship_score.view(-1, 2), next_sentence_label.view(-1))
total_loss = masked_lm_loss + next_sentence_loss
```
<|||||>Before merging, would it be better to add a test? There is a more general test in #15256 (not merged yet, see below), which will fail (due to other PT/TF differences).
I can add that test in this PR, but restrict to `...ForTokenClassification` models here. The test can be made more general along the way of fixing other PT/TF differences.
https://github.com/huggingface/transformers/blob/47a5d26b58afbfae2366328fe4242de9bb662fcb/tests/test_modeling_tf_common.py#L414-L461<|||||>Well, `FunnelForPreTraining` and `ElectraForPreTraining` use binary classification for pretraining objective.
```
- 0 indicates the token is an original token,
- 1 indicates the token was replaced.
```
and `nn.BCEWithLogitsLoss` doesn't support `ignore_index`.
The best one can do is probably just to make `TFFunnelForPreTraining` and `TFElectraForPreTraining` return a loss (currently, they don't compute one)
(or open a PR in PyTorch repo to modify it ...😲)<|||||>Also cc @patil-suraj <|||||>I think what @sgugger said makes sense, https://github.com/huggingface/transformers/pull/15294#discussion_r794484953, and considering https://github.com/huggingface/transformers/pull/15294#discussion_r794507876, I think I won't have any action for now until some more feedback from HF side (once you get sometime :-) )<|||||>Reverted the change to research projects / Tests good / Ready to go :-) / Thanks!<|||||>Thank *you* for your contribution :-) |
transformers | 15,293 | closed | remove references to PDF reading via PIL | As stated in the PIL documentation
[here](https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html?highlight=pdf#write-only-formats),
it can only write PDFs, not read them. Remove references to reading
PDFs via PIL from this page to avoid confusion.
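A common workaround is to rasterize the PDF into images first and only then work with PIL, e.g. (assuming the third-party `pdf2image` package and poppler are installed):
```python
from pdf2image import convert_from_path

pages = convert_from_path("document.pdf", dpi=200)  # returns a list of PIL.Image.Image objects
pages[0].save("page_1.png")
```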
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
| 01-22-2022 14:17:05 | 01-22-2022 14:17:05 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15293). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Not stale, just needs a quick review and a merge.<|||||>No problem, Niels. I think the wording you suggested is still a bit confusing, but since you want to mention PDF I'll send back a counter-edit :) <|||||>@NielsRogge I have built on your comment, making it explicit that PDFs have to be converted to images first.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Looks like I need to come back and style the file I changed.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge I fixed the line lengths for the lines that we actually changed. The line length test is still failing, but it's because there are other lines in the file that are longer than 119 characters. |
transformers | 15,292 | closed | Padding idx in modeling RoBERTa | There's a padding_idx for positional embeddings in RoBERTa
https://github.com/huggingface/transformers/blob/master/src/transformers/models/roberta/modeling_roberta.py#L98
```
self.position_embeddings = nn.Embedding(
config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx
)
```
This is most definitely a mistake:
First of all, why do you need a padding index for positional embeddings?
And second of all, the padding index is taken from the vocab, so if you have a padding token at the end of your vocabulary, this will fail with
`AssertionError: Padding_idx must be within num_embeddings` | 01-22-2022 08:44:03 | 01-22-2022 08:44:03 | Hi,
Not sure what you mean. If we provide a sentence like "hello world" to RoBERTa that we pad up to a certain length:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
text = "hello world"
encoding = tokenizer(text, padding="max_length", max_length=10, return_tensors="pt")
print(tokenizer.decode(encoding.input_ids.squeeze())
```
Then it will look like:
```
<s>hello world</s><pad><pad><pad><pad><pad><pad>
```
The position_ids are created internally by the model using the `create_position_ids_from_input_ids` function as seen [here](https://github.com/huggingface/transformers/blob/6ac77534bfe97c00e0127bb4fc846ae0faf1c9c5/src/transformers/models/roberta/modeling_roberta.py#L108). Let's take a closer look by checking out the position IDs created:
```
from transformers.models.roberta.modeling_roberta import create_position_ids_from_input_ids
position_ids = create_position_ids_from_input_ids(encoding.input_ids, padding_idx=tokenizer.pad_token_id)
print(position_ids)
```
This prints:
```
tensor([[2, 3, 4, 5, 1, 1, 1, 1, 1, 1]])
```
So as we can see, the position IDs of padding tokens are set to 1. This is also the `padding_idx` of the position embedding layer. The `padding_idx` is explained in the [docs](https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html) of PyTorch's `nn.Embedding`:
> padding_idx (int, optional) – If specified, the entries at padding_idx do not contribute to the gradient; therefore, the embedding vector at padding_idx is not updated during training, i.e. it remains as a fixed “pad”. For a newly constructed Embedding, the embedding vector at padding_idx will default to all zeros, but can be updated to another value to be used as the padding vector.
Hence, for the padding tokens, no position vector will be learned.
<|||||>Don't we zero out gradients for paddings using masks?
And also, if you are right, there is still the problem that the padding token has to be synchronized between the word embeddings and the position embeddings.
As i'm saying, if you learn a tokenizer and then add special tokens in the end, this code breaks <|||||>My biggest concern is:
Here we use a padding index id from our word token vocab
```
self.padding_idx = config.pad_token_id
self.position_embeddings = nn.Embedding(
config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx
)
```
I really don't understand why it has to be this way, especially if the padding id is bigger than `max_position_embeddings`.
I also guess i have to mention that i'm talking about a custom RoBERTa model<|||||>Bump
I'm still struggling with this problem :( <|||||>Please refer to the [forum](https://discuss.huggingface.co/) for training-related questions, as we'd like to keep Github issues for bugs/feature requests.
To answer your question, if you have special tokens defined, then the position embeddings will be created properly by the `create_position_ids_from_input_ids` function. No position embedding vector will be learned for padding tokens.
<|||||>Sorry if i'm being too persistent, but i'm fairly certain that it's a bug. Let me bring everything in one comment, so it will draw a full picture. I can try and make a minimal breaking example, but it's fairly complicated, to be honest.
So imagine we have a RoBERTa tokenizer with vocab size of 30000 and maximum length of 512. We trained the tokenizer agnostic to the model, so we add necessary special tokens in the end of the vocab, so our vocab now has tokens
```
{
"<pad>": 30000,
"<s>": 30001,
"<\s>": 30002,
"<mask>": 30003
}
```
I might've forgot something, but it doesn't matter in this example.
Now we initialize RobertaModel and RobertaEmbeddings inside of it
https://github.com/huggingface/transformers/blob/master/src/transformers/models/roberta/modeling_roberta.py#L70
We get padding token from our vocab and it's index is 30000
https://github.com/huggingface/transformers/blob/master/src/transformers/models/roberta/modeling_roberta.py#L97
` self.padding_idx = config.pad_token_id`
And then we define position embeddings using this index like so:
https://github.com/huggingface/transformers/blob/master/src/transformers/models/roberta/modeling_roberta.py#L98
```
self.position_embeddings = nn.Embedding(
config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx
)
```
Which turns into this line, if we consider our parameters:
```
self.position_embeddings = nn.Embedding(
512, 768, padding_idx=30000
)
```
Which is going to give us an error from torch` AssertionError: Padding_idx must be within num_embeddings` because we tried to set the index of padding token as the index of it in the word vocab. This index is out of scope for positional embeddings.
<|||||>Hey @AlexUmnov, thanks for opening an issue! Jumping in here to let you know that we aim to have implementations that are as similar as possible to the original implementation.
Here, for RoBERTa, we followed Fairseq's implementation in order to obtain as close as possible the same results. I invite you to follow their implementation and see that they use the `paddind_idx` in the same manner that we do:
https://github.com/pytorch/fairseq/blob/fcca32258c8e8bcc9f9890bf4714fa2f96b6b3e1/fairseq/models/transformer/transformer_encoder.py#L67-L76
-->
https://github.com/pytorch/fairseq/blob/fcca32258c8e8bcc9f9890bf4714fa2f96b6b3e1/fairseq/modules/positional_embedding.py#L25
-->
https://github.com/pytorch/fairseq/blob/fcca32258c8e8bcc9f9890bf4714fa2f96b6b3e1/fairseq/modules/learned_positional_embedding.py#L53-L61
If you find there is an issue with this implementation, we invite you to open an issue in Fairseq's repository. Thanks for your understanding.<|||||>Hey @LysandreJik, thanks for explaining that!
Yeah, it seems to me like the original design is flawed.
But, being honest, it seems like i'd be better of just fixing my own model to fit in those weird constraints.<|||||>@AlexUmnov did you finally solve your issue?
I came across the same error. In my case, I got the IndexError because of the function:
```python
def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
    return incremental_indices.long() + padding_idx
```
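Plugging numbers into that function makes the offset visible (a small standalone check, with the function copied from above):
```python
import torch


def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
    return incremental_indices.long() + padding_idx


# With the default padding_idx=1, a fully non-padded sequence of length 6 gets positions 2..7,
# i.e. the largest position id used is padding_idx + seq_len.
print(create_position_ids_from_input_ids(torch.tensor([[10, 11, 12, 13, 14, 15]]), padding_idx=1))
# tensor([[2, 3, 4, 5, 6, 7]])

# With a large padding_idx (e.g. 30000 as in the vocabulary discussed above), the position ids
# immediately exceed max_position_embeddings.
print(create_position_ids_from_input_ids(torch.tensor([[10, 11, 12]]), padding_idx=30000))
# tensor([[30001, 30002, 30003]])
```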
So, when you have a sequence without any padding, this function will return an array of 2:512 positions which is out of range. <|||||>@symelaz unfortunately it was more relevant for my project to just abandon that. <|||||>Umm, I think one easy way is to remove the padding setting for position embedding.
I believe that the padding position itself in position embedding is learned by word embedding.
Compared to the original implementation, I think the learning load of padding in position embedding will increase, but this is semantically the same as `create_position_ids_from_inputs_embeds`, and it feels easier than creating data that always contains PAD.
- `create_position_ids_from_inputs_embeds`
https://github.com/huggingface/transformers/blob/39b4aba54d349f35e2f0bd4addbe21847d037e9e/src/transformers/models/roberta/modeling_roberta.py#L139
- image of fix points (Not a little smart...)
- Creating position embeddings module
https://github.com/huggingface/transformers/blob/39b4aba54d349f35e2f0bd4addbe21847d037e9e/src/transformers/models/roberta/modeling_roberta.py#L95
```python
self.position_embeddings = nn.Embedding(
config.max_position_embeddings, config.hidden_size
)
```
- Creating position ids
https://github.com/huggingface/transformers/blob/39b4aba54d349f35e2f0bd4addbe21847d037e9e/src/transformers/models/roberta/modeling_roberta.py#L1562
```python
mask = input_ids.ne(padding_idx).int()
incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length)
return incremental_indices.long() - 1
```<|||||>Just wanted to point out I faced the same issue. The current design is non-sensical. It can only result in bugs. If your padding token is the token 3 instead of 1, your model can only accept 510 tokens, and your position embeddings start at 3 instead of 1. |
transformers | 15,291 | closed | [Fix doc example] fix missing import jnp | # What does this PR do?
Some doc examples in Flax models miss `import jax.numpy as jnp` (including the flax template).
This PR fixes it. | 01-22-2022 07:28:10 | 01-22-2022 07:28:10 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,290 | closed | Update CONTRIBUTING.md | Fix typo in doc
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger @LysandreJik | 01-22-2022 06:37:53 | 01-22-2022 06:37:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,289 | closed | TrOCR small processors are all broken | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.15.0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART: @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projects, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using TrOCR
The problem arises when using:
* [ Y] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the example at https://huggingface.co/microsoft/trocr-small-handwritten#how-to-use
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Code:
```
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
import requests
# load image from the IAM database
url = 'https://fki.tic.heia-fr.ch/static/img/a01-122-02-00.jpg'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
processor = TrOCRProcessor.from_pretrained('microsoft/trocr-small-handwritten')
model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-small-handwritten')
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
Error trace:
```
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'XLMRobertaTokenizer'.
The class this function is called from is 'RobertaTokenizer'.
Traceback (most recent call last):
File "/Users/samuel.warren/development/signature_detection/src/trocr_issue.py", line 9, in <module>
processor = TrOCRProcessor.from_pretrained('microsoft/trocr-small-handwritten')
File "/usr/local/Caskroom/miniconda/base/envs/sd39/lib/python3.9/site-packages/transformers/models/trocr/processing_trocr.py", line 109, in from_pretrained
tokenizer = RobertaTokenizer.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/usr/local/Caskroom/miniconda/base/envs/sd39/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1747, in from_pretrained
return cls._from_pretrained(
File "/usr/local/Caskroom/miniconda/base/envs/sd39/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1882, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/usr/local/Caskroom/miniconda/base/envs/sd39/lib/python3.9/site-packages/transformers/models/roberta/tokenization_roberta.py", line 166, in __init__
super().__init__(
File "/usr/local/Caskroom/miniconda/base/envs/sd39/lib/python3.9/site-packages/transformers/models/gpt2/tokenization_gpt2.py", line 180, in __init__
with open(vocab_file, encoding="utf-8") as vocab_handle:
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I would expect it to run and output the text. | 01-21-2022 22:47:08 | 01-21-2022 22:47:08 | See Issue #14893. You'll have to pip install from the GitHub source: `pip install transformers git+https://github.com/huggingface/transformers.git` to get working small trocr transformers. <|||||>working now, thank you!<|||||>Closing, since this has been fixed (the small TrOCR models are only supported since the PR linked above, which will be included in the next release). |
transformers | 15,288 | closed | Update model share tutorial | First draft of the updated model sharing tutorial. Main updates include:
- Suggest converting a model to other frameworks before uploading it to the Hub.
- Advocate for `push_to_hub` as best practice for uploading files to the Hub. Most notably, the `transformers-cli` method has been removed. Would love your thoughts on whether we should include using the web interface to share models so this step is more accessible to users from a wider range of backgrounds! | 01-21-2022 22:10:38 | 01-21-2022 22:10:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>yes i would advocate to include using the web interface to share models
transformers | 15,287 | closed | Avoid using get_list_of_files | # What does this PR do?
This PR removes all uses of `get_list_of_files` which uses a non-optimized call on the Hub, to only rely on the API to get files (which is better optimized). It should make the general `from_pretrained` APIs much more stable (cc @julien-c ).
Since refactoring this code often comes down to just opening a config, this PR adds a new utility in file_utils that downloads and caches a given file from a repository and returns the resolved path, failing only if the repository or the revision does not exist. When the file itself does not exist, the function returns None, which makes it easy to test whether a given file (like a tokenizer config) is present or not, without relying on `get_list_of_files`.
The other case where this function was used is when we have a versioned config/tokenizer config in a given repo. In this instance, it will now be necessary to include the list of available configs/tokenizers in the config/tokenizer config so that we don't have to call `get_list_of_files`. There is only one repo in the wild (tested every one) that used this API (which is the last recourse to fix some backward compatibility issues), [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base), and I have manually fixed its config (the whole change is backward compatible in the sense that this repo will still work with older versions of Transformers). | 01-21-2022 21:17:28 | 01-21-2022 21:17:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>We can definitely port it in a PR to `huggingface_hub`. I would leave it in this PR for now and wait for a release on `huggingface_hub` to replace it by the one there. |
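For context, a hypothetical usage sketch of the helper described in the PR body above; the function name and import path here are assumptions for illustration, not necessarily what was merged.
```python
# Assumed helper name/location -- adjust to whatever this PR actually exposes.
from transformers.file_utils import get_file_from_repo

resolved_path = get_file_from_repo("bert-base-cased", "tokenizer_config.json")
if resolved_path is None:
    # The repo and revision exist, but this particular file is absent.
    print("This checkpoint has no tokenizer_config.json")
else:
    print("Cached locally at:", resolved_path)
```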
transformers | 15,286 | closed | Fix a typo in tag addition | # What does this PR do?
This PR fixes a typo in the tag addition for the Keras Callback. | 01-21-2022 19:48:56 | 01-21-2022 19:48:56 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,285 | closed | fix: Vocabulary build issue in run_speech_recognition.py | # What does this PR do?
This PR fixes the issue of vocabulary building in run_speech_recognition_ctc.py (https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L332).
While building the vocabulary, we replace the blank token " " with the pipe symbol ("|") as the word delimiter. But if the dataset already contains the pipe symbol, which is common for Hindi because Hindi uses the similar character "।" as a full stop, we end up with two pipe symbols in the vocabulary. This breaks the next step of building the dictionary, which is adding special tokens: the bos token "\<s\>" and the pad token "[PAD]" get the same index in the vocabulary, so the index meant for [PAD] is taken over by \<s\> and the output of the Wav2Vec2ForCTC model ends up containing the "\<s\>" token.
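A small self-contained sketch (simplified, not the actual script code) of how the collision starts when the text already contains "|"; in the full script this cascade is what ends up giving "\<s\>" and "[PAD]" the same index:
```python
text = "अब|क "  # Hindi transcript that already contains the "|" character
vocab_dict = {ch: i for i, ch in enumerate(sorted(set(text)))}

# The script swaps the blank " " for the word delimiter "|":
vocab_dict["|"] = vocab_dict[" "]  # overwrites the index "|" already had
del vocab_dict[" "]

# One index is now orphaned, so ids assigned via len(vocab_dict) start colliding.
vocab_dict["[UNK]"] = len(vocab_dict)
vocab_dict["[PAD]"] = len(vocab_dict)
ids = list(vocab_dict.values())
print(len(ids) != len(set(ids)))  # True -> two tokens now share the same id
```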
<!-- Remove if not applicable -->
Fixes #15275
## Who can review?
@patrickvonplaten , @anton-l , @Narsil | 01-21-2022 19:34:01 | 01-21-2022 19:34:01 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15285). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @shivamm7,
Thanks for your issue - I can see the problem. However, instead of deleting `"|"` could you maybe just overwrite:
```
--word_delimiter_token
```
with a token of your choice? E.g. you could do:
```bash
run_speech_recognition_ctc.py \
--word_delimiter_token "&"
```
in this case `"&" would be used as the word delimiter token<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,284 | closed | RagSequenceForGeneration without retriever | Hi Hugging Face Team.
First of all thank you for your work, I am a fan of Transformers. 😉
I'm opening this issue because I'm having trouble using the RagSequenceForGeneration model, which I'm particularly interested in. I do not want to use the default retriever but rather apply the generator to an input list of documents (retrieved using Elasticsearch). I have tried to dig into the code without success. I believe that the parameters are not correctly passed within the generator, though I may have been mistaken about the use of the model API. I am looking for your help to generate text without using the retriever.
## Environment info
- `transformers` version: 4.16.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
@patrickvonplaten, @narsil
Models:
- RAG generator
## Information
I'm using RagSequenceForGeneration and RagTokenForGeneration.
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import RagTokenizer, RagTokenForGeneration, RagSequenceForGeneration
import torch
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
model = RagSequenceForGeneration.from_pretrained(
"facebook/rag-sequence-nq", retriever=None)
q = "What is the capital of France ?"
documents = [
"Paris is the capital of France",
"Madrid is the capital of Spain",
"Hugging Face rocks :)",
]
with tokenizer.as_target_tokenizer():
query = tokenizer(q, return_tensors="pt")
target = tokenizer(
text = documents,
return_tensors = "pt",
padding = "longest"
)
doc_scores = torch.tensor([[0.4], [0.3], [0.3]])
generated = model.generate(
input_ids = query["input_ids"],
attention_mask = query["attention_mask"],
context_input_ids = target["input_ids"],
context_attention_mask = target["attention_mask"],
doc_scores = doc_scores,
n_docs = 1,
)
tokenizer.batch_decode(
generated,
skip_special_tokens = False,
clean_up_tokenization_spaces = True,
)
```
```python
AssertionError Traceback (most recent call last)
<ipython-input-3-23d39672d326> in <module>
28 doc_scores = torch.tensor([[0.4], [0.3], [0.3]])
29
---> 30 generated = model.generate(
31 input_ids = query["input_ids"],
32 attention_mask = query["attention_mask"],
~/opt/miniconda3/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
24 def decorate_context(*args, **kwargs):
25 with self.__class__():
---> 26 return func(*args, **kwargs)
27 return cast(F, decorate_context)
28
~/opt/miniconda3/lib/python3.8/site-packages/transformers/models/rag/modeling_rag.py in generate(self, input_ids, attention_mask, context_input_ids, context_attention_mask, doc_scores, do_deduplication, num_return_sequences, num_beams, n_docs, **model_kwargs)
1022 if input_ids is not None:
1023 new_input_ids = input_ids[index : index + 1].repeat(num_candidates, 1)
-> 1024 outputs = self(new_input_ids, labels=output_sequences, exclude_bos_score=True)
1025 else: # input_ids is None, need context_input_ids/mask and doc_scores
1026 assert (
~/opt/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/opt/miniconda3/lib/python3.8/site-packages/transformers/models/rag/modeling_rag.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, past_key_values, context_input_ids, context_attention_mask, doc_scores, use_cache, output_attentions, output_hidden_states, output_retrieved, exclude_bos_score, reduce_loss, labels, n_docs, **kwargs)
846 use_cache = False
847
--> 848 outputs = self.rag(
849 input_ids=input_ids,
850 attention_mask=attention_mask,
~/opt/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/opt/miniconda3/lib/python3.8/site-packages/transformers/models/rag/modeling_rag.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, past_key_values, doc_scores, context_input_ids, context_attention_mask, use_cache, output_attentions, output_hidden_states, output_retrieved, n_docs)
656 ).squeeze(1)
657 else:
--> 658 assert (
659 context_input_ids is not None
660 ), "Make sure that `context_input_ids` are passed, if no `retriever` is set. Alternatively, you can set a retriever using the `set_retriever(...)` function."
AssertionError: Make sure that `context_input_ids` are passed, if no `retriever` is set. Alternatively, you can set a retriever using the `set_retriever(...)` function.
```
## Expected behavior
```python
# q = "What is the capital of France ?"
["Paris"]
```
| 01-21-2022 18:59:52 | 01-21-2022 18:59:52 | Hi @raphaelsty I am definitely not familiar with Rag.
Looking briefly at the stack, there is indeed no `context_input_ids` passed to the call at line 1024. I tried to manually add them just to see, and other errors pop up (linked to tensor dimensions).
Since I am not super familiar I can't say if we shouldn't end in that branch, or if the code has indeed a bug.
Do you mind sharing where you have adapted your code from ?
Pinging @lhoestq, who is also responsible for Rag apparently.
<|||||>I can take a look at it :-)<|||||>Hi @Narsil, @patrickvonplaten, thank you for your quick response,
I have just managed to use the RagTokenForGeneration without the retriever.
I had not correctly formed the `context_input_ids` and `context_attention_mask` fields in my previous post. The code below works correctly for me.
I relied on the [documentation](https://huggingface.co/docs/transformers/model_doc/rag#transformers.TFRagTokenForGeneration.generate.doc_scores) to write the code but I have found an example of RAG without the retriever in [Haystack](https://github.com/deepset-ai/haystack/blob/c6f23dce8897ab00fcb15e272282d459dcfa564a/haystack/nodes/answer_generator/transformers.py#L151).
```python
from transformers import RagTokenizer, RagTokenForGeneration, RagSequenceForGeneration, RagRetriever
import torch
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
model = RagTokenForGeneration.from_pretrained(
"facebook/rag-token-nq", retriever=None)
num_beams = 10
k = 2
q = "What is the capital of France ?"
documents = [
"The capital of germany is Berlin",
"The capital of France is Paris",
]
q_documents = [
f"{document} {model.config.doc_sep} {q}" for document in documents
]
context = tokenizer.generator.batch_encode_plus(
q_documents,
max_length= model.config.max_combined_length,
return_tensors="pt",
padding="max_length",
truncation=True,
)
doc_scores = torch.tensor([[0.9, 0.1]], dtype=torch.float)
# Get generated ids from generator
generator_ids = model.generate(
context_input_ids = context["input_ids"],
context_attention_mask = context["attention_mask"],
doc_scores = doc_scores,
num_return_sequences=k,
num_beams=min(num_beams, k),
max_length=40,
min_length=1,
n_docs=len(documents)
)
tokenizer.batch_decode(generator_ids, skip_special_tokens=True)
[' paris', " `` paris ''"]
```
I think we can close this issue :)
<|||||>Thanks a lot for posting a sample for future readers |
transformers | 15,283 | closed | Saved slow tokenizers cannot be loaded in `AutoTokenizer` after environment change | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.16.0.dev0
- Platform: Linux-5.16.1-arch1-1-x86_64-with-arch
- Python version: 3.6.15
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.3.5 (cpu)
- Jax version: 0.2.17
- JaxLib version: 0.1.69
### Who can help
@SaulLu, @LysandreJik
## Information
After saving a slow tokenizer locally, this tokenizer cannot be used with `AutoTokenizer` after changing environments. The reason is that the tokenizer saves a link to a local file in its `tokenizer_file` attribute of the `init_kwargs`, which then gets saved in the `tokenizer_config.json`.
The `AutoTokenizer` inspects that field in order to load the file, but if the environment has changed (for example, the tokenizer pushed to the hub and re-used on a different computer), then it is unable to do so and crashes.
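A local workaround sketch (plain JSON editing, nothing transformers-specific): drop the stale `tokenizer_file` path from the saved `tokenizer_config.json` before loading it again. The folder name matches the reproduction below.
```python
import json

path = "local_folder/tokenizer_config.json"
with open(path, encoding="utf-8") as f:
    config = json.load(f)

# Remove the machine-specific cached path so loading no longer depends on it.
config.pop("tokenizer_file", None)

with open(path, "w", encoding="utf-8") as f:
    json.dump(config, f, ensure_ascii=False)
```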
## To reproduce
```py
In [2]: from transformers import AutoTokenizer, BertTokenizer
In [3]: tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
In [4]: tokenizer.save_pretrained("local_folder")
Out[4]:
('local_folder/tokenizer_config.json',
'local_folder/special_tokens_map.json',
'local_folder/vocab.txt',
'local_folder/added_tokens.json')
```
The `tokenizer_config.json` looks like this (see `tokenizer_file`):
```
{"do_lower_case": false, "do_basic_tokenize": true, "never_split": null, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "tokenizer_file": "/home/user/.cache/huggingface/transformers/226a307193a9f4344264cdc76a12988448a25345ba172f2c7421f3b6810fddad.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6", "name_or_path": "bert-base-cased", "tokenizer_class": "BertTokenizer"}
```
If I update this value to something different to simulate a path saved on a different machine, I end up with the following:
```py
In [5]: AutoTokenizer.from_pretrained("local_folder")
Traceback (most recent call last):
File "/home/lysandre/transformers/.env/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3343, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-5-12add8db4ef2>", line 1, in <module>
AutoTokenizer.from_pretrained("local_folder")
File "/home/lysandre/transformers/src/transformers/models/auto/tokenization_auto.py", line 545, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/lysandre/transformers/src/transformers/tokenization_utils_base.py", line 1749, in from_pretrained
**kwargs,
File "/home/lysandre/transformers/src/transformers/tokenization_utils_base.py", line 1877, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/lysandre/transformers/src/transformers/models/bert/tokenization_bert_fast.py", line 188, in __init__
**kwargs,
File "/home/lysandre/transformers/src/transformers/tokenization_utils_fast.py", line 108, in __init__
fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: No such file or directory (os error 2)
```
---
An example of this happening in production is available here, with the TrOCR model (@NielsRogge):
```py
In [2]: from transformers import TrOCRProcessor
In [3]: processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten", revision="554a6621f60cba4f756f4bed2caaa7e6e5b0a2e3")
Downloading: 100%|██████████| 4.03k/4.03k [00:00<00:00, 5.35MB/s]
Downloading: 100%|██████████| 228/228 [00:00<00:00, 184kB/s]
Downloading: 100%|██████████| 1.28k/1.28k [00:00<00:00, 1.04MB/s]
Downloading: 100%|██████████| 878k/878k [00:00<00:00, 6.99MB/s]
Downloading: 100%|██████████| 446k/446k [00:00<00:00, 6.06MB/s]
Downloading: 100%|██████████| 772/772 [00:00<00:00, 1.02MB/s]
Traceback (most recent call last):
File "/home/lysandre/transformers/.env/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3343, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-3-2e8a6ceb8f5c>", line 1, in <module>
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten", revision="554a6621f60cba4f756f4bed2caaa7e6e5b0a2e3")
File "/home/lysandre/transformers/src/transformers/models/trocr/processing_trocr.py", line 110, in from_pretrained
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/home/lysandre/transformers/src/transformers/models/auto/tokenization_auto.py", line 545, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/lysandre/transformers/src/transformers/tokenization_utils_base.py", line 1749, in from_pretrained
**kwargs,
File "/home/lysandre/transformers/src/transformers/tokenization_utils_base.py", line 1877, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/lysandre/transformers/src/transformers/models/roberta/tokenization_roberta_fast.py", line 184, in __init__
**kwargs,
File "/home/lysandre/transformers/src/transformers/models/gpt2/tokenization_gpt2_fast.py", line 146, in __init__
**kwargs,
File "/home/lysandre/transformers/src/transformers/tokenization_utils_fast.py", line 108, in __init__
fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: Permission denied (os error 13)
```
I get a permission denied error because the `tokenizer_config.json` points to the following:
```
"tokenizer_file": "/root/.cache/huggingface/transformers/e16a2590deb9e6d73711d6e05bf27d832fa8c1162d807222e043ca650a556964.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730"
```
And here's how you can recreate the same issue:
```py
feature_extractor = ViTFeatureExtractor(size=encoder_config.image_size)
tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
processor = TrOCRProcessor(feature_extractor, tokenizer)
processor.save_pretrained("local_folder")
``` | 01-21-2022 18:58:43 | 01-21-2022 18:58:43 | Intriguing behavior indeed! :male_detective:
By digging a little, I personally think that the problem comes from the fact that the slow version of the tokenizer can save a `"tokenizer_file"` key in the `tokenizer_config.json` file. Indeed, if the tokenizer is of a fast type then in this case, the key cannot be saved in the `tokenizer_config.json`. This is due to the fact that `"tokenizer_file"` is part of the dictionary attribute `vocab_files_names` of a `XxxTokenizerfast` instance but is not part of the attribute `vocab_files_names` of a `XxxTokenizer` instance.
https://github.com/huggingface/transformers/blob/4df69506a8250d4bd298d457090b321b26b0c77f/src/transformers/tokenization_utils_base.py#L2039-L2043
If we look closer, the `tokenizer_file` is added to the `init_kwargs` attribute at this point in the code:
https://github.com/huggingface/transformers/blob/4df69506a8250d4bd298d457090b321b26b0c77f/src/transformers/tokenization_utils_base.py#L1672-L1697
So, this leads to another question: why does a slow tokenizer need to know about a `tokenizer_file`? Personally, I think it's just a historical legacy from when tokenizers were not separated into slow and fast versions (see related PRs: #5056 and #7659) - but I could be wrong or I could also be missing the usefulness of the `tokenizer_file` for slow tokenizers. But after looking into it, I don't think knowing the location of the `tokenizer_file` file is useful for a slow version of the tokenizer.
So I would propose to remove the retrieval of this file when the calling class is a slow version: I have started to work on this change in the PR #15319 [which needs another PR #15328 to be merged to be functional].
This fix will avoid creating configuration files with a `tokenizer_file` key that would not be informative (and worse, as you showed, a source of errors). What do you think? Does it address the problem you were pointing out?<|||||>I think you're correct, and this should definitely address the problem pointed out above. Thank you, @SaulLu!<|||||>I'm closing this issue as this should be fixed by #15319 :slightly_smiling_face: <|||||>+1
transformers | 15,282 | closed | Self-Attention Layers for Perceiver Decoder | This feature request is about the PerceiverIO codebase, specifically about the implemented decoders. It seems like there are no self-attention layers, only a single cross-attention layer. Is that by design? Several current published methods mention cross- and self-attention layers as part of the decoding process, would it be possible to include this functionality?
https://github.com/huggingface/transformers/blob/7799b6128feb17b63c47ed77a71fa367d26492d2/src/transformers/models/perceiver/modeling_perceiver.py#L2035 | 01-21-2022 16:55:24 | 01-21-2022 16:55:24 | Hi,
> It seems like there are no self-attention layers, only a single cross-attention layer. Is that by design?
Yes, the authors used only a single cross-attention layer.
If you want, you can fork the library and tweak `modeling_perceiver.py` to include additional self-attention layers in the decoder.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this issue, as I believe I've answered your question. Feel free to re-open if you have further questions. |
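For anyone who wants to try the fork suggested above, here is a generic PyTorch sketch (illustrative only, not `modeling_perceiver.py` internals) of a decoder that follows the single cross-attention read-out with extra self-attention layers over the output queries:
```python
import torch
import torch.nn as nn

class CrossThenSelfDecoder(nn.Module):
    """Illustrative decoder: one cross-attention layer, then self-attention layers."""

    def __init__(self, dim, num_heads=8, num_self_layers=2):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_norm = nn.LayerNorm(dim)
        self.self_attns = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True) for _ in range(num_self_layers)]
        )
        self.self_norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(num_self_layers)])

    def forward(self, queries, latents):
        out, _ = self.cross_attn(queries, latents, latents)  # queries attend to the latents
        x = self.cross_norm(out + queries)
        for attn, norm in zip(self.self_attns, self.self_norms):
            sa, _ = attn(x, x, x)  # self-attention over the decoded outputs
            x = norm(sa + x)
        return x

decoder = CrossThenSelfDecoder(dim=256)
queries = torch.randn(1, 100, 256)  # output query array
latents = torch.randn(1, 512, 256)  # latent array produced by the encoder
print(decoder(queries, latents).shape)  # torch.Size([1, 100, 256])
```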
transformers | 15,281 | closed | pytorch NER example dataset deleted | The file examples/pytorch/token-classification/run_ner.py is looking for a hardcoded conll2003 dataset.
In the hardcoded link, the following change removed the dataset:
https://github.com/davidsbatista/NER-datasets/commit/9d8f45cc7331569af8eb3422bbe1c97cbebd5690
To fix, the hardcoded link needs to be changed to a stable location or not be hardcoded. | 01-21-2022 15:54:19 | 01-21-2022 15:54:19 | Closing this, as datasets has updated the URL (see https://github.com/huggingface/datasets/issues/3582). |
transformers | 15,280 | closed | Fix processors | # What does this PR do?
This PR aims to fix import checks for processors of multi-modal models, like those of LayoutLMv2, TrOCR and ViLT.
The bare minimum for a processor is PIL (assuming one creates a processor by combining a feature extractor and a slow tokenizer). If one combines a feature extractor and a fast tokenizer, both PIL and tokenizers should be available.
Questions:
- should I also update the general init file?
- should I add the ability to load both the slow and fast tokenizer to a processor? e.g. ViltProcessor currently only allows the fast one
To do:
- [ ] Fix AutoProcessor API for TrOCR (#14884)
- [ ] Fix the slow integration tests (by not using the `AutoTokenizer` API in the `from_pretrained` method) | 01-21-2022 15:11:48 | 01-21-2022 15:11:48 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15280). All of your documentation changes will be reflected on that endpoint.<|||||>A much better fix was provided in #15549. Therefore, closing. |
transformers | 15,279 | closed | run_tests_pipelines_torch is not deterministic | I am currently working on a PR and while making sure we pass all tests and checks, i ran into a pipeline test that failed in one run but succeeded afterwards with the same codebase (only a docstring has been changed).
I think this nondeterministic test-behaviour should be avoided in an automatic build pipeline.
Here it failed:
https://app.circleci.com/pipelines/github/huggingface/transformers/32972/workflows/ede22951-5fd9-446c-8417-09acd2caa979/jobs/347421
Here it worked:
https://app.circleci.com/pipelines/github/huggingface/transformers/33082/workflows/f297f4fa-bb81-417c-ae40-892747803400/jobs/348790
Feel free to request additional info, as I didn't know what else is needed. | 01-21-2022 15:01:41 | 01-21-2022 15:01:41 | Pinging @Narsil <|||||>Hi @kevinpl07 This PR should have fixed the issue. You test ran later, but were you rebased after this PR https://github.com/huggingface/transformers/pull/15154 ?
I ran many tests on a local box without any issue on master just to be sure.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,278 | closed | Fixes Benchmark example link | Fixes [#15267](https://github.com/huggingface/transformers/issues/15267)
* Fixed broken link in notebooks/README.md
Documentation: @sgugger | 01-21-2022 14:49:13 | 01-21-2022 14:49:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,277 | closed | Add ConvNeXT | # What does this PR do?
This PR adds [ConvNeXT](https://github.com/facebookresearch/ConvNeXt) to the library, a convnet inspired by Transformers.
To do:
- [x] rename nielsr to facebook to fix tests
- [x] remove inference.py script
- [x] remove PushToHubMixin hack from feature extractor
- [x] fix "gamma" in `from_pretrained` method. Right now, if a model includes a parameter named "gamma", they are not initialized from the hub, due to [this line](https://github.com/huggingface/transformers/blob/833635e25997a00117076c50bde5c45f9b883ada/src/transformers/modeling_utils.py#L1490).
- [x] discuss model output classes. It feels a bit weird to talk about "hidden states" for convolutional models, most people use the term "features" or "feature maps". These are of shape (batch_size, num_channels, a certain height, a certain width) at each stage, rather than (batch_size, seq_len, hidden_size) at each layer of a Transformer. For now, they are still called `hidden_states` in this PR. | 01-21-2022 13:49:52 | 01-21-2022 13:49:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,276 | closed | Fix | # What does this PR do?
Fixes some suggestions not addressed in [this](https://github.com/huggingface/transformers/pull/15085/) pull request.
I've fixed the typo and unnecessary imports, but running `make style` may not allow some parts of the code to stay in the same line.
## Who can review?
@sgugger | 01-21-2022 12:56:37 | 01-21-2022 12:56:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hello, the `check_code_quality` fails now. Running `make style` will revert all the changes I made to the modeling file, so some parts of the code will not stay in the same line. <|||||>In the previous commit, I removed trailing commas and put the code in the same line, but the check still fails. In the next commit, I'll run `make style`.<|||||>Thanks again! |
transformers | 15,275 | closed | Inference of finetuned wav2vec2-xls-r-300m model using the ASR pipeline does not remove special tokens. | ## Environment info
- `transformers` version: 4.15.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@patrickvonplaten , @anton-l , @Narsil
Models:
- Wav2Vec2-XLS-R
Library:
- Pipelines
Model hub:
- https://huggingface.co/shivam/xls-r-hindi
## Information
Model I am using (Bert, XLNet ...): XLSR-Wav2Vec2 (wav2vec2-xls-r-300m)
The problem arises when using:
* [ ] the official example scripts: ASR pipeline (https://github.com/huggingface/transformers/blob/v4.15.0/src/transformers/pipelines/automatic_speech_recognition.py)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: Automatic Speech Recognition (https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0)
## To reproduce
Steps to reproduce the behavior:
1. Finetune the wav2vec2-xls-r-300m model on Hindi language using the mozilla-foundation/common_voice_7_0 dataset using either (https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py) or (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2)
2. Infer the model using the ASR pipeline or Huggingface demo.
3. After inference special tokens like \<s\> and [UNK] are not skipped and present in the output.
4. 
5. The tokenizer of wav2vec2-xls-r-300m is “Wav2Vec2CTCTokenizer” and in the asr pipeline if “CTC” is present in tokenizer class name then skip_special_tokens is set to False (transformers/automatic_speech_recognition.py at v4.15.0 · huggingface/transformers · GitHub), because of this special tokens are included in the output when using asr pipeline.
6. 
## Expected behavior
1. After inference the special tokens should be skipped in the output and the output should not contain special tokens.
| 01-21-2022 12:07:19 | 01-21-2022 12:07:19 | We need to not skip special tokens for CTC (wav2vec2 in particular) because of the [PAD] token.
HELLO can only be transcribed because the CTC tokens are H, E, L, PAD, L, L, L, O, O for instance.
It seems here that maybe `<s>` got confused with `<pad>` and hence is not properly skipped during decoding. Could that be it ? Also I am unsure to know how it would behave on the unicode scripts you are using (I hope if we remove `<s>` by properly using the `<pad>` token everything will work magically, but I can’t be sure.
That being said, making PAD be extra special in some regard, might be interesting.<|||||>Hey @shivamm7,
Could you post a code snippet that shows you error and that we could use to debug the model? :-)
It could be as simple as:
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "shivam/xls-r-hindi"
sample = next(iter(load_dataset("common_voice", "hi", split="test", streaming=True)))
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
prediction_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(prediction_ids)
# TODO: What do you expect here?
```<|||||>I found the issue. The issue is with the vocabulary building procedure in run_speech_recognition_ctc.py (https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L332).
While building the vocabulary, we replace the blank token " " with the pipe symbol ("|") as the word delimiter. But if the dataset already contains the pipe symbol, which is common for Hindi because Hindi uses the similar character "।" as a full stop, we end up with two pipe symbols in the vocabulary. This breaks the next step of building the dictionary, which is adding special tokens. Specifically, the bos token "\<s\>" and the pad token "[PAD]" get the same index in the vocabulary, so the index meant for [PAD] is taken over by \<s\> and the output of the Wav2Vec2ForCTC model ends up containing the "\<s\>" token.
I have created a pull request #15285 which fixes this issue.
Code snippet to recreate the issue:
```python
from datasets import Audio, Dataset, load_dataset, load_metric
from transformers import AutoFeatureExtractor, pipeline
dataset = load_dataset("mozilla-foundation/common_voice_7_0", "hi", split="test", use_auth_token=True)
# for testing: only process the first two examples as a test
dataset = dataset.select(range(10))
# load processor
feature_extractor = AutoFeatureExtractor.from_pretrained("shivam/xls-r-hindi")
sampling_rate = feature_extractor.sampling_rate
# resample audio
dataset = dataset.cast_column("audio", Audio(sampling_rate=sampling_rate))
# load eval pipeline
asr = pipeline("automatic-speech-recognition", model="shivam/xls-r-hindi")
# map function to decode audio
def map_to_pred(batch):
prediction = asr(
batch["audio"]["array"])
batch["prediction"] = prediction["text"]
batch["target"] = batch["sentence"]
return batch
# run inference on all examples
result = dataset.map(map_to_pred, remove_columns=dataset.column_names)
print(result["prediction"])
```
This is the same code used in eval.py (https://github.com/huggingface/transformers/blob/master/examples/research_projects/robust-speech-event/eval.py)<|||||>Answered on the PR: https://github.com/huggingface/transformers/pull/15285 :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey - this issue still exists in transformers `4.30.2`
```python
from transformers import pipeline
import torch
import requests
pipe = pipeline(
"automatic-speech-recognition",
model="Harveenchadha/vakyansh-wav2vec2-bhojpuri-bhom-60",
device="cuda:0",
torch_dtype=torch.float16,
)
audio = requests.get("https://storage.googleapis.com/dara-c1b52.appspot.com/daras_ai/media/54dc79c6-1c96-11ee-b5a1-02420a00015d/1min%201.wav").content
pipe(inputs=audio)
```
```
{'text': '<s>ा<s>ी<s>य<s>के<s> द<s> <s>आ<s>य<s> <s>ब<s>स<s> <s>ी<s> <s>स<s>े<s> <s>मि<s>ल<s>े<s> <s> <s>य<s>त<s>ना<s> <s>न<s> <s>आ<s>गा<s> <s>आ<s>ग<s> <s>ह<s> <s>द<s>ख<s>े<s> <s>े<s> <s>दि<s>दी<s> <s>क्य<s>ा<s>र<s>र<s>ह<s>ी<s> <s>ह<s> <s> <s>ह<s>म<s> <s>को<s> त<s>ो<s> <s>ज<s>ब<s> <s>कि<s>स<s>ा<s>न<s>ी<s> <s>े<s> <s>ह<s> क<s> ह<s>म<s> <s>क<s>र<s>र<s> <s> <s> <s>भ<s> <s>य<s>ा<s>थ<s>र<s>ा<s> <s>ध<s>ु<s>ने<s> आ<s>ह<s> <s>थ<s>र<s>ब<s>ब<s>ठ<s> <s>जा<s> <s>त<s>ो<s> <s>दी<s>दी<s> <s>ह<s>म<s> <s>स<s>ु<s>न<s>े<s> ह<s>ै<s> <s>कि<s> आ<s>प<s> <s>स<s>ि<s>र<s>ी<s> <s>भी<s> <s>धी<s> <s>से<s> भ<s>ी<s> <s>ख<s>ती<s> <s>क<s>र<s>त<s>े<s> <s>ह<s> <s>श<s>री<s> व<s>ि<s>धी<s>स<s>ो<s> <s>तो<s> <s>ह<s>म<s> <s>क<s>र<s> <s>र<s>ह<s>है<s> आ<s>प<s> <s>भी<s> <s>त<s>ो<s> <s>ख<s>े<s>ती<s> <s>क<s> र<s>ह ह<s>ैं त<s>ो<s> <s>अ<s>प<s>न<s> <s>ज<s>ा<s>न<s>ते<s> <s>ै<s> <s>न<s>ही<s> <s>न<s>ही<s> <s>ह<s> <s>तो<s> <s>ज<s>न<s> भी<s> <s>न<s>ही<s> <s>क<s>र<s>त<s>े<s> <s>है<s>ं<s>ख<s>त<s>ी<s>म<s>ने<s> <s>ह<s>म<s> <s>ल<s>ो<s>ग<s> <s>क<s>र<s>ते<s> ह<s>ै<s>ं<s> <s>त<s>ो<s>ब<s>ी<s>ज<s> उ<s>प<s>च<s>ा<s>र<s> <s>क<s>र<s>न<s>ा<s> <s>प<s>र<s>त<s>ा<s> है<s> <s>म<s>ी<s>ज<s>प<s>च<s>ा<s>र<s> <s>क<s>र<s>ना<s> <s>इ<s>स<s>के<s> <s>बा<s>रे<s> में न<s>हीं <s>प<s>ता<s> <s>है <s>प<s>को<s>न<s> <s>इ<s>स<s>क<s>े<s> <s>बा<s>रे<s> मे<s>ं <s>त<s> ह<s>ब<s>त<s>ा<s> <s>क<s>र<s>ना<s> <s>च<s>ा<s>हि<s> <s>प<s>ह<s>त<s>ना<s> <s>दि<s>न<s> स<s>े<s> <s>ख<s>े<s>ती<s> <s>क<s>र<s> र<s>ह<s>है<s> आ<s>प<s>को<s> <s>प<s>त<s>े<s> <s>न<s>ही<s> <s>ह<s>ै<s> <s>ज<s>ो<s> <s>बी<s>ज<s> उ<s>च<s>ा<s>र<s> <s>क<s>र<s> <s>के<s> <s>ख<s>े<s>त<s>ी<s> <s>क<s>र<s>ते<s> ह<s>ै<s>ं<s> <s>त<s>ो<s> <s>क<s>स<s>फ<s>स<s>ल<s> हो<s>दी<s> <s>र<s>ा<s>ई<s>ी<s> <s>के<s> <s>ब<s>ा<s>र<s>े<s> <s>मे<s>ं<s> <s>र<s> <s>के<s> <s>दे<s>ख<s>ा<s>द<s>े<s> <s>त<s>े<s> <s>तो<s> अ<s>च्<s>छ<s>ा<s> <s>र<s>ह<s>ा<s>'}
```<|||||>Here's my grand hack to make this work -
```python
import numpy as np
def postprocess(model_outputs):
final_items = []
key = "tokens"
for outputs in model_outputs:
items = outputs[key].numpy()
final_items.append(items)
items = np.concatenate(final_items, axis=1)
items = items.squeeze(0)
return { "text": pipe2.tokenizer.decode(items, skip_special_tokens=False) }
pipe.postprocess = postprocess
``` |
transformers | 15,274 | closed | [Robust Speech Challenge] Add timeline | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds timeline for the event
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-21-2022 11:48:56 | 01-21-2022 11:48:56 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,273 | closed | Remove Longformers from ONNX-supported models | # What does this PR do?
This PR removes Longformer from the list of ONNX-supported models due to a current limitation in the supported ops in JIT - see issue #15217
This slipped through our unit tests because we didn't include `longformer` in the list of models to be tested.
Long term, we'll either need to wait for JIT to support the problematic ops or tweak the Longformer implementation.
| 01-21-2022 11:26:17 | 01-21-2022 11:26:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,272 | closed | [PyTorch-nightly-test] Fix Wav2Vec2 LM & Phoneme tests | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Install packages for `pytorch-nightly` tests to not fail
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-21-2022 11:11:55 | 01-21-2022 11:11:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @patrickvonplaten, could you write a description in the PR on why the change is needed?<|||||>Thanks for the careful review @stas00 ! |
transformers | 15,271 | closed | Move BART + ONNX example to research_projects | # What does this PR do?
This PR moves the BART + ONNX example contributed in #14310 under the `examples/research_projects` folder to indicate that it is not actively maintained by the `transformers` team.
cc @fatcat-z
| 01-21-2022 10:44:25 | 01-21-2022 10:44:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,270 | closed | Prepare ONNX export for torch v1.11 | # What does this PR do?
This PR prepares the ONNX export for `torch` v1.11, where the following arguments from the [`torch.onnx.export()`](https://pytorch.org/docs/stable/onnx.html#torch.onnx.export) function will be removed:
* `use_external_data_format`
* `enable_onnx_checker`
A simple check on the `torch` version is used to ensure backwards compatibility with versions < 1.11
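As context, a sketch of that kind of guard (the exact condition and call in the merged code may differ):
```python
from packaging import version
import torch

export_kwargs = {}
if version.parse(torch.__version__) < version.parse("1.11"):
    # These two arguments were dropped from torch.onnx.export in torch v1.11.
    export_kwargs["use_external_data_format"] = False
    export_kwargs["enable_onnx_checker"] = True

# torch.onnx.export(model, model_inputs, "model.onnx", ..., **export_kwargs)
```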
I've also tested that the slow tests pass on the nightly version of `torch` by running:
```
RUN_SLOW=1 pytest tests/test_onnx_v2.py
```
| 01-21-2022 10:12:05 | 01-21-2022 10:12:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,269 | closed | Whether instantiation is premature? | https://github.com/huggingface/transformers/blob/515ed3ad2a11a6b0cd9800b2ad4d3b313fdaea8c/src/transformers/feature_extraction_utils.py#L429
For example my code is:
```python3
processor = LayoutXLMProcessor.from_pretrained('microsoft/layoutxlm-base', apply_ocr=False)
```
I don't need to use OCR, but pytesseract still has to be installed.
Would it be a good solution to this kind of problem if the parameters were updated with kwargs before instantiation? | 01-21-2022 08:35:22 | 01-21-2022 08:35:22 | @NielsRogge any suggestions?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi,
Thanks for the issue and apologies for the late reply. I'll look into this.<|||||>cc'ing @sgugger.
The problem here is that 'microsoft/layoutxlm-base' on the hub has a `preprocessor_config.json` which has `apply_ocr = True`, hence the `from_pretrained()` method will raise the error that PyTesseract should be installed. However, the additional kwarg `apply_ocr=False` is only taken into account after this.<|||||>Yes, that is how the `FeatureExtractionMixin` is coded. I suggest moving the `requires_backends(self, "pytesseract")` to the call method instead of the init to solve this issue.
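For readers, a generic sketch of the "merge kwargs before instantiation" idea discussed above; this is not the actual `FeatureExtractionMixin` code, just the shape of the fix:
```python
def instantiate_with_overrides(cls, loaded_dict, **kwargs):
    # Merge user overrides (e.g. apply_ocr=False) into the loaded config first,
    # so __init__ already sees the final values when it runs its backend checks.
    merged = {**loaded_dict, **kwargs}
    return cls(**merged)
```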
transformers | 15,268 | closed | [Fix doc example] TFLayoutLMForTokenClassification: missing import tf | # What does this PR do?
The following line in the doc requires `>>> import tensorflow as tf`
```
>>> bbox = tf.convert_to_tensor([token_boxes])
``` | 01-21-2022 08:05:56 | 01-21-2022 08:05:56 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,267 | closed | Benchmark link in transformers/notebooks/README.md is broken | Hi @sgugger,
I noticed the link in the transformers/notebooks/README.md (https://github.com/huggingface/transformers/tree/master/notebooks) for the Benchmarks notebook is broken.
I believe the correct link should be: https://github.com/huggingface/notebooks/blob/master/examples/benchmark.ipynb
| 01-21-2022 05:28:18 | 01-21-2022 05:28:18 | Thanks for flagging this! Do you want to a make a PR to fix it?<|||||>Sure, I'll send one shortly |
transformers | 15,266 | closed | Require `tokenizers>=0.11.1` | # What does this PR do?
I discovered this bug by running on `master` (specifically <https://github.com/huggingface/transformers/commit/515ed3ad2a11a6b0cd9800b2ad4d3b313fdaea8c>). When running the following code, I get the error `Ignored unknown kwarg option direction`:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("t5-base")
tokenizer("", truncation="longest_first")
```
This isn't a breaking error (the argument is ignored and training proceeds as normal), but given that I got the error message over 60,000 times in my log file that is 1,000 lines long with this message removed, it's very annoying to work with.
I did some digging, and it was caused when `direction` was added as an input parameter to `tokenizers.Tokenizer.enable_truncation()` in `src/transformers/tokenization_utils_fast.py` as part of <https://github.com/huggingface/transformers/commit/d33dc7966a8c7f04bbca7ae0ced75cbf26c38d9e>. However, the argument was just added to `tokenizers` in <https://github.com/huggingface/tokenizers/commit/152880ab3e5281003bdeee7d1406149e2af97951>, which was first released in `tokenizers==0.11.0`.
If support for `tokenizers>=0.10.1,<0.11` is still desired, I can create a different fix that modifies the code added in <https://github.com/huggingface/transformers/commit/08cb5718ec206bcf34fcd85a03e3e7cbfab8a9e6> to ensure unsupported arguments are filtered out. The code currently handles parameters that `tokenizers` uses but `transformers` does not. I would need to add code that also supports the opposite.
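For reference, here is a minimal sketch of what that kind of filtering could look like (purely illustrative, not the actual code in `tokenization_utils_fast.py`; the helper name is made up):
```python
from packaging import version

import tokenizers


def drop_unsupported_truncation_kwargs(kwargs):
    # Illustrative helper: `direction` only exists in tokenizers>=0.11.0, so drop it
    # for older versions instead of letting the Rust side warn about it on every call.
    if version.parse(tokenizers.__version__) < version.parse("0.11.0"):
        kwargs = {k: v for k, v in kwargs.items() if k != "direction"}
    return kwargs
```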
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
tokenizers: @n1t0, @LysandreJik | 01-21-2022 04:46:01 | 01-21-2022 04:46:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@LysandreJik, should the reviewer list in [`.github/PULL_REQUEST_TEMPLATE.md`](https://github.com/huggingface/transformers/blob/38a10c6b52242b1244b510358820603f8a2be3d9/.github/PULL_REQUEST_TEMPLATE.md) be updated with the list in [`.github/ISSUE_TEMPLATE/bug-report.md`](https://github.com/huggingface/transformers/blob/42d57549b82014834706ca86515eb6cc6431b3cb/.github/ISSUE_TEMPLATE/bug-report.md)? I can see the latter has been updated substantially since the former was last modified.<|||||>Hey @aphedges, yes indeed! You're correct.<|||||>Re-pinging @SaulLu for review. (I know PR review requests can get missed easily. I completely understand.) |
transformers | 15,265 | closed | Movement Prune | Hi everyone. I expect to use the movement prune approach to get a pruned bert on a downstream task with this link
https://github.com/huggingface/transformers/tree/master/examples/research_projects/movement-pruning
Basically speaking, we have a BERT model and a downstream task (AG News / Yahoo Answers, classification tasks), and we want to get a pruned BERT that fits this task.
I am trying the Iterative Magnitude Pruning approach from the link above, but I am a little confused about what train-v1.1.json and dev-v1.1.json should look like.
Thanks! | 01-21-2022 04:10:47 | 01-21-2022 04:10:47 | Hi @JackqqWang these kinds of questions are best suited for our [forums](https://discuss.huggingface.co/) since we use issues to triage bug reports or feature requests.
In any case, my suggestion would be to use our `nn_pruning` [library](https://github.com/huggingface/nn_pruning) instead of the `movement-pruning` project. The reason I suggest this is that @madlag found a clever way to extend movement pruning such that inference can be improved by factors of 2-3x. You can find various examples for text classification in that repo :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,264 | open | [Kernel Fusion] training benchmarks of AOTAutograd (multiple models) | Note to maintainers: We are using this PR to collaborate and there is no intention yet to merge anything, so please ignore unless you want to experiment with the latest auto-speedups.
We are experimenting with the latest https://github.com/pytorch/functorch against pytorch nightly to automatically speed up the execution and reduce memory usage:
So the idea is this. Given an existing `model`, you speed it up by doing just this:
```
from functorch.compile import memory_efficient_fusion
aot_model = memory_efficient_fusion(model)
with torch.jit.fuser("fuser2"):
train(aot_model)
```
So for example HF Trainer could automate this with just adding a new flag, like `--fusion aot`.
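Purely as an illustration of that idea (no such flag exists yet; the names below are hypothetical):
```python
from functorch.compile import memory_efficient_fusion


def maybe_fuse(model, fusion: str = "none"):
    # Hypothetical helper that a `--fusion aot` Trainer flag could map to
    if fusion == "aot":
        return memory_efficient_fusion(model)
    return model
```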
Notably, as long as the part being compiled with AOTAutograd is static, you can do whatever you want outside of the model, and autograd will still work with the AOTAutograd compiled model.
So, things like this work fine
```
foo = aot_model(*inps)
loss = foo.sum()
if loss < 0:
print("awesome")
loss.backward()
```
Here are some benchmarks:
-------------
A100 training (from @Chillee):
```
$ python scripts/aot_albert.py
Current memory requirement: 5.69 GB
eager 0.08250337839126587
Current memory requirement: 4.00 GB
aot 0.05442763566970825
Maximum output error: 9.5367431640625e-07
Maximum gradient error: 1.5096273273229599e-05
```
| model | dtype | name | time (s) | mem (GB) | time % | mem % |
|:----------------------|:---------------|:-------|-----------:|-----------:|---------:|--------:|
| AlbertForMaskedLM | torch.float32 | eager | 0.087 | 7.133 | 0 | 0 |
| AlbertForMaskedLM | torch.float32 | aot | 0.057 | 5.438 | -35 | -24 |
| AlbertForMaskedLM | torch.float16 | eager | 0.051 | 3.901 | 0 | 0 |
| AlbertForMaskedLM | torch.float16 | aot | 0.034 | 3.054 | -34 | -22 |
| AlbertForMaskedLM | torch.bfloat16 | eager | 0.053 | 3.931 | 0 | 0 |
| AlbertForMaskedLM | torch.bfloat16 | aot | 0.034 | 3.083 | -36 | -22 |
| GPT2LMHeadModel | torch.float32 | eager | 0.056 | 5.174 | 0 | 0 |
| GPT2LMHeadModel | torch.float32 | aot | 0.045 | 4.328 | -19 | -16 |
| GPT2LMHeadModel | torch.float16 | eager | 0.033 | 4.645 | 0 | 0 |
| GPT2LMHeadModel | torch.float16 | aot | 0.029 | 4.223 | -13 | -9 |
| GPT2LMHeadModel | torch.bfloat16 | eager | 0.034 | 4.965 | 0 | 0 |
| GPT2LMHeadModel | torch.bfloat16 | aot | 0.029 | 4.541 | -15 | -9 |
| BertForMaskedLM | torch.float32 | eager | 0.041 | 6.764 | 0 | 0 |
| BertForMaskedLM | torch.float32 | aot | 0.036 | 6.759 | -13 | 0 |
| BertForMaskedLM | torch.float16 | eager | 0.025 | 6.228 | 0 | 0 |
| BertForMaskedLM | torch.float16 | aot | 0.021 | 6.226 | -16 | 0 |
| BertForMaskedLM | torch.bfloat16 | eager | 0.026 | 6.505 | 0 | 0 |
| BertForMaskedLM | torch.bfloat16 | aot | 0.021 | 6.503 | -19 | 0 |
| LongformerForMaskedLM | torch.float32 | eager | 0.122 | 9.921 | 0 | 0 |
| LongformerForMaskedLM | torch.float32 | aot | 0.111 | 9.933 | -9 | 0 |
On rtx3090 (from @stas00):
| model | dtype | name | time (s) | mem (GB) | time % | mem % |
|:----------------------|:---------------|:-------|-----------:|-----------:|---------:|--------:|
| AlbertForMaskedLM | torch.float32 | eager | 0.173 | 7.078 | 0 | 0 |
| AlbertForMaskedLM | torch.float32 | aot | 0.125 | 5.382 | -28 | -24 |
| AlbertForMaskedLM | torch.float16 | eager | 0.089 | 3.829 | 0 | 0 |
| AlbertForMaskedLM | torch.float16 | aot | 0.064 | 2.982 | -28 | -22 |
| AlbertForMaskedLM | torch.bfloat16 | eager | 0.092 | 3.852 | 0 | 0 |
| AlbertForMaskedLM | torch.bfloat16 | aot | 0.064 | 3.005 | -30 | -22 |
| GPT2LMHeadModel | torch.float32 | eager | 0.112 | 4.822 | 0 | 0 |
| GPT2LMHeadModel | torch.float32 | aot | 0.094 | 3.977 | -16 | -18 |
| GPT2LMHeadModel | torch.float16 | eager | 0.060 | 4.013 | 0 | 0 |
| GPT2LMHeadModel | torch.float16 | aot | 0.051 | 3.591 | -15 | -11 |
| GPT2LMHeadModel | torch.bfloat16 | eager | 0.061 | 4.736 | 0 | 0 |
| GPT2LMHeadModel | torch.bfloat16 | aot | 0.051 | 4.313 | -16 | -9 |
| BertForMaskedLM | torch.float32 | eager | 0.086 | 6.343 | 0 | 0 |
| BertForMaskedLM | torch.float32 | aot | 0.076 | 6.338 | -11 | 0 |
| BertForMaskedLM | torch.float16 | eager | 0.046 | 5.717 | 0 | 0 |
| BertForMaskedLM | torch.float16 | aot | 0.041 | 5.714 | -11 | 0 |
| BertForMaskedLM | torch.bfloat16 | eager | 0.046 | 5.952 | 0 | 0 |
| BertForMaskedLM | torch.bfloat16 | aot | 0.040 | 5.950 | -13 | 0 |
| LongformerForMaskedLM | torch.float32 | eager | 0.209 | 9.080 | 0 | 0 |
| LongformerForMaskedLM | torch.float32 | aot | 0.194 | 9.092 | -7 | 0 |
------------
Instructions from @stas00 on how to build functorch to reproduce these results.
```
# install torch-nightly
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch-nightly
# install functorch (and reinstall after `git pull` later if need to sync up)
git clone https://github.com/pytorch/functorch
cd functorch
rm -rf build
pip install -e .[aot]
```
As this is a constantly evolving code-base, make sure to `git pull` and rebuild above if you have the version that is some days old. Or at least if you try the code and it fails the first thing to do is to update and rebuild `functorch` and then retry the benchmarks.
Note that there is currently a correctness issue on one of the gradients on PyTorch nightly, the above was run with this patch (https://github.com/pytorch/pytorch/pull/71542), which fixes the correctness issue.
--------------------------
Notes:
- AOT = Ahead of Time
- eager = normal python/pytorch code - i.e. the way our models are written now
Q: What is the pytree registration [here](https://github.com/huggingface/transformers/pull/15264/files#diff-a271afd7d9556b5f3a5aa2df08fa1d10114569272db373956b108446468f476fR25) for?
A: AOTAutograd tries to present the simplest possible graphs for backends, and so it primarily works with lists of tensors for both the input and the output. So, these pytrees are needed so that we can flatten the input data structures into a list, and unflatten the output back into the correct data structure. PS: This is very much inspired by Jax <3
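To make that concrete, here is an illustrative sketch of registering one of the `transformers` output classes as a pytree node (it uses PyTorch's private `torch.utils._pytree` API and is only meant as an example, not the exact registration code used for these benchmarks):
```python
import torch.utils._pytree as pytree
from transformers.modeling_outputs import BaseModelOutput


def _flatten(output):
    keys = list(output.keys())
    return [output[k] for k in keys], keys  # (leaves, context)


def _unflatten(values, keys):
    return BaseModelOutput(**dict(zip(keys, values)))


# Teach pytree how to turn the output dataclass into a flat list of tensors and back
pytree._register_pytree_node(BaseModelOutput, _flatten, _unflatten)
```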
Resources on AOTAutograd:
AOTAutograd: https://docs.google.com/presentation/d/1rTt0BR2KChDQQTks2hHUtvHxtHQKwgQHVNrmbhj0byk/edit?usp=sharing
Min-Cut recomputation: https://dev-discuss.pytorch.org/t/min-cut-optimal-recomputation-i-e-activation-checkpointing-with-aotautograd/467 | 01-21-2022 02:51:00 | 01-21-2022 02:51:00 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15264). All of your documentation changes will be reflected on that endpoint.<|||||>At least for now we should probably not assert on failure, since the rest of the benchmark then doesn't run and requires re-running the whole thing - perhaps reporting a mismatch but continuing the execution?
This is trying to re-run the updated script on my rtx-3090:
```
Traceback (most recent call last):
File "scripts/aot_albert.py", line 119, in <module>
torch.testing.assert_close(grad2, grad1, atol=atol, rtol=rtol)
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/testing/_comparison.py", line 1255, in assert_close
assert_equal(
File "/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/torch/testing/_comparison.py", line 1030, in assert_equal
raise error_metas[0].to_error()
AssertionError: Tensor-likes are not close!
Mismatched elements: 1 / 38597376 (0.0%)
Greatest absolute difference: 7.62038107495755e-05 at index (18699, 512) (up to 5e-05 allowed)
Greatest relative difference: 0.16173266498354133 at index (18699, 512) (up to 0.005 allowed)
```
And also usability-wise the error doesn't say for which config it failed, so probably want to dump the setup info before running it?
Thank you!
<|||||>@stas00 updated to accumulate the numerical failures and print them out at the end.
Numerical checking for whole models is super finicky, sadly.
<|||||>> @stas00 updated to accumulate the numerical failures and print them out at the end.
>
> Numerical checking for whole models is super finicky, sadly.
Great, thanks a lot! Probably still want to print out the combo as currently it prints:
```
The failure occurred for item [30]
```
which is meaningless to the user of the benchmark since 30 doesn't appear anywhere in the code or output :) a small nit.
I updated the OP with the output on RTX-3090<|||||>@stas00 It does print out the combo (of the name + dtype), although perhaps could be more obvious.
Knowing that it's `item [30]` is actually useful - it's the 30th gradient value in the list of gradient checks.<|||||>Oh, I see it now, I guess the double new line made me not look up as it was all the intermediary prints until then.
please don't worry about it, we can polish it at the end, now that I know where to look it's all good. |
transformers | 15,263 | closed | Add 🤗 Accelerate tutorial | First draft of the Accelerate tutorial for distributed training! | 01-21-2022 00:03:25 | 01-21-2022 00:03:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,262 | closed | Remove "inputs" in tf common test script (no longer required) | # What does this PR do?
The blocks appeared in `test_modeling_tf_common.py`
```
# need to rename encoder-decoder "inputs" for PyTorch
if "inputs" in pt_inputs_dict and self.is_encoder_decoder:
pt_inputs_dict["input_ids"] = pt_inputs_dict.pop("inputs")
```
was introduced in #3547.
The PR #8602 changed `inputs` back to `input_ids` (particularly in the `TFT5` model). The above block is no longer necessary.
This PR removes it to avoid confusion (in particular, so it is not confused with the newly introduced `inputs` in PyTorch `generate`, which is expected to be applied to TF's `generate` too in the near future).
>
> To double-check, before merging -- can you confirm that this particular test is successful for a few models? (e.g. `RUN_PT_TF_CROSS_TESTS=1 py.test tests/test_modeling_tf_bert.py)`
Hi, @gante I ran it with BERT, BART and GPT2 --> all pass.
(I believe that these tests are already run by CI, see https://app.circleci.com/pipelines/github/huggingface/transformers/33021/workflows/63e4488b-7d35-46ee-9209-5e036adfe6ba/jobs/347995/parallel-runs/0/steps/0-112) |
transformers | 15,261 | closed | Refine errors for pretrained objects | # What does this PR do?
This PR refines the errors returned for pretrained objects using the error codes now returned by hf.co to clearly indicate to the user if:
- there is a problem in the repo name
- there is a problem in the revision
- there is a problem with a specific file
- there is a general connection error
Whenever we are in a situation where the model has a version in another framework, a helpful error message is shown.
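To illustrate the intent (a hypothetical sketch only; the real implementation relies on the error codes returned by hf.co, not on a helper like this):
```python
def raise_helpful_error(kind: str, repo_id: str, revision: str, filename: str):
    # Illustrative dispatch over the failure categories listed above
    messages = {
        "repo": f"'{repo_id}' is not a valid model identifier on https://huggingface.co/models.",
        "revision": f"'{revision}' is not a valid git revision for '{repo_id}'.",
        "file": f"'{repo_id}' does not appear to have a file named '{filename}'.",
        "connection": f"Could not reach the Hub to load '{repo_id}'. Check your internet connection.",
    }
    raise EnvironmentError(messages[kind])
```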
Also note that this PR removes the hard failure on `get_list_of_files` whenever there is a connection problem, which should make the CI way less flaky. | 01-20-2022 20:25:45 | 01-20-2022 20:25:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,260 | closed | Fine Tunning the Pytorch script | I am trying a text summarization project mentioned in this link https://github.com/SatishDeshbhratar/transformers/tree/master/examples/pytorch/summarization
But I am facing an error related to the HF argument parser, as shown below.
> (pytorch_nlp) PS D:\Docs\Course work\Final Project (Text Summarization)\Script> python run_summarization.py \ --train_file=/train.csv \ --validation_file=/val.csv \ --model_name_or_path=facebook/bart-large-cnn \ --learning_rate=3.0e-6 \ --per_device_train_batch_size=8 \ --per_device_eval_batch_size=4 \ --output_dir=/output \ --num_train_epochs=100 \ --overwrite_output_dir=True \ --max_source_length=1024 \ --max_target_length=512 \ --val_max_target_length=512 \ --pad_to_max_length=False \ --evaluation_strategy='steps' \ --eval_steps=1000 \ --metric_for_best_model='rouge2' \ --greater_is_better=True \ --load_best_model_at_end=True \ --save_strategy='steps' \ --save_steps=1000 \ --save_total_limit=1 \ --predict_with_generate=True \ --num_beams=4 --generation_num_beams=4 \ --generation_max_length=512 \ --overwrite_cache=True \ --do_train
Traceback (most recent call last): File "D:\Docs\Course work\Final Project (Text Summarization)\Script\run_summarization.py", line 698, in <module> main() File "D:\Docs\Course work\Final Project (Text Summarization)\Script\run_summarization.py", line 281, in main model_args, data_args, training_args = parser.parse_args_into_dataclasses() File "C:\ProgramData\Anaconda3\envs\pytorch_nlp\lib\site-packages\transformers\hf_argparser.py", line 215, in parse_args_into_dataclasses raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}") ValueError: Some specified arguments are not used by the HfArgumentParser: ['\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\', '\\'] (pytorch_nlp) PS D:\Docs\Course work\Final Project (Text Summarization)\Script> | 01-20-2022 19:03:41 | 01-20-2022 19:03:41 | Maybe you remove all the '\' in the code, the problem will be solved.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,259 | closed | Update fine-tune docs | This is a first draft of the updated fine-tune tutorial. Main updates include:
- Preprocess a different dataset (users are probably very used to seeing `imdb` now) to include more diverse examples so users don't feel like they are doing the same thing over and over.
- Update the `Trainer` example to include evaluating accuracy before you fine-tune. It seems like a good practice to include evaluating metrics during training, so we should present it first. Let me know if this is not common or generally applicable, and we can revert it!
- Update Keras example to use `to_tf_dataset` to convert a dataset to a TensorFlow format (cc @Rocketknight1 please feel free to let me know if I'm missing anything!) | 01-20-2022 18:55:23 | 01-20-2022 18:55:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,258 | closed | Fix crash when logs are empty because Keras has wiped them out of spite | In situations where the training history is empty, TF model card creation via `model.create_model_card()` or `model.push_to_hub()` no longer crashes. | 01-20-2022 18:13:06 | 01-20-2022 18:13:06 | Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15258). All of your documentation changes will be reflected on that endpoint.<|||||>Great job merging this PR! the documentation will now be removed from the staging environment. |
transformers | 15,257 | closed | Fix code examples | # What does this PR do?
This PR fixes a few mistakes in code examples in the docs. | 01-20-2022 17:04:38 | 01-20-2022 17:04:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,256 | closed | Fix TF Causal LM models' returned logits | # What does this PR do?
Fix TF causal LM models' returned logits
## Details
In TF causal LM models, the logits returned are the shifted (cut) ones:
```
if inputs["labels"] is not None:
# shift labels to the left and cut last logit token
**logits = logits[:, :-1]**
labels = inputs["labels"][:, 1:]
loss = self.hf_compute_loss(labels=labels, logits=logits)
return TFCausalLMOutputWithPast(
loss=loss,
**logits=logits,**
```
While for PyTorch causal LM models, the original (un-shifted) logits are returned:
```
if labels is not None:
# Shift so that tokens < n predict n
**shift_logits = lm_logits[..., :-1, :].contiguous()**
shift_labels = labels[..., 1:].contiguous()
# Flatten the tokens
loss_fct = CrossEntropyLoss()
loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
return CausalLMOutputWithPast(
loss=loss,
**logits=lm_logits,**
```
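For readers following along, here is a small self-contained toy sketch (not the actual model code) of the intended behaviour after the fix: compute the loss on shifted logits/labels, but keep the full logits in the returned output.
```python
import tensorflow as tf

logits = tf.random.normal((1, 5, 10))   # (batch, seq_len, vocab)
labels = tf.constant([[1, 2, 3, 4, 5]])  # (batch, seq_len)

shifted_logits = logits[:, :-1]  # drop the last position
shifted_labels = labels[:, 1:]   # drop the first token
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
loss = loss_fn(shifted_labels, shifted_logits)

outputs = {"loss": loss, "logits": logits}  # return the un-shifted logits, matching PyTorch
```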
This PR fixes this inconsistency + test the cases where `labels` is passed in PT/TF equivalence test. | 01-20-2022 16:47:47 | 01-20-2022 16:47:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Could @Rocketknight1 or @gante give this a look?<|||||>To summarize: In causal LM models, logits and labels are offset by 1 before loss computations. In PT, the pre-shift logits are returned in the output, but in TF the post-shift logits are returned instead. This seems like a huge bug - I don't understand how several tf-pt equivalence tests didn't fail before, and how things like `generate()` didn't completely fail as a result too. I need to investigate the code more deeply to understand how this ever worked.<|||||>Ah, actually, `generate()` would not pass labels and so the shift would never happen.<|||||>> tf-pt equivalence tests
tf-pt equivalence tests don't test with `labels` as far as I know. And this PR tries to add it too.<|||||>@ydshieh that makes sense! And yes, in most use-cases when people are passing `labels`, they're usually interested in the `loss` output and not the `logits` output. So that probably explains how this was invisible for so long. Great work!<|||||>I make this PR as draft again, because there are other **potential PT/TF inconsistency** uncaught.
(The test inputs are randomly generated, so we don't always have the same outputs each run)
Here is the failed output in the test
```
# Some models require extra condition to return loss. For example, `BertForPreTraining` requires both
# `labels` and `next_sentence_label`.
# Moreover, some PT models return loss while the corresponding TF/Flax models don't.
if tf_loss is not None and pt_loss is not None:
tf_loss = tf.math.reduce_mean(tf_loss).numpy()
pt_loss = pt_loss.numpy()
tf_nans = np.copy(np.isnan(tf_loss))
pt_nans = np.copy(np.isnan(pt_loss))
# the 2 losses need to be both nan or both not nan
# (`TapasForQuestionAnswering` gives nan loss here)
> self.assertEqual(tf_nans, pt_nans)
E AssertionError: array(False) != array(True)
```<|||||>> I make this PR as draft again, because there are other **potential PT/TF inconsistency** uncaught. (The test inputs are randomly generated, so we don't always have the same outputs each run)
Having random tests might actually uncover interesting corner cases, but failures risk being overlooked if we can't get the random seed back 🤔 @sgugger , what's our go-to policy with respect to tests -- as deterministic as possible, or is randomness embraced?<|||||>It would be best to keep those random indeed.<|||||>> It would be best to keep those random indeed.
We can discuss about this for another PR. Here the issues I can identify are more deterministic -- one issue is coming from a sequence with all attention mask being 0 (although randomly generated in this case).
So for that particular case, we can just add an extra test case (with such an attention mask).
See #15326 if you are interested - but it is not TF related.<|||||>Woow, great catch @ydshieh !<|||||>Hi, I reverted the change in `test_pt_tf_model_equivalence` - it requires some time to fix the issues I could find. I don't want to this PR being blocked due to this new version of test. If you feel OK, we can merge this PR for now. Thank you!<|||||>As always, thank you @ydshieh, this was a great contribution. Will you be working on the improved equivalence test? (If not, I might pick it up :) )<|||||>> As always, thank you @ydshieh, this was a great contribution. Will you be working on the improved equivalence test? (If not, I might pick it up :) )
I already have one version (that is used for hunting the bugs :-) ). I would definitely open a PR to include it once other issues are fixed (so the test won't fail).<|||||>Basically, the new version checks recursively:
tuple --> check each element inside it
tensor --> check shape, and the abs max diff of tensor elements
A common issue is that the tuples returned by PT/TF have different lengths. Another is that the tensors have different shapes.
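A minimal sketch of such a recursive comparison (illustrative only, not the actual test code):
```python
import numpy as np


def check_pt_tf_equivalence(pt_out, tf_out, path="output"):
    if isinstance(pt_out, (tuple, list)):
        # check each element inside the tuple/list
        assert len(pt_out) == len(tf_out), f"{path}: tuple lengths differ"
        for i, (p, t) in enumerate(zip(pt_out, tf_out)):
            check_pt_tf_equivalence(p, t, path=f"{path}[{i}]")
    else:
        # check shape and the max abs diff of the tensor elements
        p, t = pt_out.detach().numpy(), tf_out.numpy()
        assert p.shape == t.shape, f"{path}: shapes differ ({p.shape} vs {t.shape})"
        max_diff = np.amax(np.abs(p - t))
        assert max_diff < 1e-5, f"{path}: max abs diff {max_diff}"
```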
If you want to go ahead, I am fine with it though :-) <|||||>No, by all means, we are very grateful that you are working on it ❤️ |
transformers | 15,255 | closed | Tentative workflow improvement | This improves the workflow of the doc-builder bot by adding a single comment when the PR is opened.
When the PR is closed or merged, it edits that comment. If the PR is reopened, it re-edits that comment. | 01-20-2022 15:09:05 | 01-20-2022 15:09:05 | _The documentation is not available anymore as the PR was closed or merged._<|||||>See the edits of the comment above.<|||||>I'm ok with both solutions 1 and 3 - I agree that it might be odd for the comment to be deleted, and hiding the comment removes that peculiarity.
If it's ok with you, I'll merge this PR as it is, as it is already an improvement of the workflow. I'm very open to any PR you offer regarding 1 or 3.<|||||>**Seems it works again**
off topic - but today when I checked the link, I always get
```
404
Sorry, we can’t find the page you are looking for.
```
Yesterday, I saw it worked. (Maybe @LysandreJik you are already aware of this fact?)<|||||>Hey @ydshieh, I believe these are due to errors on our backend; we're currently setting up a system that would make for a better way to share that info with the contributors. In the meantime, please do not hesitate to ping us on the PRs themselves with the commit that failed to be deployed. Thank you! |
transformers | 15,254 | closed | Export LayoutLMv2 to TorchScript | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.15.0
- Platform: MacOS
- Python version: 3.8
- PyTorch version (GPU?): 1.10 CPU
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART: @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using LayoutLMv2:
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
model_dir = "microsoft/layoutlmv2-base-uncased"
model = AutoModelForTokenClassification.from_pretrained(model_dir)
processor = LayoutLMv2Processor.from_pretrained(model_dir)
image = Image.open('./image.jpg').convert("RGB")
encoded_input = processor(
image, return_tensors="pt"
)
traced_model = torch.jit.trace(func=model,
strict=False,
example_inputs=[encoded_input['input_ids'], encoded_input['bbox'],
encoded_input['image'], encoded_input['attention_mask'],
encoded_input['token_type_ids']])
```
Results in:
<img width="998" alt="Screenshot 2022-01-20 at 15 52 02" src="https://user-images.githubusercontent.com/22592860/150364030-c79e4a1e-b694-48e6-9ed7-175eb516a6ac.png">
```
Traceback (most recent call last):
File "/some_path/python3.8/site-packages/torch/jit/_trace.py", line 958, in trace_module
module._c._create_method_from_trace(
File "/some_path/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/some_path/python3.8/site-packages/torch/nn/modules/module.py", line 1090, in _slow_forward
result = self.forward(*input, **kwargs)
File "/some_path/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 1179, in forward
outputs = self.layoutlmv2(
File "/some_path/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/some_path/python3.8/site-packages/torch/nn/modules/module.py", line 1090, in _slow_forward
result = self.forward(*input, **kwargs)
File "/some_path/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 888, in forward
text_layout_emb = self._calc_text_embeddings(
File "/some_path/python3.8/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py", line 753, in _calc_text_embeddings
embeddings = inputs_embeds + position_embeddings + spatial_position_embeddings + token_type_embeddings
RuntimeError: The size of tensor a (241) must match the size of tensor b (290) at non-singleton dimension 1
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
1. the torch.jit.trace conversion should not crash
| 01-20-2022 15:04:58 | 01-20-2022 15:04:58 | cc @NielsRogge <|||||>I was looking into a bit more and I found out that the reason why the torchscript conversion is crashing are following:
1) torch.jit._trace.py ->line 958 module._c._create_method_from_trace is unsqueezing the inputs thus instead of passing original e.g. attention_mask input as {Tensor:(1,241)} it is passing {Tensor:(tensor(1),tensor(241)}
2) modeling_layoutlmv2.forward (line 804) is expecting input in form of {Tensor:(1,input_size)}. It would be great to add some check that would warn me that the input that I am passing in has invalid shape as `The size of tensor a (241) must match the size of tensor b (290)` is rather hard to track where exactly is the mistake.<|||||>Hi,
Currently, LayoutLMv2 is not supported out-of-the-box to work with TorchScript (see #14457). However, there was an effort to make it work (see #14462). Feel free to work further on this to make it available.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,253 | closed | Make sure to raise NotImplementedError with correct method name | # What does this PR do?
I've found that the message raised with `NotImplementedError` by `PreTrainedModel._init_weights()` is not correct.
This PR resolves that.
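As a tiny illustration of the idea (not necessarily the exact patch in this PR):
```python
class PreTrainedModelSketch:
    # Illustrative only: raise with the method's real name so users know
    # which hook their derived class still needs to implement.
    def _init_weights(self, module):
        raise NotImplementedError(f"Make sure `_init_weights` is implemented for {self.__class__}")
```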
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 01-20-2022 14:39:38 | 01-20-2022 14:39:38 | Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15253). All of your documentation changes will be reflected on that endpoint.<|||||>Great job merging this PR! the documentation will now be removed from the staging environment. |
transformers | 15,252 | closed | corrected typo in robust speech event README.md regarding OVH Cloud link | # What does this PR do?
Corrects a typo in robust speech event README that incorrectly directed US users to the Canadian OVH Cloud site.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@patrickvonplaten @anton-l
| 01-20-2022 14:34:41 | 01-20-2022 14:34:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>US OVHcloud service is sadly not supported by this event |
transformers | 15,251 | closed | Tentative workflow improvement | null | 01-20-2022 14:31:53 | 01-20-2022 14:31:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,250 | closed | Add PoolFormer | # What does this PR do?
This PR adds the PoolFormer model to the 🤗 repository.
I also opened an Issue for adding the model #14584
## Who can review?
@NielsRogge | 01-20-2022 13:42:50 | 01-20-2022 13:42:50 | |
transformers | 15,249 | closed | Soft length regulation | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-20-2022 13:27:13 | 01-20-2022 13:27:13 | Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15249). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you for your PR. The documentation will now be removed from the staging environment - feel free to reopen this PR to recreate it. |
transformers | 15,248 | closed | Cannot load BART-base model | ## Environment info
- `transformers` version: 4.12.3
- Platform: Linux-5.4.0-1059-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART: @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
BART model @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): BART
The problem arises when using:
* [ ] the official example scripts: I'm using the example code for loading BART model with `from_pretrained`
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: XSUM
## To reproduce
Steps to reproduce the behavior:
```
from transformers import AutoModel
model = AutoModel.from_pretrained("facebook/bart-base")
```
The log is as follows:
```
file facebook/bart-base/config.json not found
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/transformers/configuration_utils.py", line 558, in get_config_dict
user_agent=user_agent,
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/transformers/file_utils.py", line 1506, in cached_path
raise EnvironmentError(f"file {url_or_filename} not found")
OSError: file facebook/bart-base/config.json not found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 397, in from_pretrained
pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 558, in from_pretrained
config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/transformers/configuration_utils.py", line 575, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load config for 'facebook/bart-base'. Make sure that:
- 'facebook/bart-base' is a correct model identifier listed on 'https://huggingface.co/models'
(make sure 'facebook/bart-base' is not a path to a local directory with something else, in that case)
- or 'facebook/bart-base' is the correct path to a directory containing a config.json file
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Could you please give me some idea about why this situation happens?
<!-- A clear and concise description of what you would expect to happen. -->
| 01-20-2022 11:58:43 | 01-20-2022 11:58:43 | Hey! Do you have a local folder with the same name as `facebook/bart-base`?<|||||>Hi, thank you for your quick reply!
Oh yes, I made this folder, but I was not aware of it before you mentioned it! Thank you very much!
transformers | 15,247 | closed | [Wav2Vec2ProcessorWithLM] improve multi processing | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/kensho-technologies/pyctcdecode/issues/41
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-20-2022 11:35:22 | 01-20-2022 11:35:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the heads-up!<|||||>Failure is unrelated - merging this PR |
transformers | 15,246 | closed | Update README.md | Clarify OVH instruction
| 01-20-2022 10:40:17 | 01-20-2022 10:40:17 | Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15246). All of your documentation changes will be reflected on that endpoint.<|||||>Great job merging this PR! the documentation will now be removed from the staging environment. |
transformers | 15,245 | closed | Add soft length regulation for sequence generation | # What does this PR do?
This PR enables users to softly regulate the length when using the sampling method in model.generate(). We had a use case where we wanted to keep a generated sequence adequately short with no "hard" cutoff - we wanted the model to come to an end on its own instead of relying on max_length alone, so that the end token is also predicted at a meaningful position.
The SoftLengthLogitsProcessor exponentially increases the score of the eos_token_id until it is generated. It can be configured by these parameters inside model.generate():
- length_regulation_start
- length_regulation_factor
I am aware that there is already a length_penalty in place, but as far as I understood it is used in beam search and did not fulfil our need.
Usage:
```python
gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=600, length_regulation_start=300, length_regulation_factor=1.01)
```
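For illustration, a minimal sketch of what such a processor could look like (this is an assumption about the implementation, not the PR's actual code; it adds an exponentially growing bias to the EOS logit once `start` tokens have been generated):
```python
import torch
from transformers import LogitsProcessor


class SoftLengthSketchProcessor(LogitsProcessor):
    def __init__(self, eos_token_id: int, start: int, factor: float):
        self.eos_token_id = eos_token_id
        self.start = start
        self.factor = factor

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        cur_len = input_ids.shape[-1]
        if cur_len > self.start:
            # bias grows exponentially with the number of tokens generated past `start`
            scores[:, self.eos_token_id] += self.factor ** (cur_len - self.start) - 1.0
        return scores
```
Outside of this PR, a similar processor could also be passed manually to `generate()` through its `logits_processor` argument.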
Example:
Here are the lengths of 5 samples generated for a length-regulated and default sequence-generation. The goal was to end the sequence between 300 and 400 tokens without a hard cutoff.
Code
```python
print("Regular output with max_length=600")
for index in range(5):
gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=600, pad_token_id=50256)
print(gen_tokens.shape[1])
print("--------------")
print("Soft regulation (length_regulation_start=300, length_regulation_factor=1.01):")
for index in range(5):
gen_tokens = model.generate(input_ids, do_sample=True, temperature=0.9, max_length=600, length_regulation_start=300, length_regulation_factor=1.01, pad_token_id=50256)
print(gen_tokens.shape[1])
```
Results
```
Regular output with max_length=600
600
600
600
600
600
--------------
Soft regulation (length_regulation_start=300, length_regulation_factor=1.01)
361
365
381
370
396
```
Here we see one length-regulated example output. The `<|endoftext|>` token is placed at a reasonable position:
```
In recent events the conflict between Elon Musk and Donald Trump heated up. Trump criticised Musk after he was quoted by The New York Times saying that a ‘solution’ to the US’s problems in the technology industry might be “a bullet to the head”.
In return Musk sent the president a series of tweets where he called the president “a traitor”, and “an idiot” as well as said he wouldn’t take the challenge of building the president a $10 million Tesla Roadster.
“It’s not my style to engage in such juvenile behaviour — it’s never been my style to engage in such juvenile behaviour — but the stakes for positive change have become that important,” said Musk in an interview with Forbes magazine, which is to be published on Wednesday.
The interview was published a few days before Musk went public about buying the Boring Company, which he described in the interview as a “vertical tunnel boring machine.”
After a few hours after this, Musk issued a statement about his tweet regarding the Roadster and the $10 million challenge to the president.
“The idea that the government could take ownership of all privately owned infrastructure in the U.S., just because I want it, is absurd on its face.”
He said he was being “clearly sarcastic.”
This was enough for Trump and his advisors to get angry, with Trump telling CNN that Musk is “a very, very, different kind of person than me.”
“I just think that he’s very, very smart,” Trump added.<|endoftext|>
```
This works as well with other EOS tokens that may be used in few shot learning scenarios such as "\n".
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@patrickvonplaten
| 01-20-2022 10:24:09 | 01-20-2022 10:24:09 | Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15245). All of your documentation changes will be reflected on that endpoint.<|||||>@patrickvonplaten, @LysandreJik<|||||>Hey @kevinpl07,
Thanks a lot for your PR! Do you know whether this method is used in any paper? Could you give me more on its scientific background?
In general, I think it's ok to add such a method here though<|||||>@patrickvonplaten
Regarding scientific background of this method. We honestly just used intuition to solve our problem. It could very well be that this kind of LengthDecay is used in some paper.
<|||||>Hey @kevinpl07,
Cool think we can move forward then with this logits processor. It would also be nice to add a test in the end :-)<|||||>@kevinpl07 - sorry I think the github commit history is broken. Could you resubmit a PR here? :-)<|||||>> @kevinpl07 - sorry I think the github commit history is broken. Could you resubmit a PR here? :-)
@patrickvonplaten I updated my PR, with the agreed changes and tests. I don't know if that fixed the commit history problem. Will squashing the commit upon merging fix this? Otherwise let me know how I can clean the history for you.<|||||>Hey @kevinpl07 - could you maybe rebase your branch and I think then we can merge :-)<|||||>@kevinpl07 ,
Be careful I think you added many unrelated files (687 were modified.).
If you're unsure how to fix it, tell us, we can probably help.<|||||>@patrickvonplaten changed the docstrings accordingly and rebased :)<|||||>Don't worry about the `run_tests_hub` test - not related to this PR ;-)<|||||>So everything good from my side? @patrickvonplaten <|||||>@sgugger anything from my side, that I can do support the merge?<|||||>Merging - thanks for your work here! |
transformers | 15,244 | closed | Evaluation Padding_idx Error when using BERTScore and Deepspeed Zero3 but not with Zero2 | ## Environment info
- `transformers` version: 4.16.0.dev0
- Platform: Linux-5.11.0-37-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
- Deepspeed: @stas00
## Information
Model I am using (google/pegasus-large):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Replace compute_metrics in transformers/examples/pytorch/summarization/run_summarization.py with the following:
```
from datasets import load_dataset, load_metric

# Metric
metric = load_metric("rouge")
# load rouge
rouge = load_metric("rouge")
# load bert_score
bert_score_metric = load_metric("bertscore")


def compute_metrics(eval_preds):
    preds, labels = eval_preds
    if isinstance(preds, tuple):
        preds = preds[0]
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    if data_args.ignore_pad_token_for_loss:
        # Replace -100 in the labels as we can't decode them.
        labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    # Some simple post-processing
    decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)

    rouge1_output = rouge.compute(predictions=decoded_preds, references=decoded_labels, rouge_types=["rouge1"])["rouge1"].mid
    rouge2_output = rouge.compute(predictions=decoded_preds, references=decoded_labels, rouge_types=["rouge2"])["rouge2"].mid
    rougel_output = rouge.compute(predictions=decoded_preds, references=decoded_labels, rouge_types=["rougeL"])["rougeL"].mid

    result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
    # Extract a few results from ROUGE
    result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
    result["gen_len"] = np.mean(prediction_lens)
    result = {k: round(v, 4) for k, v in result.items()}

    bert_results = bert_score_metric.compute(predictions=decoded_preds, references=decoded_labels, lang="en")
    bert_f1 = sum(bert_results["f1"]) / len(bert_results["f1"])
    bert_precision = sum(bert_results["precision"]) / len(bert_results["precision"])
    bert_recall = sum(bert_results["recall"]) / len(bert_results["recall"])
    bert_rogue = ((bert_f1 * 100) + result["rougeL"]) / 2

    return {
        "result": result,
        "rogue1": {"rouge1_precision": round(rouge1_output.precision, 4) * 100, "rouge1_recall": round(rouge1_output.recall, 4) * 100, "rouge1_f1": round(rouge1_output.fmeasure, 4) * 100},
        "rogue2": {"rouge2_precision": round(rouge2_output.precision, 4) * 100, "rouge2_recall": round(rouge2_output.recall, 4) * 100, "rouge2_f1": round(rouge2_output.fmeasure, 4) * 100},
        "roguel": {"rougel_precision": round(rougel_output.precision, 4) * 100, "rougel_recall": round(rougel_output.recall, 4) * 100, "rougel_f1": round(rougel_output.fmeasure, 4) * 100},
        "bertscore": {"bert_precision": round(bert_precision, 4) * 100, "bert_recall": round(bert_recall, 4) * 100, "bert_f1": round(bert_f1, 4) * 100},
        "bert_rogue": round(bert_rogue, 4),
    }
```
2. Run the following command:
```
deepspeed transformers/examples/pytorch/summarization/run_summarization.py \
--deepspeed transformers/tests/deepspeed/ds_config_zero3.json \
--model_name_or_path google/pegasus-large \
--per_device_train_batch_size 2 \
--output_dir output_dir \
--overwrite_output_dir \
--fp16 \
--do_train \
--predict_with_generate \
--report_to wandb \
--load_best_model_at_end True \
--greater_is_better True \
--evaluation_strategy steps \
--save_steps 1200 \
--eval_steps 50 \
--logging_steps 400 \
--max_train_samples 100 \
--max_eval_samples 10 \
--dataset_name samsum
```
3. You'll get the following error message:
```
Traceback (most recent call last):
File "run_summarization.py", line 769, in <module>
main()
File "run_summarization.py", line 713, in main
metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix="eval")
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_seq2seq.py", line 70, in evaluate
Traceback (most recent call last):
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 2164, in evaluate
File "run_summarization.py", line 769, in <module>
main()
File "run_summarization.py", line 713, in main
metric_key_prefix=metric_key_prefix,
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 2400, in evaluation_loop
metrics = trainer.evaluate(max_length=max_length, num_beams=num_beams, metric_key_prefix="eval")
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_seq2seq.py", line 70, in evaluate
return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 2164, in evaluate
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "run_summarization.py", line 605, in compute_metrics
metric_key_prefix=metric_key_prefix,
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 2400, in evaluation_loop
bert_results = bert_score_metric.compute(predictions=decoded_preds, references=decoded_labels, lang="en", )
File "/usr/local/lib/python3.6/dist-packages/datasets/metric.py", line 405, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/workspace/.cache/huggingface/modules/datasets_modules/metrics/bertscore/5075ace4c0abac8c79f820cc5e8dabc1d8263bf873cf93296a8f34d2aad42eb6/bertscore.py", line 184, in _compute
batch_size=batch_size,
File "/usr/local/lib/python3.6/dist-packages/bert_score/scorer.py", line 222, in score
all_layers=self.all_layers,
File "/usr/local/lib/python3.6/dist-packages/bert_score/utils.py", line 529, in bert_cos_score_idf
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "run_summarization.py", line 605, in compute_metrics
sen_batch, model, tokenizer, idf_dict, device=device, all_layers=all_layers
File "/usr/local/lib/python3.6/dist-packages/bert_score/utils.py", line 408, in get_bert_embedding
model, padded_sens[i : i + batch_size], attention_mask=mask[i : i + batch_size], all_layers=all_layers,
bert_results = bert_score_metric.compute(predictions=decoded_preds, references=decoded_labels, lang="en", )
File "/usr/local/lib/python3.6/dist-packages/bert_score/utils.py", line 318, in bert_encode
File "/usr/local/lib/python3.6/dist-packages/datasets/metric.py", line 405, in compute
out = model(x, attention_mask=attention_mask, output_hidden_states=all_layers)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/workspace/.cache/huggingface/modules/datasets_modules/metrics/bertscore/5075ace4c0abac8c79f820cc5e8dabc1d8263bf873cf93296a8f34d2aad42eb6/bertscore.py", line 184, in _compute
batch_size=batch_size,
File "/usr/local/lib/python3.6/dist-packages/bert_score/scorer.py", line 222, in score
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/roberta/modeling_roberta.py", line 848, in forward
all_layers=self.all_layers,
File "/usr/local/lib/python3.6/dist-packages/bert_score/utils.py", line 529, in bert_cos_score_idf
sen_batch, model, tokenizer, idf_dict, device=device, all_layers=all_layers
File "/usr/local/lib/python3.6/dist-packages/bert_score/utils.py", line 408, in get_bert_embedding
past_key_values_length=past_key_values_length,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
model, padded_sens[i : i + batch_size], attention_mask=mask[i : i + batch_size], all_layers=all_layers,
File "/usr/local/lib/python3.6/dist-packages/bert_score/utils.py", line 318, in bert_encode
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/roberta/modeling_roberta.py", line 131, in forward
out = model(x, attention_mask=attention_mask, output_hidden_states=all_layers)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
inputs_embeds = self.word_embeddings(input_ids)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/roberta/modeling_roberta.py", line 848, in forward
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py", line 147, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1897, in embedding
past_key_values_length=past_key_values_length,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/roberta/modeling_roberta.py", line 131, in forward
assert padding_idx < weight.size(0), "Padding_idx must be within num_embeddings"
inputs_embeds = self.word_embeddings(input_ids)
AssertionError: Padding_idx must be within num_embeddings
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py", line 147, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1897, in embedding
assert padding_idx < weight.size(0), "Padding_idx must be within num_embeddings"
AssertionError: Padding_idx must be within num_embeddings
```
## Expected behavior
If this command is run instead, training and evaluation complete without an AssertionError:
```
deepspeed transformers/examples/pytorch/summarization/run_summarization.py \
--deepspeed transformers/tests/deepspeed/ds_config_zero2.json \
--model_name_or_path google/pegasus-large \
--per_device_train_batch_size 2 \
--output_dir output_dir \
--overwrite_output_dir \
--fp16 \
--do_train \
--predict_with_generate \
--report_to wandb \
--load_best_model_at_end True \
--greater_is_better True \
--evaluation_strategy steps \
--save_steps 1200 \
--eval_steps 50 \
--logging_steps 400 \
--max_train_samples 100 \
--max_eval_samples 10 \
--dataset_name samsum
```
I'm not sure what Zero3 is doing vs Zero2 that could cause this padding error. | 01-20-2022 09:59:18 | 01-20-2022 09:59:18 | Hi @KMFODA,
zero3 is different from zero2 in that it shards the params. zero3 magically gathers them on each gpu before the model's forward calls, so the model has no idea its weights were manipulated. But if you do other forwards that aren't part of the model and which are called before the model, deepspeed doesn't know about it, and most likely the error comes from that.
Typically you then need to do your own gathering before accessing the params that aren't part of the model's forward, for example:
https://github.com/huggingface/transformers/blob/1fc0fa46174511a68889eacef72693c93dc00373/src/transformers/modeling_utils.py#L783-L787
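For illustration only, here is an editor's minimal sketch (not the exact code behind the link) of that gathering pattern: under ZeRO-3 each rank holds only a shard of a parameter, so any code touching a weight outside the model's own forward has to materialize it first with `deepspeed.zero.GatheredParameters`. The `model` name below is assumed to be an already-loaded `transformers` model running under a ZeRO-3 DeepSpeed config:
```
import deepspeed

# Under ZeRO-3 the locally visible tensor can have size 0 on a given rank;
# gathering materializes the full parameter on every rank for the duration
# of the context (modifier_rank=None means read-only access).
embedding_weight = model.get_input_embeddings().weight
with deepspeed.zero.GatheredParameters(embedding_weight, modifier_rank=None):
    full_vocab_size = embedding_weight.shape[0]  # full, un-sharded shape
```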
Please let me know if that helps with your process. And of course if that helps please do share the outcome.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Thanks @stas00 for the response. Sorry for my delayed one. In the interest of time, I ended up just sticking to a ROUGE score, which didn't need these changes. I will try this out next time I'm using zero3 and BERTScore and report back here. Thank you!