repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 20,067 | closed | Update READMEs for ESMFold and add notebooks | This PR adds ESMFold to the main README and adds links to the protein LM and protein folding notebooks. | 11-04-2022 14:26:20 | 11-04-2022 14:26:20 | Woah, PyCharm murdered the formatting on that table. One sec!<|||||>Formatting fixed now, sorry about that!<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,066 | closed | Add CLIPSeg | # What does this PR do?
This PR adds CLIPSeg, a nice extension of CLIP for zero-shot and one-shot (image-guided) image segmentation.
To do:
- [x] transfer checkpoints and update code
- [x] update base_model_prefix | 11-04-2022 14:09:32 | 11-04-2022 14:09:32 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Is the CLIPSeg yet to be released in the latest version?
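For readers wondering how to try it, here is a minimal sketch of the intended zero-shot usage once the integration is released (the checkpoint name is an assumption based on the CLIPSeg authors' Hub organization and may differ from what this PR ships):
```python
import requests
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

# assumed checkpoint name - adjust to whatever checkpoints this PR transfers
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompts = ["a cat", "a remote control"]

# one text prompt per copy of the image -> one segmentation logit map per prompt
inputs = processor(text=prompts, images=[image] * len(prompts), padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
masks = torch.sigmoid(outputs.logits)  # low-resolution masks, one per prompt
```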
|
transformers | 20,065 | closed | Update defaults and logic to match old FE | # What does this PR do?
Updates defaults and logic in image processors to match the previous feature extractors. Fixes some broken inference tests.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 11-04-2022 13:57:19 | 11-04-2022 13:57:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,064 | closed | [Trainer] Fix model name in push_to_hub | # What does this PR do?
Trainer's `push_to_hub` fails if `model_name` is specified in the kwargs.
The variable `model_name` is explicitly defined in the `push_to_hub` method:
https://github.com/huggingface/transformers/blob/d447c460b16626c656e4d7a9425f648fe69517b3/src/transformers/trainer.py#L3449
And then subsequently passed **alongside** the kwargs to the method `create_model_card`:
https://github.com/huggingface/transformers/blob/d447c460b16626c656e4d7a9425f648fe69517b3/src/transformers/trainer.py#L3471
This means if `model_name` is specified in the kwargs, it is passed **twice** to `create_model_card`, once from the variable and once from the kwargs, giving a `TypeError`:
```python
from transformers import Trainer, TrainingArguments, Wav2Vec2ForCTC
model = Wav2Vec2ForCTC.from_pretrained("hf-internal-testing/tiny-random-wav2vec2")
training_args = TrainingArguments(output_dir="dummy_dir_for_issue")
trainer = Trainer(args=training_args, model=model)
trainer.push_to_hub(model_name="pretty-model-name")
```
**Traceback**
```
File ~/transformers/src/transformers/trainer.py:3471, in Trainer.push_to_hub(self, commit_message, blocking, **kwargs)
3469 # push separately the model card to be independant from the rest of the model
3470 if self.args.should_save:
-> 3471 self.create_model_card(model_name=model_name, **kwargs)
3472 try:
3473 self.repo.push_to_hub(
3474 commit_message="update model card README.md", blocking=blocking, auto_lfs_prune=True
3475 )
TypeError: create_model_card() got multiple values for keyword argument 'model_name'
```
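A minimal, self-contained sketch of the failure mode and the kind of fix it suggests (the functions below are simplified stand-ins for the Trainer methods, not the actual diff):
```python
# Simplified stand-ins for Trainer.create_model_card / Trainer.push_to_hub.
def create_model_card(model_name=None, **kwargs):
    return model_name

def push_to_hub(**kwargs):
    # Fix: pop any user-provided "model_name" out of kwargs first, so it is
    # only ever passed once to create_model_card().
    model_name = kwargs.pop("model_name", None) or "output-dir-name"
    return create_model_card(model_name=model_name, **kwargs)

# Before the fix this call raised:
#   TypeError: create_model_card() got multiple values for keyword argument 'model_name'
print(push_to_hub(model_name="pretty-model-name"))  # -> "pretty-model-name"
```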
Fixes #20058.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-04-2022 12:10:42 | 11-04-2022 12:10:42 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,063 | closed | fix(typo): Update README.md | @sgugger | 11-04-2022 10:58:50 | 11-04-2022 10:58:50 | |
transformers | 20,062 | closed | fix `tokenizer_type` to avoid error when loading checkpoint back | # What does this PR do?
1. fix `tokenizer_type` to avoid error when loading checkpoint back | 11-04-2022 08:28:35 | 11-04-2022 08:28:35 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,061 | closed | Change constant torch.tensor to torch.full | # What does this PR do?
Change `torch.tensor` to `torch.full` in GPT-2 to avoid CPU-GPU synchronization.
## Benchmarks with PyTorch Profiler

Here's a trace of a single GPT-2 training iteration with 12 GPT-2 blocks, 2 GPUs, and DDP.
In the `_attn` function, there are two `torch.tensor` calls. These trigger CPU-to-GPU memory movement and therefore call `cudaStreamSynchronize`.
## How to fix
From [PyTorch Recipes](https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html#avoid-unnecessary-cpu-gpu-synchronization), we can avoid CPU-GPU synchronization by calling `torch.full` directly instead of `torch.tensor` or `torch.to`. Since the two `torch.tensor` calls create constant tensors, we can change them into `torch.full([], ...)` and the behavior stays the same.

After the patch, every `cudaStreamSynchronize` is gone, and the duration of a single iteration is reduced by 0.5%.
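For illustration, the change has roughly this shape (a sketch of the pattern described above, not the exact diff applied to `_attn`):
```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
attn_weights = torch.randn(2, 12, 8, 8, device=device)

# Before: the scalar is created on the CPU and then moved, forcing a CPU-GPU sync.
mask_value = torch.tensor(torch.finfo(attn_weights.dtype).min, dtype=attn_weights.dtype).to(attn_weights.device)

# After: torch.full([], ...) builds the same 0-d constant directly on the target device/dtype.
mask_value = torch.full([], torch.finfo(attn_weights.dtype).min, dtype=attn_weights.dtype, device=attn_weights.device)
```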
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
| 11-04-2022 08:20:03 | 11-04-2022 08:20:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>For FX, I think this is already tested in the CI so I guess it does not break things.
For the ONNX export, it's not tested but it should not break things IMO.<|||||>Following the change, the training with ONNX Runtime breaks as `mask_value` and `attn_weights` don't have the same dtype after being traced. Will open a PR to fix this issue.
```
======================================================================
ERROR: test_ort_trainer (__main__.TestORTTrainer) (model_name='gpt2', dataset_name='sst2', inference_with_ort=False)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_onnxruntime_train.py", line 131, in test_ort_trainer
train_result = trainer.train()
File "/workspace/optimum/onnxruntime/trainer.py", line 349, in train
return inner_training_loop(
File "/workspace/optimum/onnxruntime/trainer.py", line 615, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2523, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 2555, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_utils.py", line 371, in _forward
return ortmodule._torch_module.forward(*inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_utils.py", line 351, in _forward
return torch_module_ort._execution_manager(torch_module_ort.is_training()).forward(*inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_training_manager.py", line 273, in forward
self._fallback_manager.handle_exception(
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_fallback.py", line 162, in handle_exception
raise exception
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_training_manager.py", line 210, in forward
self._initialize_graph_builder()
File "/usr/local/lib/python3.8/dist-packages/onnxruntime/training/ortmodule/_graph_execution_manager.py", line 478, in _initialize_graph_builder
self._graph_builder.initialize(self._onnx_models.exported_model.SerializeToString(), grad_builder_config)
RuntimeError: /onnxruntime_src/orttraining/orttraining/python/orttraining_pybind_state.cc:731 onnxruntime::python::addObjectMethodsForTraining(pybind11::module&, onnxruntime::python::ExecutionProviderRegistrationFn)::<lambda(onnxruntime::training::OrtModuleGraphBuilder*, const pybind11::bytes&, const onnxruntime::training::OrtModuleGraphBuilderConfiguration&)> [ONNXRuntimeError] : 1 : FAIL : Type Error: Type parameter (T) of Optype (Where) bound to different types (tensor(float) and tensor(float16) in node (Where_223).
``` |
transformers | 20,060 | closed | added-bart-japanese-tokenizer | # What does this PR do?
This PR adds support for pre-trained BART models for Japanese text.
The original pre-trained model was converted from a Fairseq checkpoint, which contains an extra layer_norm layer after the encoder and decoder and is therefore compatible with the MBart model. Details of the model can be found [here](https://huggingface.co/Formzu/bart-large-japanese).
Since Japanese tokenization requires text segmentation and half-width character conversion, as well as special-token compatibility with the existing checkpoint, a new tokenizer was implemented.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-04-2022 07:25:06 | 11-04-2022 07:25:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot for your PR. This custom tokenizer should really go on the Hub in the repos that use it, using our [code on the Hub](https://huggingface.co/docs/transformers/custom_models) feature, instead of adding a new model though.<|||||>@sgugger Thanks for the advice. I added custom tokenizer code as well as AutoTokenizer support to the model on the Hub.
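For reference, a rough sketch of how a Hub-hosted checkpoint with a custom tokenizer is typically loaded (the repo name comes from the link above; the model class and the need for `trust_remote_code` are assumptions about how the repo is set up):
```python
from transformers import AutoTokenizer, MBartForConditionalGeneration

# trust_remote_code lets AutoTokenizer pull the custom tokenizer code from the Hub repo
tokenizer = AutoTokenizer.from_pretrained("Formzu/bart-large-japanese", trust_remote_code=True)
model = MBartForConditionalGeneration.from_pretrained("Formzu/bart-large-japanese")

inputs = tokenizer("天気が良ければ散歩しましょう。", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```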
|
transformers | 20,059 | closed | Removing RobertaConfig inheritance from CamembertConfig | # What does this PR do?
Removes RobertaConfig dependencies from CamembertConfig
Related to https://github.com/huggingface/transformers/issues/19303
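In essence the change looks something like the following abridged sketch (not the full diff; the real config declares the complete set of RoBERTa-style arguments):
```python
from transformers import PretrainedConfig


class CamembertConfig(PretrainedConfig):
    # previously: `class CamembertConfig(RobertaConfig)` with only `model_type` overridden
    model_type = "camembert"

    def __init__(
        self,
        vocab_size=30522,
        hidden_size=768,
        num_hidden_layers=12,
        num_attention_heads=12,
        intermediate_size=3072,
        pad_token_id=1,
        bos_token_id=0,
        eos_token_id=2,
        **kwargs,
    ):
        # defaults are now declared here instead of being inherited from RobertaConfig
        super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        self.num_hidden_layers = num_hidden_layers
        self.num_attention_heads = num_attention_heads
        self.intermediate_size = intermediate_size
```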
@sgugger can I please get some feedback on this. Thanks 😄
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-04-2022 06:51:53 | 11-04-2022 06:51:53 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Also make sure you run `make style` on your branch to fix the formatting.
`make style` changed some files in other folders as well related to other models. Since I didn't change them, I didn't add them in this PR, and only added the camembert_configuration.py file with the fixed style.
I was unsure if I should add style changes in other files in this PR, since this PR is about CamembertConfig. |
transformers | 20,058 | closed | Push to Hub fails with `model_name` | ### System Info
- `transformers` version: 4.25.0.dev0
- Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.31
- Python version: 3.9.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from datasets import load_dataset, DatasetDict
common_voice = DatasetDict()
#common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="train+validation", use_auth_token=True)
#common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test", use_auth_token=True)
common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="train[:1%]+validation[:1%]", use_auth_token=True)
common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test[:1%]", use_auth_token=True)
print(common_voice)
common_voice = common_voice.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes"])
print(common_voice)
from transformers import WhisperFeatureExtractor
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
from transformers import WhisperTokenizer
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="swedish", task="transcribe")
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-small", language="swedish", task="transcribe")
print(common_voice["train"][0])
from datasets import Audio
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000))
print(common_voice["train"][0])
def prepare_dataset(batch):
# load and resample audio data from 48 to 16kHz
audio = batch["audio"]
# compute log-Mel input features from input audio array
batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
# encode target text to label ids
batch["labels"] = tokenizer(batch["sentence"]).input_ids
return batch
common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=1)
import torch
from dataclasses import dataclass
from typing import Any, Dict, List, Union
@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
processor: Any
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
# split inputs and labels since they have to be of different lengths and need different padding methods
# first treat the audio inputs by simply returning torch tensors
input_features = [{"input_features": feature["input_features"]} for feature in features]
batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")
# get the tokenized label sequences
label_features = [{"input_ids": feature["labels"]} for feature in features]
# pad the labels to max length
labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")
# replace padding with -100 to ignore loss correctly
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
# if bos token is appended in previous tokenization step,
# cut bos token here as it's append later anyways
if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():
labels = labels[:, 1:]
batch["labels"] = labels
return batch
"""Let's initialise the data collator we've just defined:"""
data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)
import evaluate
metric = evaluate.load("wer")
def compute_metrics(pred):
pred_ids = pred.predictions
label_ids = pred.label_ids
# replace -100 with the pad_token_id
label_ids[label_ids == -100] = tokenizer.pad_token_id
# we do not want to group tokens when computing the metrics
pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)
wer = 100 * metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
from transformers import WhisperForConditionalGeneration
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []
from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
output_dir="./whisper-small-sv-test2", # change to a repo name of your choice
per_device_train_batch_size=16,
gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size
learning_rate=1e-5,
warmup_steps=500,
max_steps=10,
gradient_checkpointing=True,
fp16=True,
group_by_length=True,
evaluation_strategy="steps",
per_device_eval_batch_size=8,
predict_with_generate=True,
generation_max_length=225,
save_steps=1000,
eval_steps=1000,
logging_steps=25,
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=True,
)
from transformers import Seq2SeqTrainer
trainer = Seq2SeqTrainer(
args=training_args,
model=model,
train_dataset=common_voice["train"],
eval_dataset=common_voice["test"],
data_collator=data_collator,
compute_metrics=compute_metrics,
tokenizer=processor.feature_extractor,
)
trainer.train()
"""Our best WER is 32.0% - not bad for 8h of training data! We can submit our checkpoint to the [`hf-speech-bench`](https://huggingface.co/spaces/huggingface/hf-speech-bench) on push by setting the appropriate key-word arguments (kwargs):"""
kwargs = {
"dataset_tags": "mozilla-foundation/common_voice_11_0",
"dataset": "Common Voice 11.0", # a 'pretty' name for the training dataset
"language": "sv",
#"model_name": "WhisperSmallSwedishBirgerMoell", # a 'pretty' name for our model
"finetuned_from": "openai/whisper-small",
"tasks": "automatic-speech-recognition",
"tags": "hf-asr-leaderboard",
}
trainer.push_to_hub(**kwargs)
from transformers import pipeline
import gradio as gr
pipe = pipeline(model="birgermoell/whisper-small-sv-test2") # change to "your-username/the-name-you-picked"
def transcribe(audio):
text = pipe(audio)["text"]
return text
iface = gr.Interface(
fn=transcribe,
inputs=gr.Audio(source="microphone", type="filepath"),
outputs="text",
title="Whisper Small SV",
description="Realtime demo for Swedish speech recognition using a fine-tuned Whisper small model.",
)
iface.launch()
```
### Expected behavior
The following script is a downloaded version of the colab notebook that follows the whisper fine-tuning tutorial.
https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/fine_tune_whisper.ipynb
One edit was that I removed the model name, since it complained about two model names, which made it impossible to upload. The script just runs on 1% of the dataset for 10 training steps.
kwargs = {
"dataset_tags": "mozilla-foundation/common_voice_11_0",
"dataset": "Common Voice 11.0", # a 'pretty' name for the training dataset
"language": "sv",
#"model_name": "WhisperSmallSwedishBirgerMoell", # a 'pretty' name for our model
"finetuned_from": "openai/whisper-small",
"tasks": "automatic-speech-recognition",
"tags": "hf-asr-leaderboard",
}
https://huggingface.co/birgermoell/whisper-small-sv-test2
I also ran into similar issues when I trained a model on the whole dataset.
https://huggingface.co/birgermoell/whisper-small-sv
| 11-04-2022 04:57:40 | 11-04-2022 04:57:40 | Thanks for flagging this @BirgerMoell - should be fixed in the linked PR!<|||||>Thank you so much for resolving this issue. I managed to push the model to the hub through the script but I still get the original error.
`Can't load tokenizer using from_pretrained, please update its configuration: Can't load tokenizer for 'birgermoell/whisper-small-sv-test2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'birgermoell/whisper-small-sv-test2' is the correct path to a directory containing all relevant files for a WhisperTokenizer tokenizer.
`
Here is the trained test model and you get the error if you try running it either through the pipeline or through the online tool.
https://huggingface.co/birgermoell/whisper-small-sv-test2
Is there an example fine-tuned whisper model I can look at to check that I have all the right files in my folder?
<|||||>Okay - great, push to Hub works. I wonder why the tokenizer is not saving 🤔 I'll try running your code snippet!
Here's an example with all the files: https://huggingface.co/sanchit-gandhi/whisper-small-hi/tree/main<|||||>This is the code I ran. Identical except that I now use the model name
```
from datasets import load_dataset, DatasetDict
common_voice = DatasetDict()
#common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="train+validation", use_auth_token=True)
#common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test", use_auth_token=True)
common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="train[:1%]+validation[:1%]", use_auth_token=True)
common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test[:1%]", use_auth_token=True)
print(common_voice)
common_voice = common_voice.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes"])
print(common_voice)
from transformers import WhisperFeatureExtractor
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
from transformers import WhisperTokenizer
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="swedish", task="transcribe")
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-small", language="swedish", task="transcribe")
print(common_voice["train"][0])
from datasets import Audio
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000))
print(common_voice["train"][0])
def prepare_dataset(batch):
# load and resample audio data from 48 to 16kHz
audio = batch["audio"]
# compute log-Mel input features from input audio array
batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
# encode target text to label ids
batch["labels"] = tokenizer(batch["sentence"]).input_ids
return batch
common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=1)
import torch
from dataclasses import dataclass
from typing import Any, Dict, List, Union
@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
processor: Any
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
# split inputs and labels since they have to be of different lengths and need different padding methods
# first treat the audio inputs by simply returning torch tensors
input_features = [{"input_features": feature["input_features"]} for feature in features]
batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")
# get the tokenized label sequences
label_features = [{"input_ids": feature["labels"]} for feature in features]
# pad the labels to max length
labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")
# replace padding with -100 to ignore loss correctly
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
# if bos token is appended in previous tokenization step,
# cut bos token here as it's append later anyways
if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():
labels = labels[:, 1:]
batch["labels"] = labels
return batch
"""Let's initialise the data collator we've just defined:"""
data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)
import evaluate
metric = evaluate.load("wer")
def compute_metrics(pred):
pred_ids = pred.predictions
label_ids = pred.label_ids
# replace -100 with the pad_token_id
label_ids[label_ids == -100] = tokenizer.pad_token_id
# we do not want to group tokens when computing the metrics
pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)
wer = 100 * metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
from transformers import WhisperForConditionalGeneration
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []
from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
output_dir="./whisper-small-sv-test2", # change to a repo name of your choice
per_device_train_batch_size=16,
gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size
learning_rate=1e-5,
warmup_steps=500,
max_steps=10,
gradient_checkpointing=True,
fp16=True,
group_by_length=True,
evaluation_strategy="steps",
per_device_eval_batch_size=8,
predict_with_generate=True,
generation_max_length=225,
save_steps=1000,
eval_steps=1000,
logging_steps=25,
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=True,
)
from transformers import Seq2SeqTrainer
trainer = Seq2SeqTrainer(
args=training_args,
model=model,
train_dataset=common_voice["train"],
eval_dataset=common_voice["test"],
data_collator=data_collator,
compute_metrics=compute_metrics,
tokenizer=processor.feature_extractor,
)
trainer.train()
"""Our best WER is 32.0% - not bad for 8h of training data! We can submit our checkpoint to the [`hf-speech-bench`](https://huggingface.co/spaces/huggingface/hf-speech-bench) on push by setting the appropriate key-word arguments (kwargs):"""
kwargs = {
"dataset_tags": "mozilla-foundation/common_voice_11_0",
"dataset": "Common Voice 11.0", # a 'pretty' name for the training dataset
"language": "sv",
"model_name": "WhisperSmallSwedishBirgerMoell", # a 'pretty' name for our model
"finetuned_from": "openai/whisper-small",
"tasks": "automatic-speech-recognition",
"tags": "hf-asr-leaderboard",
}
trainer.push_to_hub(**kwargs)
from transformers import pipeline
import gradio as gr
pipe = pipeline(model="birgermoell/whisper-small-sv-test2") # change to "your-username/the-name-you-picked"
def transcribe(audio):
text = pipe(audio)["text"]
return text
iface = gr.Interface(
fn=transcribe,
inputs=gr.Audio(source="microphone", type="filepath"),
outputs="text",
title="Whisper Small SV",
description="Realtime demo for Swedish speech recognition using a fine-tuned Whisper small model.",
)
iface.launch()
```
<|||||>Great thanks, running on an instance now to try and repro!<|||||>The issue is that `save_steps` < `max_steps`, so Trainer never gets to the number of steps required to save the checkpoint 😉 If you try with the following it'll work:
```python
training_args = Seq2SeqTrainingArguments(
output_dir="./whisper-small-sv-test2", # change to a repo name of your choice
per_device_train_batch_size=16,
gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size
learning_rate=1e-5,
warmup_steps=1,
max_steps=10,
gradient_checkpointing=True,
fp16=True,
group_by_length=True,
evaluation_strategy="steps",
per_device_eval_batch_size=8,
predict_with_generate=True,
generation_max_length=225,
save_steps=5, # set to < max_steps
eval_steps=5, # set to < max_steps
logging_steps=1, # set to < max_steps
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=True,
)
```
See https://huggingface.co/sanchit-gandhi/whisper-small-sv-test2/tree/main (I ignored the kwargs so the model card is a bit scratchy, but otherwise the same as your example with the updated training args)<|||||>The trained model you are linking to https://huggingface.co/sanchit-gandhi/whisper-small-sv-test2/tree/main has the same issue I'm still facing.
My guess is that not all the model files are uploaded correctly and when I try running it through the pipeline I get an error.
If you compare to the one you trained earlier, they both now get the same error.
This is when I tried out the models you trained.
https://huggingface.co/sanchit-gandhi/whisper-small-sv-test2
https://huggingface.co/sanchit-gandhi/whisper-small-hi/tree/main
<img width="1406" alt="Screenshot 2022-11-07 at 16 22 13" src="https://user-images.githubusercontent.com/1704131/200347366-36bada17-c19a-4904-9187-0deefeb72899.png">
<|||||>Ah I see! Sorry, you're absolutely right! There are files not pushed during training. We need to explicitly save the `processor` as this is not done by Trainer!
I've updated the notebook: https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/fine_tune_whisper.ipynb
All you need to do is add the line:
```python
processor.save_pretrained(training_args.output_dir)
```
before calling `trainer.train()`.
Sorry about that, my apologies!<|||||>Note that the `Trainer` will do it if you pass it `tokenizer=processor` instead of `tokenizer=processor.feature_extractor`.<|||||>Unfortunately it fails with `model_input_name=...`:
https://github.com/huggingface/transformers/blob/d44ac47bac4471703651675c8abd9d6e1b6c3db6/src/transformers/trainer.py#L788
as the processor does not have the attribute `model_input_names` that the `feature_extractor` has.
Will add a PR to fix this tomorrow!<|||||>Ah, indeed would be nice if the processors had that attribute!<|||||>Absolutely! Expect a PR tomorrow!<|||||>```
from datasets import load_dataset, DatasetDict
common_voice = DatasetDict()
#common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="train+validation", use_auth_token=True)
#common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test", use_auth_token=True)
common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="train[:1%]+validation[:1%]", use_auth_token=True)
common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test[:1%]", use_auth_token=True)
print(common_voice)
common_voice = common_voice.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes"])
print(common_voice)
from transformers import WhisperFeatureExtractor
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
from transformers import WhisperTokenizer
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="swedish", task="transcribe")
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-small", language="swedish", task="transcribe")
print(common_voice["train"][0])
from datasets import Audio
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000))
print(common_voice["train"][0])
def prepare_dataset(batch):
# load and resample audio data from 48 to 16kHz
audio = batch["audio"]
# compute log-Mel input features from input audio array
batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
# encode target text to label ids
batch["labels"] = tokenizer(batch["sentence"]).input_ids
return batch
common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=1)
import torch
from dataclasses import dataclass
from typing import Any, Dict, List, Union
@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
processor: Any
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
# split inputs and labels since they have to be of different lengths and need different padding methods
# first treat the audio inputs by simply returning torch tensors
input_features = [{"input_features": feature["input_features"]} for feature in features]
batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")
# get the tokenized label sequences
label_features = [{"input_ids": feature["labels"]} for feature in features]
# pad the labels to max length
labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")
# replace padding with -100 to ignore loss correctly
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
# if bos token is appended in previous tokenization step,
# cut bos token here as it's append later anyways
if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():
labels = labels[:, 1:]
batch["labels"] = labels
return batch
"""Let's initialise the data collator we've just defined:"""
data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)
import evaluate
metric = evaluate.load("wer")
def compute_metrics(pred):
pred_ids = pred.predictions
label_ids = pred.label_ids
# replace -100 with the pad_token_id
label_ids[label_ids == -100] = tokenizer.pad_token_id
# we do not want to group tokens when computing the metrics
pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)
wer = 100 * metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
from transformers import WhisperForConditionalGeneration
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []
from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
output_dir="./whisper-small-sv-test2", # change to a repo name of your choice
per_device_train_batch_size=16,
gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size
learning_rate=1e-5,
warmup_steps=1,
max_steps=10,
gradient_checkpointing=True,
fp16=True,
group_by_length=True,
evaluation_strategy="steps",
per_device_eval_batch_size=8,
predict_with_generate=True,
generation_max_length=225,
save_steps=5, # set to < max_steps
eval_steps=5, # set to < max_steps
logging_steps=1, # set to < max_steps
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=True,
)
from transformers import Seq2SeqTrainer
trainer = Seq2SeqTrainer(
args=training_args,
model=model,
train_dataset=common_voice["train"],
eval_dataset=common_voice["test"],
data_collator=data_collator,
compute_metrics=compute_metrics,
tokenizer=processor.feature_extractor,
)
processor.save_pretrained(training_args.output_dir)
trainer.train()
"""Our best WER is 32.0% - not bad for 8h of training data! We can submit our checkpoint to the [`hf-speech-bench`](https://huggingface.co/spaces/huggingface/hf-speech-bench) on push by setting the appropriate key-word arguments (kwargs):"""
kwargs = {
"dataset_tags": "mozilla-foundation/common_voice_11_0",
"dataset": "Common Voice 11.0", # a 'pretty' name for the training dataset
"language": "sv",
"model_name": "whisper-small-sv-test2", # a 'pretty' name for our model
"finetuned_from": "openai/whisper-small",
"tasks": "automatic-speech-recognition",
"tags": "hf-asr-leaderboard",
}
trainer.push_to_hub(**kwargs)
from transformers import pipeline
import gradio as gr
pipe = pipeline(model="birgermoell/whisper-small-sv-test2") # change to "your-username/the-name-you-picked"
def transcribe(audio):
text = pipe(audio)["text"]
return text
iface = gr.Interface(
fn=transcribe,
inputs=gr.Audio(source="microphone", type="filepath"),
outputs="text",
title="Whisper Small SV",
description="Realtime demo for Swedish speech recognition using a fine-tuned Whisper small model.",
)
iface.launch()
```
It worked. Here is working code and a working test model here.
https://huggingface.co/birgermoell/whisper-small-sv-test2
<|||||>That's great @BirgerMoell 🥳 Excited to see what the full training runs bring!<|||||>The full model training also worked :D
https://huggingface.co/birgermoell/whisper-small-sv-bm<|||||>Awesome! 19.6% is pretty good! You can deffo try training for longer and a bigger model checkpoint. Feel free to post updates on the forum https://discuss.huggingface.co |
transformers | 20,057 | closed | Timestamps in Whisper processor | ### Feature request
output_word_offsets argument in Whisper's processor.decode() function.
I want to get the timestamp of the start and end of each word.
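For context, this mirrors the word-offset support that Wav2Vec2 already exposes; a rough sketch of that existing API is below, used only to illustrate what is being requested for Whisper (Whisper does not support this yet as of this issue):
```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)[0]

decoded = processor.decode(pred_ids, output_word_offsets=True)
# offsets are in model frames; convert to seconds with the model's stride
time_per_frame = model.config.inputs_to_logits_ratio / 16_000
for w in decoded.word_offsets[:3]:
    print(w["word"], w["start_offset"] * time_per_frame, w["end_offset"] * time_per_frame)
```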
### Motivation
I cannot use Whisper until it accommodates word timestamps and long audio.
### Your contribution
With guidance, happy to submit it but will need guidance. Can do this in a month's time. | 11-04-2022 03:46:46 | 11-04-2022 03:46:46 | cc @sanchit-gandhi or @ArthurZucker <|||||>Related to #19887, in which timestamps for Whisper were discussed. Is this on your timeline @ArthurZucker as part of the Whisper integration? Otherwise I'll add it to my TODO's!<|||||>Hey, really sorry for being so late. Will focus on that next week! I'll ping you once a draft PR is ready! 🤗 <|||||>BTW, you can already have the `timestamp` generation using the model :
```
tensor([[50258, 50265, 50359, 50364, 1456, 1804, 1021, 871, 368, 635,
32400, 368, 635, 32400, 1030, 4666, 2795, 70, 3201, 339,
892, 1531, 287, 311, 68, 368, 10384, 2023, 20071, 13,
50639, 50257]])
```
where the timestamp tokens are `>50363`. You can also use a custom logit processor to be sure that they are correctly generated.
Moreover, the original paper used a simple rule that associates 0.02 seconds with each token, which means that without removing the special tokens you can already get the per-word timestamps. 😉 <|||||>Of course. BTW it is included in [this notebook](https://colab.research.google.com/drive/1rS1L4YSJqKUH_3YxIQHBI982zso23wor#scrollTo=Ca4YYdtATxzo)
```python
from datasets import load_dataset
from transformers import WhisperForConditionalGeneration, WhisperProcessor
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to(device)
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_sample = ds[3]
speech_data = audio_sample["audio"]["array"]
speech_file = audio_sample["file"] # used as an example for the pipeline
inputs = processor.feature_extractor(speech_data, return_tensors="pt", sampling_rate=16_000).input_features
generate_ids = model.generate(inputs, return_timestamps=True, task="translate")
print(generate_ids)
```
```python
tensor([[50258, 50266, 50358, 50364, 634, 575, 12525, 22618, 1968, 6144,
35617, 20084, 1756, 311, 589, 307, 534, 10281, 934, 439,
293, 50676, 50676, 393, 4411, 294, 309, 457, 707, 295,
33301, 286, 392, 6628, 13, 50836, 50257]])
>>> processor.tokenizer.decode(generate_ids[0], decode_with_timestamps=True)
<|startoftranscript|><|ja|><|translate|><|0.00|> He has grave doubts whether Sir Frederick Layton's work is really Greek after all and<|6.24|><|6.24|> can discover in it but little of rocky Ithaca.<|9.44|><|endoftext|>
```<|||||>Great, thanks! I had a quick test, and for this example `each` (1123) occurs at 7.0 seconds in the audio. However, with each token representing 0.02s you can see it's at 0.34s. So it doesn't look like using the tokens this way can find breaks at a per-word level.
```
tensor([[50257, 50363, 1649, 257, 3440, 1332, 318, 2067, 11, 281,
4554, 286, 257, 2836, 1398, 481, 307, 2727, 329, 1123,
50681, 50681,
```<|||||>That is not exactly the way to compute the time. You should be using `tokenizer.decode(..., output_offset = True)`.
Also, the 0.02s rule converts a timestamp token to a time. Here the end is at `50681`; you subtract the timestamp begin, so you have `50681 - 50363 = 318`, which you then multiply by `0.02` to get `6.36`s. |
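To make the arithmetic above concrete, a small sketch (the `50363` timestamp-begin id and the `0.02`s precision are the values quoted in this thread; treat them as assumptions rather than universal constants):
```python
# Convert a Whisper timestamp token id to seconds using the rule described above.
def timestamp_token_to_seconds(token_id, timestamp_begin=50363, time_precision=0.02):
    return (token_id - timestamp_begin) * time_precision


print(timestamp_token_to_seconds(50681))  # 6.36
```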
transformers | 20,056 | closed | Unable to load CodeGenTokenizer | ### System Info
I did `pip install git+https://github.com/huggingface/transformers.git`
try to load the class with `from transformers import CodeGenTokenizer`
results in
```
ImportError: cannot import name 'CodeGenTokenizer' from 'transformers' (/usr/local/lib/python3.9/dist-packages/transformers/__init__.py)
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import CodeGenTokenizer
### Expected behavior
load class | 11-03-2022 23:45:41 | 11-03-2022 23:45:41 | You should double-check that the version of Transformers seen when executing your code is indeed a source install (you can print `transformers.__version__` after importing transformers), as it looks like an issue within your env.<|||||>Thx! |
transformers | 20,055 | open | Model resources contribution | Hi friends! 👋
There are a lot of cool existing resources for how to do *x* with *x* model, and we’d like to showcase and aggregate these resources on a model’s documentation. This’ll help users see how they can get started with a model for their own tasks since we know a lot of users check out the model documentation first. Take a look at a completed [resource section](https://huggingface.co/docs/transformers/main/en/model_doc/distilbert#resources) for DistilBERT as an example.
I’ve identified the top 20 models by pageviews, and now I’d like to open it up to the community if anyone is interested in helping!
Anyone can contribute; you just need to comment and claim one of the models on this [list](https://github.com/huggingface/transformers/issues/19848). Contributing is super easy:
1. Once you've claimed a model from the list, collect the existing resources from:
- the Hugging Face [blog](https://huggingface.co/blog)
- relevant materials from the 🤗 Hugging Face [Course](https://huggingface.co/course/chapter1/1)
- the Hugging Face [example scripts](https://github.com/huggingface/transformers/tree/main/examples) and [notebooks](https://github.com/huggingface/transformers/tree/main/notebooks)
- @NielsRogge's Transformers Tutorials [repository](https://github.com/NielsRogge/Transformers-Tutorials)
- @philschmid's [blog](https://www.philschmid.de/)
- [notebooks](https://huggingface.co/docs/transformers/community) from the community ❤️
2. Organize the resources by model tasks or applications (like inference or deployment):
- Use the corresponding icons for each task (you can find the names for each icon [here](https://github.com/huggingface/doc-builder/blob/19ba9da2556294f1777c865793d13e9ea47f8716/kit/src/lib/PipelineIcon.svelte#L42-L71)):
```
<PipelineTag pipeline=”name-of-task”/>
```
- For certain categories, you can just do: 🚀 Deploy, ⚡️ Inference, or ⚗️ Optimization, etc.
- For community resources, add the 🌎 emoji at the end to indicate it’s not an official Hugging Face resource.
- Use this DistilBERT [file](https://github.com/huggingface/transformers/pull/19930/files) as a template. You can copy and paste the intro text and just replace DistilBERT with the name of the model you're working on.
3. Open a Pull Request with the new resources for your chosen model and ping me for a review (if you’re just getting started with contributing to an open-source project, check out @merveenoyan's awesome [GitHub Contribution Guide](https://www.notion.so/19411c29298644df8e9656af45a7686d)).
4. Congratulations, you just merged a PR into 🤗 Transformers, and your contribution will now help anyone who is looking at the model docs! 🎉
If you have any questions or need any help, don’t hesitate to ping me! 🤗❤️ | 11-03-2022 23:45:30 | 11-03-2022 23:45:30 | Hi @stevhliu, I want to work on OpenAI GPT!<|||||>Awesome! I'm looking forward to your contribution, and feel free to ping me if you have any questions! 🤗<|||||>@stevhliu
I have a question. Is there a good way to search GitHub and blog posts? I tried to find related repos and blog posts with the keyword `OpenAI GPT`, but I couldn't find them because the search function doesn't seem to work well... Should I search each repo and post one by one?
I made a draft pull request, although it doesn't have the GitHub and blog links yet. You can check it to see if my research has been good or not
https://github.com/huggingface/transformers/pull/20084<|||||>Hey @shogohida, thanks for starting on this!
The easiest way I've found for searching the blog posts is to go to the blog [repo](https://github.com/huggingface/blog) and search for mentions of `GPT` inside the repo. Then you can take a look at the results and see what's relevant!
For GitHub materials, you only have to look at the example scripts, and notebooks and *see what task* your model can be applied to. For example, `OpenAI GPT` is a casual language model, so you can link to example scripts for [causal language modeling](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling) and also [text generation](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-generation#language-generation). You can link the equivalent scripts in [TensorFlow](https://github.com/huggingface/transformers/tree/main/examples/tensorflow) and [Flax](https://github.com/huggingface/transformers/tree/main/examples/flax) if they're available.
After the scripts, you can hop over to the [notebooks](https://github.com/huggingface/transformers/tree/main/notebooks) and see what task your model can be applied to (language modeling, generate text) and do the same thing for the [community notebooks](https://huggingface.co/docs/transformers/community)!<|||||>@stevhliu
Thanks for your comment! It will take a lot of time to collect resources from scripts and notebooks because I'm not very familiar with OpenAI GPT but I'll do my best. I'll let you know if I have another question<|||||>Hi, I would like to take CLIP from the list you have mentioned. :)<|||||>That's great @ambujpawar! I'm looking forward to your contribution, and feel free to ping me if you have any questions! 🤗
<|||||>@stevhliu I would like to work on DeBERTa<|||||>Great, thanks for taking on DeBERTa @Saad135! 🤗<|||||>Hello, do you mind if I can tackle on ALBERT model? @stevhliu <|||||>For sure, looking forward to your contribution @JuheonChu! 🤗<|||||>Hi! Could I try ViT? It might take me some time though as have some work projects to complete too.<|||||>Hi, I would like to work on XLM-RoBERTa! @stevhliu<|||||>Hey @stanleycai95, that would be great! Feel free to work on it when you have the time :)
Awesome, XLM-RoBERTa is all yours @hazrulakmal!<|||||>Hi, I would like to work on GPT-J! @stevhliu <|||||>Yay thanks for taking on GPTJ @adit299! Let me know if you have any questions or need any help 🤗 <|||||>Hi, could I work on OPT? :) @stevhliu<|||||>OPT is all yours @alissadb! 🤩 <|||||>Let me round out the list @stevhliu . TrOCR<|||||>Awesome, thanks for finishing this off @Laxmaan! 🎉 <|||||>Hello @stevhliu . I'd love to contribute in documentation. I see all models are assigned, is there any other I can help with?
Thank you 😊<|||||>Hi @elabongaatuo, sorry for the late reply and thanks for your enthusiasm!
I think we are good with the model resource contributions for now. If you're looking for ways to contribute to the docs, feel free to open an issue for improving the docs (content that is unclear, missing, or inaccurate or fixing typos) and we can review it there. For more info about getting started with contributing, take a look at this [guide](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md)! 🤗<|||||>Hello @stevhliu . Thanks for getting back to me. I'll be on the lookout for docs that need improving. <|||||>Hi @JuheonChu and @Laxmaan, I wanted to check and see if you're still interested in making a model contribution. Totally cool if you aren't available anymore, I'll unassign the models you claimed and let others take a shot at it. Thanks! <|||||>Hi @stevhliu, I'd like to take a shot at one of the models if one of them becomes unassigned. Please let me know!<|||||>Thanks for the interest; TrOCR, LayoutLMV2, and ALBERT are now available!<|||||>Hello @stevhliu. I'd like to take up ALBERT.<|||||>> Thanks for the interest; TrOCR, LayoutLMV2, and ALBERT are now available!
I’d like to take TrOCR!<|||||>All yours! Happy contributing and feel free to let me know if you have any questions! 🤗<|||||>> Thanks for the interest; TrOCR, LayoutLMV2, and ALBERT are now available!
Hello!! @stevhliu I don't have any option I guess 😅. LayoutLMV2 for me then 🌏.<|||||>Hi @subham73, LayoutLMv2 is actually [done](https://github.com/huggingface/transformers/issues/19848) haha! <|||||>Hi @stevhliu are there any open issues to work on :)<|||||>Hi, thanks for your interest @Girish16!
Feel free to browse [Good First Issues](https://github.com/huggingface/transformers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+First+Issue%22) for open issues to work on, and you can also check out the [Contribution](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#contribute-to--transformers) guide for more ways to contribute! 🤗 <|||||>Hi. Is ALBERT still available? <|||||>Hi @ENate, ALBERT is currently being worked on in #23685. If the original contributor is no longer interested in working on it, I'll let you know! 😄 <|||||>No worries thanks :) . <|||||>@stevhliu hello, @ENate can take it up. 😊<|||||>Okay then. Will proceed using the guidelines provided by @stevhliu and the example for DIstilBERT.<|||||>@stevhliu - I saw that there is a resource for ALBERT at:
```
https://huggingface.co/docs/transformers/main/en/model_doc/albert
```
which is similar to the resources for DistilBERT you mentioned in the guidelines above at:
```
https://huggingface.co/docs/transformers/main/en/model_doc/distilbert#resources
```
<|||||>Yeah ALBERT only has the task guides, and it doesn't go quite as in-depth as DistilBERT. For example, DistilBERT includes links to the course, notebooks, and scripts. You can probably just copy over most of the content from DistilBERT that is relevant to ALBERT (in other words, replace `DistilBERTForX` with `ALBERTForX`)!<|||||>Thanks :) @stevhliu <|||||>Hello @stevhliu is Jukebox still available? <|||||>Feel free to open a PR for Jukebox @daniela-basurto! 🤗 <|||||>Hello @stevhliu may I please take up **whisper** with a few of the OSSCA mentees?
Cc: tysm @ArthurZucker for the pointer! We'll start compiling models with incomplete resource tabs so our mentees can work on them.<|||||>Yes absolutely, thanks for your interest @wonhyeongseo!<|||||>I ran a simple `grep -wL * -e "## Resources"` command, and a total of **150 out of 222 documents** would benefit from this issue. I'm not sure if all of these are open for contributions though.
Below is the todo list with contributors I saw recently.
<details>
<summary>
Model Docs in need of resources [Updated 23-07-22]:
</summary>
- [ ] albert
- [ ] altclip
- [ ] bark
- [ ] barthez
- [ ] bartpho
- [ ] bert-generation
- [ ] bert-japanese
- [ ] bertweet
- [ ] big_bird
- [ ] bigbird_pegasus
- [ ] biogpt
- [ ] blenderbot-small
- [ ] blenderbot
- [ ] bridgetower
- [ ] byt5
- [ ] camembert
- [ ] canine
- [ ] chinese_clip
- [ ] clap
- [ ] codegen
- [ ] conditional_detr
- [ ] convbert
- [ ] cpm
- [ ] cpmant
- [ ] ctrl
- [ ] deberta-v2
- [ ] decision_transformer
- [ ] deplot
- [ ] dialogpt
- [ ] dinov2
- [ ] donut
- [ ] dpr
- [ ] efficientformer
- [ ] efficientnet
- [ ] electra
- [ ] encodec
- [ ] encoder-decoder
- [ ] ernie
- [ ] ernie_m
- [ ] esm
- [ ] flan-t5
- [ ] flan-ul2
- [ ] flaubert
- [ ] flava
- [ ] fnet
- [ ] focalnet
- [ ] fsmt
- [ ] funnel
- [ ] gpt-sw3
- [ ] gpt_bigcode
- [ ] gpt_neo
- [ ] gpt_neox
- [ ] gpt_neox_japanese
- [ ] gptsan-japanese
- [ ] graphormer
- [ ] herbert
- [ ] hubert
- [ ] ibert
- [ ] instructblip
- [ ] jukebox @daniela-basurto
- [ ] layoutxlm
- [ ] led
- [ ] llama @wonhyeongseo and OSSCA
- [ ] llama2
- [ ] longformer
- [ ] longt5
- [ ] luke
- [ ] lxmert
- [ ] m2m_100
- [ ] marian
- [ ] markuplm
- [ ] matcha
- [ ] mbart
- [ ] mega
- [ ] megatron-bert
- [ ] megatron_gpt2
- [ ] mgp-str
- [ ] mluke
- [ ] mms
- [ ] mobilebert
- [ ] mobilevitv2
- [ ] mpnet
- [ ] mra
- [ ] mt5
- [ ] musicgen
- [ ] mvp
- [ ] nezha
- [ ] nllb-moe
- [ ] nllb
- [ ] nystromformer
- [ ] owlvit
- [ ] pegasus
- [ ] pegasus_x
- [ ] perceiver
- [ ] phobert
- [ ] plbart
- [ ] prophetnet
- [ ] qdqbert
- [ ] rag
- [ ] realm
- [ ] reformer
- [ ] rembert
- [ ] roberta-prelayernorm
- [ ] roc_bert
- [ ] roformer
- [ ] rwkv
- [ ] sam
- [ ] sew-d
- [ ] sew
- [ ] speech-encoder-decoder
- [ ] speech_to_text
- [ ] speech_to_text_2
- [ ] speecht5
- [ ] splinter
- [ ] squeezebert
- [ ] swiftformer
- [ ] t5v1.1
- [ ] tapas
- [ ] timesformer
- [ ] tvlt
- [ ] ul2
- [ ] umt5
- [ ] unispeech-sat
- [ ] unispeech
- [ ] vilt
- [ ] vision-encoder-decoder
- [ ] vision-text-dual-encoder
- [ ] visual_bert
- [ ] vivit
- [ ] wav2vec2-conformer
- [ ] wav2vec2_phoneme
- [ ] wavlm
- [ ] whisper @wonhyeongseo and OSSCA
- [ ] xglm
- [ ] xlm-prophetnet
- [ ] xlm-roberta-xl
- [ ] xlm-v
- [ ] xlm
- [ ] xlnet
- [ ] xls_r
- [ ] xlsr_wav2vec2
- [ ] yoso
</details><|||||>Ok.<|||||>> I'm not sure if all of these are open for contributions though.
Thanks for checking @wonhyeongseo! I think it would be nice to eventually have Resources for all the models, so if you see other ones you're interested in contributing to, feel free to open a PR! I would focus on the more high-impact models first (like LLaMA) that get more pageviews/usage. For certain models (like [BORT](https://huggingface.co/docs/transformers/model_doc/bort)) that are in maintenance mode, we can skip those entirely.<|||||>Awesome @stevhliu , thank you so much for your warm reception.
- **May we please reserve LLaMA as well for the OSSCA team?**
- **In your opinion, when is the ideal time to start gathering resources after a model's release?**
For LLaMA2, since it's relatively new, there might not be many official resources yet. It will depend on a model's impact as you described, but a rule of thumb would be useful.
- **Although I think this is already the case, would it be possible for you to sort these incomplete models and provide the top 20 sorted by impact or page views as of recent advances?**
I'm sure some will pique the interest of my team and our mentees.
<details>
<summary>
Thank you for the heads up for files under maintenance! I've deleted 8 of those from the above https://github.com/huggingface/transformers/issues/20055#issuecomment-1645188309 list by `grep -LZ "## Resources" * | xargs -0 grep -l "<Tip warning={true}>"` :
</summary>
```js
auto.md
bort.md
mctct.md
open-llama.md
retribert.md
tapex.md
trajectory_transformer.md
transfo-xl.md
```
</details>
Thank you so much for your support @stevhliu .
Hope you have a wonderful weekend!
Best regards,
Won Seo<|||||>> May we please reserve LLaMA as well for the OSSCA team?
For sure! 👍
> In your opinion, when is the ideal time to start gathering resources after a model's release?
I think maybe whenever you see some content, you can open a PR to add it to the model page. It's ok if it's just one guide/tutorial/blog post; we can gradually add to it as more content and resources get created. For example, Philipp has a [blog post](https://www.philschmid.de/sagemaker-llama2-qlora) about fine-tuning LLaMA 2 on SageMaker here that can be added :)
> Although I think this is already the case, would it be possible for you to sort these incomplete models and provide the top 20 sorted by impact or page views as of recent advances?
By downloads, here are the next top 20 models (it's okay to skip some of the models if there aren't any available resources for them):
<details>
BART
CLIPSeg
Marian
MPNet
ELECTRA
ResNet
CamemBERT
HuBERT
LLaMA
Longformer
VisionEncoderDecoder
GPT NeoX
EnCodec
ConvBERT
mBART
GPT Neo
FNet
YOLOS
BLIP
BEiT
</details><|||||>ok<|||||>@stevhliu
Hello, I would like to put together some resources for Longformer, willing to look into CamemBERT as well if Longformer has already been taken. |
transformers | 20,054 | closed | Adapt PerceiverIO Multimodal class to work with arbitrary modalities | # What does this PR do?
The current codebase is excellent, but the multimodal classes are tightly coupled to the example in the paper where the modalities are video, audio, and binary class labels. This PR makes a few small changes to support arbitrary modalities, such as text and image.
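To make the intent concrete, here is a minimal, illustrative sketch of the general pattern this enables: keying one preprocessor per modality in an `nn.ModuleDict`. The modality names and toy preprocessors below are made up for illustration and this is not the actual Perceiver API — the point is that a plain `dict` (one of the issues mentioned in the discussion below) would not register the per-modality modules as submodules, so their parameters would be missed by `.parameters()` and `.to(device)`.
```python
import torch
from torch import nn


class ToyMultimodalPreprocessor(nn.Module):
    """Illustrative only: route each modality to its own preprocessor."""

    def __init__(self, modalities: dict):
        super().__init__()
        # nn.ModuleDict (unlike a plain dict) registers the per-modality modules,
        # so their parameters are trained and moved to the right device.
        self.modalities = nn.ModuleDict(modalities)

    def forward(self, inputs: dict):
        return {name: self.modalities[name](tensor) for name, tensor in inputs.items()}


# stand-ins for real text/image preprocessors
preprocessor = ToyMultimodalPreprocessor({"text": nn.Linear(16, 32), "image": nn.Linear(64, 32)})
outputs = preprocessor({"text": torch.randn(2, 16), "image": torch.randn(2, 64)})
print({name: out.shape for name, out in outputs.items()})
```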
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
I'm not an experienced committer to this repo, so I'm very happy to take direction. My hope is to share the improvements I made to a wider audience and extend an already awesome package.
| 11-03-2022 23:10:22 | 11-03-2022 23:10:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger can you take a look at this PR? I'm happy to make any modifications to get this through. These changes are required to use the multimodal classes on modalities other than those that are currently hard-coded.<|||||>Thanks for your PR! Could you clarify which errors you run into with the current implementation that would be solved with this PR?<|||||>The biggest issue I had was that the signature of `forward` for the various Preprocessors isn't consistent. This mean I couldn't use the `TextPreprocessor` within the `PerceiverMultimodalPreprocessor` class. There were a couple other issues as well (e.g. `dict` instead of `ModuleDict`.) I'm happy to take feedback to improve this PR. Thanks for all your help!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge any thoughts on these improvements?<|||||>Hi @NielsRogge, just checking in on this PR. I know you're probably super busy, so is there something I can do to make the review easier for you? I'm very happy to do what I can to incorporate feedback. I have additional changes I'd like to make (mostly around type hints), but I'm hoping to make these initial fixes first, which are more critical. Please let me know how I can help! Thanks again for all your work.<|||||>Hi, sorry for the late reply here. I'll take a look tomorrow <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@amyeroberts Could you have a look?<|||||>@stevenmanton I can see that some of the tests failing are unrelated to this PR. Can you rebase from main to make sure all upstream changes are included? <|||||>Thanks for your contribution! |
transformers | 20,053 | closed | Is there a no_trainer version for image pretraining | ### Feature request
I wonder if there is a `run_no_trainer` script for image pretraining? @NielsRogge
### Motivation
I usually use the `no_trainer` scripts in my code because the Trainer does not show much detail.
### Your contribution
NA | 11-03-2022 22:01:59 | 11-03-2022 22:01:59 | @NielsRogge<|||||>Not yet, but it would be straightforward to add.
Marking this as a good first issue.<|||||>Hi @NielsRogge, I would like to try to add it.<|||||>Hi @atturaioe, awesome.
So in [this folder](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining), one could add a `run_mim_no_trainer.py` script, similar to the other `no_trainer.py` scripts in the examples folder.<|||||>Hi @atturaioe, are you still working on this? I would like to attempt this if you are no longer working on it.<|||||>Hi @Saad135, you can take this issue if you want, let me know.
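For whoever picks this up, here is a rough sketch of the core loop such a script could contain, using 🤗 Accelerate instead of the Trainer. The config and dummy data below are placeholders just to make the snippet self-contained — the real script should mirror the existing `run_mim.py` (argument parsing, datasets, mask generator, etc.).
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator
from transformers import SwinConfig, SwinForMaskedImageModeling

config = SwinConfig()  # placeholder; the real script would build this from `model_name_or_path`
model = SwinForMaskedImageModeling(config)

# Dummy tensors standing in for a real image dataset + mask generator.
num_patches = (config.image_size // config.patch_size) ** 2
pixel_values = torch.randn(8, config.num_channels, config.image_size, config.image_size)
bool_masked_pos = torch.randint(0, 2, (8, num_patches)).bool()
dataloader = DataLoader(TensorDataset(pixel_values, bool_masked_pos), batch_size=2, shuffle=True)

accelerator = Accelerator()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for pixel_values, bool_masked_pos in dataloader:
    loss = model(pixel_values=pixel_values, bool_masked_pos=bool_masked_pos).loss
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```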
<|||||>@atturaioe Sure, I will give it a go.<|||||>Is anyone still working on this? If not I'd quite like to pick it up.
I see from the PR that most of the work has been done, but there's not been any activity on it recently 🤔 <|||||>@madt2709 I am working on it. Most of the work has been completed by @Saad135. His PR #20053 has been closed due to inactivity. I have taken it over in PR #23156 to complete it.
|
transformers | 20,052 | closed | Implement tf big bird port | # What does this PR do?
Solves #19430 by implementing BigBird in TensorFlow.
I had an earlier demo PR up, but this one will be the main PR I use to hopefully get this merged in once it's all working :)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Tagging @Rocketknight1, who was kind enough to help me already with an issue I ran into when outputting attention weights using strided slices, but I would appreciate input from anyone!
Most of the tests are working as expected now. I am still running into a couple of issues with two or three tests that I need to look into:
1) Issue with TFAutoModel not raising a `ValueError` in one of the tests for mismatched sizes (but `TFAutoModelForSequenceClassification` is working in this test 🤔).
2) Issue with persisting and loading (the `save_load` and `keras_load` tests aren't working fully yet).
I'm going to try and work on this some more this weekend, if anyone has any insights I'd love to know :D | 11-03-2022 19:33:39 | 11-03-2022 19:33:39 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20052). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,051 | closed | Add new terms to the glossary | This PR adds some new terms related to computer vision and speech to the glossary, feel free to let me know if I'm missing any you think are important that would help users better understand the docs! | 11-03-2022 18:25:52 | 11-03-2022 18:25:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,050 | closed | Generate: TF contrastive search with XLA support | # What does this PR do?
Adds contrastive search to TF, with XLA support.
In essence, TF's contrastive search is very similar to PT's, adapted to the structure that is present in other TF XLA generation functions (i.e. has a dedicated function to loop over, a separate function to update `model_kwargs` when in XLA mode, ...). The most notable difference is how the best candidate token (and associated model variables) are gathered -- PT relies on slicing, which TF doesn't support, so a `tf.gather` alternative is used.
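For illustration, here is a minimal standalone sketch of that gathering pattern — this is not the actual implementation, and the shapes and variable names are made up:
```python
import tensorflow as tf

batch_size, top_k, hidden_dim = 2, 4, 8
candidate_hidden = tf.random.normal((batch_size, top_k, hidden_dim))  # hidden state of each candidate
scores = tf.random.normal((batch_size, top_k))                        # contrastive score of each candidate

best_idx = tf.argmax(scores, axis=-1)  # (batch_size,)
# PyTorch can do `candidate_hidden[range(batch_size), best_idx]`; in TF we gather instead.
best_hidden = tf.gather(candidate_hidden, best_idx, axis=1, batch_dims=1)  # (batch_size, hidden_dim)
print(best_hidden.shape)
```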
The exact same integration tests (with the same input, model, and outputs) were added whenever possible. Three integration tests were not added, which will be addressed in a follow-up PR:
1. [GPT-J](https://github.com/huggingface/transformers/blob/d447c460b16626c656e4d7a9425f648fe69517b3/tests/models/gptj/test_modeling_gptj.py#L577) -- PT's test runs at half precision, for which we don't have the same TF facilities
2. [OPT](https://github.com/huggingface/transformers/blob/d447c460b16626c656e4d7a9425f648fe69517b3/tests/models/opt/test_modeling_opt.py#L495) -- OPT is not XLA compatible atm (it runs, but the position embeddings are wrong with padded structures, so we get different outputs)
3. [T5](https://github.com/huggingface/transformers/blob/d447c460b16626c656e4d7a9425f648fe69517b3/tests/models/t5/test_modeling_t5.py#L1227) -- the model used for this test does not have TF weights | 11-03-2022 18:00:30 | 11-03-2022 18:00:30 | _The documentation is not available anymore as the PR was closed or merged._<|||||>(@Rocketknight1 is off, so I'm merging this to not slow down the corresponding blog post, which will contain TF examples thanks to this PR :D In any case, have a quick look when you're back, to ensure we kill any bad pattern before v4.25 gets released!) |
transformers | 20,049 | closed | updating the warmup_ratio from 0.1 to 0.2 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@lvwerra
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-03-2022 17:49:46 | 11-03-2022 17:49:46 | @lvwerra - what should I do next?<|||||>Closing this - just used this to show how to make a PR for the transformers reading group! Sorry for the unnecessary pings :)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20049). All of your documentation changes will be reflected on that endpoint.<|||||>Please use personal forks of the repo when doing demos in the future. In this PR you pinged directly 15 people and also added an unnecessary notification for all of those who watched the repo. |
transformers | 20,048 | closed | PoolformerImageProcessor defaults to match previous FE | # What does this PR do?
The output of `PoolformerImageProcessor` didn't exactly match the previous feature extractor. This PR updates the defaults and size calculation logic so the outputs match.
Running:
```
import torch
from transformers import PoolFormerFeatureExtractor, PoolFormerModel
from datasets import load_dataset
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]
feature_extractor = PoolFormerFeatureExtractor.from_pretrained("sail/poolformer_s12")
model = PoolFormerModel.from_pretrained("sail/poolformer_s12")
inputs = feature_extractor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
r = list(last_hidden_states.shape)
print(r)
```
The snippet now prints a last hidden state of shape `[1, 512, 7, 7]` - matching the output before #19796 was merged in.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 11-03-2022 17:48:16 | 11-03-2022 17:48:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,047 | closed | pipeline("summarization") is extractive vs abstractive? | ### System Info
pipeline("summarization") is extractive vs abstractive?
# use bart in pytorch
summarizer = pipeline("summarization")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
# use t5 in tf
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from transformers import pipeline
# use bart in pytorch
summarizer = pipeline("summarization")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
# use t5 in tf
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf")
summarizer("An apple a day, keeps the doctor away", min_length=5, max_length=20)
### Expected behavior
pipeline("summarization") is extractive vs abstractive? there is no mention about it on the official documentation | 11-03-2022 16:51:56 | 11-03-2022 16:51:56 | Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,046 | closed | Multilabel, multiclass models with >2 classes per label using BCELoss instead of CategoricalCrossentropy | ### System Info
latest transformers
### Who can help?
@sgugger
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The PR here by @sgugger adds multilabel classification support with the BCEWithLogitsLoss:
https://github.com/huggingface/transformers/pull/14180/files
However, in the case where we have multiple labels, and the labels have multiple classes (or a mix of binary with multiple classes), it seems a bit strange to me to use BCEWithLogitsLoss and to force the user to one-hot-encode the labels with multiple classes during fine-tuning, which is the only way to make this work. Instead, we should in that case just use CategoricalCrossentropy like in this tutorial for each of the labels:
https://towardsdatascience.com/multi-label-multi-class-text-classification-with-bert-transformer-and-keras-c6355eccb63a
Perhaps I missed something, but it seems better to me to allow the user to specify the labels without reformatting them, and to use one CategoricalCrossentropy loss per label, than to force them to reformat the labels into a binary one-hot encoding for all classes across multiple labels.
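For reference, here is a minimal sketch of what the current multi-label path expects — float multi-hot label vectors trained with BCEWithLogitsLoss via `problem_type="multi_label_classification"` (the checkpoint and the Dog/Cat/Bird/Woof/Meow/Chirp column order are just for illustration):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6, problem_type="multi_label_classification"
)

encoding = tokenizer("Likes to fetch", return_tensors="pt")
labels = torch.tensor([[1.0, 0.0, 0.0, 1.0, 0.0, 0.0]])  # multi-hot over Dog/Cat/Bird/Woof/Meow/Chirp
loss = model(**encoding, labels=labels).loss  # BCEWithLogitsLoss under the hood
```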
For example, if the user had a dataset like:
| Text | Animal | Sound |
| --- | --- | --- |
| Likes to fetch | Dog | Woof |
| Likes to sleep | Cat | Meow |
| Likes to fly | Bird | Chirp |
My understanding is that currently we have to one-hot-encode the labels into the following to make it work with BCEWithLogitsLoss, which is used in the implementation:
| Dog | Cat | Bird | Woof | Meow | Chirp |
| --- | --- | --- | --- | --- | --- |
| 1 | 0 | 0 | 1 | 0 | 0 |
| 0 | 1 | 0 | 0 | 1 | 0 |
| 0 | 0 | 1 | 0 | 0 | 1 |
I think it might be useful instead to just allow the user to pass in the labels directly and use CategoricalCrossentropy. Why is that currently not possible?
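Something like the following sketch is what I have in mind — a shared encoder with one classification head (and its own CrossEntropyLoss) per label column. This is illustrative only; the encoder checkpoint and head names are arbitrary, and transformers does not provide such a head out of the box:
```python
import torch
from torch import nn
from transformers import AutoModel


class MultiOutputClassifier(nn.Module):
    def __init__(self, encoder_name="distilbert-base-uncased", num_animals=3, num_sounds=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden_size = self.encoder.config.hidden_size
        self.animal_head = nn.Linear(hidden_size, num_animals)
        self.sound_head = nn.Linear(hidden_size, num_sounds)
        self.loss_fct = nn.CrossEntropyLoss()

    def forward(self, input_ids, attention_mask, animal_labels=None, sound_labels=None):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        animal_logits = self.animal_head(hidden)
        sound_logits = self.sound_head(hidden)
        loss = None
        if animal_labels is not None and sound_labels is not None:
            # one categorical loss per output, summed
            loss = self.loss_fct(animal_logits, animal_labels) + self.loss_fct(sound_logits, sound_labels)
        return loss, animal_logits, sound_logits
```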
### Expected behavior
Allow the user to pass in the labels without creating a one-hot-encoded matrix for multiclass, multilabel scenario where there are >2 classes per label | 11-03-2022 16:34:21 | 11-03-2022 16:34:21 | Also, just thinking about correctness, the model could possibly give both Dog and Cat as an answer when using BCEWithLogitsLoss since it treats them as independent labels during fine-tuning after one hot encoding. This seems wrong to me. Perhaps I am missing something here? We should just use multiple CategoricalCrossentropy losses per label here both for correctness and to make it easier for the user to specify the labels to the model.<|||||>You are mistaking what we call a "multi-label" problem. A multi-label problem means one input can have zero, one or multiple labels. For instance the model could give both Dog and Cat as an answer because there might be both in the input.
For a model that predicts several categories of outputs, you will probably need to write your own head.<|||||>I guess I'm perhaps getting confused by the graph from scikit-learn here:
https://scikit-learn.org/stable/modules/multiclass.html?highlight=multilabel
multilabel classification is under sklearn.multioutput. Perhaps I've heard the terms multilabel and multioutput used interchangeably too many times and I'm getting confused by that. Also perhaps in tabular vs text contexts those terms may be used in slightly different ways.
I see now that the intention isn't to support this scenario in huggingface, so perhaps I can close this issue. This would be a more advanced scenario/feature where there can be multiple outputs but some are grouped together. There are also hierarchical models that output some top-level label as one class and then under that more specific labels as another class, thinking of the https://huggingface.co/datasets/DeveloperOats/DBPedia_Classes dataset here -- which in the description says "excellent benchmark for hierarchical multiclass/multilabel text classification", but perhaps it should call it multioutput since you would want the model to output all l1/l2/l3 labels with one of the multiple classes for each. That is also similar to the hierarchical multilabel + multioutput + multiclass models some customers I have worked with recently used, although maybe if the model is supposed to output multiple labels like this where the labels are not independent "multilabel" is not the correct term then and only "multioutput" should be used.<|||||>I guess this issue is more of a feature request than a bug report to support "multioutput" instead of the current "multilabel" text classification scenario. But this feature might be so niche and specific that it's not really worth it to implement in huggingface as another text classification parameter, so I'll just close it here. |
transformers | 20,045 | closed | Fix ESM LM head test | The original ESM-2 checkpoints had a bug that meant the LM head bias was not saved correctly. Now that this has been fixed, we need to update our LM test as well.
cc @ydshieh | 11-03-2022 16:07:40 | 11-03-2022 16:07:40 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh Unfortunately I deleted and remade the repos, so there's no commit to point at! I'll try to do that in future though. |
transformers | 20,044 | closed | Allow passing arguments to model testers for CLIP-like models | # What does this PR do?
This is a continuation of PR #19954, but for models like `CLIP`. Currently, for such models, we have
```python
class CLIPModelTester:
def __init__(self, parent, is_training=True):
self.parent = parent
self.text_model_tester = CLIPTextModelTester(parent)
self.vision_model_tester = CLIPVisionModelTester(parent)
```
and there is no way to pass any argument to the 2 component testers.
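One possible shape of the change, mirroring the snippet above (the exact signature in the PR may differ; `CLIPTextModelTester` and `CLIPVisionModelTester` are the sub-testers defined in the same test file):
```python
class CLIPModelTester:
    def __init__(self, parent, text_kwargs=None, vision_kwargs=None, is_training=True):
        text_kwargs = text_kwargs if text_kwargs is not None else {}
        vision_kwargs = vision_kwargs if vision_kwargs is not None else {}

        self.parent = parent
        # forward per-component kwargs to the sub-testers instead of hard-coding their defaults
        self.text_model_tester = CLIPTextModelTester(parent, **text_kwargs)
        self.vision_model_tester = CLIPVisionModelTester(parent, **vision_kwargs)
        self.is_training = is_training
```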
If this POC is approved, I will work on other models like `GroupViT`, `OwlViT`, `XCLIP` etc.
| 11-03-2022 15:30:15 | 11-03-2022 15:30:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>`GroupViT`, `OwlViT`, `XCLIP`.
`Flava` too, but with name `image_model_tester` instead of `vision_model_tester`.
|
transformers | 20,043 | closed | Only resize embeddings when necessary | # What does this PR do?
As seen in #19959, when using our examples with models where the embedding size is larger than the tokenizer vocabulary for padding reasons (to make the embedding dim a multiple of a given number like 8 or 128), the fine-tuned models become incompatible with the original model.
This has confused a lot of users, all just to support a full pretraining example. This is why this PR proposes to only resize the embeddings when the tokenizer has more tokens than the model. This might in turn confuse users who do a full pretraining on a small vocab and expect the model to have a smaller embedding size, which is why a comment is added.
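Concretely, the examples would switch to something like this sketch (the checkpoint name is just for illustration):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Only grow the embedding matrix, never shrink it, so the fine-tuned checkpoint stays
# compatible with the original model (whose embedding size may be padded, e.g. to a multiple of 8).
embedding_size = model.get_input_embeddings().weight.shape[0]
if len(tokenizer) > embedding_size:
    model.resize_token_embeddings(len(tokenizer))
```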
Fixes #19959 | 11-03-2022 15:29:33 | 11-03-2022 15:29:33 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,042 | closed | Attempting to test automatically the `_keys_to_ignore`. | # What does this PR do?
This adds a new part of the `tied_weights` test that aims at detecting automatically
when `_keys_to_ignore` is incorrectly set.
`_keys_to_ignore` aims to ignore weights that are supposed to be tied in the
final model, meaning it's OK if the parameter is missing from the on-disk weights.
The weights are really empty during the load, but they end up being tied afterwards
so we should ignore them during the load if they are missing.
The test also aims to detect `_keys_to_ignore` that might have been set but
could be misleading because the parameters are actually NOT tied anymore.
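As a rough illustration of what "really tied" means (this is not the actual test code, and the tiny checkpoint is just an example): two tied parameters share the same underlying storage, which is easy to check.
```python
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("hf-internal-testing/tiny-random-bert")
input_embeddings = model.get_input_embeddings().weight
output_embeddings = model.get_output_embeddings().weight
# True when the output projection is genuinely tied to the input embeddings
print(input_embeddings.data_ptr() == output_embeddings.data_ptr())
```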
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-03-2022 15:04:22 | 11-03-2022 15:04:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh the `splinter` test failing is normal ?
```
FAILED tests/models/splinter/test_modeling_splinter.py::SplinterModelTest::test_save_load_fast_init_from_base - AssertionError: 3069.73388671875 not less than or equal to 0.001 : splinter_qass.query_start_transform.dense.weight not identical
```<|||||>@Narsil
I am not able to reproduce the `splinter` test failure you mentioned above with current `main` on a GCP GPU VM. Could you provide more information about your environment and how you launched the test?<|||||>It's this failure: https://app.circleci.com/jobs/github/huggingface/transformers/608396<|||||>@ydshieh This test now fails in the CI: tests/models/wav2vec2_conformer/test_modeling_wav2vec2_conformer.py::Wav2Vec2ConformerModelTest::test_save_load_fast_init_from_base
However, I can't seem to reproduce it locally. Do you mind checking whether it's my setup or the CI that's failing?<|||||>Merging.
**fingers crossed** :)<|||||>Hey!
I am not sure if it is because of this PR but loading NLLB (that is affected by this PR) now gives:
```
│ /home/younes_huggingface_co/debug_issues/code/transformers/src/transformers/modeling_utils.py:24 │
│ 59 in _load_pretrained_model │
│ │
│ 2456 │ │ │ for key in missing_keys: │
│ 2457 │ │ │ │ if key.startswith(prefix): │
│ 2458 │ │ │ │ │ key = ".".join(key.split(".")[1:]) │
│ ❱ 2459 │ │ │ │ param = model_state_dict[key] │
│ 2460 │ │ │ │ if param.device == torch.device("meta"): │
│ 2461 │ │ │ │ │ if not load_in_8bit: │
│ 2462 │ │ │ │ │ │ set_module_tensor_to_device(model, key, "cpu", torch.empty(*para │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
KeyError: 'encoder.embed_positions.weights'
```
Here is the snippet to reproduce the error:
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
src_lang = "eng_Latn"
tgt_lang = "spa_Latn"
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", src_lang=src_lang)
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M",
device_map= "auto")
```
I did not follow this PR entirely, but I will dig into it now and see what exactly caused the issue 💪
cc @Narsil @sgugger |
transformers | 20,041 | closed | woctezuma / stable-diffusion-colab | ### System Info
Google Colab, Free version, GPU
### Who can help?
@NielsRogge, @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. https://github.com/woctezuma/stable-diffusion-colab
2. https://colab.research.google.com/github/woctezuma/stable-diffusion-colab/blob/main/stable_diffusion.ipynb#scrollTo=GR4vF2bw-sHR
3. save a copy of the notebook to Drive
4. run 1st cell
5. run 2nd cell
6. copy my token from https://huggingface.co/settings/tokens
7. paste it into the field
8. press enter
9. #1st error - https://discuss.huggingface.co/t/invalid-token-passed/22711
10. go to https://huggingface.co/settings/tokens, then Manage > Invalidate and refresh
11. run 2nd cell again
12. copy and paste in new token
```
_| _| _| _| _|_|_| _|_|_| _|_|_| _| _| _|_|_| _|_|_|_| _|_| _|_|_| _|_|_|_|
_| _| _| _| _| _| _| _|_| _| _| _| _| _| _| _|
_|_|_|_| _| _| _| _|_| _| _|_| _| _| _| _| _| _|_| _|_|_| _|_|_|_| _| _|_|_|
_| _| _| _| _| _| _| _| _| _| _|_| _| _| _| _| _| _| _|
_| _| _|_| _|_|_| _|_|_| _|_|_| _| _| _|_|_| _| _| _| _|_|_| _|_|_|_|
To login, `huggingface_hub` now requires a token generated from https://huggingface.co/settings/tokens .
Token:
Login successful
Your token has been saved to /root/.huggingface/token
Authenticated through git-credential store but this isn't the helper defined on your machine.
You might have to re-authenticate when pushing to the Hugging Face Hub. Run the following command in your terminal in case you want to set this credential helper as the default
git config --global credential.helper store
```
13. I ran ```git config --global credential.helper store```, then I could rerun everything and move forward two cells
14. Cell CODE
```
import mediapy as media
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline
model_id = "CompVis/stable-diffusion-v1-4"
device = "cuda"
remove_safety = False
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16, revision="fp16", use_auth_token=True)
if remove_safety:
pipe.safety_checker = lambda images, clip_input: (images, False)
pipe = pipe.to(device)
```
15. ERROR
```
[/usr/local/lib/python3.7/dist-packages/requests/models.py](https://localhost:8080/#) in raise_for_status(self)
940 if http_error_msg:
--> 941 raise HTTPError(http_error_msg, response=self)
942
HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/fp16/model_index.json
The above exception was the direct cause of the following exception:
HfHubHTTPError Traceback (most recent call last)
[/usr/local/lib/python3.7/dist-packages/diffusers/configuration_utils.py](https://localhost:8080/#) in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
233 subfolder=subfolder,
--> 234 revision=revision,
235 )
[/usr/local/lib/python3.7/dist-packages/huggingface_hub/file_download.py](https://localhost:8080/#) in hf_hub_download(repo_id, filename, subfolder, repo_type, revision, library_name, library_version, cache_dir, user_agent, force_download, force_filename, proxies, etag_timeout, resume_download, use_auth_token, local_files_only, legacy_cache_layout)
1056 proxies=proxies,
-> 1057 timeout=etag_timeout,
1058 )
[/usr/local/lib/python3.7/dist-packages/huggingface_hub/file_download.py](https://localhost:8080/#) in get_hf_file_metadata(url, use_auth_token, proxies, timeout)
1358 )
-> 1359 hf_raise_for_status(r)
1360
[/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_errors.py](https://localhost:8080/#) in hf_raise_for_status(response, endpoint_name)
253 # as well (request id and/or server error message)
--> 254 raise HfHubHTTPError(str(HTTPError), response=response) from e
255
HfHubHTTPError: <class 'requests.exceptions.HTTPError'> (Request ID: esduBFUm9KJXSxYhFffq4)
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
[<ipython-input-6-9b05f13f8bf3>](https://localhost:8080/#) in <module>
9
10
---> 11 pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16, revision="fp16", use_auth_token=True)
12 if remove_safety:
13 pipe.safety_checker = lambda images, clip_input: (images, False)
[/usr/local/lib/python3.7/dist-packages/diffusers/pipeline_utils.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
371 local_files_only=local_files_only,
372 use_auth_token=use_auth_token,
--> 373 revision=revision,
374 )
375 # make sure we only download sub-folders and `diffusers` filenames
[/usr/local/lib/python3.7/dist-packages/diffusers/configuration_utils.py](https://localhost:8080/#) in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
254 except HTTPError as err:
255 raise EnvironmentError(
--> 256 "There was a specific connection error when trying to load"
257 f" {pretrained_model_name_or_path}:\n{err}"
258 )
OSError: There was a specific connection error when trying to load CompVis/stable-diffusion-v1-4:
<class 'requests.exceptions.HTTPError'> (Request ID: esduBFUm9KJXSxYhFffq4)
```
### Expected behavior
All the cells should run and generate photos as shown in the GitHub project:
https://github.com/woctezuma/stable-diffusion-colab | 11-03-2022 14:33:40 | 11-03-2022 14:33:40 | The error is that your token is not properly registered or does not grant access to the model. If you have accepted the terms of the license online, then it's probably a bug in `huggingface_hub` not setting your token properly, so you should report an issue in that repo :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,040 | closed | `torch.finfo` issue with torch.fx | # What does this PR do?
This PR allows tracing of the `torch.finfo` calls that were recently added to many model implementations. | 11-03-2022 13:36:29 | 11-03-2022 13:36:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,039 | closed | [Doctest] Add configuration_camembert.py | # What does this PR do?
Adds configuration_camembert.py to utils/documentation_tests.txt
Based on https://github.com/huggingface/transformers/issues/19487
@ydshieh can you please have a look? thanks :D
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-03-2022 13:22:08 | 11-03-2022 13:22:08 | |
transformers | 20,038 | closed | Amazon Sagemaker deployment issue for FLAN-T5 model family | ### System Info
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Using the deployment script for Amazon Sagemaker as described on the FLAN-T5 model cards (e.g. google/flan-t5-small):
```
from sagemaker.huggingface import HuggingFaceModel
import sagemaker
role = sagemaker.get_execution_role()
hub = {
'HF_MODEL_ID':'google/flan-t5-small',
'HF_TASK':'text2text-generation'
}
huggingface_model = HuggingFaceModel(
transformers_version='4.17.0',
pytorch_version='1.10.2',
py_version='py38',
env=hub,
role=role,
)
predictor = huggingface_model.deploy(
initial_instance_count=1, # number of instances
instance_type='ml.m5.xlarge' # ec2 instance type
)
predictor.predict({
'inputs': "The answer to the universe is"
})
```
I receive the following error:
```
ModelError Traceback (most recent call last)
<ipython-input-10-eb84f66e23d1> in <module>
25
26 predictor.predict({
---> 27 'inputs': "The answer to the universe is"
28 })
/opt/conda/lib/python3.7/site-packages/sagemaker/predictor.py in predict(self, data, initial_args, target_model, target_variant, inference_id)
159 data, initial_args, target_model, target_variant, inference_id
160 )
--> 161 response = self.sagemaker_session.sagemaker_runtime_client.invoke_endpoint(**request_args)
162 return self._handle_response(response)
163
/opt/conda/lib/python3.7/site-packages/botocore/client.py in _api_call(self, *args, **kwargs)
510 )
511 # The "self" in this scope is referring to the BaseClient.
--> 512 return self._make_api_call(operation_name, kwargs)
513
514 _api_call.__name__ = str(py_operation_name)
/opt/conda/lib/python3.7/site-packages/botocore/client.py in _make_api_call(self, operation_name, api_params)
917 error_code = parsed_response.get("Error", {}).get("Code")
918 error_class = self.exceptions.from_code(error_code)
--> 919 raise error_class(parsed_response, operation_name)
920 else:
921 return parsed_response
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
"code": 400,
"type": "InternalServerException",
"message": "\u0027T5LayerFF\u0027 object has no attribute \u0027config\u0027"
}
```
### Expected behavior
The model should work when deployed from SageMaker Studio. | 11-03-2022 11:29:26 | 11-03-2022 11:29:26 | cc @philschmid <|||||>Hello @BalazsFeherUK,
It seems that `T5-FLAN`/ `T5LayerFF` is not yet supported in `transformers==4.17.0`. You would need to update the transformers version to be able to use the model. You can check the forum on how you would do this: https://discuss.huggingface.co/t/deploying-open-ais-whisper-on-sagemaker/24761/9 |
transformers | 20,037 | closed | Add **kwargs to preprocess method | # What does this PR do?
Fixes failing doctests with (and real-life usage of) processors which contain two processing objects: one image processor + one tokenizer/feature extractor.
When the processor is called, all kwargs are passed to both processing objects e.g. in CLIPProcessor: https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/processing_clip.py#L81-L85
Image processors therefore have to be able to accept arguments they will not use when called.
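A minimal example of the call pattern that has to keep working — `padding` is only meaningful for the tokenizer, but the processor forwards every kwarg to both components (the checkpoint and image below are the usual doc examples):
```python
import requests
from PIL import Image
from transformers import CLIPProcessor

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# `padding=True` is a tokenizer kwarg, yet it also reaches the image processor's `preprocess`.
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, padding=True, return_tensors="pt")
```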
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 11-03-2022 10:40:21 | 11-03-2022 10:40:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Agreed - I'll add splitting up the kwargs on the TODO list! |
transformers | 20,036 | closed | Fix some doctests after PR 15775 | # What does this PR do?
After PR #15775, we need to either update some expected values or specify `skip_special_tokens=True`.
I am not very comfortable using `skip_special_tokens=True` for `PT_QUESTION_ANSWERING_SAMPLE` in `doc.py`, as it might fail other tests. We will have to run the doctest manually to see if everything is fine.
(A lazy way is not to use this argument, but just to update the expected values)
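For context, a tiny model-agnostic illustration of what the argument changes when decoding answer spans:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
input_ids = tokenizer("nice puppet", return_tensors="pt").input_ids[0]
print(tokenizer.decode(input_ids))                            # '[CLS] nice puppet [SEP]'
print(tokenizer.decode(input_ids, skip_special_tokens=True))  # 'nice puppet'
```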
#### update
I launched doctest [here](https://github.com/huggingface/transformers/actions/runs/3384706275). The tests with `ForQuestionAnswering` all pass, so we are good!
| 11-03-2022 09:48:33 | 11-03-2022 09:48:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Sorry, I always missed the last piece. Added the final commit to fix `docs/source/en/model_doc/speech_to_text.mdx`.
https://github.com/huggingface/transformers/pull/20036/commits/e6c4bc5f45fee4a8d3997bc4c04896fac1c25284 |
transformers | 20,035 | closed | Answer Mismatch: run squad_convert_examples_to_features with xlm-roberta | ### System Info
- `transformers` version: 4.5.1
- Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
### Who can help?
@sgugger
@mfuntowicz
@aaugustin
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### Overview
I am doing a QA task on the XQuAD dataset, which is a multilingual version of SQuAD in SQuAD format.
A problem occurs when I use the xlm-roberta-base tokenizer to preprocess xquad.zh.json (the Chinese version) following the standard process.
Specifically, I convert a list of examples into features with the function [`squad_convert_examples_to_features`](https://github.com/huggingface/transformers/blob/main/src/transformers/data/processors/squad.py) provided by huggingface and find that the answers of some original examples are inconsistent with their features.
I'll just pick an example for demonstration.
### Codes
```python
import transformers
from torch.utils.data import DataLoader, RandomSampler
from transformers import (
AutoTokenizer,
squad_convert_examples_to_features,
)
from transformers.data.processors.squad import SquadResult, SquadV1Processor, SquadV2Processor
model_name_or_path = 'xlm-roberta-base'
tokenizer = AutoTokenizer.from_pretrained(
model_name_or_path,
do_lower_case=False,
cache_dir='./cache/',
use_fast=False,
)
processor = SquadV1Processor()
examples = processor.get_train_examples(None, filename='xquad.zh.json')
...
# I pick just one tmp_example from examples
...
print(tmp_example.question_text)
# '利用计算复杂性理论对计算问题进行分类的主要依据是什么?'
print(tmp_example.context_text)
# '计算复杂性理论是理论计算机科学中计算理论的一个分支,它侧重于根据计算问题的固有难度对其进行分类,并将这些类别相互关联起来。计算问题被理解为原则上可由计算机解决的任务,这相当于说明该问题可通过机械地应用数学步骤(例如算法)来解决。'
print(tmp_example.answer_text) # which should be the ground truth
# '固有难度'
# Initialize the features, dataset, dataloader following the standard process
features, dataset = squad_convert_examples_to_features(
examples=[tmp_example],
tokenizer=tokenizer,
max_seq_length=512,
doc_stride=128,
max_query_length=64,
is_training=True,
return_dataset="pt",
threads=4,
)
train_sampler = RandomSampler(dataset)
train_dataloader = DataLoader(dataset, sampler=train_sampler, batch_size=8)
for n,batch in enumerate(train_dataloader):
start_positions = batch[3]
end_positions = batch[4]
    if type(tokenizer).__name__ in ['XLMRobertaTokenizer']:
start_positions = start_positions + 1
end_positions = end_positions + 1
inputs = {
"input_ids": batch[0],
"attention_mask": batch[1],
"token_type_ids": batch[2],
"start_positions": start_positions,
"end_positions": end_positions,
}
    print(tokenizer.decode(inputs['input_ids'][0, start_positions[0]:end_positions[0]+1]))
# '计算复杂性理论是理论计算机科学中计算理论的一个分支,它侧重于根据计算问题的固有难度对其进行分类,并将这些类别相互关联起来。计算问题被理解为原则上可由计算机解决的任务,这相当于说明该问题可通过机械地应用数学步骤(例如算法)来解决。'
# the ground truth answer span should be '固有难度', however, the actual answer span input to the model is shown as above which is inconsistent with the ground truth.
```
### Expected behavior
The ground truth answer span should be
'固有难度',
however, the actual answer span input to the model is
'计算复杂性理论是理论计算机科学中计算理论的一个分支,它侧重于根据计算问题的固有难度对其进行分类,并将这些类别相互关联起来。计算问题被理解为原则上可由计算机解决的任务,这相当于说明该问题可通过机械地应用数学步骤(例如算法)来解决。'
which is inconsistent with the ground truth.
Similar problems have been found in many examples. | 11-03-2022 09:26:52 | 11-03-2022 09:26:52 | Thanks for the report. However, as you have probably seen when executing it, `squad_convert_examples_to_features` is deprecated, so it's not maintained anymore.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,034 | closed | [Swin] Add Swin SimMIM checkpoints | # What does this PR do?
This PR adds 2 checkpoints for Swin Transformer pre-trained using the SimMIM objective (taken from [here](https://github.com/microsoft/Swin-Transformer/blob/main/MODELHUB.md#simmim-pretrained-swin-v1-models)).
They are on the hub: https://huggingface.co/models?other=simmim
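For reference, a minimal sketch of loading one of these checkpoints with `SwinForMaskedImageModeling`; the checkpoint id and the random inputs below are assumptions for illustration, not part of this PR.

```python
# Minimal sketch (checkpoint id assumed to be one of the SimMIM checkpoints added here).
import torch
from transformers import SwinForMaskedImageModeling

model = SwinForMaskedImageModeling.from_pretrained("microsoft/swin-base-simmim-window6-192")

config = model.config
num_patches = (config.image_size // config.patch_size) ** 2

pixel_values = torch.randn(1, config.num_channels, config.image_size, config.image_size)
# Mask roughly half of the patches, as SimMIM would during pre-training.
bool_masked_pos = torch.rand(1, num_patches) > 0.5

outputs = model(pixel_values=pixel_values, bool_masked_pos=bool_masked_pos)
print(outputs.loss)
```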
It also fixes an important bug in `modeling_swin.py` regarding the window size not being set properly. | 11-03-2022 09:19:18 | 11-03-2022 09:19:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,033 | closed | Give `modeling_t5.py` a `_prune_heads` | # What does this PR do?
Run CircleCI tests for #19975
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-03-2022 06:50:07 | 11-03-2022 06:50:07 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20033). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,032 | closed | Type annotation for `pipeline()`s | ### Feature request
Currently, the return types of many functions are not defined.
```py
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
tokenizer( # no type checking/help here
```
It would be nice if the return type was set, the equivalent of me doing it manually with:
```py
tokenizer: DistilBertTokenizerFast = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
tokenizer( # now I get checking
```
### Motivation
The main benefits are probably pretty well understood...
* preventing typos in inputs that take strings (tasks, checkpoints, etc)
* autocomplete for methods/parameters in IDEs
* documentation tooltips in IDEs that support them.
### Your contribution
As you may already be aware, the pattern for enabling this is to use `Literal` and `@overload` from the `typing` module, to return a particular class based on the value of a string passed in.
A quick mock up:
```py
from typing import overload, Literal
class Model1:
    def model_1_thing(self):
        pass

class Model2:
    def model_2_thing(self):
        pass

@overload
def get_pipe(model: Literal["model_1"]) -> Model1:
    pass

@overload
def get_pipe(model: Literal["model_2"]) -> Model2:
    pass

def get_pipe(model):
    if model == "model_1":
        return Model1()
    return Model2()
mod = get_pipe("model_1")
# `mod` is correctly identified as an instance of `Model1`
```
Now I not only get auto-complete when typing in the string:

But of course get the usual goodies when accessing attributes of the returned object:

To support older Python, I think [typing-extensions](https://pypi.org/project/typing-extensions/) would work, although I'm not certain.
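If it helps, a rough sketch of the import fallback I have in mind (untested, just illustrating the typing-extensions idea):

```python
import sys

if sys.version_info >= (3, 8):
    from typing import Literal, overload
else:
    # Literal was only added to typing in 3.8; older interpreters need the backport.
    from typing_extensions import Literal, overload
```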
The big question of course is whether the "developer experience" gains are worth the effort/complexity this would add to the codebase.
Apologies if this has already been discussed and decided on, I couldn't see an existing issue, but did sense a willingness to get types right from other issues. | 11-03-2022 06:42:20 | 11-03-2022 06:42:20 | Thanks for opening the issue. As of now, we have decided not to add any type annotations that render the code less readable. So whenever they can be added with no cost in readability, we welcome PRs, but for something more complex like you are suggesting, we would probably be less interested.<|||||>That seems reasonable :)
For completeness, I'll note that the `@overload`s can go in a separate `.pyi` file, leaving the main application logic clean. But the complexity would still be there, and you'd have `.pyi` files everywhere, so this is not a great solution. |
transformers | 20,031 | closed | BigBird attention type switching | ### System Info
transformers 4.21.2 BigBirdModel
### Who can help?
@ydshieh
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Currently BigBird switches attention type from default 'block_sparse' to 'original_full' in the forward call if it encounters a batch that contains only sequences shorter than the minimum sequence length:
https://github.com/huggingface/transformers/blob/a2a3afbc8d26d6170909365ffba6bd75e186255f/src/transformers/models/big_bird/modeling_big_bird.py#L2064
However, it never switches back. This means that the exact same (long) sequence can be encoded differently depending on whether it was preceded by a batch containing only short sequences or not.
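As a stopgap on the caller side, one can (I believe) force the attention type back before each batch — a rough sketch, assuming `set_attention_type` is exposed on the model and with made-up input batches:

```python
# Rough workaround sketch: reset the attention type before each batch,
# so a preceding short-only batch does not leak into later encodings.
from transformers import BigBirdModel, BigBirdTokenizer

model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", attention_type="block_sparse")
tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")

# `batches` is a placeholder: an iterable of lists of strings.
batches = [["a short sentence", "another short one"], ["a much longer document " * 200]]

for batch_texts in batches:
    if model.attention_type != "block_sparse":
        model.set_attention_type("block_sparse")  # undo the silent switch to "original_full"
    inputs = tokenizer(batch_texts, padding=True, truncation=True, return_tensors="pt")
    outputs = model(**inputs)
```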
### Expected behavior
It should probably switch back to block_sparse if it can as well | 11-03-2022 04:04:48 | 11-03-2022 04:04:48 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,030 | closed | Now supporting pathlike in pipelines too. | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/20024
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
 | 11-02-2022 21:34:03 | 11-02-2022 21:34:03 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,029 | closed | Transformers scheduler doesn't alter LR of added param group after model unfreeze | ### System Info
python 3.8.10 on ubuntu 20.04
pytorch 1.12.1
transformers 4.20.1
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Add a transformers scheduler, such as `transformers.get_linear_schedule_with_warmup` to any training run that begins with a frozen model and unfreezes.
2. Monitor the learning rate: the LR for param groups that are unfrozen after the 0th epoch is constant and unaffected by the scheduler
Sorry I don't have a better reproduction section. I mainly use transformers as a dependency of another library, and I tried making a reproducible script [here on Colab](https://colab.research.google.com/drive/1LXkIUP_vcVV3Vmc0sLInpIUEWU2H8K6P?usp=sharing), but Colab is crashing due to one of the imports.
### Expected behavior
I believe this is a bug, but it could be expected behavior. I would expect the unfrozen param groups (param group 2) to also be controlled by the scheduler, as they are when using PyTorch schedulers such as `torch.optim.lr_scheduler.LinearLR`.
### LR of param groups after unfreeze when using `torch.optim.lr_scheduler.LinearLR`.

### LR of param groups after unfreeze when using `transformers.get_linear_schedule_with_warmup`.

### LR of param groups after unfreeze when using `transformers.get_cosine_schedule_with_warmup`

| 11-02-2022 21:10:35 | 11-02-2022 21:10:35 | The Trainer by itself does not support several parameter groups, you will need to subclass and overwrite the methods that create schedulers/optimizers.<|||||>Thank you! Closing for now, but I'll reopen if there is any further issue. |
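For anyone landing here, a rough sketch of the subclassing route mentioned in the closing comment — how the parameter groups are split below (by a `classifier` name) is an assumption for illustration, not a recommendation:

```python
# Rough sketch of subclassing Trainer to own the optimizer (and therefore the param groups).
from torch.optim import AdamW
from transformers import Trainer


class MultiGroupTrainer(Trainer):
    def create_optimizer(self):
        if self.optimizer is None:
            # Assumed split: everything outside a module named "classifier" is the backbone.
            backbone_params = [p for n, p in self.model.named_parameters() if "classifier" not in n]
            head_params = [p for n, p in self.model.named_parameters() if "classifier" in n]
            self.optimizer = AdamW(
                [
                    {"params": backbone_params, "lr": self.args.learning_rate * 0.1},
                    {"params": head_params, "lr": self.args.learning_rate},
                ]
            )
        return self.optimizer
```

The default `create_scheduler` then builds the LR schedule on top of this optimizer, so both groups follow it.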
transformers | 20,028 | closed | Update esmfold conversion script | This update fixes the ESM checkpoint conversion script to work for ESMFold and fixes a bug in the example for the `EsmForProteinFolding` class. | 11-02-2022 19:24:54 | 11-02-2022 19:24:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,027 | closed | Document BLOOM lm_logits original training behavior | # What does this PR do?
~Compute softmax of Bloom in fp32 during half percision~
Document BLOOM lm_logits original training behavior
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@thomasw21 @stas00 @TevenLeScao | 11-02-2022 18:49:21 | 11-02-2022 18:49:21 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20027). All of your documentation changes will be reflected on that endpoint.<|||||>cc @thomasw21 and @younesbelkada but I don't think this is necessary as bfloat16 is more numerically stable than float16?<|||||>we noticed considerable performance difference between softmax in bf16 and softmax in fp32 in an internal bloom-based model. since during training the softmax is conducted in fp32, this change better reflects the model behavior in megatron-deepspeed.<|||||>@shijie-wu can you please share a bit more about how you run the model? <|||||>We redid the experiment regarding fp32 in SA. It seems like the gap we observed earlier is caused by some other issues. I have removed that part of the PR. After discussion with @thomasw21, instead of enforcing a type conversion, this PR will instead document the original behavior so that advanced users could recover it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,026 | closed | Show installed libraries and their versions in CI jobs | # What does this PR do?
Whenever there is a need to check
- if the versions of installed libraries change
- and/or find out which ones change
it is not super easy to get this information.
This PR adds `pip freeze | tee installed.txt` to
- show the results
- save to a file
- and upload as an artifact.
It makes the access to this information easier, and potentially makes the process of **getting the difference** between previous/current runs easier too.
Example run job (and the artifact): [here](https://app.circleci.com/pipelines/github/huggingface/transformers/50778/workflows/cf542f91-cc42-4942-bac8-100436555dda/jobs/606945)
I plan to do the same for GH actions jobs, but maybe in another PR :-) | 11-02-2022 18:38:46 | 11-02-2022 18:38:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,025 | closed | T5 should not use teacher-forcing when under evaluation | ### System Info
- `transformers` version: 4.23.1
- Platform: Linux-6.0.2-76060002-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Steps to reproduce behaviour:
1. Find a task where the same token in the target sequence is repeated multiple times, e.g. IOB Tagging.
1. Train a t5 model on a task
3. Evaluate by predicting
4. Evaluate using generate and giving the same input_ids
### Expected behavior
The prediction task during evaluation should produce results identical to `generate` with num_beams=1, i.e. greedy decoding.
Due to the nature of the task above, the target sequence has the form ((a|b|c){n})+, meaning runs of repeated target ids, e.g. `000011110000`. Because of teacher forcing, the model can learn to simply repeat the previous target token and quickly minimize the loss, except for the boundary cases of `01` and `10`.
This is not visible when evaluating the model (e.g. via `.eval()`), and due to teacher forcing, the model gives excellent results. However, when using `.generate` with the same inputs, the model behaves horribly.
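A rough sketch of what I mean by evaluating both ways on the same batch (the checkpoint, the input and the IOB-style labels below are placeholders, not my actual data):

```python
# Rough sketch: compare teacher-forced "predictions" with free-running greedy decoding.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

inputs = tokenizer(["tag: the cat sat on the mat"], return_tensors="pt")
labels = tokenizer(["O O B-ANIMAL O O O B-OBJ"], return_tensors="pt").input_ids

with torch.no_grad():
    # Teacher forcing: every decoder step sees the gold prefix, which hides exposure bias.
    forced = model(**inputs, labels=labels).logits.argmax(-1)
    # Free running: each step sees the model's own previous predictions.
    generated = model.generate(**inputs, num_beams=1, do_sample=False, max_length=labels.shape[1] + 1)

print(tokenizer.batch_decode(forced, skip_special_tokens=True))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```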
This is similar to
#12488 | 11-02-2022 18:38:12 | 11-02-2022 18:38:12 | Please use the [forums](https://discuss.huggingface.co/) for discussions like this one as we keep issues for bugs and feature requests in the library.<|||||>Then perhaps the ability to turn off teacher-forcing could be listed as a feature for the model?
I don’t see how using teacher forcing in an auto regressive model during evaluation is not a bug.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,024 | closed | `Pathlike` objects are treated as `AutoModel` objects in `pipeline` initialization | ### System Info
- `transformers` version: 4.22.2
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.9.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behaviour:
1. Run the following code snippet
```python
from transformers import pipeline
from pathlib import Path
# store pipeline locally
gen = pipeline("image-classification", "google/vit-base-patch16-224")
gen.save_pretrained(Path("./models") / "google/vit-base-patch16-224")
# load pipeline locally
new_gen = pipeline(
"image-classification", Path("./models") / "google/vit-base-patch16-224"
)
```
Loading pipeline fails with following error:
<img width="1010" alt="image" src="https://user-images.githubusercontent.com/20420308/199565245-190f8291-d691-48f4-bd43-3a4500bc225d.png">
This works fine if I pass a string path:
```
new_gen = pipeline(
"image-classification", str(Path("./models") / "google/vit-base-patch16-224")
)
```
### Expected behavior
The pipeline model argument should check for Pathlike objects and not treat them the same as AutoModel instances. | 11-02-2022 17:57:19 | 11-02-2022 17:57:19 | Excellent suggestion!
I opened a PR https://github.com/huggingface/transformers/pull/20030 |
transformers | 20,023 | closed | Fix doctest | # What does this PR do?
We just need `# doctest: +IGNORE_RESULT` after `>>> dataset = load_dataset` as usual
| 11-02-2022 17:24:58 | 11-02-2022 17:24:58 | Thanks for fixing!<|||||>Thanks @ydshieh! Sorry for breaking our streak of 6 days of no failures! <|||||>Oh, I haven't request yet and you already approved! Thanks a lot, so impressive your speed of response!<|||||>
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I am so bad in doctest ... What we need here is actually `# doctest: +IGNORE_RESULT`. Sorry. |
transformers | 20,022 | closed | [Audio Processor] Only pass sr to feat extractor | # What does this PR do?
The audio processor is composed of two components:
1. Feature extractor (input audio -> normalised audio)
2. Tokenizer (target text-> label ids)
Of these two components, the `audio` inputs and `sampling_rate` are arguments that are applicable to the feature extractor only. The `text` is applicable to the tokenizer only.
Currently, we only isolate the `audio` for the feature extractor and `text` for the tokenizer. However, the `sampling_rate` is passed to **both** the feature extractor and tokenizer. Thus, we get a warning for an unrecognized keyword argument in the tokenizer:
```python
from transformers import Wav2Vec2Processor
import numpy as np
audio = np.ones((2, 1000))
text = ['the cat', 'sat on']
processor = Wav2Vec2Processor.from_pretrained('facebook/wav2vec2-base')
out = processor(audio, sampling_rate=16000, text=text)
```
```
Keyword arguments {'sampling_rate': 16000} not recognized.
```
This PR splits the `sampling_rate` from the kwargs and passes it only to the feature extractor.
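In other words, the processor ends up doing something along these lines (simplified sketch of the intended split, not the literal diff in this PR):

```python
# Simplified sketch of the intended behaviour, not the actual implementation.
def process(feature_extractor, tokenizer, audio=None, text=None, **kwargs):
    sampling_rate = kwargs.pop("sampling_rate", None)  # only the feature extractor understands this

    inputs = feature_extractor(audio, sampling_rate=sampling_rate, **kwargs) if audio is not None else None
    encodings = tokenizer(text, **kwargs) if text is not None else None  # no stray kwarg -> no warning

    if inputs is None:
        return encodings
    if encodings is not None:
        inputs["labels"] = encodings["input_ids"]
    return inputs
```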
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-02-2022 17:06:40 | 11-02-2022 17:06:40 | Currently, I've only applied the change to the Wav2Vec2 Processor - once we're happy with the change I'll copy it to all audio processor classes. I've only done it to this one first to make the review easier!<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,021 | closed | Update auto processor to check image processor created | # What does this PR do?
Fixes a failing test which was checking whether a feature extractor was loaded. The test is now updated to reflect that an image processor is loaded.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 11-02-2022 15:01:56 | 11-02-2022 15:01:56 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,020 | closed | When using GPT2,CPU usage is high | ### System Info
transformers: 4.23.1
### Who can help?
Models:
- GPT-2 @patil-suraj, @patrickvonplaten, @LysandreJik
Library
- Pipelines @Narsil
### Reproduction
Questions: When using GPT-2 as:
```python
from transformers import pipeline
gpt2_pipe = pipeline('text-generation', model='XXX', tokenizer='gpt2')
starting_text = "a young boy"
response = gpt2_pipe(starting_text, max_length=60, num_return_sequences=1)
```
The CPU usage keeps being > 90% for a few seconds, and the generation is also slow.
However, when I manually change the `transformers` package in:
https://github.com/huggingface/transformers/blob/49b77b89ea1e89a9940f2b84da1bcc0696ecb07a/src/transformers/pipelines/text_generation.py#L229
to
```python
self.model = self.model.to('cuda')
input_ids = input_ids.to('cuda')
attention_mask = attention_mask.to('cuda')
generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs)
```
namely moving the variables to CUDA, the CPU usage is much lower and the generation is faster.
However, I could not find a good way to use CUDA with the GPT-2 pipeline the way I can with `sd_pipeline`, where you only need to add a `.to('cuda')`.
### Expected behavior
Can anyone give some advice? Changing the `transformers` package source code is not an elegant way to fix the problem. | 11-02-2022 15:01:06 | 11-02-2022 15:01:06 | Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep the issues for bugs and feature requests only.<|||||>Try
```python
gpt2_pipe = pipeline('text-generation', model='XXX', tokenizer='gpt2', device=0) # or `cuda:0`
``` |
transformers | 20,019 | closed | ConnectionError when downloading weights | ### System Info
transformers.__version__: 4.24.0
python: 3.7.13
OS: Ubuntu 22.04.1 LTS
conda 4.12.0
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, EsmForProteinFolding
model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1")
```
Error:
```
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/urllib3/response.py", line 443, in _error_catcher
yield
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/urllib3/response.py", line 566, in read
data = self._fp_read(amt) if not fp_closed else b""
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/urllib3/response.py", line 524, in _fp_read
data = self._fp.read(chunk_amt)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/http/client.py", line 465, in read
n = self.readinto(b)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/http/client.py", line 509, in readinto
n = self.fp.readinto(b)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/socket.py", line 589, in readinto
return self._sock.recv_into(b)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/ssl.py", line 1071, in recv_into
return self.read(nbytes, buffer)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/ssl.py", line 929, in read
return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/requests/models.py", line 816, in generate
yield from self.raw.stream(chunk_size, decode_content=True)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/urllib3/response.py", line 627, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/urllib3/response.py", line 592, in read
raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/urllib3/response.py", line 448, in _error_catcher
raise ReadTimeoutError(self._pool, None, "Read timed out.")
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Read timed out.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "get_esm.py", line 5, in <module>
model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1")
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2091, in from_pretrained
resolved_archive_file = cached_file(pretrained_model_name_or_path, filename, **cached_file_kwargs)
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/transformers/utils/hub.py", line 420, in cached_file
local_files_only=local_files_only,
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/huggingface_hub/file_download.py", line 1231, in hf_hub_download
headers=headers,
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/huggingface_hub/file_download.py", line 490, in http_get
for chunk in r.iter_content(chunk_size=1024):
File "/home/liviu/anaconda3/envs/tcr/lib/python3.7/site-packages/requests/models.py", line 822, in generate
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Read timed out.
```
### Expected behavior
The weights are getting downloaded and at around 8-10% I get the above error. This behaviour is solved if I use my Uni VPN.
Do I need special credentials to use ESMFold? Why would it work over VPN but not directly? | 11-02-2022 14:06:29 | 11-02-2022 14:06:29 | Looks like it's a connection issue on your end. There is no special credentials needed to load the model :-)<|||||>> Looks like it's a connection issue on your end. There is no special credentials needed to load the model :-)
@sgugger thanks for the fast answer. if I write like this:
`model = EsmForProteinFolding.from_pretrained("https://dl.fbaipublicfiles.com/fair-esm/models/esmfold_3B_v1.pt")`
It works just fine I get no errors, but I get the following warning
```
UserWarning: Using `from_pretrained` with the url of a file (here https://dl.fbaipublicfiles.com/fair-esm/models/esmfold_3B_v1.pt) is deprecated and won't be possible anymore in v5 of Transformers. You should host your file on the Hub (hf.co) instead and use the repository ID. Note that this is not compatible with the caching system (your file will be downloaded at each execution) or multiple processes (each process will download the file in a different temporary file).
f"Using `from_pretrained` with the url of a file (here {url}) is deprecated and won't be possible anymore in"
```<|||||>Yes, you should use the repo ID from the Hub.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>adding git pull to the user.bat file fixed the issue for me, turns out automatic 1111 was out of date ( incase anyone else having same issue and sees this )
|
transformers | 20,018 | closed | Does `prune_heads` really speed up during inference? | ### System Info
- `transformers` version: 4.22.1
- Platform: Linux-5.13.0-48-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.9.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I have some code here, based on your examples. I try to use `bert` to test inference speed.
The problem is that no matter whether I prune the model, the time seems to remain the same.
The code is below:
```
from transformers import AutoTokenizer, AutoModel
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
device='cuda'
prune_heads = {}
prune_heads[0] = [0,1,2,3,4,5,6,7,8,9]
prune_heads[1] = [0,1,2,3,4,5,6,7,8,9]
prune_heads[2] = [0,1,2,3,4,5,6,7,8,9]
prune_heads[3] = [0,1,2,3,4,5,6,7,8,9]
prune_heads[4] = [0,1,2,3,4,5,6,7,8,9]
prune_heads[5] = [0,1,2,3,4,5,6,7,8,9]
prune_heads[6] = [0,1,2,3,4,5,6,7,8,9]
prune_heads[7] = [0,1,2,3,4,5,6,7,8,9]
prune_heads[8] = [0,1,2,3,4,5,6,7,8,9]
prune_heads[9] = [0,1,2,3,4,5,6,7,8,9]
prune_heads[10] = [0,1,2,3,4,5,6,7,8,9]
prune_heads[11] = [0,1,2,3,4,5,6,7,8,9]
'''Whether to prune'''
# model.prune_heads(prune_heads).
inputs = tokenizer("Hello world!", return_tensors="pt").to(device)
model=model.to(device)
model.eval()
import time
cnt=0
for i in range(3):
    outputs = model(**inputs)
for i in range(10):
    torch.cuda.synchronize()
    start = time.perf_counter()
    outputs = model(**inputs)
    torch.cuda.synchronize()
    end = time.perf_counter()
    print(i, ":", end - start)
    cnt += (end - start)
# print(outputs)
print(cnt)
```
### Expected behavior
The problem is no matter whether I pruned the model, the time seems to remain the same | 11-02-2022 13:47:17 | 11-02-2022 13:47:17 | Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only. I am not aware of any place in the doc where we advertise head-pruning as a mean to speed up inference. I think you will need to look at converting your model to ONNX or quantize it for that.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,017 | closed | fix gradient checkpoint tests in encoder-decoder | # What does this PR do?
Fix the test `test_training_gradient_checkpointing` in `test_modeling_encoder_decoder.py`.
The current error is
```python
RuntimeError: Expected all tensors to be on the same device, but found at least two device
``` | 11-02-2022 11:05:00 | 11-02-2022 11:05:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,016 | closed | Add model parallelism to CodeGen | This PR adds model parallelisim to the CodeGen model. I have been using this since August. | 11-02-2022 09:44:56 | 11-02-2022 09:44:56 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20016). All of your documentation changes will be reflected on that endpoint.<|||||>ok, I did not know that. Thanks for the info!
Does `device_map="auto"` also work for CodeGen?<|||||>Yes, it's supported (at least on the main branch)!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Thanks for your PR but this code for model parallelism is deprecated and will be removed from other code files. To use parallelism, please load the model with `device_map="auto"`.
I'm a bit confused about model parallelism in Huggingface. I'm trying to fine-tune a CodeGen model using Huggingface Trainer. Is loading the model with `device_map="auto"` the right way to enable model parallelism?
From what I read [here](https://huggingface.co/docs/transformers/v4.28.1/perf_train_gpu_many#naive-model-parallelism-vertical-and-pipeline-parallelism), naive model parallelism is only supported by GPT2 and T5. |
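For reference, a minimal sketch of the `device_map="auto"` route mentioned above (assumes `accelerate` is installed; the CodeGen checkpoint is just an example):

```python
# Minimal sketch: big-model loading spreads the weights over the available GPUs/CPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono", device_map="auto")

inputs = tokenizer("def hello_world():", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```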
transformers | 20,014 | closed | chore: remove inference code, add pt framework. | To keep it consistent with the other docs ([image classification](https://huggingface.co/docs/transformers/tasks/image_classification), [audio classification](https://huggingface.co/docs/transformers/tasks/audio_classification), etc.), this PR:
* removes inference code
* adds separation for PT as a framework
in the [semantic segmentation](https://huggingface.co/docs/transformers/tasks/semantic_segmentation) guide. | 11-02-2022 08:50:22 | 11-02-2022 08:50:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Sorry, I missed that PR. I thought this was outside the scope of task guides. Sounds good to not go ahead with this change. Let's still add the pt framework block<|||||>Agreed. |
transformers | 20,013 | closed | Add RocBert | This PR adds the [RocBert model](https://aclanthology.org/2022.acl-long.65.pdf).
RocBert is a pre-trained Chinese language model that is designed from the ground up to be robust against maliciously crafted adversarial texts such as misspellings, homograph attacks, and other forms of deception.

This property is crucial in downstream applications like content moderation.
RocBert differs from the classic Bert architecture in the following ways:
- besides token ids, the model also takes phonetic features and glyph features as input
- the model is also pre-trained with a contrastive learning objective that stabilizes the feature space against synthetic attacks
Since the model structure and tokenizer is quite different from existing implementations, we would like to submit this PR to add a new model class.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-02-2022 07:01:14 | 11-02-2022 07:01:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Thanks for your suggestion, I already fixed it ~<|||||>@ArthurZucker hi, I already fixed the code according to sgugger's advice., could you please review it, thanks!<|||||>Yes! Doing this asap 🤗 sorry for the delay <|||||>Last comment, it seems that the issue with naming still persists, we should make sure to either write `RoC` or `Roc` everywhere. <|||||>@ArthurZucker I didn't make [weiweishi/roc-bert-base-zh](https://huggingface.co/weiweishi/roc-bert-base-zh) public before, it's avaliable now, and other issues are resolved~ |
transformers | 20,015 | closed | Request for Examples on Correct Use of the CLIPText Model (transformers.CLIPTextModel) | From the documentation on the [Hugging Face Hub](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), it is not clear:
1. How the CLIPTextModel can be used.
2. If it can be used to conduct Zero Shot Classification of textual inputs using a predefined list of tokens as classes. To be precise, let us say I have a list dataset called x with two strings, such that x = ["father holding a baby", "man at war"], and a list of classes or image-generation tokens, such that y = ["family time", "next generation weapons"]: can I use the CLIPTextModel to classify x using y?
In short, I am asking for an example on:
1. The correct usage of the CLIPTextModel.
2. The use-cases for the model.
| 11-02-2022 06:21:41 | 11-02-2022 06:21:41 | I'm transferring this to transformers repo as this is more related to the docs there
cc @stevhliu <|||||>Hi,
The docs is actually correct, CLIPTextModel can be used to encode text into a vector representation (embedding).
```
from transformers import CLIPTokenizer, CLIPTextModel
model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt")
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output # pooled (EOS token) states
```
So you have a set of texts (here "a photo of a cat" and "a photo of a dog") which you first prepare for the model using the tokenizer. Next, you forward them through CLIP's text encoder to get an embedding (here called "pooled output") out, which is of shape (batch_size, hidden_size), which in this case will be (2, 512).
That's all the CLIP text encoder does! Turn text into embedding vectors.
So no you can't use only the CLIP text encoder to perform zero-shot classification of images, for that you need both the image and text encoders (which is what `CLIPModel` is; it consists of both `CLIPTextModel` and `CLIPVisionModel`). |
transformers | 20,012 | closed | Make sentencepiece import conditional in BertJapaneseTokenizer | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Sentencepiece is an optional dependency, but #19769 has unconditionally imported it into `tokenization_bert_japanese.py`.
I've written a wrapper library for another library that depends directly on transformers, and today I found most of my tests failing with the error ` ModuleNotFoundError: No module named 'sentencepiece'`.
This fixes the issue by calling `is_sentencepiece_available()` before the import.
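Concretely, the guarded import looks roughly like this (a sketch of the pattern, not the exact diff):

```python
# Sketch of the pattern used in the fix: only touch sentencepiece when it is installed.
from transformers.utils import is_sentencepiece_available

if is_sentencepiece_available():
    import sentencepiece as spm
else:
    spm = None  # the SentencePiece-based subword tokenizer then errors only if actually used
```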
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Models: @LysandreJik
Library: @n1t0
Documentation: @sgugger
@r-terada @hiroshi-matsuda-rit
I'm not too familiar with this project, so I'm copying the tags from #19769 | 11-02-2022 01:36:53 | 11-02-2022 01:36:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,011 | closed | sentencepiece\sentencepiece\src\sentencepiece_processor.cc(1102) [model_proto->ParseFromArray(serialized.data(), serialized.size())] | ### System Info
- `transformers` version: 4.24.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patrickvonplaten
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
from transformers import T5Tokenizer
tokenizer = T5Tokenizer(vocab_file='vocab_ruturk.spm')
Traceback (most recent call last):
File "app.py", line 3, in <module>
tokenizer = T5Tokenizer(vocab_file='vocab.ruturk.spm')
File "env\lib\site-packages\transformers\models\t5\tokenization_t5.py", line 157, in __init__
self.sp_model.Load(vocab_file)
File "env\lib\site-packages\sentencepiece\__init__.py", line 910, in Load
return self.LoadFromFile(model_file)
File "env\lib\site-packages\sentencepiece\__init__.py", line 311, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: a\sentencepiece\sentencepiece\src\sentencepiece_processor.cc(1102) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
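A quick way to tell whether the file itself is a valid SentencePiece model, independent of `T5Tokenizer` (just a diagnostic sketch):

```python
# Diagnostic sketch: if this already fails, the .spm file is not a valid SentencePiece model.
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.Load("vocab_ruturk.spm")  # same file passed to T5Tokenizer above
print(sp.GetPieceSize())
```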
### Expected behavior
No errors | 11-01-2022 22:31:33 | 11-01-2022 22:31:33 | It looks like you are using the tokenizer with a broken sentencepiece vocab. In any case, we would need a reproducer with a file we have access to to be able to investigate.<|||||>Ran into the same issue. How did you solve it?<|||||>> Ran into the same issue. How did you solve it?
The whole problem was the vocab. I just took a different one.<|||||>Whats wrong with vocab? how to change it correct? |
transformers | 20,010 | closed | Reorganize glossary | This PR reorganizes the glossary to be alphabetical, and words under the General Terms can be linked to. | 11-01-2022 22:04:00 | 11-01-2022 22:04:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,009 | closed | Make convert_to_onnx runnable as script again | Fix `convert_graph_to_onnx.py` script crash by replacing relative import.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes `python transformers/convert_graph_to_onnx.py` crash with error `Error while converting the model: attempted relative import with no known parent package`. I found this was previously fixed in #10857 and regressed.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-01-2022 19:45:03 | 11-01-2022 19:45:03 | It seems there is an issue with your CircleCI permissions, the tests won't run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20009). All of your documentation changes will be reflected on that endpoint.<|||||>> It seems there is an issue with your CircleCI permissions, the tests won't run. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?
Thanks, CircleCI supposedly has access to all my repositories now but I'm not sure how to re-trigger the tests. Sorry if I'm missing something obvious.<|||||>You can try an empty commit (`git commit -m "Trigger CI" --allow-empty`)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry it fell under our radar. Don't hesitate to ping me next time it happens! |
transformers | 20,008 | closed | How can I access prompt scores/logprobs? | I have tried `model.generate(**inputs, return_dict_in_generate=True, output_scores=True)` but it only gives the scores for generated tokens. For my application, it would be convenient if there’s a similar parameter to `echo` in the OpenAI GPT-3 API that lets us access prompt scores/logprobs, but I have yet to find it. Any help is appreciated! | 11-01-2022 19:44:07 | 11-01-2022 19:44:07 | Hey @xiaoyangnickhu 👋 We do have a tool for that, but which is not yet documented:
👉 [compute_transition_beam_scores](https://github.com/huggingface/transformers/blob/e0b825a8d03f50ed9dbf9fbbbb3b4fcf0b4e4b22/src/transformers/generation_utils.py#L876)
Their arguments should be self explanatory, but let me know if you'd like further guidance :)<|||||>Thanks for responding! Some followup questions:
1. Since this function only has access to `scores` for generated tokens, I am having a hard time understanding why the returned `transition_scores` might contain scores for prompt/input tokens. Could you please maybe clarify where this function deals with prompt tokens?
2. Do I need to switch to beam search to use this function? (I have been using greedy decoding.)
3. Also, can I compute prompt token logprobs by applying `log_softmax` to `model(input_ids).logits`?
Thanks!<|||||>@xiaoyangnickhu
1. It does not include the score for the input prompt. The concept of "score" is derived from the assumption in autoregressive language generation where the probability distribution of a word sequence can be decomposed into the product of conditional next-word distributions. The probability of the input tokens is `1` for the input, so we don't include it there.
2. For greedy decoding you'll have to do it manually for now, i.e., gather from the `scores` output the selected token scores (see docs [here](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.generation_utils.GreedySearchDecoderOnlyOutput)). We may include an output for this in the future :)
3. EDIT: Yes you can. <|||||>Thanks! Appreciate the details. Some followups regarding 3:
I see that for each input token `xi`, you feed the sequence `[x1,x2,...,xi]` to `model` to obtain the logits. What is the reasoning behind this? We have been doing `model([x1,x2,...,xn]).logits` (i.e., give `model` the full sequence and apply `log_softmax` to each token); is this the wrong approach? (My goal here is to obtain the vector `[None, logprob(x2|x1), logprob(x3|x1,x2),...,logprob(xn|x1,...,x_{n-1})]`)<|||||>Closed the issue by mistake. Sorry...<|||||>@gante Would you be able to take a look at my questions above? Thanks!<|||||>@xiaoyangnickhu oops, you are absolutely right, you can obtain the conditional logits that way! (I've been so used to work on generate, one token at a time, that forgot that the `.logits` output holds the desired output for all steps).
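For reference, a minimal sketch of that exact computation (the checkpoint and prompt are just placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The quick brown fox", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(input_ids).logits  # [batch, seq_len, vocab_size]

log_probs = torch.log_softmax(logits, dim=-1)
# logits at position i predict token i+1, so shift to line the scores up with the tokens
token_log_probs = log_probs[:, :-1].gather(-1, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
# token_log_probs[0, j] == log p(x_{j+2} | x_1 ... x_{j+1}); the first token gets no score
```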
I've edited my answer above in case someone stumbles across this thread :)<|||||>Thanks!!<|||||>@gante @hxiaoyang hello, I am using bloom with the API. I need these scores/logprob for input similar to what we can get in OpenAI. Is there a way?<|||||>Hey @goelnaman -- by API, what do you mean exactly? I don't think most APIs support it, but I'd be able to tag the right team member :)<|||||>Thanks @gante I have tried ... InferenceApi() and requests.request() but didn't see logprobs of input in any of these.
In OpenAI API, one can get this information by using echo=True, logprobs=... for example.<|||||>Hey @goelnaman -- I can confirm that it does not return the scores (API docs [here](https://huggingface.co/docs/api-inference/detailed_parameters#text-generation-task))
The [`text-generation-inference`](https://github.com/huggingface/text-generation-inference) solution also doesn't support it. The only way to get it at the moment is:
1. With local python code, as discussed in this issue
2. With the [Inference Endpoints](https://huggingface.co/inference-endpoints), where you can configure any API.
Sadly, I do not have examples for 2. (it's on my todo list :) ) |
transformers | 20,007 | closed | RuntimeError: Failed to import transformers.models.flaubert.modeling_flaubert because of the following error (look up to see its traceback): module 'signal' has no attribute 'SIGKILL' | ### System Info
- `transformers` version: 4.24.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sgugger, @D3xter1922
@malfet (we now come across each other daily, the world is so small...)
Opening here to document for the wider crowd: this is actually a PyTorch issue affecting Windows, cf. https://github.com/pytorch/pytorch/issues/85427
cf. [Removed XLMModel inheritance from FlaubertModel(torch+tf)](https://github.com/huggingface/transformers/commit/ed858f535474d822615f846917254d586d2a5a31)
cf. in [particular blame lines 26-45](https://github.com/huggingface/transformers/blame/main/src/transformers/models/flaubert/modeling_flaubert.py)
error [caused by `from ...modeling_utils import PreTrainedModel, SequenceSummary, SQuADHead` line 37]
(https://github.com/huggingface/transformers/blob/main/src/transformers/models/flaubert/modeling_flaubert.py#L37)
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Opening here to document for the wider crowd: this is a PyTorch issue affecting Windows, cf. https://github.com/pytorch/pytorch/issues/85427
cf. the complete stack trace below, for your reference only, and closing
error [caused by `from ...modeling_utils import PreTrainedModel, SequenceSummary, SQuADHead` line 37]
(https://github.com/huggingface/transformers/blob/main/src/transformers/models/flaubert/modeling_flaubert.py#L37)
this goes down to the accelerate package
`from transformers import (FlaubertWithLMHeadModel)`
```
NOTE: Redirects are currently not supported in Windows or MacOs.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\utils\import_utils.py:1076, in _LazyModule._get_module(self, module_name)
1075 try:
-> 1076 return importlib.import_module("." + module_name, self.__name__)
1077 except Exception as e:
File ~\miniconda3\envs\MyEnv\lib\importlib\__init__.py:127, in import_module(name, package)
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1030, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1007, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:986, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:680, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:850, in exec_module(self, module)
File <frozen importlib._bootstrap>:228, in _call_with_frames_removed(f, *args, **kwds)
File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\models\flaubert\modeling_flaubert.py:37, in <module>
29 from ...modeling_outputs import (
30 BaseModelOutput,
31 MaskedLMOutput,
(...)
35 TokenClassifierOutput,
36 )
---> 37 from ...modeling_utils import PreTrainedModel, SequenceSummary, SQuADHead
38 from ...pytorch_utils import apply_chunking_to_forward, find_pruneable_heads_and_indices, prune_linear_layer
File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\modeling_utils.py:78, in <module>
77 if is_accelerate_available():
---> 78 from accelerate import __version__ as accelerate_version
79 from accelerate import dispatch_model, infer_auto_device_map, init_empty_weights
File ~\miniconda3\envs\MyEnv\lib\site-packages\accelerate\__init__.py:7, in <module>
5 __version__ = "0.13.2"
----> 7 from .accelerator import Accelerator
8 from .big_modeling import cpu_offload, disk_offload, dispatch_model, init_empty_weights, load_checkpoint_and_dispatch
File ~\miniconda3\envs\MyEnv\lib\site-packages\accelerate\accelerator.py:27, in <module>
25 import torch
---> 27 from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
28 from .data_loader import prepare_data_loader
File ~\miniconda3\envs\MyEnv\lib\site-packages\accelerate\checkpointing.py:24, in <module>
22 from torch.cuda.amp import GradScaler
---> 24 from .utils import (
25 MODEL_NAME,
26 OPTIMIZER_NAME,
27 RNG_STATE_NAME,
28 SCALER_NAME,
29 SCHEDULER_NAME,
30 get_pretty_name,
31 is_tpu_available,
32 save,
33 )
36 if is_tpu_available(check_device=False):
File ~\miniconda3\envs\MyEnv\lib\site-packages\accelerate\utils\__init__.py:96, in <module>
87 from .deepspeed import (
88 DeepSpeedEngineWrapper,
89 DeepSpeedOptimizerWrapper,
(...)
93 HfDeepSpeedConfig,
94 )
---> 96 from .launch import PrepareForLaunch, _filter_args, get_launch_prefix
97 from .memory import find_executable_batch_size
File ~\miniconda3\envs\MyEnv\lib\site-packages\accelerate\utils\launch.py:25, in <module>
24 if is_torch_version(">=", "1.9.0"):
---> 25 import torch.distributed.run as distrib_run
28 def get_launch_prefix():
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\run.py:386, in <module>
385 from torch.distributed.elastic.utils.logging import get_logger
--> 386 from torch.distributed.launcher.api import LaunchConfig, elastic_launch
389 log = get_logger()
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\launcher\__init__.py:10, in <module>
1 #!/usr/bin/env/python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
(...)
6 # This source code is licensed under the BSD-style license found in the
7 # LICENSE file in the root directory of this source tree.
---> 10 from torch.distributed.launcher.api import ( # noqa: F401
11 LaunchConfig,
12 elastic_launch,
13 launch_agent,
14 )
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\launcher\api.py:15, in <module>
14 from torch.distributed.elastic import events, metrics
---> 15 from torch.distributed.elastic.agent.server.api import WorkerSpec
16 from torch.distributed.elastic.agent.server.local_elastic_agent import LocalElasticAgent
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\elastic\agent\server\__init__.py:40, in <module>
31 from .api import ( # noqa: F401
32 ElasticAgent,
33 RunResult,
(...)
38 WorkerState,
39 )
---> 40 from .local_elastic_agent import TORCHELASTIC_ENABLE_FILE_TIMER, TORCHELASTIC_TIMER_FILE
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\elastic\agent\server\local_elastic_agent.py:19, in <module>
17 from typing import Any, Dict, Optional, Tuple
---> 19 import torch.distributed.elastic.timer as timer
20 from torch.distributed.elastic import events
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\elastic\timer\__init__.py:44, in <module>
43 from .local_timer import LocalTimerClient, LocalTimerServer # noqa: F401
---> 44 from .file_based_local_timer import FileTimerClient, FileTimerServer, FileTimerRequest
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\elastic\timer\file_based_local_timer.py:63, in <module>
52 return json.dumps(
53 {
54 "version": self.version,
(...)
59 },
60 )
---> 63 class FileTimerClient(TimerClient):
64 """
65 Client side of ``FileTimerServer``. This client is meant to be used
66 on the same host that the ``FileTimerServer`` is running on and uses
(...)
79 negative or zero signal will not kill the process.
80 """
File ~\miniconda3\envs\MyEnv\lib\site-packages\torch\distributed\elastic\timer\file_based_local_timer.py:81, in FileTimerClient()
64 """
65 Client side of ``FileTimerServer``. This client is meant to be used
66 on the same host that the ``FileTimerServer`` is running on and uses
(...)
79 negative or zero signal will not kill the process.
80 """
---> 81 def __init__(self, file_path: str, signal=signal.SIGKILL) -> None:
82 super().__init__()
AttributeError: module 'signal' has no attribute 'SIGKILL'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
Input In [3], in <cell line: 1>()
----> 1 from transformers import (PretrainedConfig, FlaubertConfig, AutoTokenizer, FlaubertTokenizer, FlaubertWithLMHeadModel, TrainingArguments, DataCollatorForLanguageModeling) #pipeline
2 from datasets import (load_dataset, load_from_disk, concatenate_datasets, ClassLabel)
3 import pytorch_lightning as pl
File <frozen importlib._bootstrap>:1055, in _handle_fromlist(module, fromlist, import_, recursive)
File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\utils\import_utils.py:1067, in _LazyModule.__getattr__(self, name)
1065 elif name in self._class_to_module.keys():
1066 module = self._get_module(self._class_to_module[name])
-> 1067 value = getattr(module, name)
1068 else:
1069 raise AttributeError(f"module {self.__name__} has no attribute {name}")
File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\utils\import_utils.py:1066, in _LazyModule.__getattr__(self, name)
1064 value = self._get_module(name)
1065 elif name in self._class_to_module.keys():
-> 1066 module = self._get_module(self._class_to_module[name])
1067 value = getattr(module, name)
1068 else:
File ~\miniconda3\envs\MyEnv\lib\site-packages\transformers\utils\import_utils.py:1078, in _LazyModule._get_module(self, module_name)
1076 return importlib.import_module("." + module_name, self.__name__)
1077 except Exception as e:
-> 1078 raise RuntimeError(
1079 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1080 f" traceback):\n{e}"
1081 ) from e
RuntimeError: Failed to import transformers.models.flaubert.modeling_flaubert because of the following error (look up to see its traceback):
module 'signal' has no attribute 'SIGKILL'
```
### Expected behavior
flawless import as usual | 11-01-2022 19:35:16 | 11-01-2022 19:35:16 | This is a PyTorch issue affecting Windows which went cf. https://github.com/pytorch/pytorch/issues/85427<|||||>actually reopening as bug on Windows PyTorch side makes Transformers crash<|||||>I'm not sure what you intend us to do to fix this, since it comes from PyTorch?<|||||>for pip would propose to add to requirement.txt torch<=1.12.1 ? and for conda feedstocks' environment.yaml pytorch<=1.12.1
the point is that Transformers really does crash on Windows with PyTorch 1.13.0<|||||>PyTorch is already pinned in the setup.<|||||>indeed, not yet visible downstream (pip, conda) as of now, but quite right https://github.com/huggingface/transformers/blame/main/setup.py#L166
https://github.com/huggingface/transformers/pull/19989
closing |
transformers | 20,006 | closed | Fix typo in quicktour | Fixes typo in dataset name for the quicktour | 11-01-2022 17:55:52 | 11-01-2022 17:55:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,005 | closed | Fix dataset in quicktour | This PR adds a dataset in the `Trainer` section of the Quicktour so users can successfully run the code samples (from forum feedback [here](https://discuss.huggingface.co/t/trainer-a-pytorch-optimized-training-loop-example-code/25163)). | 11-01-2022 17:00:05 | 11-01-2022 17:00:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,004 | closed | Update object detection pipeline to use post_process_object_detection methods | # What does this PR do?
Updates the `ObjectDetectionPipeline` to use the `XXXFeatureExtractor.post_process_object_detection` methods instead of the deprecated `XXXFeatureExtractor.post_process` methods.
Postprocessing methods have been updated recently with this [PR](https://github.com/huggingface/transformers/pull/19709).
Partially fixes the hardcoded threshold issue with the inference widgets, which requires adding a threshold button to the widgets.
Fixes # 414
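For reference, a rough sketch of what the newer post-processing API looks like from user code (the DETR checkpoint and the threshold value are arbitrary examples; the pipeline change wires this in internally):

```python
import requests
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForObjectDetection

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# unlike the deprecated `post_process`, the new method takes an explicit confidence threshold
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = feature_extractor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]
print(detections["labels"], detections["scores"])
```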
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 11-01-2022 15:59:15 | 11-01-2022 15:59:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,003 | closed | Add object detection + segmentation transforms | # What does this PR do?
Adds logic for processing bounding boxes and some additional transforms (`rgb_to_id`, `id_to_rgb`) needed for DETR.
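For context, a self-contained illustration of the id <-> RGB packing these transforms implement (this follows the standard COCO panoptic convention and is not the library's code):

```python
import numpy as np

# DETR's panoptic maps store segment ids as 24-bit colors; pack/unpack accordingly.
def rgb_to_id(color: np.ndarray) -> np.ndarray:
    color = color.astype(np.int64)
    return color[..., 0] + 256 * color[..., 1] + 256 * 256 * color[..., 2]

def id_to_rgb(id_map: np.ndarray) -> np.ndarray:
    rgb = np.zeros(id_map.shape + (3,), dtype=np.uint8)
    for i in range(3):
        rgb[..., i] = id_map % 256
        id_map = id_map // 256
    return rgb

ids = rgb_to_id(np.array([[12, 34, 56]], dtype=np.uint8))
assert np.array_equal(id_to_rgb(ids), np.array([[12, 34, 56]], dtype=np.uint8))
```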
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 11-01-2022 15:53:46 | 11-01-2022 15:53:46 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20003). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20003). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20003). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20003). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,002 | closed | Fix the test for corrupted checkpoints in from_pretrained | # What does this PR do?
As pointed out in #19974, there is a bug in `from_pretrained`: when the model with head contains the same key as the base model, the checkpoint is detected as corrupted. This PR fixes it and introduces a test to make sure there is no regression.
Fixes #19974
cc @NielsRogge | 11-01-2022 14:41:15 | 11-01-2022 14:41:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,001 | closed | typo | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-01-2022 12:48:25 | 11-01-2022 12:48:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,000 | closed | Add ESMFold code sample | null | 11-01-2022 12:17:39 | 11-01-2022 12:17:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>FYI, would be great to add ESM to the doc tests, to make sure this is tested.
|
transformers | 19,999 | closed | Some weights of BertForPreTraining were not initialized from the model checkpoint | ### System Info
Some weights of BertForPreTraining were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['cls.predictions.decoder.bias']
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
run script.
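A minimal snippet that triggers the warning, inferred from the issue title since the report only says to run a script:

```python
from transformers import BertForPreTraining

# Loading the pretraining heads on top of the bert-base-uncased checkpoint prints the
# "Some weights of BertForPreTraining were not initialized" warning quoted above.
model = BertForPreTraining.from_pretrained("bert-base-uncased")
```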
### Expected behavior
load BertForPreTraining | 11-01-2022 11:26:43 | 11-01-2022 11:26:43 | Please use the [forums](https://discuss.huggingface.co/) to ask questions or fill the template to report a bug. Getting a warning because the checkpoint you selected does not contain all weights for your architecture is not, by itself, a bug.<|||||>Yes, `bert-base-uncased` only includes the weights and bias of the language modeling head, but not the next sentence prediction task, which is what the warning is telling you. In other words, it corresponds to the `BertForMaskedLM` model.
Therefore closing this issue, feel free to reopen. |
transformers | 19,998 | closed | Include better versions of a model when they are available in the model doc pages | @NielsRogge provided a great suggestion.
We could add a banner on top of Swin's docs (and other models where this is applicable), to indicate we have a better model now, SwinV2. The same could be done for DETR, as Conditional DETR and Deformable DETR greatly improve the convergence and AP metrics.
We can start with the following:
- [ ] Swin
- [ ] DETR
And then expand to the other models. Or else, we can start with five such models (feel free to suggest more). When reviewing model PRs to `transformers`, we would just need to be mindful about this a bit so that we can suggest it accordingly to the contributors.
Cc: @osanseviero @nateraw | 11-01-2022 11:14:33 | 11-01-2022 11:14:33 | I am very wary about this. If you start telling on the page of a model that have been developed by Org 1 that there is a better model developed by Org 2, they will get mad and we will then have unnecessary conflicts to handle.
For the same reason we stay away from benchmarks between frameworks/hardware, I would stay away from this.<|||||>Point noted.
But what if the same org comes up with a better version? But I understand this creates a weird distinction which is not desirable.
Leaving it open for today in case anyone has any inputs. <|||||>Yeah it was just a suggestion, I wouldn't have opened an issue for this actually.
I agree that this could become very opinionated (it's very subjective which model is better). We could just do it for papers that come from the same team (Swin => Swinv2), to promote upcoming work. But for models that originate from different teams, this might be hard. That's where "evaluate on the hub" will come into play, where people can see which models perform best on a given task.
<|||||>Closing it for now. Should there be a need, it can easily be reopened. <|||||>>We could just do it for papers that come from the same team (Swin => Swinv2), to promote upcoming work.
Sure, why not :) for those scenarios I think it's ok to open small PRs updating the corresponding docstrings |
transformers | 19,997 | closed | Added mask_time_prob and mask_time_length arguments to wav2vec2 pretraining script and readme | # What does this PR do?
This PR was requested by @patrickvonplaten following my question and following discussion in the Discord ask-for-help channel under the title [Wav2vec2 - why is mask_time_prob=0.05?](https://discord.com/channels/879548962464493619/1035113782223056896)
This PR adds the arguments `mask_time_prob` and `mask_time_length` to the `examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py` script and the corresponding example use in the `README.md`.
`mask_time_prob` is a variable describing two things, depending on context:
1) the percentage of the encoded feature vector to be masked during the contrastive learning task in pre-training
2) the masking probability used to imitate [SpecAugment](https://arxiv.org/abs/1904.08779) during fine-tuning
In this script, we are considering it in the context of 1).
`mask_time_length` is a variable describing the length (in number of frames) of each applied mask. It is added for completeness.
# Background
In the original [wav2vec 2.0 article](https://arxiv.org/abs/2006.11477), the variable `mask_time_prob` is set to `0.65`, which (due to overlap) results in an effective masking of approximately 49% of the feature vectors during pretraining. `mask_time_length` corresponds to the _M_ variable in the article and is set to 10 there.
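For a quick sanity check of the ~49% figure, one can sample masks directly with the helper the library uses internally (note this is a private function, so the import path may change between versions):

```python
from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices

# Sample span masks for a batch of 8 sequences of 1000 frames and measure coverage.
masks = _compute_mask_indices(shape=(8, 1000), mask_prob=0.65, mask_length=10)
print(masks.mean())  # comes out around 0.49 rather than 0.65, because sampled spans overlap
```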
However, when considering the [config file of wav2vec2-base](https://huggingface.co/patrickvonplaten/wav2vec2-base/blob/main/config.json), one finds that `mask_time_prob=0.05`. This is because this model is usually used for finetuning, and not for (continued) pretraining, and for finetuning `0.05` is a better hyperparameter value (see Appendix B of [wav2vec 2.0 article](https://arxiv.org/abs/2006.11477)). This is a bit confusing.
By considering the [config file](https://huggingface.co/patrickvonplaten/wav2vec2-base-v2/blob/main/config.json) of the `wav2vec2-base-v2` model, which was used during Patricks experimentation (see the [speech-pretraining readme](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-pretraining)) one finds that indeed `mask_time_prob=0.65` was used for pretraining.
The values `0.65` and `10` are set as default values for the `DataCollatorForWav2Vec2Pretraining` class defined in the script (as this class may be extracted from the script by users), but no defaults are given in the argparser, as the argument values are also specified in the [wav2vec2-base](https://huggingface.co/patrickvonplaten/wav2vec2-base/blob/main/config.json) and [wav2vec2-base-v2](https://huggingface.co/patrickvonplaten/wav2vec2-base-v2/blob/main/config.json) model configs, and if setting defaults in the argparser, the model config values would never be applied, which may be desired. Hence, the parser argument will only be relevant if it is explicitly specified as an argument when executing the script.
**I believe this PR may also raise a bigger question, which is whether `mask_time_prob` should be split into two different variables to avoid confusion in the future.**
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Participants in discussion on Discord: @osanseviero @patrickvonplaten
| 11-01-2022 11:08:28 | 11-01-2022 11:08:28 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19997). All of your documentation changes will be reflected on that endpoint.<|||||>cc @sanchit-gandhi <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey! I'm not sure what the next step is here, like I said, this is my first PR :) Some checks failed, not sure why. All tests passed before I commited @sanchit-gandhi's suggestions, but it seems they are mainly failing due to timeouts?<|||||>You will need to rebase your PR on main for the tests to pass, as your branch does not have the fixes for the last release of TensorFlow.<|||||>Hey @mpierrau! Exciting to see that you've picked-up this PR again! Let me know if you need any final help - we're close to merging now!
As Sylvain has mentioned, you'll need to rebase onto main to fix the failing tests (see 5. in this guide: https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request, just remember to force push as detailed 😉)<|||||>Hey, sorry, I somehow managed to miss the force push flag anyways... I hope it still works?<|||||>Hey @mpierrau! Unfortunately the commit history gets messed up after a rebase + non-force push. Not to worry though! Let's open a new PR with your changes in favour of this one. You can create a new branch and copy over the relevant file (`run_pretraining_...`):
```
git checkout -b new-branch-mask-time-prob
git restore --source adding-mask_time_prob-args-to-wav2vec2-pretraining-script -- /path/to/relevant/file
```
You can then commit, rebase, and force push to origin to open a new PR with just the required changes.<|||||>Closing in favour of https://github.com/huggingface/transformers/pull/20985. |
transformers | 19,996 | closed | Update image_classification.mdx to link to the correct task page | Small fix.
| 11-01-2022 08:27:22 | 11-01-2022 08:27:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,995 | closed | [Doctest] Add configuration_deberta_v2.py | # What does this PR do?
Adds configuration_deberta_v2.py to utils/documentation_tests.txt
Based on https://github.com/huggingface/transformers/issues/19487
@ydshieh can you please have a look? thanks :D
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-01-2022 02:08:10 | 11-01-2022 02:08:10 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,994 | closed | Unpin PyTorch to test if doc builds | # What does this PR do?
The doc building seg-faults since we pinned PyTorch. Using this PR to experiment. | 10-31-2022 22:20:57 | 10-31-2022 22:20:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The tests fail from the beginning with `Fatal Python error: Segmentation fault`, and no failed tests being collected/reported.
**We still need to pin torch.**
Doc build jobs pass now after the docker image was rebuilt last night (my best guess for the reason).
|
transformers | 19,993 | closed | Update glossary | This PR adds some more computer vision and speech terms (feel free to suggest more!) and reorganizes it alphabetically (and each term can be linked to) so it's more like an actual glossary. I also edited some of the terms, like `attention mask` for length, since a glossary typically just provides a brief definition. If we want to keep the explanations, maybe I can link to the course instead. | 10-31-2022 21:53:36 | 10-31-2022 21:53:36 | I cannot review this PR as it is. You have changed the text for the existing entries and removed the links to the YouTube videos and since you did it with the reorganization, the diff makes it impossible to properly comment.
You have also removed existing entries like autoencoder model or autoregressive model.
Please focus on one thing at a time per PR (I have already told you this multiple times). For instance here, focus first on the reorg **with no other changes in existing content** and then in a second PR we can discuss text changes.<|||||>Sorry, I'll try to slow down and just focus on one thing at a time!
I'll close this PR and open two separate ones to address the reorganization and new terms. |
transformers | 19,992 | open | Add in-layer TF Tokenizer to BPE tokenizers | ### Feature request
The same as what we have with `TFBertTokenizer`, but for models that use Byte Pair Encoding (e.g. `TFT5Tokenizer`, `TFClipTokenizer`, etc.).
They were implemented in `keras-nlp` (https://github.com/keras-team/keras-nlp/pull/389) and we can now bring them here.
### Motivation
With that feature we will be able to serve almost every model with TF Serving, which will make it much easier to serve models, as we won't have to write handlers and custom servers.
Having TF BPE Tokenizers is (I think) the last barrier to make `transformers` fully TF Serving-compliant.
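For illustration, this is roughly what in-graph tokenization already enables with `TFBertTokenizer` (a sketch only; the checkpoint and the classification head are arbitrary choices):

```python
import tensorflow as tf
from transformers import TFBertTokenizer, TFBertForSequenceClassification

tokenizer = TFBertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")

@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serve(texts):
    tokenized = tokenizer(texts)   # tokenization happens inside the graph
    return model(tokenized).logits  # so an exported SavedModel can accept raw strings

print(serve(tf.constant(["BPE-based models cannot do this yet."])))
```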
### Your contribution
I can submit a PR, but there are a huge number of models for which we would need to do that, so I expect a large number of subtasks if you decide to go for it.
Also, as `keras-nlp` implemented it (https://github.com/keras-team/keras-nlp/pull/389), should we copy-paste the code for each tokenizer or import from `keras-nlp`, while keeping the reference to their repo? | 10-31-2022 19:18:53 | 10-31-2022 19:18:53 | It seems like I have to tag @n1t0, @LysandreJik because this is about the tokenizers. <|||||>cc @Rocketknight1 <|||||>An alternative would be adding a `as_keras_layer` (or something) method to `PreTrainedTokenizer` and create the TF BPE Tokenizer from the vocab and merges from the tokenizer. What do you think?<|||||>This is great! We'd been blocked by the lack of a BPE tokenizer in TF-text or Keras-NLP, as it's an extremely common tokenizer class for us. We're definitely going to explore this as soon as we can find some time.<|||||>@Rocketknight1 I can contribute and submit a PR, I would just like some guidance of your on where to implement such Tokenizer (and whether should we import it from Keras-NLP or copy-paste + adapt it).
As they haven't released version of the package in a while, maybe creating a base BPE Tokenizer on a `tf_tokenization_utils` file and then having the specifics of `prepare_for_tokenization` be implemented for each model.
What do you think? I would just like to know if you folks are OK with some copy pasted code from `keras-nlp`.
<|||||>Hi @piEsposito - good questions all round. Right now we only have one TF tokenizer in the library, the BERT tokenizer in `tf_tokenization_bert.py`. I think a good plan would be to add a single BPE tokenizer for a couple of popular models that use BPE (e.g. RoBERTa, GPT or DeBERTa). After that, we should be able to see how much code is shared and how much is model-specific and then refactor out a shared method for future tokenizers. WDYT?
Also, code copy-pasted from `keras-nlp` is fine as long as the licence allows it, but we also don't mind having `keras-nlp` as a dependency, since in-graph tokenizers will already have `tensorflow-text` as a dependency anyway.<|||||>@Rocketknight1 I will try doing it for T5 or GPT.
About having `keras-nlp` as a dependency: I've opened an issue there https://github.com/keras-team/keras-nlp/issues/442 asking for them to release to Pypi a version with BPE Tokenizer, and in the meantime will try to implement it in a way that works with T5, then copy paste only if this is needed.
How does that sound?
I should have something in that sense on the next week if you approve the idea.<|||||>Sounds perfect to me!<|||||>Hi! This is Chen from KerasNLP, we can release a branch specially for the BPE if you need it soon, but before that there is a concern I want to raise:
I made the TF BPE with many regex hacks because tf_text uses Google re2, which does not fully match the python re. Although I tested on multiple datasets (multiple languages as well) and it worked well, I am still not 100% confident it provides the exactly same result as openAI BPE. So please make sure you have a good testing coverage before using it in production, thanks!<|||||>Thanks @chenmoneygithub! We have quite a lot of models with BPE tokenizers that we could probably test against.<|||||>Awesome! We will go ahead and make a release for BPE tokenizer then. Will update this thread when that is finished.<|||||>We have made a release containing the BPE tokenizer: https://pypi.org/project/keras-nlp/
Please let us know if you find any issues, thank you!<|||||>cc @piEsposito to the above comment! ^<|||||>@Rocketknight1 thank you let me get started!<|||||>Yeah I'm exploring it and guess what it is not as easy as I thought haha. <|||||>@piEsposito That was my experience too - are you having trouble even getting the results to match for a single model?<|||||>@Rocketknight1 I'm having it too, which is kinda fun, because the tokens are total mismatches, but when I decode them back they are still the same as the input. I think we will have to go deep on the internals to check for the differences. <|||||>Ugh, of course - there are multiple valid tokenizations for the same string. I'm not enough of a tokenizers expert to know the exact algorithms used and if they differ between the many BPE models we have.<|||||>@Rocketknight1 I could make them match for GPT2, should open a PR this week. Sorry for the delay, this thing was an order of magnitude harder to do that I was estimating.<|||||>Don't apologize at all - this is something we were struggling with too!<|||||>Thanks for understanding, I'll try to get at least a draft today. <|||||>@Rocketknight1 after some delay I could figure out a way to make it work, even with generation. I've requested your review and, as we agreed, kept the implementation minimal for us to get a sense of the effort needed to create the in-layer TF Tokenizer for the models that use BPE.<|||||>@Rocketknight1 all right, that was fun. Let's do it for CLIP now and figure out how we put the `<w/>` logic inside the keras-nlp bpe tokenizer.<|||||>Awesome, good luck!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey don't stale us buddy. We're just putting out some fires and enjoying the holidays. |
transformers | 19,991 | closed | Cached sin and cos matrices for rotary at GPT-J model initialization for faster generation | # What does this PR do?
Makes generation faster by caching the sin/cos matrices for the rotary embeddings, up to the maximum sequence length, at model initialization, so they are not recomputed on every forward pass.
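Conceptually, the caching looks like this (a hypothetical sketch, not the actual diff):

```python
import torch

def build_rotary_cache(max_positions: int, rotary_dim: int, base: float = 10000.0):
    # precompute the sin/cos table once, at model init
    inv_freq = 1.0 / (base ** (torch.arange(0, rotary_dim, 2).float() / rotary_dim))
    positions = torch.arange(max_positions).float()
    angles = torch.einsum("i,j->ij", positions, inv_freq)  # [max_positions, rotary_dim // 2]
    return torch.sin(angles), torch.cos(angles)

sin_cache, cos_cache = build_rotary_cache(max_positions=2048, rotary_dim=64)
# each forward pass then only slices the cache, e.g. sin_cache[:seq_len], cos_cache[:seq_len]
```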
around a ~15% speedup overall
Tested on A100-SXM4-80GB
Benchmarks:
1 token forward average runtime out of 100 iterations with cached sin cos:
0.023421470640283642s
1 token forward average runtime out of 100 iterations without cached sin cos:
0.02738137301433869s
10 generations with 1 token context and 40 tokens generated without sincos caching:
1.410405054409057s
10 generations with 1 token context and 40 tokens generated with sincos caching:
1.2199230638332665s
Test script:
```
import time

import torch

# assumes `model` is a GPT-J model already loaded on the GPU
torch.manual_seed(123)
input_ids = torch.randint(0, 100, (1, 1)).cuda().long()
iterations = 11
with torch.no_grad():
    for i in range(iterations):
        if i == 1:
            # start counting time after iteration 1 due to pytorch warming up
            t = time.perf_counter()
        outputs = model.generate(input_ids, max_length=input_ids.shape[-1] + 50, min_length=input_ids.shape[-1] + 50, do_sample=True, use_cache=True)
        # outputs = model.forward(input_ids)
print((time.perf_counter() - t) / (iterations - 1))
```
Models:
- GPT-J
| 10-31-2022 17:12:44 | 10-31-2022 17:12:44 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19991). All of your documentation changes will be reflected on that endpoint.<|||||>Mmm, now it looks like the tests are not running for some reason. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? And/or push an empty commit?<|||||>Hi @kurumuz Thank you for the PR. I left a few comments, especially for the shape and the case where `self.rotary_dim = None`.
It is not very clear to me why the (changed) shapes of `sin` and `cos` doesn't cause any issue.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,990 | closed | [EncoderDecoderModel] Add support for gradient checkpointing | # What does this PR do?
This PR adds gradient checkpointing support for EncoderDecoderModel in PyTorch.
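With this change, usage would look roughly like the following (a hypothetical sketch; the checkpoint names are placeholders):

```python
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
model.gradient_checkpointing_enable()  # trades extra compute for lower activation memory during training
```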
As requested on the forum: https://discuss.huggingface.co/t/feature-request-gradient-checkpointing-for-encoderdecodermodel/25278 | 10-31-2022 17:02:40 | 10-31-2022 17:02:40 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,989 | closed | Pin torch to < 1.13 temporarily | # What does this PR do?
Pin torch to < 1.13 temporarily, as it causes strange failures (segmentation fault) on CircleCI.
Evidence can be found in the following CircleCI runs:
with torch 1.12.1
https://app.circleci.com/pipelines/github/huggingface/transformers/50619/workflows/5b665ba2-5f45-4b61-9e08-de6c8a2349cd
with torch 1.13.0
https://app.circleci.com/pipelines/github/huggingface/transformers/50621/workflows/8a137f60-2e66-48fd-aeb6-1a8d49369d4c | 10-31-2022 16:43:44 | 10-31-2022 16:43:44 | Failed test is irrelevant. Merge now.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19989). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,988 | closed | Tranformers documentation translation to Italian #17459 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # 17459
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-31-2022 16:30:28 | 10-31-2022 16:30:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,987 | closed | [Don't merge] Debug CircleCI | # What does this PR do?
debug circleci | 10-31-2022 16:19:02 | 10-31-2022 16:19:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,986 | closed | [ASR Examples] Update 'tasks' for model card | # What does this PR do?
The task 'automatic-speech-recognition' was added to the model card creator in #19985. This PR updates all the ASR example scripts accordingly.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-31-2022 15:18:55 | 10-31-2022 15:18:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,985 | closed | [modelcard] Update for ASR | # What does this PR do?
Updates the modelcard to include ASR in the task mapping and task-tag-to-name mapping, and the WER in the metric tags.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-31-2022 15:12:06 | 10-31-2022 15:12:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,984 | closed | Improve model tester | # What does this PR do?
Some model testers have `__init__` like
```python
def __init__(
self,
parent,
)
```
and others accept many more arguments to customize them.
- This PR makes them accept arguments, so we have uniform style.
- This is also necessary to make `tiny model creation` give **more correct** outputs (i.e. config/model/processor files), where `vocab_size` needs to be kept in sync between the tiny config (via model testers) and the converted (smaller) tokenizers (see the sketch after this list).
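Illustrative only (no specific model): this is the shape the testers move towards, so values like `vocab_size` can be overridden when building tiny models.

```python
class DummyModelTester:
    def __init__(self, parent, batch_size=13, seq_length=7, vocab_size=99, hidden_size=32):
        self.parent = parent
        self.batch_size = batch_size
        self.seq_length = seq_length
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
```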
I think for the review, you can just look the change in a single model test file :-)
#### TODO (in another PR 🙏 ): same change for some TF/Flax model testers | 10-31-2022 15:09:28 | 10-31-2022 15:09:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Just FYI
I updated this PR for `CANINE` https://github.com/huggingface/transformers/pull/19984/commits/bc83147623cb6441fed78d5ca9c46b114791706b and `ESMFold` https://github.com/huggingface/transformers/pull/19984/commits/0f8e4061f884fee859e63bf4b5d4bd0b8365906a
Don't really think you will reject these changes, but just in case!<|||||>Still LGTM! |
transformers | 19,983 | closed | Cannot export Donut models to ONNX | ### System Info
- `transformers` version: 4.24.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
It seems that the default tolerance of 1e-5 in the ONNX configuration for vision-encoder-decoder models is too small for Donut checkpoints (currently seeing 5e-3 - 9e-3 is needed). As a result, many (all?) Donut checkpoints can't be exported using the default values in the CLI.
Having said that, the relatively large discrepancy in the exported models suggests there is a deeper issue involved with tracing these models and it would be great to eliminate this potential source of error before increasing the default value for `atol`.
Steps to reproduce:
1. Pick one of the Donut checkpoints from the [`naver-clover-ix`](https://huggingface.co/naver-clova-ix) org on the Hub
2. Export the model using the ONNX CLI, e.g.
```
python -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-docvqa --feature=vision2seq-lm onnx/
```
3. The above gives the following error:
```
ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.0091094970703125 for [ -0.6990948 -49.217014 3.7758636 ... 3.2241364 2.7353969
-51.43289 ] vs [ -0.6989002 -49.215897 3.7760048 ... 3.223978 2.7355423
-51.433964 ]
```
<details><summary>Full stack trace</summary>
<p>
```
Framework not requested. Using torch to export to ONNX.
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 4.74k/4.74k [00:00<00:00, 791kB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 803M/803M [00:09<00:00, 81.2MB/s]
/Users/lewtun/miniconda3/envs/transformers/lib/python3.8/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/TensorShape.cpp:2895.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 363/363 [00:00<00:00, 85.0kB/s]
Using framework PyTorch: 1.12.1
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:230: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if num_channels != self.num_channels:
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:220: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if width % self.patch_size[1] != 0:
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:223: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if height % self.patch_size[0] != 0:
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:536: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if min(input_resolution) <= self.window_size:
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:136: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
batch_size, height // window_size, window_size, width // window_size, window_size, num_channels
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:148: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
windows = windows.view(-1, height // window_size, width // window_size, window_size, window_size, num_channels)
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:622: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
was_padded = pad_values[3] > 0 or pad_values[5] > 0
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:623: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if was_padded:
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:411: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
batch_size // mask_shape, mask_shape, self.num_attention_heads, dim, dim
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:682: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
height_downsampled, width_downsampled = (height + 1) // 2, (width + 1) // 2
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:266: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
should_pad = (height % 2 == 1) or (width % 2 == 1)
/Users/lewtun/git/hf/transformers/src/transformers/models/donut/modeling_donut_swin.py:267: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if should_pad:
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
Validating ONNX model...
-[✓] ONNX model output names match reference model ({'last_hidden_state'})
- Validating ONNX Model output "last_hidden_state":
-[✓] (3, 4800, 1024) matches (3, 4800, 1024)
-[x] values not close enough (atol: 0.0001)
Traceback (most recent call last):
File "/Users/lewtun/miniconda3/envs/transformers/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Users/lewtun/miniconda3/envs/transformers/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/lewtun/git/hf/transformers/src/transformers/onnx/__main__.py", line 180, in <module>
main()
File "/Users/lewtun/git/hf/transformers/src/transformers/onnx/__main__.py", line 107, in main
validate_model_outputs(
File "/Users/lewtun/git/hf/transformers/src/transformers/onnx/convert.py", line 455, in validate_model_outputs
raise ValueError(
ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 0.0091094970703125 for [ -0.6990948 -49.217014 3.7758636 ... 3.2241364 2.7353969
-51.43289 ] vs [ -0.6989002 -49.215897 3.7760048 ... 3.223978 2.7355423
-51.433964 ]
```
</p>
</details>
### Expected behavior
Donut checkpoints can be exported to ONNX using either a good default value for `atol` or changes to the modeling code enable much better agreement between the original / exported models | 10-31-2022 14:01:37 | 10-31-2022 14:01:37 | cc @mht-sharma would you mind taking a look at this? It might be related to some of the subtleties you noticed with Whisper and passing encoder outputs through the model vs using the getters<|||||>```python -m transformers.onnx --model=naver-clova-ix/donut-base-finetuned-cord-v2 --feature=vision2seq-lm scratch/onnx --atol 1e-2```
with ```--atol 1e-2``` it works, but value of atol is low.
I think it is better to convert the model separately:
- Encoder
- Decoder
- Decoder with past value.
And pipeline it together.
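A rough sketch of that split export plus pipelined inference with `onnxruntime` (the file names, graph input/output names and shapes below are assumptions and depend on how each part is exported):
```python
# Rough sketch only: assumes the encoder and decoder were exported to these files and
# that the graphs expose these input/output names, which may differ in practice.
import numpy as np
import onnxruntime as ort

encoder = ort.InferenceSession("donut_encoder.onnx")
decoder = ort.InferenceSession("donut_decoder.onnx")

pixel_values = np.random.randn(1, 3, 2560, 1920).astype(np.float32)  # dummy image; resolution depends on the checkpoint
encoder_hidden_states = encoder.run(None, {"pixel_values": pixel_values})[0]

decoder_input_ids = np.array([[0]], dtype=np.int64)  # the real start prompt is the task token ids
logits = decoder.run(
    None,
    {"input_ids": decoder_input_ids, "encoder_hidden_states": encoder_hidden_states},
)[0]
next_token_id = int(logits[0, -1].argmax())  # greedy step; loop this to generate the full sequence
```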
<|||||>@BakingBrains I mentioned this here #19401<|||||>Update:
The error occurs only in the encoder part of the model, i.e. `Donut`. I updated the model inputs to actual inputs from the dataset, however the issue still persisted.
The issue starts happening from the following [modeling_donut_swin.py#L501 ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/donut/modeling_donut_swin.py#L501 ) layer activation in the `DonutSwinLayer`. The `GeluActivation` causes the outputs to diverge between original and onnx models. After removing the activation or using `relu` the model works till 1e-4 atol.<|||||>> Update: The error occurs only in the encoder part of the model i.e `Donut`. Updated the model inputs to actual inputs from dataset, however the still still persisted.
>
> The issue starts happening from the following [modeling_donut_swin.py#L501 ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/donut/modeling_donut_swin.py#L501) layer activation in the `DonutSwinLayer`. The `GeluActivation` causes the outputs to diverge between original and onnx models. After removing the activation or using `relu` the model works till 1e-4 atol.
The original SwinModel is also using this: https://huggingface.co/microsoft/swin-base-patch4-window7-224-in22k/raw/main/config.json
If you try to convert it, you don't get this issue
<|||||>Any updates on it @mht-sharma ?<|||||>Hi, @lewtun & @mht-sharma any updates?<|||||>Hi @WaterKnight1998 , apologies for late response. I was not able to work actively on the issue past few weeks. However, I have seen similar issues with other models and it was mainly because of the sensitivity to the inputs. This model also gave similar behaviour when trying different inputs during validation. However, the error was still around 0.001X.
Since the model architecture of `SwinModel` and its `Donut` Encoder is same, it's highly likely that the issue is with the used inputs. But I will validate this once and get back to you in few days.<|||||>> Hi @WaterKnight1998 , apologies for late response. I was not able to work actively on the issue past few weeks. However, I have seen similar issues with other models and it was mainly because of the sensitivity to the inputs. This model also gave similar behaviour when trying different inputs during validation. However, the error was still around 0.001X.
>
> Since the model architecture of `SwinModel` and its `Donut` Encoder is same, it's highly likely that the issue is with the used inputs. But I will validate this once and get back to you in few days.
Thank you for the explanation. I am looking forward for your fix :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @WaterKnight1998 @mht-sharma ,
Do you have inference script for Donut document parsing model using encoder and decoder onnx models? Similar to this [TrOCR gist](https://gist.github.com/mht-sharma/f38c670930ac7df413c07327e692ee39)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,982 | closed | Add MEGA | ### Model description
MEGA introduces a new attention method which incorporates gating and exponential moving averages to create strong local dependencies, reducing the need for full softmax attention. MEGA set a new SOTA on Long Range Arena, and MEGA-chunk performs nearly as well while achieving linear complexity WRT sequence length. I have seen really promising results from my own experiments with MEGA on long documents -- both in efficiency and model performance. It would be awesome to have MEGA (+ MEGA-chunk) available in the Hugging Face ecosystem!
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
* [Paper](https://arxiv.org/abs/2209.10655)
* [Official implementation](https://github.com/facebookresearch/mega)
* [Links to pretrained weights](https://github.com/facebookresearch/mega#models-checkpoints)
* I'm only aware of @violet-zct through the official MEGA repo | 10-31-2022 14:00:04 | 10-31-2022 14:00:04 | > I have seen really promising results from my own experiments with MEGA on long documents
Cool! Could you elaborate?
It would be very useful indeed, especially if there would be pre-trained weights for longer sequence tasks, like summarization of very long texts, or classifying multiple images (this has been asked a lot for LayoutLM-like models, which only operate on single document images).
However I'm not seeing very useful pre-trained weights at the moment, would be very useful to have a BERT-like, Wav2Vec2 or ViT-like checkpoint for operating on long sequences<|||||>Thanks for the quick response @NielsRogge!
I have only experimented with MEGA in a long-document classification setting so far, and I trained the full architecture from scratch without using the pre-trained weights. I used the authors' implementation and set up a BERT-style document classification class, using similar architectural details as in the `Text` task for LRA (Appendix D), but with `encoder_chunk_size=16`.
For performance details in my initial experiment: I used roughly 7k documents in training and 2k in validation, with up to ~3k tokens in a document. Using a single T4 GPU, each epoch (train + eval) averaged ~22 seconds. This is quite a bit faster than I've seen with other linear-complexity attention mechanisms, and I suspect it's largely due to the significant decrease in model size (4-6 layers with a single attention head in each). It's hard to compare model performance since I trained fully from scratch, but MEGA certainly seemed to reach competitive performance for my task.
I agree that the currently available model weights aren't the most generally useful, and that a BERT-like encoder would be great. I'm not sure if the authors intend to release something like that, but if not, hopefully the speed gains reduce the barrier for community LM contributions.<|||||>Hey @mnaylor5! Apologies if this is implied, but are you working on contributing this model or just indicating it would be great to have? I'd be happy to help implement in the transformers repo if the latter (or in either case if you'd be interested!). I have some 3090s to throw at this, though perhaps this isn't enough compute?
In any case, excited to see if I can help & to see this get added to HF! <|||||>Hi @MarkRich - no worries, I definitely could have been clearer. At this point, I am mainly just saying that it would be great to have available in the Hugging Face ecosystem. I'd love to contribute, but I doubt I can realistically commit the time over the next few weeks at least. I put up the issue in case anyone from the HF team or community got excited about implementing it 😄 <|||||>Sweet, I can take a crack at it. @NielsRogge is there any chance I can get added to a slack channel or something similar so that I can ask questions? My email address is [email protected] <|||||>Sure, I'll create a channel and send you an invite.<|||||>My research involves the MEGA model. Is there any way that I can contribute to this? Happy to make it available on HuggingFace!<|||||>Hi,
That'd be great. Could you provide your email address? I'll add you to the Slack channel<|||||>Thank you! My email is lingjzhu at umich.edu<|||||>@NielsRogge Hi, this is a gentle follow-up about adding MEGA. Could I start to work on it now? <|||||>@NielsRogge Nevermind. I have joined. Thank you!<|||||>Hi there! I was able to set aside some time to pretrain a very basic Mega model using BERT-style masked language modeling. I know this was something that @NielsRogge mentioned as being more useful, so I hope these pretrained weights will be helpful for getting Mega into `transformers`!
I used the official Mega implementation (specifically the `MegaEncoderLayer` class) and pretrained on wikitext-103 - nothing earth-shattering, but hopefully helpful. :smile: The model specs and code I used for training are in [this Colab notebook](https://colab.research.google.com/drive/1qfUO6o5HRdxBblWlw058HVyvaEPhPpH8?usp=sharing) along with code for loading classes and weights; and the weights and tokenizer are saved in [this repo on the HF model hub](https://huggingface.co/mnaylor/mega-wikitext-103). <|||||>Hi there @lingjzhu @MarkRich @NielsRogge - any update on how this is going? I've been using the Mega architecture (from the original implementation) more in my own experiments, and I am super excited about using it more within the HF ecosystem.
I might have some time to help with the implementation of Mega into Transformers over the next few weeks, so I would be happy to contribute to any ongoing efforts or take a stab at contributing it myself.<|||||>> Hi there @lingjzhu @MarkRich @NielsRogge - any update on how this is going? I've been using the Mega architecture (from the original implementation) more in my own experiments, and I am super excited about using it more within the HF ecosystem.
>
> I might have some time to help with the implementation of Mega into Transformers over the next few weeks, so I would be happy to contribute to any ongoing efforts or take a stab at contributing it myself.
@mnaylor5 That would be nice. I have been working on the text version and have an initial WIP codebase. However, due to interruptions by some life events, I haven't completed it yet. I will upload it to my github this weekend and maybe we can work together to complete it. <|||||>@lingjzhu cool, no worries! I'll get started and look forward to checking out your code 😄 <|||||>@NielsRogge - apologies if there's a better place to ask this, or if I'm missing some documentation that explains this. The Mega paper includes experiments on encoder-only tasks (text and image classification) as well as seq2seq (machine translation, language modeling with encoder-decoder). Is there a preference from the HF team on how to structure these separate approaches? My own work with Mega has been within encoder-only settings (pre-training with masked LM and fine-tuning on sequence or token classification), so I'm inclined to start by implementing it similarly to BERT, but I wasn't sure if this would be a problem.<|||||>@mnaylor5 My WIP code is [here](https://github.com/lingjzhu/transformers/tree/main). The code is in the `src/transformers/models/src` but it still could not run at the moment.
I have started by copying the code for T5 model and using mega as a drop-in replacement for the attention module. That said, I have moved all mega-related code from the official repo to `modeling_mega.py` and am now fusing them together with the `pretrained_model` class. Given that T5 has both an encoder and a decoder, it would be great to implement them all in one. I think most of the existing code can be reused. Maybe we could coordinate and finish the rest of the work?
Once the implementation is ready, I can pretrain an encoder, a decoder, and an encoder-decoder model on a medium size dataset and push them to the hub. <|||||>Thanks @lingjzhu! I ended up doing a similar pure PyTorch reimplementation of the original Mega code - after doing that and reading through the Hugging Face documentation, I think I have a solid understanding for how to proceed. Even though a large part of the Mega architecture is the EMA-based attention, it probably makes sense to implement the full Mega blocks that they propose (including the normalized feed-forward layer) rather than dropping in the EMA portion into another architecture like T5. This approach will keep the implementation in line with what the Mega paper introduces, and using T5 as a base would also make it more difficult to work within encoder-only settings like document classification.
With this in mind and in response to my own question above, I think it makes the most sense to approach the Mega implementation similarly to BigBird, which is conceptually similar to the improvements offered by Mega - efficiency improvements over standard self-attention which can be used in encoder-only, decoder-only, and seq2seq settings. The [BigBird implementation](https://github.com/huggingface/transformers/blob/main/src/transformers/models/big_bird/modeling_big_bird.py) follows the approach of BERT, which sets things up in a way that allows `BigBirdModel` to be used as either an encoder or decoder based on the provided config. If my understanding is correct, the extension to seq2seq is then handled by Hugging Face's [`EncoderDecoderModel` class](https://huggingface.co/docs/transformers/model_doc/encoder-decoder).
I have gotten started by using the `add-new-model-like` command and starting from RoBERTa (since I used a RoBERTa tokenizer in the MLM pretraining in my earlier comment), and I'm working through the implementation now.
**One question for @NielsRogge / the Hugging Face team**: the original implementation of Mega does not include token type embeddings - it does not preclude their usage, but their tasks did not use token type embeddings. I'm afraid that tasks like QA would be difficult to implement without these embeddings, but including them would introduce a divergence from any of the model checkpoints currently available from the original repo (including the ones I linked above from the BERT-style encoder). Do you have a recommended way of approaching this?<|||||>Hi,
Some models like DistilBERT also don't support token_type_ids and they work just fine (thanks to the SEP token). But feel free to add support for token type ids, it can't hurt using them :)<|||||>@NielsRogge thanks for the quick response. That makes sense, and I'll add support for them 😄 <|||||>@mnaylor5 You are a saint for posting that Colab! I have been looking to train Mega too. @NielsRogge How is it coming, integrating MEGA into Huggingface?<|||||>@mnaylor5 I am getting this error on your colab:
```
5 frames
[/content/./mega/fairseq/modules/moving_average_gated_attention.py](https://localhost:8080/#) in forward(self, x, padding_mask, incremental_state, need_weights, attn_mask, before_attn_fn)
    303         # B x L x S -> B x K x C x S
    304         nc = seq_len // self.chunk_size
--> 305         q = q.reshape(bsz, nc, self.chunk_size, self.zdim)
    306
    307         if ctx_len < self.chunk_size:

RuntimeError: shape '[32, 621, 2, 64]' is invalid for input of size 2545664
```
Do I need to add some padding and the padding mask?<|||||>Hi,
MEGA is now available here: https://huggingface.co/docs/transformers/main/model_doc/mega<|||||>@Tylersuard Yep, you can use MEGA in the `main` branch of Transformers - that PR was merged just a couple of weeks ago.
I haven't dug into your specific error, but I'd guess that you're using chunking and need to pad inputs to a multiple of your chunk size |
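If that is the cause, a minimal sketch of padding inputs to a multiple of the chunk size (the `chunk_size` and `pad_token_id` values are arbitrary examples, and the exact mask convention depends on the Mega implementation):
```python
# Sketch: pad a batch of token ids to a multiple of the chunk size and build a padding mask.
import torch

chunk_size, pad_token_id = 128, 0
input_ids = torch.randint(5, 1000, (32, 621))            # batch of length-621 sequences

pad_len = (-input_ids.shape[1]) % chunk_size              # 621 -> pad by 19 to reach 640
input_ids = torch.nn.functional.pad(input_ids, (0, pad_len), value=pad_token_id)
padding_mask = (input_ids == pad_token_id)                # True for padding positions (convention may differ)
```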
transformers | 19,981 | closed | Add Audio Spectrogram Transformer | # What does this PR do?
Fixes #16383
This PR adds the [Audio Spectrogram Transformer (AST)](https://arxiv.org/abs/2104.01778) model from MIT.
Similar to Whisper (actually prior to Whisper), the model treats audio as an image and applies a Vision Transformer on it.
The model gets SOTA results on audio classification benchmarks. | 10-31-2022 13:23:52 | 10-31-2022 13:23:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey, how can I use it already? I installed the branch, but unsure how to load the model. I'm new with huggingface :D<|||||>Hey @FrankFundel - hoping @NielsRogge adds a nice example as part of this PR documenting just that 🤞 In the mean time, you can try adapting the example from https://huggingface.co/docs/transformers/tasks/audio_classification
You'll need to change the repo names from `facebook/wav2vec2-base` to the appropriate Audio Spectrogram Transformer repo name. You'll also need to change the preprocess function (https://huggingface.co/docs/transformers/tasks/audio_classification#preprocess) to something like:
```python
def preprocess_function(examples):
audio_arrays = [x["array"] for x in examples["audio"]]
    input_features = feature_extractor(audio_arrays, sampling_rate=feature_extractor.sampling_rate)
return input_features
```
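For a fuller (equally untested) sketch of inference once a checkpoint is on the Hub; the checkpoint name and auto-class support for AST below are assumptions rather than confirmed details:
```python
# Sketch under assumptions: the checkpoint name is a placeholder and auto-class
# support for AST is assumed rather than confirmed in this thread.
import numpy as np
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

checkpoint = "MIT/ast-finetuned-audioset-10-10-0.4593"  # placeholder repo name
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = AutoModelForAudioClassification.from_pretrained(checkpoint)

waveform = np.zeros(16000, dtype=np.float32)  # one second of silence at 16 kHz
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```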
This is all currently untested, so might require some playing around to make it work.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19981). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19981). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19981). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,980 | closed | Update Special Language Tokens for PLBART | # What does this PR do?
This PR fixes the special tokens for PLBartTokenizer, raised in Issue #19505. Previously, the tokenizer treated java, python, etc. as special tokens and removed them when decoding was performed.
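To illustrate the reported behaviour, a small sketch (the checkpoint name and the exact surface form of the language-code tokens are assumptions based on the issue description, not verified output):
```python
# Illustrative sketch of the reported issue; checkpoint name and exact token handling
# are assumptions taken from the issue description rather than verified output.
from transformers import PLBartTokenizer

tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base", src_lang="java", tgt_lang="python")
ids = tokenizer("public int add(int a, int b) { return a + b; }")["input_ids"]
# Before the fix, skip_special_tokens=True also stripped the language-code tokens;
# after the fix only genuine special tokens (eos, pad, ...) are removed on decode.
print(tokenizer.decode(ids, skip_special_tokens=True))
```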
@LysandreJik | 10-31-2022 12:57:49 | 10-31-2022 12:57:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @ArthurZucker <|||||>Hey! Great work 👍
We should make sure the CI tests are all green, and could you add a new test like `test_special_code_tokenization` where we make sure that the expected behavior of #19505 works<|||||>@ArthurZucker ok, tests are green. I added to the `test_full_multi_tokenizer,` `test_full_base_tokenizer` tests to check for this behaviour as the test tokenizer is already loaded in these tests. <|||||>@ArthurZucker bumping this. <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19980). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19980). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19980). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks, @ArthurZucker! I made some changes to the tests with intermediate inputs and accepted several changes.
> calling `_convert_lang_code_special_format` in the `src_lang.setter` to avoid calling it everywhere / avoid one liner functions
With this, wouldn't the `src_lang.setter` have to be called everywhere instead?
I think it's quite difficult to make it backward compatible and simpler as you suggest, as there are lots of places where the user provides the src_lang, tgt_lang, and there is also `self._src_lang` as well as `self.src_lang`. This at least preserves the functionality as before and keeps the mapping under the hood.<|||||>@ArthurZucker made that readability change. can we merge? |
transformers | 19,979 | closed | Run shellcheck on all *.sh scripts and attempt to fix errors | Also, refactor a few repetitive code patterns
# What does this PR do?
Attempt to fix shell scripting errors in examples etc.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [n/a] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [n/a] Did you write any new necessary tests?
## Who can review?
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
| 10-31-2022 12:47:26 | 10-31-2022 12:47:26 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19979). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks. I'm hoping it could be accepted simply to give users of the code base better examples to copy/paste from; the changes are mainly mechanical.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,978 | open | LayoutLMv3 Processor - subword does not get assigned -100 with unusual words | ### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.4.0-1060-aws-x86_64-with-glibc2.10
- Python version: 3.8.12
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.10.1+cu113 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import numpy as np
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
image = (np.random.rand(100, 100, 3) * 255).astype(np.uint8) # dummy image
words = ['pencil', '0000000000000000', 'phone']
boxes = [[1, 2, 3, 4], [10, 11, 12, 13], [20, 21, 22, 23]]
word_labels = [0, 0, 0]
encoding = processor(image, words, boxes=boxes, word_labels=word_labels, return_tensors="pt")
print(encoding['input_ids'])
print(processor.tokenizer.convert_ids_to_tokens(encoding['input_ids'].flatten()))
print(encoding['labels'])
# Output:
# tensor([[ 0, 21451, 1437, 49393, 1028, 2]])
# ['<s>', 'Ġpencil', 'Ġ', '0000000000000000', 'Ġphone', '</s>']
# tensor([[-100, 0, 0, 0, 0, -100]])
```
### Expected behavior
Since we are passing only 3 words `words = ['pencil', '0000000000000000', 'phone']`, I am expecting `encoding['labels']` to have only 3 non -100 labels (`(encoding['labels'] != -100).sum() == 3`).
However the output is `tensor([[-100, 0, 0, 0, 0, -100]])` where it contains 4 non -100 labels. So there is a mismatch between the input words and labels after processing. The same thing happens with '**********' word and probably other unusual "word"s. | 10-31-2022 11:38:43 | 10-31-2022 11:38:43 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge @sgugger Could this be a problem which can also affect other users as well or am I doing something wrong? (`word_ids()` works fine in this case by the way)
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I've seen other people reporting wrong behaviour with unusual characters as well.
The logic to go from word-level labels to token-level labels is [here](https://github.com/huggingface/transformers/blob/3b309818e794cf6ff7fa79f34ea3e7b2386156da/src/transformers/models/layoutlmv3/tokenization_layoutlmv3_fast.py#L635-L660), might be worth looking at this more in depth.
I'll mark this issue as a good first issue as I currently don't have the bandwidth to look into it.
<|||||>The problem appears to be that for certain words (like "0000000000000000"), the first word piece is the character "Ġ", which is not being counted as part of the word. As a result, the offset for the following word piece is 0, causing both words to receive a label. Apparently the issue originates from `encode_batch` and from there to `encode_char_offsets` (which is in Rust).
This is my first attempt to contribute here, so I may be completely wrong...what can I do from here to help? @NielsRogge <|||||>Hello, may I ask you if there is anything left for me and my friends to contribute for this issue?<|||||>The same problem arises with all BPE based tokenizers. Example with LayoutXLM:
```
import numpy as np
from transformers import LayoutXLMTokenizerFast
processor = LayoutXLMTokenizerFast.from_pretrained(
"microsoft/layoutxlm-base", apply_ocr=False
)
words = ["pencil", "0000000000000000", "phone"]
boxes = [[1, 2, 3, 4], [10, 11, 12, 13], [20, 21, 22, 23]]
word_labels = [1, 2, 3]
encoding = processor(
text=words, boxes=boxes, word_labels=word_labels, return_tensors="pt"
)
print(encoding["input_ids"])
print(processor.convert_ids_to_tokens(encoding["input_ids"].flatten()))
print(encoding["labels"])
# Output:
# tensor([[ 0, 5551, 13003, 6, 28568, 197094, 197094, 24089, 2]])
# ['<s>', '▁pen', 'cil', '▁', '0000', '000000', '000000', '▁phone', '</s>']
# tensor([[-100, 1, -100, 2, 2, -100, -100, 3, -100]])
```
The main issue is that BPE can produce an "empty" token at the beginning of a word with `offset_mapping = (0, 0)`, which leads to the following non-empty token (the continuation of the word) having an `offset_mapping = (0, X)`.
A dirty solution is to check where @NielsRogge indicated and add a guard if the previous token was empty. The problem is that this needs to be done for all BPE-based tokenizers. Only checking whether the `offset_mapping` starts with 0 is not sufficient when an empty token exists.
The other solution is to fix BPE (should it even be able to produce empty tokens?) in the Rust source.
The problem is NOT present in the NOT fast tokenizer provided by `sentencepiece` because [it operates at word level instead of token level](https://github.com/huggingface/transformers/blob/3b309818e794cf6ff7fa79f34ea3e7b2386156da/src/transformers/models/layoutlmv3/tokenization_layoutlmv3.py#L1124-L1135). |
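For concreteness, the guard described above might look roughly like this over a toy example (the offsets and variable names are illustrative; they do not match the exact code in the fast tokenizers):
```python
# Toy sketch of the proposed guard; offsets/word ids mirror the LayoutLMv3 example above.
word_labels = [0, 1, 2]
offsets  = [(0, 0), (0, 6), (0, 0), (0, 16), (0, 5), (0, 0)]   # <s>, 'Ġpencil', 'Ġ', '0000...', 'Ġphone', </s>
word_ids = [None,   0,      1,      1,       2,      None]

labels = []
prev_offset, prev_word_idx = None, None
for offset, word_idx in zip(offsets, word_ids):
    if word_idx is None:
        labels.append(-100)  # special tokens
    else:
        # a token continues a word if its offset does not start at 0, OR if the previous
        # token of the same word was an "empty" (0, 0) piece such as a lone 'Ġ'/'▁'
        continuation = offset[0] != 0 or (prev_offset == (0, 0) and prev_word_idx == word_idx)
        labels.append(-100 if continuation else word_labels[word_idx])
    prev_offset, prev_word_idx = offset, word_idx

print(labels)  # -> [-100, 0, 1, -100, 2, -100]: exactly one label per word
```
On the user side, re-aligning labels from `word_ids()` (which, as noted earlier in the thread, is unaffected) is another possible workaround.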
transformers | 19,977 | closed | Add ESMFold | cc @sgugger @LysandreJik @tomsercu @rmrao @nikitos9000
Opening a draft PR because deadlines are getting tight and I'd like to get everyone on the same page!
What's done:
- [X] Create a minimal port of `openfold`
- [X] Port ESMFold as `EsmForProteinFolding`
- [X] Update weight conversion scripts to port ESMFold weights from original repo
- [X] Update config formats to support ESMFold models
TODO:
- [x] Resolve small output discrepancies in ESM-2 stem that cause differences in final protein predictions
- [x] Add documentation
- [x] Add testing
- [x] Ensure everything is importable from the `transformers` root
- [x] ~Add an auto class for protein folding?~
- [x] Ensure non-folding ESM classes can be loaded with AutoModel
- [x] Remove some `openfold` functions/methods that aren't being called
- [x] Clean up the `openfold` port into a single dir/file
- [x] Ensure all `openfold` code is correctly licenced
- [x] ~Add auxiliary method(s) to convert the outputs into bio file formats like `pdb`~
- [ ] Reupload ESM checkpoints with the new formats
- [x] Upload ESMFold_v1 checkpoint | 10-31-2022 11:25:32 | 10-31-2022 11:25:32 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Merging for now, there are still a few improvements needed (example in a docstring for instance) but they can go in their own PRs :-) |
transformers | 19,976 | closed | Speed up TF token classification postprocessing by converting complete tensors to numpy | # What does this PR do?
The postprocessing of the token classification pipeline when using TensorFlow is not as fast as it could be. Some experiments and profiling showed that, in some settings, most of the pipeline's time is spent in `gather_pre_entities`:
Before:
<img width="500" alt="before" src="https://user-images.githubusercontent.com/37573274/198983453-8a34e7a8-6f67-4010-9509-359b4fb9bdf7.png">
After:
<img width="500" alt="after" src="https://user-images.githubusercontent.com/37573274/198983760-1f27b4b3-6d55-4137-a22c-707992a11637.png">
This PR speeds it up by converting `input_ids` and `offset_mapping` to numpy before passing them to `gather_pre_entities`. Thereby, the tensor is moved to the appropriate device only once. Besides, it's also in line with the type annotation of `gather_pre_entities`.
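In spirit, the change amounts to something like the following (a simplified sketch, not the exact code of the pipeline):
```python
# Simplified sketch; variable names mirror the pipeline but this is not the exact code.
import tensorflow as tf

input_ids = tf.constant([[101, 7592, 2088, 102]])
offset_mapping = tf.constant([[[0, 0], [0, 5], [6, 11], [0, 0]]])

# Before: indexing the tf tensors element by element inside gather_pre_entities
# triggered a device-to-host transfer per access. After: convert whole tensors once.
input_ids_np = input_ids[0].numpy()
offset_mapping_np = offset_mapping[0].numpy()

for idx, token_id in enumerate(input_ids_np):
    start_ind, end_ind = offset_mapping_np[idx]
    # ... build the pre-entity dict for this token from plain numpy values ...
```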
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case: **n/a**
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation): **n/a**
- [ ] Did you write any new necessary tests? **n/a**
## Who can review?
Could you please review, @LysandreJik 😊
| 10-31-2022 10:10:27 | 10-31-2022 10:10:27 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @Rocketknight1 <|||||>This looks like a great improvement, thank you! I didn't realize how inefficient the postprocessing was there.
The PR is failing style checks, but I can fix that here, and will merge once that's done. Thank you!<|||||>Update: I believe the failing checks are caused by issues unrelated to this PR - you just happened to fork at a bad time. I'll merge and watch tests to make sure nothing goes too terribly wrong. Thanks again!<|||||>Great! Thanks for the quick review and merge 😊 |
transformers | 19,975 | closed | Give `modeling_t5.py` a `_prune_heads` | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue) 19960 (not perfectly)
I refer a colab script in issue, it can prune but with forward problems.
You can see here https://colab.research.google.com/drive/1b9mHjtn2UxuHU_Sb_RXts12rDzbebBX0#scrollTo=hUSe4a1oOp6D
I use `opendelta` to visualize the pruning process.
But we seems to be a forward problem
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-31-2022 08:50:08 | 10-31-2022 08:50:08 | Great work! Feel free to ping me for a review 👍<|||||>Hi @ArthurZucker , please pay more attention to the `position_bias`; I think I changed it too sharply, otherwise the shape will not match the `score`.<|||||>Hi @ArthurZucker , I uploaded new commits that seem to deal better with `position_bias`. And if we do not add `head_mask` and `decoder_head_mask` in the model `forward`, the code can run. But then we just ignore this line's problem:
https://github.com/huggingface/transformers/blob/c3a93d8d821bc1df3601ba858e7385eada8db3a5/src/transformers/models/t5/modeling_t5.py#L547
<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19975). All of your documentation changes will be reflected on that endpoint.<|||||>Okay! That's way better, the least code we change the better. I remember seeing a similar fix, let me link that and I think we will be able to merge if slow tests pass 👍<|||||>Great @ArthurZucker ! What other tests should be done? <|||||>gently pin @patrickvonplaten Thanks<|||||>Hey, the tests are simply the CircleCI tests! Try running `make fixup` and `make fix-copies`. The integrations tests have to pass to be able to merge the PR! <|||||>Hi @ArthurZucker I have try to pull new request after `make fixup` and `make fix-copies`, but it still can not pass CircleCI :(<|||||>Hey! You didn't have to open a new PR! I will have a look and help you fix the tests ☺️<|||||>OK then ! Thanks! It seems my flat can not work on the `make fixup`<|||||>Okay, it seems that #19097 also wanted to adress part of this issue. Since it has not really progressed, we can do everything here<|||||>Great! Anything that I can help?
<|||||>OKay, try running :
- `make style` to pass the `check_code_quality`
- `git pull upstream main` to merge the changes from the `huggingface/transformers/main` branch.
Then we will try to debug the tests that are failing.
You can also do this by running `RUN_SLOW=1 pytest tests/models/t5/test_modeling_t5.py`. <|||||>Hi @ArthurZucker ,I try to `make style`, it reports error ! And this commits seems unsuccessful
```
All done! ✨ 🍰 ✨
597 files reformatted, 1299 files left unchanged.
isort examples tests src utils
Skipped 1 files
/Library/Developer/CommandLineTools/usr/bin/make autogenerate_code
running deps_table_update
updating src/transformers/dependency_versions_table.py
/Library/Developer/CommandLineTools/usr/bin/make extra_style_checks
python utils/custom_init_isort.py
python utils/sort_auto_mappings.py
doc-builder style src/transformers docs/source --max_len 119 --path_to_docs docs/source
make[1]: doc-builder: No such file or directory
make[1]: *** [extra_style_checks] Error 1
make: *** [style] Error 2
```<|||||>Maybe it is my machine's problem of `make style`? @ArthurZucker Macbook pro with m1 maybe? I could not install `doc-builder`
<|||||>No, the error is from the missing `huggingface_doc` package ! Don't worry. Try installing it.
The files should still have been formatted <|||||>OK! I install it and rerun `make style` and `python utils/check_copies.py --fix_and_overwrite`
Hope it will work this time<|||||>Hi @ArthurZucker it seems that `make style` can run well on linux but cannot run well in macOS system. :) Maybe it is better to find a new method for Apple M1 :)<|||||>Hi @ArthurZucker , could you please give me a review? Many thanks!<|||||>Hi @ArthurZucker
<|||||>Hey, let's try to rebase to `be59316681fca13483da0ac2eac341f7df090e35`, since a loooot of files were modified by make style ( and this is not normal!). The issue most probably comes from your version of `black. `pip install hf-doc-builder` or upgrading it sould solve this!
I will review once that's clean! I can also help with make style if you are still unable to have the expected result! 🤗 <|||||>Hi @ArthurZucker , what is rebase? BTW? Could you please help me make style? Since my machine's version is too tricky<|||||>Let me re-fork the link it seems too dirty at this point |
transformers | 19,974 | closed | Potential bug in modeling_utils.py | ### System Info
Transformers main branch.
### Who can help?
@LysandreJik @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
There currently seems to be a potential bug in modeling_utils.py, revealed by the failure of 2 tests defined in `test_modeling_common.py`, namely [test_correct_missing_keys](https://github.com/huggingface/transformers/blob/243439a8271137aa290d7546e5704feeaa0cd1e5/tests/test_modeling_common.py#L1448) and [test_save_load_fast_init_from_base](https://github.com/huggingface/transformers/blob/243439a8271137aa290d7546e5704feeaa0cd1e5/tests/test_modeling_common.py#L313).
The issue occurs when a head model (like `xxxForSequenceClassification`) defines a parameter that has the same name as one in the base model (`xxxModel`). Let's say the base model defines a `self.layernorm` attribute/parameter, and the head model also defines a `self.layernorm`.
You can reproduce the error by cloning [this branch](https://github.com/NielsRogge/transformers/tree/add_ast_bug) of mine, then run the following tests:
```
pytest tests/models/audio_spectogram_transformer/test_modeling_audio_spectogram_transformer.py::AudioSpectogramTransformerModelTest
```
In that case, both tests fail with the following error:
```
(...)
# Some models may have keys that are not in the state by design, removing them before needlessly warning
# the user.
if cls._keys_to_ignore_on_load_missing is not None:
for pat in cls._keys_to_ignore_on_load_missing:
missing_keys = [k for k in missing_keys if re.search(pat, k) is None]
if cls._keys_to_ignore_on_load_unexpected is not None:
for pat in cls._keys_to_ignore_on_load_unexpected:
unexpected_keys = [k for k in unexpected_keys if re.search(pat, k) is None]
# retrieve weights on meta device and put them back on CPU.
# This is not ideal in terms of memory, but if we don't do that not, we can't initialize them in the next step
if low_cpu_mem_usage:
for key in missing_keys:
if key.startswith(prefix):
key = ".".join(key.split(".")[1:])
param = model_state_dict[key]
if param.device == torch.device("meta"):
if not load_in_8bit:
set_module_tensor_to_device(model, key, "cpu", torch.empty(*param.size(), dtype=dtype))
else:
set_module_8bit_tensor_to_device(model, key, "cpu", torch.empty(*param.size(), dtype=dtype))
# retrieve unintialized modules and initialize before maybe overriding that with the pretrained weights.
if _fast_init:
uninitialized_modules = model.retrieve_modules_from_names(
missing_keys, add_prefix=add_prefix_to_model, remove_prefix=remove_prefix_from_model
)
for module in uninitialized_modules:
model._init_weights(module)
# Make sure we are able to load base models as well as derived models (with heads)
start_prefix = ""
model_to_load = model
if len(cls.base_model_prefix) > 0 and not hasattr(model, cls.base_model_prefix) and has_prefix_module:
start_prefix = cls.base_model_prefix + "."
if len(cls.base_model_prefix) > 0 and hasattr(model, cls.base_model_prefix) and not has_prefix_module:
model_to_load = getattr(model, cls.base_model_prefix)
if any(key in expected_keys_not_prefixed for key in loaded_keys):
> raise ValueError(
"The state dictionary of the model you are trying to load is corrupted. Are you sure it was "
"properly saved?"
)
E ValueError: The state dictionary of the model you are trying to load is corrupted. Are you sure it was properly saved?
```
However, when simply renaming `self.layernorm` to `self.layer_norm` in the head model, both tests pass.
### Expected behavior
Normally, this should work without any error. I think the reason we haven't encountered this issue yet is simply because the case where a head model defines a parameter that has the same name as one defined in the base model is quite rare. However, this should still work as expected, as in this case for instance the layernorm of the base Transformer will have `audio_spectogram_transformer.layernorm` as name and the layernorm of the head model simply `layernorm`.
Unless I'm missing something here ;) happy to discuss. | 10-31-2022 08:43:32 | 10-31-2022 08:43:32 | Will have a look this morning. Sounds like a bug indeed!<|||||>@sgugger thanks for fixing, however I'm still encountering an issue that is probably related to this.
Specifically, the parameter whose name is the same between a base model and a head model (`self.layernorm` in my case) makes the [test_save_load_fast_init_from_base](https://github.com/huggingface/transformers/blob/243439a8271137aa290d7546e5704feeaa0cd1e5/tests/test_modeling_common.py#L313) test fail.
It can be reproduced as follows:
```
RUN_SLOW=yes pytest tests/models/audio_spectrogram_transformer/test_modeling_audio_spectrogram_transformer.py::AudioSpectrogramTransformerModelTest::test_save_load_fast_init_from_base
```
This might also be related to the fast init mechanism itself, which doesn't seem to support parameters which have the same name between the base and head model. Should I just skip the test?
<|||||>No, as this means parameters won't be properly initialized. I won't have any bandwidth to fix this in the near future so someone else will have to fix it.<|||||>I had a very quick look into it and I don't see any easy fix -> so for now I would advise to use a different name for the weights in the head (like `final_layernorm` maybe?)<|||||>Ok, I'll do that, thanks for looking into it |
transformers | 19,973 | closed | issue with --jit_mode_eval enabled in trainer commandline | ### System Info
- `transformers` version: 4.24.0.dev0
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.31
- Python version: 3.9.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
Trainer: @sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python3 examples/pytorch/text-classification/run_glue.py --model_name_or_path /skyrex01/wangyi/output/mrpc/ --task_name mrpc --do_eval --max_seq_length 128 --output_dir /skyrex01/wangyi/output/mrpc/inference1/ --overwrite_output_dir True --fp16 --jit_mode_eval
### Expected behavior
jit success and run the jit traced model. | 10-31-2022 03:14:00 | 10-31-2022 03:14:00 | current behavior
error log:
[INFO|trainer.py:557] 2022-10-30 20:10:11,309 >> Using cuda_amp half precision backend
10/30/2022 20:10:11 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:725] 2022-10-30 20:10:11,309 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence2, sentence1. If idx, sentence2, sentence1 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
Traceback (most recent call last):
File "/home/wangyi/project/hugface/transformers/examples/pytorch/text-classification/run_glue.py", line 622, in <module>
main()
File "/home/wangyi/project/hugface/transformers/examples/pytorch/text-classification/run_glue.py", line 560, in main
metrics = trainer.evaluate(eval_dataset=eval_dataset)
File "/home/wangyi/project/hugface/transformers/src/transformers/trainer.py", line 2792, in evaluate
output = eval_loop(
File "/home/wangyi/project/hugface/transformers/src/transformers/trainer.py", line 2913, in evaluation_loop
model = self._wrap_model(self.model, training=False, dataloader=dataloader)
File "/home/wangyi/project/hugface/transformers/src/transformers/trainer.py", line 1299, in _wrap_model
model = self.torch_jit_model_eval(model, dataloader, training)
File "/home/wangyi/project/hugface/transformers/src/transformers/trainer.py", line 1263, in torch_jit_model_eval
jit_model = torch.jit.trace(jit_model, jit_inputs, strict=False)
File "/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/jit/_trace.py", line 750, in trace
return trace_module(
File "/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/jit/_trace.py", line 967, in trace_module
module._c._create_method_from_trace(
File "/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1118, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py", line 1552, in forward
outputs = self.bert(
File "/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1118, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py", line 968, in forward
batch_size, seq_length = input_shape
ValueError: not enough values to unpack (expected 2, got 1)
<|||||>case2: if we run only predict. command like
python3 examples/pytorch/text-classification/run_glue.py --model_name_or_path /skyrex01/wangyi/output/mrpc/ --task_name mrpc --do_predict --max_seq_length 128 --output_dir /skyrex01/wangyi/output/mrpc/inference1/ --overwrite_output_dir True --fp16 --jit_mode_eval
error changing, since inputdata does not contain "labels" in this case
jit failure as "failed to use PyTorch jit mode due to: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)."
jit error:
[INFO|modeling_utils.py:2616] 2022-10-30 20:13:22,055 >> All the weights of BertForSequenceClassification were initialized from the model checkpoint at /skyrex01/wangyi/output/mrpc/.
If your task is similar to the task the model of the checkpoint was trained on, you can already use BertForSequenceClassification for predictions without further training.
10/30/2022 20:13:22 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /skyrex01/wangyi/.cache/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-1928edc8ebbd0881.arrow
10/30/2022 20:13:22 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /skyrex01/wangyi/.cache/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-3447598ff1e90f2d.arrow
Running tokenizer on dataset: 0%| | 0/2 [00:00<?, ?ba/s]
10/30/2022 20:13:22 - INFO - datasets.arrow_dataset - Caching processed dataset at /skyrex01/wangyi/.cache/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-64cad0d6db19155f.arrow
Running tokenizer on dataset: 50%|████████████████████████████████████████████████████████████████▌ | 1/2 [00:00<00:00, 7.20ba/s]
[INFO|trainer.py:557] 2022-10-30 20:13:25,460 >> Using cuda_amp half precision backend
10/30/2022 20:13:25 - INFO - __main__ - *** Predict ***
[INFO|trainer.py:725] 2022-10-30 20:13:25,462 >> The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence1, sentence2. If idx, sentence1, sentence2 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
[WARNING|trainer.py:1268] 2022-10-30 20:13:25,714 >> failed to use PyTorch jit mode due to: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select).
[INFO|trainer.py:2925] 2022-10-30 20:13:25,715 >> ***** Running Prediction *****
[INFO|trainer.py:2927] 2022-10-30 20:13:25,715 >> Num examples = 1725
[INFO|trainer.py:2930] 2022-10-30 20:13:25,715 >> Batch size = 8
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 216/216 [00:02<00:00, 87.92it/s]
10/30/2022 20:13:28 - INFO - __main__ - ***** Predict results mrpc *****
[INFO|modelcard.py:444] 2022-10-30 20:13:28,370 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Text Classification', 'type': 'text-classification'}, 'dataset': {'name': 'GLUE MRPC', 'type': 'glue', 'args': 'mrpc'}}
<|||||>case3: if we run only predict on cpu. command like
python3 examples/pytorch/text-classification/run_glue.py --model_name_or_path /skyrex01/wangyi/output/mrpc/ --task_name mrpc --do_predict --max_seq_length 128 --output_dir /skyrex01/wangyi/output/mrpc/inference1/ --overwrite_output_dir True --bf16 --jit_mode_eval --no_cuda
error pop like
ERROR: Tensor-valued Constant nodes differed in value across invocations. This often indicates that the tracer has encountered untraceable code.
Node:
%13 : Tensor = prim::Constant[value={8}](), scope: __module.bert/__module.bert.encoder/__module.bert.encoder.layer.0/__module.bert.encoder.layer.0.attention/__module.bert.encoder.layer.0.attention.self # /home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py:341:0
Source Location:
/home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py(341): forward
/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1118): _slow_forward
/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1130): _call_impl
/home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py(419): forward
/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1118): _slow_forward
/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1130): _call_impl
/home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py(489): forward
/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1118): _slow_forward
/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1130): _call_impl
/home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py(603): forward
/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1118): _slow_forward
/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1130): _call_impl
/home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py(1014): forward
/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1118): _slow_forward
/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1130): _call_impl
/home/wangyi/project/hugface/transformers/src/transformers/models/bert/modeling_bert.py(1552): forward
/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1118): _slow_forward
/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/nn/modules/module.py(1130): _call_impl
/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/jit/_trace.py(967): trace_module
/skyrex05/wangyi/miniconda3/envs/compatibility_test/lib/python3.9/site-packages/torch/jit/_trace.py(750): trace
/home/wangyi/project/hugface/transformers/src/transformers/trainer.py(1263): torch_jit_model_eval
/home/wangyi/project/hugface/transformers/src/transformers/trainer.py(1299): _wrap_model
/home/wangyi/project/hugface/transformers/src/transformers/trainer.py(2913): evaluation_loop
/home/wangyi/project/hugface/transformers/src/transformers/trainer.py(2866): predict
/home/wangyi/project/hugface/transformers/examples/pytorch/text-classification/run_glue.py(588): main
/home/wangyi/project/hugface/transformers/examples/pytorch/text-classification/run_glue.py(622): <module>
Comparison exception: The values for attribute 'shape' do not match: torch.Size([]) != torch.Size([768, 768]).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,972 | closed | DebertaV2 Modeling for SQUAD v2.0 | ### System Info
- `transformers` version: 4.24.0.dev0
- Platform: Linux-5.4.0-1087-gcp-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@LysandreJik
@sgugger
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python3 transformer-sparsity/examples/pytorch/question-answering/run_seq2seq_qa.py \
--model_name_or_path microsoft/deberta-v3-large \
--dataset_name squad_v2 \
--context_column context \
--question_column question \
--answer_column answers \
--do_train \
--do_eval \
--max_seq_length 512 \
--doc_stride 128 \
--warmup_ratio 0.2 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 8 \
--learning_rate 7e-6 \
--num_train_epochs 3 \
--version_2_with_negative \
--label_names "start_positions", "end_positions" \
--predict_with_generate \
--load_best_model_at_end \
--eval_steps ${eval_steps} \
--save_steps ${eval_steps} \
--evaluation_strategy steps \
--logging_steps ${eval_steps} \
--logging_strategy steps \
--save_total_limit 5 \
--metric_for_best_model "f1" \
--greater_is_better true \
--overwrite_output_dir \
--output_dir ${ckpt_path} 2>&1 | tee ~/${ckpt_path}/finetune_run_$(date +"%Y_%m_%d_%I_%M_%p").log
### Expected behavior
I have used a similar script for SQUADv2 on other models (RoBERTa), but it seems this model is not registered properly in HF, hence the following error:
```
Traceback (most recent call last):
File "transformer-sparsity/examples/pytorch/question-answering/run_seq2seq_qa.py", line 716, in <module>
main()
File "transformer-sparsity/examples/pytorch/question-answering/run_seq2seq_qa.py", line 380, in main
use_auth_token=True if model_args.use_auth_token else None,
File "/home/ayazdan/.local/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py", line 467, in from_pretrained
f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
ValueError: Unrecognized configuration class <class 'transformers.models.deberta_v2.configuration_deberta_v2.DebertaV2Config'> for this kind of AutoModel: AutoModelForSeq2SeqLM.
Model type should be one of BartConfig, BigBirdPegasusConfig, BlenderbotConfig, BlenderbotSmallConfig, EncoderDecoderConfig, FSMTConfig, LEDConfig, LongT5Config, M2M100Config, MarianConfig, MBartConfig, MT5Config, MvpConfig, PegasusConfig, PegasusXConfig, PLBartConfig, ProphetNetConfig, T5Config, XLMProphetNetConfig.
```
I made a quick fix to register the model, however another issue still exists, regarding the model itself.
```
Traceback (most recent call last):
File "transformer-sparsity/examples/pytorch/question-answering/run_seq2seq_qa.py", line 716, in <module>
main()
File "transformer-sparsity/examples/pytorch/question-answering/run_seq2seq_qa.py", line 383, in main
model.resize_token_embeddings(len(tokenizer))
File "/home/ayazdan/.local/lib/python3.7/site-packages/transformers/configuration_utils.py", line 254, in __getattribute__
return super().__getattribute__(key)
AttributeError: 'DebertaV2Config' object has no attribute 'resize_token_embeddings'
```
| 10-30-2022 23:49:49 | 10-30-2022 23:49:49 | Deberta is not a Seq2Seq model, you can't make a quick fix to enable its use with `run_seq2seq_qa`, as you have experienced it. Deberta has a model qith a QA head, so you will be able to use it with the regular `run_qa` script.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,971 | closed | Add SpA-Former | ### Model description
I would like to add [SpA-Former](https://arxiv.org/abs/2206.10910) model to the Transformers.
It is an end-to-end transformer to recover a shadow-free image from a single shaded image. Unlike traditional methods that require two steps for shadow detection and then shadow removal, the SpA-Former unifies these steps into one, which is a one-stage network capable of directly learning the mapping function between shadows and no shadows, it does not require a separate shadow detection. Thus, SpA-former is adaptable to real image de-shadowing for shadows projected on different semantic regions. SpA-Former consists of transformer layer and a series of joint Fourier transform residual blocks and two-wheel joint spatial attention. The network in this paper is able to handle the task while achieving a very fast processing efficiency.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
[Link](https://github.com/zhangbaijin/SpA-Former-shadow-removal) to Model Repo
[Link](https://arxiv.org/abs/2206.10910) to Paper | 10-30-2022 17:12:46 | 10-30-2022 17:12:46 | @NielsRogge Would it be a valuable contribution to HuggingFace?<|||||>Sure this would be valuable! Let me know if you need any help |
transformers | 19,970 | closed | How to annotate these type of data for custom OCR training | @NielsRogge @gante can you please explain how to annotate the below files for custom handwritten mathematical equation training. More importantly s^2





_Originally posted by @mohit-217 in https://github.com/huggingface/transformers/issues/16007#issuecomment-1296276393_
| 10-30-2022 14:45:53 | 10-30-2022 14:45:53 | @NielsRogge @gante Please review
How I need to annotate for these type of data.
1.
- 3 ( 7s+ 8 )
- 5 (6s + 7)
- s^2
- s^2 + 3s + 1
2.
- 3 ( 7 s + 8 )
- 5 ( 6 s + 7 )
- s 2
- s 2 + 3 s + 1
which one is correct ?<|||||>Hi,
Could you please ask this question on our [forum](https://discuss.huggingface.co/), rather than here?
Github issues are meant for bugs or feature requests.
Thanks! |
transformers | 19,969 | closed | Removed dependency from Distilbert tokenizer | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19303. Removes bert dependency from distilbert tokenizer
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-30-2022 11:43:18 | 10-30-2022 11:43:18 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,968 | closed | [Doctest] Add configuration_deberta.py | # What does this PR do?
Adds configuration_deberta.py to utils/documentation_tests.txt
Based on https://github.com/huggingface/transformers/issues/19487
@ydshieh can you please have a look? thanks :D
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-30-2022 10:18:51 | 10-30-2022 10:18:51 | _The documentation is not available anymore as the PR was closed or merged._ |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.