Speech Translation via Pipelines
The automatic speech recognition pipeline can also be used to translate speech in just a couple lines of code:
```python
from datasets import load_dataset
from transformers import pipeline
librispeech_en = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
asr = pipeline(
"automatic-speech-recognition",
model="facebook/s2t-wav2vec2-large-en-de",
feature_extractor="facebook/s2t-wav2vec2-large-en-de",
)
translation_de = asr(librispeech_en[0]["file"]) |
See the model hub to look for Speech2Text2 checkpoints.
Resources
Causal language modeling task guide
Speech2Text2Config
[[autodoc]] Speech2Text2Config
Speech2Text2Tokenizer
[[autodoc]] Speech2Text2Tokenizer
- batch_decode
- decode
- save_vocabulary
Speech2Text2Processor
[[autodoc]] Speech2Text2Processor
- call
- from_pretrained
- save_pretrained
- batch_decode
- decode
Speech2Text2ForCausalLM
[[autodoc]] Speech2Text2ForCausalLM
- forward
LayoutXLM
Overview
LayoutXLM was proposed in LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha
Zhang, Furu Wei. It's a multilingual extension of the LayoutLMv2 model trained
on 53 languages.
The abstract from the paper is the following:
Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually-rich document
understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. In
this paper, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to
bridge the language barriers for visually-rich document understanding. To accurately evaluate LayoutXLM, we also
introduce a multilingual form understanding benchmark dataset named XFUN, which includes form understanding samples in
7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese), and key-value pairs are manually labeled
for each language. Experiment results show that the LayoutXLM model has significantly outperformed the existing SOTA
cross-lingual pre-trained models on the XFUN dataset.
This model was contributed by nielsr. The original code can be found here.
Usage tips and examples
One can directly plug in the weights of LayoutXLM into a LayoutLMv2 model, like so:
```python
from transformers import LayoutLMv2Model

model = LayoutLMv2Model.from_pretrained("microsoft/layoutxlm-base")
```
Note that LayoutXLM has its own tokenizer, based on
[LayoutXLMTokenizer]/[LayoutXLMTokenizerFast]. You can initialize it as
follows:
```python
from transformers import LayoutXLMTokenizer

tokenizer = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base")
```
Similar to LayoutLMv2, you can use [LayoutXLMProcessor] (which internally applies [LayoutLMv2ImageProcessor] and [LayoutXLMTokenizer]/[LayoutXLMTokenizerFast] in sequence) to prepare all data for the model.
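A short sketch of that preparation step is shown below; the document.png path is a placeholder for your own scanned document, and pytesseract needs to be installed because the image processor applies OCR by default.
```python
from PIL import Image
from transformers import LayoutXLMProcessor

processor = LayoutXLMProcessor.from_pretrained("microsoft/layoutxlm-base")

# The image processor runs OCR (pytesseract by default) to obtain words and bounding boxes,
# and the tokenizer then turns them into model-ready tensors.
image = Image.open("document.png").convert("RGB")
encoding = processor(image, return_tensors="pt")
print(encoding.keys())  # e.g. input_ids, attention_mask, bbox and the image tensor
```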
As LayoutXLM's architecture is equivalent to that of LayoutLMv2, one can refer to LayoutLMv2's documentation page for all tips, code examples and notebooks.
LayoutXLMTokenizer
[[autodoc]] LayoutXLMTokenizer
- call
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
LayoutXLMTokenizerFast
[[autodoc]] LayoutXLMTokenizerFast
- call
LayoutXLMProcessor
[[autodoc]] LayoutXLMProcessor
- call
MegatronBERT
Overview
The MegatronBERT model was proposed in Megatron-LM: Training Multi-Billion Parameter Language Models Using Model
Parallelism by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley,
Jared Casper and Bryan Catanzaro.
The abstract from the paper is the following:
Recent work in language modeling demonstrates that training large transformer models advances the state of the art in
Natural Language Processing applications. However, very large models can be quite difficult to train due to memory
constraints. In this work, we present our techniques for training very large transformer models and implement a simple,
efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our
approach does not require a new compiler or library changes, is orthogonal and complimentary to pipeline model
parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We
illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain
15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline
that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance
the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9
billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in
BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we
achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA
accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy
of 89.4%).
This model was contributed by jdemouth. The original code can be found here.
That repository contains a multi-GPU and multi-node implementation of the Megatron Language models. In particular,
it contains a hybrid model parallel approach using "tensor parallel" and "pipeline parallel" techniques.
Usage tips
We have provided pretrained BERT-345M checkpoints for use in evaluating or fine-tuning downstream tasks.
To access these checkpoints, first sign up for and set up the NVIDIA GPU Cloud (NGC) Registry CLI. Further documentation for downloading models can be found in the NGC documentation.
Alternatively, you can directly download the checkpoints using:
BERT-345M-uncased:
```bash
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip -O megatron_bert_345m_v0_1_uncased.zip
```
BERT-345M-cased:
```bash
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_cased/zip -O megatron_bert_345m_v0_1_cased.zip
```
Once you have obtained the checkpoints from NVIDIA GPU Cloud (NGC), you have to convert them to a format that will
easily be loaded by Hugging Face Transformers and our port of the BERT code.
The following commands allow you to do the conversion. We assume that the folder models/megatron_bert contains
megatron_bert_345m_v0_1_{cased, uncased}.zip and that the commands are run from inside that folder:
```bash
python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_uncased.zip
python3 $PATH_TO_TRANSFORMERS/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron_bert_345m_v0_1_cased.zip
```
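Once converted, the output folder can be loaded like any other checkpoint. Below is a minimal sketch; the local path is a placeholder for wherever the conversion script wrote its output, and the tokenizer choice assumes the uncased 345M checkpoint uses the standard uncased BERT vocabulary.
```python
from transformers import BertTokenizer, MegatronBertForMaskedLM

# Placeholder path: point this at the folder produced by the conversion script.
checkpoint_dir = "models/megatron_bert/megatron_bert_345m_v0_1_uncased"

# Assumption: the uncased 345M checkpoint uses the standard uncased BERT WordPiece vocabulary.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = MegatronBertForMaskedLM.from_pretrained(checkpoint_dir)

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
outputs = model(**inputs)
```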
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
MegatronBertConfig
[[autodoc]] MegatronBertConfig
MegatronBertModel
[[autodoc]] MegatronBertModel
- forward
MegatronBertForMaskedLM
[[autodoc]] MegatronBertForMaskedLM
- forward
MegatronBertForCausalLM
[[autodoc]] MegatronBertForCausalLM
- forward
MegatronBertForNextSentencePrediction
[[autodoc]] MegatronBertForNextSentencePrediction
- forward
MegatronBertForPreTraining
[[autodoc]] MegatronBertForPreTraining
- forward
MegatronBertForSequenceClassification
[[autodoc]] MegatronBertForSequenceClassification
- forward
MegatronBertForMultipleChoice
[[autodoc]] MegatronBertForMultipleChoice
- forward
MegatronBertForTokenClassification
[[autodoc]] MegatronBertForTokenClassification
- forward
MegatronBertForQuestionAnswering
[[autodoc]] MegatronBertForQuestionAnswering
- forward
XLM-ProphetNet
DISCLAIMER: If you see something strange, file a Github Issue and assign
@patrickvonplaten
Overview
The XLM-ProphetNet model was proposed in ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training, by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei
Zhang, Ming Zhou on 13 Jan, 2020.
XLM-ProphetNet is an encoder-decoder model that can predict n future tokens for "ngram" language modeling instead of
just the next token. Its architecture is identical to ProphetNet, but the model was trained on the multilingual
"wiki100" Wikipedia dump. XLM-ProphetNet's model architecture and pretraining objective are the same as ProphetNet's, but XLM-ProphetNet was pre-trained on the cross-lingual dataset XGLUE.
The abstract from the paper is the following:
In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel
self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of
the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by
n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time
step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent
overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale
dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for
abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new
state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus.
The Authors' code can be found here.
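For illustration, here is a minimal generation sketch with the pre-trained microsoft/xprophetnet-large-wiki100-cased checkpoint; for a concrete downstream task you would typically start from a checkpoint fine-tuned on that task.
```python
from transformers import XLMProphetNetTokenizer, XLMProphetNetForConditionalGeneration

tokenizer = XLMProphetNetTokenizer.from_pretrained("microsoft/xprophetnet-large-wiki100-cased")
model = XLMProphetNetForConditionalGeneration.from_pretrained("microsoft/xprophetnet-large-wiki100-cased")

inputs = tokenizer(
    "Microsoft Corporation intends to officially end free support for the Windows 7 operating system.",
    return_tensors="pt",
)

# Encoder-decoder generation: the decoder is trained with future n-gram prediction,
# but at inference time it generates tokens autoregressively as usual.
generated = model.generate(**inputs, num_beams=4, max_length=50, early_stopping=True)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```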
Resources
Causal language modeling task guide
Translation task guide
Summarization task guide
XLMProphetNetConfig
[[autodoc]] XLMProphetNetConfig
XLMProphetNetTokenizer
[[autodoc]] XLMProphetNetTokenizer
XLMProphetNetModel
[[autodoc]] XLMProphetNetModel
XLMProphetNetEncoder
[[autodoc]] XLMProphetNetEncoder
XLMProphetNetDecoder
[[autodoc]] XLMProphetNetDecoder
XLMProphetNetForConditionalGeneration
[[autodoc]] XLMProphetNetForConditionalGeneration
XLMProphetNetForCausalLM
[[autodoc]] XLMProphetNetForCausalLM
Open-Llama
This model is in maintenance mode only, we don't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.31.0.
You can do so by running the following command: pip install -U transformers==4.31.0.
This model differs from the OpenLLaMA models on the Hugging Face Hub, which primarily use the LLaMA architecture.
Overview
The Open-Llama model was proposed in the open source Open-Llama project by community developer s-JoL.
The model is mainly based on LLaMA with some modifications, incorporating memory-efficient attention from Xformers, stable embedding from Bloom, and shared input-output embedding from PaLM.
The model is pre-trained on both Chinese and English, which gives it better performance on Chinese language tasks.
This model was contributed by s-JoL.
The original code was released on GitHub by s-JoL, but is now removed.
OpenLlamaConfig
[[autodoc]] OpenLlamaConfig
OpenLlamaModel
[[autodoc]] OpenLlamaModel
- forward
OpenLlamaForCausalLM
[[autodoc]] OpenLlamaForCausalLM
- forward
OpenLlamaForSequenceClassification
[[autodoc]] OpenLlamaForSequenceClassification
- forward
Phi
Overview
The Phi-1 model was proposed in Textbooks Are All You Need by Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee and Yuanzhi Li.
The Phi-1.5 model was proposed in Textbooks Are All You Need II: phi-1.5 technical report by Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar and Yin Tat Lee.
Summary
In the Phi-1 and Phi-1.5 papers, the authors showed how important the quality of the training data is relative to the model size.
They selected high-quality "textbook" data alongside synthetically generated data to train their small Transformer-based
model Phi-1 with 1.3B parameters. Despite this small scale, phi-1 attains a pass@1 accuracy of 50.6% on HumanEval and 55.5% on MBPP.
They followed the same strategy for Phi-1.5 and created another 1.3B parameter model with performance on natural language tasks comparable
to models 5x larger, surpassing most non-frontier LLMs. Phi-1.5 exhibits many of the traits of much larger LLMs, such as the ability
to “think step by step” or perform some rudimentary in-context learning.
With these two experiments, the authors showed the large impact of training data quality when training machine learning models.
The abstract from the Phi-1 paper is the following:
We introduce phi-1, a new large language model for code, with significantly smaller size than
competing models: phi-1 is a Transformer-based model with 1.3B parameters, trained for 4 days on
8 A100s, using a selection of “textbook quality” data from the web (6B tokens) and synthetically
generated textbooks and exercises with GPT-3.5 (1B tokens). Despite this small scale, phi-1 attains
pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. It also displays surprising emergent
properties compared to phi-1-base, our model before our finetuning stage on a dataset of coding
exercises, and phi-1-small, a smaller model with 350M parameters trained with the same pipeline as
phi-1 that still achieves 45% on HumanEval.
The abstract from the Phi-1.5 paper is the following:
We continue the investigation into the power of smaller Transformer-based language models as
initiated by TinyStories – a 10 million parameter model that can produce coherent English – and
the follow-up work on phi-1, a 1.3 billion parameter model with Python coding performance close
to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to
generate “textbook quality” data as a way to enhance the learning process compared to traditional
web data. We follow the “Textbooks Are All You Need” approach, focusing this time on common
sense reasoning in natural language, and create a new 1.3 billion parameter model named phi-1.5,
with performance on natural language tasks comparable to models 5x larger, and surpassing most
non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic
coding. More generally, phi-1.5 exhibits many of the traits of much larger LLMs, both good –such
as the ability to “think step by step” or perform some rudimentary in-context learning– and bad,
including hallucinations and the potential for toxic and biased generations –encouragingly though, we
are seeing improvement on that front thanks to the absence of web data. We open-source phi-1.5 to
promote further research on these urgent topics.
This model was contributed by Susnato Dhar.
The original code for Phi-1, Phi-1.5 and Phi-2 can be found here, here and here, respectively.
Usage tips
The model is quite similar to Llama; the main difference lies in [PhiDecoderLayer], where the [PhiAttention] and [PhiMLP] layers are used in a parallel configuration.
The tokenizer used for this model is identical to the [CodeGenTokenizer].
How to use Phi-2
Phi-2 has been integrated in the development version (4.37.0.dev) of transformers. Until the official version is released through pip, ensure that you are doing one of the following:
When loading the model, ensure that trust_remote_code=True is passed as an argument of the from_pretrained() function.
Update your local transformers to the development version: pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers. The previous command is an alternative to cloning and installing from the source.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

inputs = tokenizer('Can you help me write a formal email to a potential business partner proposing a joint venture?', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=30)
text = tokenizer.batch_decode(outputs)[0]
print(text)
# 'Can you help me write a formal email to a potential business partner proposing a joint venture?\nInput: Company A: ABC Inc.\nCompany B: XYZ Ltd.\nJoint Venture: A new online platform for e-commerce'
```
Example:
```python
from transformers import PhiForCausalLM, AutoTokenizer

# Define the model and tokenizer.
model = PhiForCausalLM.from_pretrained("microsoft/phi-1_5")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

# Feel free to change the prompt to your liking.
prompt = "If I were an AI that had just achieved"

# Apply the tokenizer.
tokens = tokenizer(prompt, return_tensors="pt")

# Use the model to generate new tokens.
generated_output = model.generate(**tokens, use_cache=True, max_new_tokens=10)

tokenizer.batch_decode(generated_output)[0]
# 'If I were an AI that had just achieved a breakthrough in machine learning, I would be thrilled'
```
Combining Phi and Flash Attention 2
First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature.
```bash
pip install -U flash-attn --no-build-isolation
```
Also make sure that your hardware is compatible with Flash Attention 2; read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. torch.float16).
To load and run a model using Flash Attention 2, refer to the snippet below:
```python
import torch
from transformers import PhiForCausalLM, AutoTokenizer

# Define the model and tokenizer and push the model and tokens to the GPU.
model = PhiForCausalLM.from_pretrained("microsoft/phi-1_5", torch_dtype=torch.float16, attn_implementation="flash_attention_2").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

# Feel free to change the prompt to your liking.
prompt = "If I were an AI that had just achieved"

# Apply the tokenizer.
tokens = tokenizer(prompt, return_tensors="pt").to("cuda")

# Use the model to generate new tokens.
generated_output = model.generate(**tokens, use_cache=True, max_new_tokens=10)

tokenizer.batch_decode(generated_output)[0]
# 'If I were an AI that had just achieved a breakthrough in machine learning, I would be thrilled'
```
Expected speedups
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using microsoft/phi-1 checkpoint and the Flash Attention 2 version of the model using a sequence length of 2048.
PhiConfig
[[autodoc]] PhiConfig
PhiModel
[[autodoc]] PhiModel
- forward
PhiForCausalLM
[[autodoc]] PhiForCausalLM
- forward
- generate
PhiForSequenceClassification
[[autodoc]] PhiForSequenceClassification
- forward
PhiForTokenClassification
[[autodoc]] PhiForTokenClassification
- forward
Auto Classes
In many cases, the architecture you want to use can be guessed from the name or the path of the pretrained model you
are supplying to the from_pretrained() method. AutoClasses are here to do this job for you so that you
automatically retrieve the relevant model given the name/path to the pretrained weights/config/vocabulary.
Instantiating one of [AutoConfig], [AutoModel], and
[AutoTokenizer] will directly create a class of the relevant architecture. For instance
```python
model = AutoModel.from_pretrained("google-bert/bert-base-cased")
```
will create a model that is an instance of [BertModel].
There is one class of AutoModel for each task, and for each backend (PyTorch, TensorFlow, or Flax).
Extending the Auto Classes
Each of the auto classes has a method to be extended with your custom classes. For instance, if you have defined a
custom class of model NewModel, make sure you have a NewModelConfig then you can add those to the auto
classes like this:
```python
from transformers import AutoConfig, AutoModel

AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)
```
You will then be able to use the auto classes like you would usually do!
If your NewModelConfig is a subclass of [~transformers.PretrainedConfig], make sure its
model_type attribute is set to the same key you use when registering the config (here "new-model").
Likewise, if your NewModel is a subclass of [PreTrainedModel], make sure its
config_class attribute is set to the same class you use when registering the model (here
NewModelConfig).
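To make those two requirements concrete, here is a minimal, self-contained sketch; NewModel and NewModelConfig are the placeholder names from the example above, not real Transformers classes.
```python
import torch.nn as nn
from transformers import AutoConfig, AutoModel, PretrainedConfig, PreTrainedModel


class NewModelConfig(PretrainedConfig):
    # Must match the key used in AutoConfig.register below.
    model_type = "new-model"

    def __init__(self, hidden_size=64, **kwargs):
        super().__init__(**kwargs)
        self.hidden_size = hidden_size


class NewModel(PreTrainedModel):
    # Must match the config class used in AutoModel.register below.
    config_class = NewModelConfig

    def __init__(self, config):
        super().__init__(config)
        self.layer = nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, hidden_states):
        return self.layer(hidden_states)


AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)

# The auto classes can now instantiate the custom model from its config.
config = AutoConfig.for_model("new-model")
model = AutoModel.from_config(config)
```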
AutoConfig
[[autodoc]] AutoConfig
AutoTokenizer
[[autodoc]] AutoTokenizer
AutoFeatureExtractor
[[autodoc]] AutoFeatureExtractor
AutoImageProcessor
[[autodoc]] AutoImageProcessor
AutoProcessor
[[autodoc]] AutoProcessor
Generic model classes
The following auto classes are available for instantiating a base model class without a specific head.
AutoModel
[[autodoc]] AutoModel
TFAutoModel
[[autodoc]] TFAutoModel
FlaxAutoModel
[[autodoc]] FlaxAutoModel
Generic pretraining classes
The following auto classes are available for instantiating a model with a pretraining head.
AutoModelForPreTraining
[[autodoc]] AutoModelForPreTraining
TFAutoModelForPreTraining
[[autodoc]] TFAutoModelForPreTraining
FlaxAutoModelForPreTraining
[[autodoc]] FlaxAutoModelForPreTraining
Natural Language Processing
The following auto classes are available for the following natural language processing tasks.
AutoModelForCausalLM
[[autodoc]] AutoModelForCausalLM
TFAutoModelForCausalLM
[[autodoc]] TFAutoModelForCausalLM
FlaxAutoModelForCausalLM
[[autodoc]] FlaxAutoModelForCausalLM
AutoModelForMaskedLM
[[autodoc]] AutoModelForMaskedLM
TFAutoModelForMaskedLM
[[autodoc]] TFAutoModelForMaskedLM
FlaxAutoModelForMaskedLM
[[autodoc]] FlaxAutoModelForMaskedLM
AutoModelForMaskGeneration
[[autodoc]] AutoModelForMaskGeneration
TFAutoModelForMaskGeneration
[[autodoc]] TFAutoModelForMaskGeneration
AutoModelForSeq2SeqLM
[[autodoc]] AutoModelForSeq2SeqLM
TFAutoModelForSeq2SeqLM
[[autodoc]] TFAutoModelForSeq2SeqLM
FlaxAutoModelForSeq2SeqLM
[[autodoc]] FlaxAutoModelForSeq2SeqLM
AutoModelForSequenceClassification
[[autodoc]] AutoModelForSequenceClassification
TFAutoModelForSequenceClassification
[[autodoc]] TFAutoModelForSequenceClassification
FlaxAutoModelForSequenceClassification
[[autodoc]] FlaxAutoModelForSequenceClassification
AutoModelForMultipleChoice
[[autodoc]] AutoModelForMultipleChoice
TFAutoModelForMultipleChoice
[[autodoc]] TFAutoModelForMultipleChoice
FlaxAutoModelForMultipleChoice
[[autodoc]] FlaxAutoModelForMultipleChoice
AutoModelForNextSentencePrediction
[[autodoc]] AutoModelForNextSentencePrediction
TFAutoModelForNextSentencePrediction
[[autodoc]] TFAutoModelForNextSentencePrediction
FlaxAutoModelForNextSentencePrediction
[[autodoc]] FlaxAutoModelForNextSentencePrediction
AutoModelForTokenClassification
[[autodoc]] AutoModelForTokenClassification
TFAutoModelForTokenClassification
[[autodoc]] TFAutoModelForTokenClassification
FlaxAutoModelForTokenClassification
[[autodoc]] FlaxAutoModelForTokenClassification
AutoModelForQuestionAnswering
[[autodoc]] AutoModelForQuestionAnswering
TFAutoModelForQuestionAnswering
[[autodoc]] TFAutoModelForQuestionAnswering
FlaxAutoModelForQuestionAnswering
[[autodoc]] FlaxAutoModelForQuestionAnswering
AutoModelForTextEncoding
[[autodoc]] AutoModelForTextEncoding
TFAutoModelForTextEncoding
[[autodoc]] TFAutoModelForTextEncoding
Computer vision
The following auto classes are available for the following computer vision tasks.
AutoModelForDepthEstimation
[[autodoc]] AutoModelForDepthEstimation
AutoModelForImageClassification
[[autodoc]] AutoModelForImageClassification
TFAutoModelForImageClassification
[[autodoc]] TFAutoModelForImageClassification
FlaxAutoModelForImageClassification
[[autodoc]] FlaxAutoModelForImageClassification
AutoModelForVideoClassification
[[autodoc]] AutoModelForVideoClassification
AutoModelForMaskedImageModeling
[[autodoc]] AutoModelForMaskedImageModeling
TFAutoModelForMaskedImageModeling
[[autodoc]] TFAutoModelForMaskedImageModeling
AutoModelForObjectDetection
[[autodoc]] AutoModelForObjectDetection
AutoModelForImageSegmentation
[[autodoc]] AutoModelForImageSegmentation
AutoModelForImageToImage
[[autodoc]] AutoModelForImageToImage
AutoModelForSemanticSegmentation
[[autodoc]] AutoModelForSemanticSegmentation
TFAutoModelForSemanticSegmentation
[[autodoc]] TFAutoModelForSemanticSegmentation
AutoModelForInstanceSegmentation
[[autodoc]] AutoModelForInstanceSegmentation
AutoModelForUniversalSegmentation
[[autodoc]] AutoModelForUniversalSegmentation
AutoModelForZeroShotImageClassification
[[autodoc]] AutoModelForZeroShotImageClassification
TFAutoModelForZeroShotImageClassification
[[autodoc]] TFAutoModelForZeroShotImageClassification
AutoModelForZeroShotObjectDetection
[[autodoc]] AutoModelForZeroShotObjectDetection
Audio
The following auto classes are available for the following audio tasks.
AutoModelForAudioClassification
[[autodoc]] AutoModelForAudioClassification
AutoModelForAudioFrameClassification
[[autodoc]] AutoModelForAudioFrameClassification
TFAutoModelForAudioClassification
[[autodoc]] TFAutoModelForAudioClassification
TFAutoModelForAudioFrameClassification
[[autodoc]] TFAutoModelForAudioFrameClassification
AutoModelForCTC
[[autodoc]] AutoModelForCTC
AutoModelForSpeechSeq2Seq
[[autodoc]] AutoModelForSpeechSeq2Seq
TFAutoModelForSpeechSeq2Seq
[[autodoc]] TFAutoModelForSpeechSeq2Seq
FlaxAutoModelForSpeechSeq2Seq
[[autodoc]] FlaxAutoModelForSpeechSeq2Seq
AutoModelForAudioXVector
[[autodoc]] AutoModelForAudioXVector
AutoModelForTextToSpectrogram
[[autodoc]] AutoModelForTextToSpectrogram
AutoModelForTextToWaveform
[[autodoc]] AutoModelForTextToWaveform
Multimodal
The following auto classes are available for the following multimodal tasks.
AutoModelForTableQuestionAnswering
[[autodoc]] AutoModelForTableQuestionAnswering
TFAutoModelForTableQuestionAnswering
[[autodoc]] TFAutoModelForTableQuestionAnswering
AutoModelForDocumentQuestionAnswering
[[autodoc]] AutoModelForDocumentQuestionAnswering
TFAutoModelForDocumentQuestionAnswering
[[autodoc]] TFAutoModelForDocumentQuestionAnswering
AutoModelForVisualQuestionAnswering
[[autodoc]] AutoModelForVisualQuestionAnswering
AutoModelForVision2Seq
[[autodoc]] AutoModelForVision2Seq
TFAutoModelForVision2Seq
[[autodoc]] TFAutoModelForVision2Seq
FlaxAutoModelForVision2Seq
[[autodoc]] FlaxAutoModelForVision2Seq
InstructBLIP
Overview
The InstructBLIP model was proposed in InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning by Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi.
InstructBLIP leverages the BLIP-2 architecture for visual instruction tuning.
The abstract from the paper is the following:
General-purpose language models that can solve various language-domain tasks have emerged driven by the pre-training and instruction-tuning pipeline. However, building general-purpose vision-language models is challenging due to the increased task discrepancy introduced by the additional visual input. Although vision-language pre-training has been widely studied, vision-language instruction tuning remains relatively less explored. In this paper, we conduct a systematic and comprehensive study on vision-language instruction tuning based on the pre-trained BLIP-2 models. We gather a wide variety of 26 publicly available datasets, transform them into instruction tuning format and categorize them into two clusters for held-in instruction tuning and held-out zero-shot evaluation. Additionally, we introduce instruction-aware visual feature extraction, a crucial method that enables the model to extract informative features tailored to the given instruction. The resulting InstructBLIP models achieve state-of-the-art zero-shot performance across all 13 held-out datasets, substantially outperforming BLIP-2 and the larger Flamingo. Our models also lead to state-of-the-art performance when finetuned on individual downstream tasks (e.g., 90.7% accuracy on ScienceQA IMG). Furthermore, we qualitatively demonstrate the advantages of InstructBLIP over concurrent multimodal models. |
InstructBLIP architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Usage tips
InstructBLIP uses the same architecture as BLIP-2 with a tiny but important difference: it also feeds the text prompt (instruction) to the Q-Former.
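To illustrate that tip, here is a minimal generation sketch with the public Salesforce/instructblip-vicuna-7b checkpoint; the image URL and prompt are arbitrary, and a GPU with enough memory is assumed.
```python
import requests
import torch
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
model = InstructBlipForConditionalGeneration.from_pretrained(
    "Salesforce/instructblip-vicuna-7b", torch_dtype=torch.float16
).to("cuda")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "What is unusual about this image?"

# The processor prepares both modalities; the text prompt is the instruction,
# which the model also feeds to the Q-Former internally.
inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda", torch.float16)
outputs = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0].strip())
```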
InstructBlipConfig
[[autodoc]] InstructBlipConfig
- from_vision_qformer_text_configs
InstructBlipVisionConfig
[[autodoc]] InstructBlipVisionConfig
InstructBlipQFormerConfig
[[autodoc]] InstructBlipQFormerConfig
InstructBlipProcessor
[[autodoc]] InstructBlipProcessor
InstructBlipVisionModel
[[autodoc]] InstructBlipVisionModel
- forward
InstructBlipQFormerModel
[[autodoc]] InstructBlipQFormerModel
- forward
InstructBlipForConditionalGeneration
[[autodoc]] InstructBlipForConditionalGeneration
- forward
- generate
MPNet
Overview
The MPNet model was proposed in MPNet: Masked and Permuted Pre-training for Language Understanding by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
MPNet adopts a novel pre-training method, named masked and permuted language modeling, to inherit the advantages of
masked language modeling and permuted language modeling for natural language understanding.
The abstract from the paper is the following:
BERT adopts masked language modeling (MLM) for pre-training and is one of the most successful pre-training models.
Since BERT neglects dependency among predicted tokens, XLNet introduces permuted language modeling (PLM) for
pre-training to address this problem. However, XLNet does not leverage the full position information of a sentence and
thus suffers from position discrepancy between pre-training and fine-tuning. In this paper, we propose MPNet, a novel
pre-training method that inherits the advantages of BERT and XLNet and avoids their limitations. MPNet leverages the
dependency among predicted tokens through permuted language modeling (vs. MLM in BERT), and takes auxiliary position
information as input to make the model see a full sentence and thus reducing the position discrepancy (vs. PLM in
XLNet). We pre-train MPNet on a large-scale dataset (over 160GB text corpora) and fine-tune on a variety of
down-streaming tasks (GLUE, SQuAD, etc). Experimental results show that MPNet outperforms MLM and PLM by a large
margin, and achieves better results on these tasks compared with previous state-of-the-art pre-trained methods (e.g.,
BERT, XLNet, RoBERTa) under the same model setting.
The original code can be found here.
Usage tips
MPNet doesn't have token_type_ids, so you don't need to indicate which token belongs to which segment; just
separate your segments with the separation token tokenizer.sep_token (or [sep]).
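As a small illustration, here is a sketch of encoding two segments with the public microsoft/mpnet-base checkpoint; the example strings are arbitrary.
```python
from transformers import MPNetTokenizer

tokenizer = MPNetTokenizer.from_pretrained("microsoft/mpnet-base")

segment_a = "MPNet does not use token type ids."
segment_b = "Segments are simply separated with the sep token."

# Option 1: pass the two segments as a pair; the tokenizer inserts the separator for you.
encoding = tokenizer(segment_a, segment_b, return_tensors="pt")

# Option 2: join the segments manually with the separator token.
text = segment_a + tokenizer.sep_token + segment_b
encoding_manual = tokenizer(text, return_tensors="pt")
```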
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
MPNetConfig
[[autodoc]] MPNetConfig
MPNetTokenizer
[[autodoc]] MPNetTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
MPNetTokenizerFast
[[autodoc]] MPNetTokenizerFast
MPNetModel
[[autodoc]] MPNetModel
- forward
MPNetForMaskedLM
[[autodoc]] MPNetForMaskedLM
- forward
MPNetForSequenceClassification
[[autodoc]] MPNetForSequenceClassification
- forward
MPNetForMultipleChoice
[[autodoc]] MPNetForMultipleChoice
- forward
MPNetForTokenClassification
[[autodoc]] MPNetForTokenClassification
- forward
MPNetForQuestionAnswering
[[autodoc]] MPNetForQuestionAnswering
- forward
TFMPNetModel
[[autodoc]] TFMPNetModel
- call
TFMPNetForMaskedLM
[[autodoc]] TFMPNetForMaskedLM
- call
TFMPNetForSequenceClassification
[[autodoc]] TFMPNetForSequenceClassification
- call
TFMPNetForMultipleChoice
[[autodoc]] TFMPNetForMultipleChoice
- call
TFMPNetForTokenClassification
[[autodoc]] TFMPNetForTokenClassification
- call
TFMPNetForQuestionAnswering
[[autodoc]] TFMPNetForQuestionAnswering
- call
ConvNeXT
Overview
The ConvNeXT model was proposed in A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them.
The abstract from the paper is the following:
The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model.
A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers
(e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide
variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive
biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually "modernize" a standard ResNet toward the design
of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models
dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy
and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets.
ConvNeXT architecture. Taken from the original paper.
This model was contributed by nielsr. TensorFlow version of the model was contributed by ariG23498,
gante, and sayakpaul (equal contribution). The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXT.
[ConvNextForImageClassification] is supported by this example script and notebook (a minimal inference sketch is also shown below).
See also: Image classification task guide
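For quick reference, here is a minimal image-classification sketch using the public facebook/convnext-tiny-224 checkpoint; the image URL is arbitrary.
```python
import requests
import torch
from PIL import Image
from transformers import ConvNextImageProcessor, ConvNextForImageClassification

processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-tiny-224")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit to the corresponding ImageNet label.
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```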
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ConvNextConfig
[[autodoc]] ConvNextConfig
ConvNextFeatureExtractor
[[autodoc]] ConvNextFeatureExtractor
ConvNextImageProcessor
[[autodoc]] ConvNextImageProcessor
- preprocess
ConvNextModel
[[autodoc]] ConvNextModel
- forward
ConvNextForImageClassification
[[autodoc]] ConvNextForImageClassification
- forward
TFConvNextModel
[[autodoc]] TFConvNextModel
- call
TFConvNextForImageClassification
[[autodoc]] TFConvNextForImageClassification
- call
SegFormer
Overview
The SegFormer model was proposed in SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping
Luo. The model consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great
results on image segmentation benchmarks such as ADE20K and Cityscapes.
The abstract from the paper is the following:
We present SegFormer, a simple, efficient yet powerful semantic segmentation framework which unifies Transformers with
lightweight multilayer perception (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel
hierarchically structured Transformer encoder which outputs multiscale features. It does not need positional encoding,
thereby avoiding the interpolation of positional codes which leads to decreased performance when the testing resolution
differs from training. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from
different layers, and thus combining both local attention and global attention to render powerful representations. We
show that this simple and lightweight design is the key to efficient segmentation on Transformers. We scale our
approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, reaching significantly better performance
and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters,
being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on
Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C.
The figure below illustrates the architecture of SegFormer. Taken from the original paper.
This model was contributed by nielsr. The TensorFlow version
of the model was contributed by sayakpaul. The original code can be found here.
Usage tips
SegFormer consists of a hierarchical Transformer encoder, and a lightweight all-MLP decoder head.
[SegformerModel] is the hierarchical Transformer encoder (which in the paper is also referred to
as Mix Transformer or MiT). [SegformerForSemanticSegmentation] adds the all-MLP decoder head on
top to perform semantic segmentation of images. In addition, there's
[SegformerForImageClassification] which can be used to - you guessed it - classify images. The
authors of SegFormer first pre-trained the Transformer encoder on ImageNet-1k to classify images. Next, they throw
away the classification head, and replace it by the all-MLP decode head. Next, they fine-tune the model altogether on
ADE20K, Cityscapes and COCO-stuff, which are important benchmarks for semantic segmentation. All checkpoints can be
found on the hub.
The quickest way to get started with SegFormer is by checking the example notebooks (which showcase both inference and
fine-tuning on custom data). One can also check out the blog post introducing SegFormer and illustrating how it can be fine-tuned on custom data.
TensorFlow users should refer to this repository that shows off-the-shelf inference and fine-tuning.
One can also check out this interactive demo on Hugging Face Spaces
to try out a SegFormer model on custom images.
SegFormer works on any input size, as it pads the input to be divisible by config.patch_sizes.
One can use [SegformerImageProcessor] to prepare images and corresponding segmentation maps
for the model. Note that this image processor is fairly basic and does not include all data augmentations used in
the original paper. The original preprocessing pipelines (for the ADE20k dataset for instance) can be found here. The most
important preprocessing step is that images and segmentation maps are randomly cropped and padded to the same size,
such as 512x512 or 640x640, after which they are normalized.
One additional thing to keep in mind is that one can initialize [SegformerImageProcessor] with
reduce_labels set to True or False. In some datasets (like ADE20k), the 0 index is used in the annotated
segmentation maps for background. However, ADE20k doesn't include the "background" class in its 150 labels.
Therefore, reduce_labels is used to reduce all labels by 1, and to make sure no loss is computed for the
background class (i.e. it replaces 0 in the annotated maps by 255, which is the ignore_index of the loss function
used by [SegformerForSemanticSegmentation]). However, other datasets use the 0 index as
background class and include this class as part of all labels. In that case, reduce_labels should be set to
False, as loss should also be computed for the background class.
As most models, SegFormer comes in different sizes, the details of which can be found in the table below
(taken from Table 7 of the original paper).
| Model variant | Depths | Hidden sizes | Decoder hidden size | Params (M) | ImageNet-1k Top 1 |
| :---------------: | ------------- | ------------------- | :---------------------: | :------------: | :-------------------: |
| MiT-b0 | [2, 2, 2, 2] | [32, 64, 160, 256] | 256 | 3.7 | 70.5 |
| MiT-b1 | [2, 2, 2, 2] | [64, 128, 320, 512] | 256 | 14.0 | 78.7 |
| MiT-b2 | [3, 4, 6, 3] | [64, 128, 320, 512] | 768 | 25.4 | 81.6 |
| MiT-b3 | [3, 4, 18, 3] | [64, 128, 320, 512] | 768 | 45.2 | 83.1 |
| MiT-b4 | [3, 8, 27, 3] | [64, 128, 320, 512] | 768 | 62.6 | 83.6 |
| MiT-b5 | [3, 6, 40, 3] | [64, 128, 320, 512] | 768 | 82.0 | 83.8 |
Note that MiT in the above table refers to the Mix Transformer encoder backbone introduced in SegFormer. For
SegFormer's results on the segmentation datasets like ADE20k, refer to the paper.
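To make the semantic-segmentation usage described in the tips above concrete, here is a minimal inference sketch assuming the public nvidia/segformer-b0-finetuned-ade-512-512 checkpoint; the image URL is arbitrary.
```python
import requests
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

processor = SegformerImageProcessor.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The logits are lower resolution than the input; post-processing upsamples them
# back to the original image size and returns a per-pixel class-index map.
segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(segmentation.shape)
```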
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SegFormer.
[SegformerForImageClassification] is supported by this example script and notebook.
Image classification task guide
Semantic segmentation:
[SegformerForSemanticSegmentation] is supported by this example script.
A blog on fine-tuning SegFormer on a custom dataset can be found here.
More demo notebooks on SegFormer (both inference + fine-tuning on a custom dataset) can be found here.
[TFSegformerForSemanticSegmentation] is supported by this example notebook.
Semantic segmentation task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
SegformerConfig
[[autodoc]] SegformerConfig
SegformerFeatureExtractor
[[autodoc]] SegformerFeatureExtractor
- call
- post_process_semantic_segmentation
SegformerImageProcessor
[[autodoc]] SegformerImageProcessor
- preprocess
- post_process_semantic_segmentation
SegformerModel
[[autodoc]] SegformerModel
- forward
SegformerDecodeHead
[[autodoc]] SegformerDecodeHead
- forward
SegformerForImageClassification
[[autodoc]] SegformerForImageClassification
- forward
SegformerForSemanticSegmentation
[[autodoc]] SegformerForSemanticSegmentation
- forward
TFSegformerDecodeHead
[[autodoc]] TFSegformerDecodeHead
- call
TFSegformerModel
[[autodoc]] TFSegformerModel
- call
TFSegformerForImageClassification
[[autodoc]] TFSegformerForImageClassification
- call
TFSegformerForSemanticSegmentation
[[autodoc]] TFSegformerForSemanticSegmentation
- call
XLS-R
Overview
The XLS-R model was proposed in XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale by Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman
Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, Michael Auli.
The abstract from the paper is the following:
This paper presents XLS-R, a large-scale model for cross-lingual speech representation learning based on wav2vec 2.0.
We train models with up to 2B parameters on nearly half a million hours of publicly available speech audio in 128
languages, an order of magnitude more public data than the largest known prior work. Our evaluation covers a wide range
of tasks, domains, data regimes and languages, both high and low-resource. On the CoVoST-2 speech translation
benchmark, we improve the previous state of the art by an average of 7.4 BLEU over 21 translation directions into
English. For speech recognition, XLS-R improves over the best known prior work on BABEL, MLS, CommonVoice as well as
VoxPopuli, lowering error rates by 14-34% relative on average. XLS-R also sets a new state of the art on VoxLingua107
language identification. Moreover, we show that with sufficient model size, cross-lingual pretraining can outperform
English-only pretraining when translating English speech into other languages, a setting which favors monolingual
pretraining. We hope XLS-R can help to improve speech processing tasks for many more languages of the world.
Relevant checkpoints can be found under https://huggingface.co/models?other=xls_r.
The original code can be found here.
Usage tips
XLS-R is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
The XLS-R model was trained using connectionist temporal classification (CTC), so the model output has to be decoded using
[Wav2Vec2CTCTokenizer] (see the sketch below).
XLS-R's architecture is based on the Wav2Vec2 model; refer to Wav2Vec2's documentation page for API reference.
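A minimal sketch of that CTC decoding flow is shown below; the checkpoint identifier is a placeholder, since the base XLS-R checkpoints are self-supervised only and need to be fine-tuned with CTC before they produce useful transcriptions.
```python
import torch
from datasets import load_dataset
from transformers import AutoProcessor, Wav2Vec2ForCTC

# Placeholder identifier: substitute any XLS-R checkpoint fine-tuned with CTC for ASR.
checkpoint = "path-or-id-of-a-ctc-finetuned-xls-r-model"

processor = AutoProcessor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

# Load a raw 16 kHz waveform (XLS-R expects a float array of the raw speech signal).
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = dataset[0]["audio"]

inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# CTC decoding: take the argmax over the vocabulary and collapse repeats/blanks.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```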
XLM-V
Overview
XLM-V is a multilingual language model with a one million token vocabulary trained on 2.5TB of data from Common Crawl (the same data as XLM-R).
It was introduced in the XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models
paper by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer and Madian Khabsa.
From the abstract of the XLM-V paper:
Large multilingual language models typically rely on a single vocabulary shared across 100+ languages.
As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged.
This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R.
In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by
de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity
to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically
more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V,
a multilingual language model with a one million token vocabulary. XLM-V outperforms XLM-R on every task we
tested on ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and
named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER).
This model was contributed by stefan-it, including detailed experiments with XLM-V on downstream tasks.
The experiments repository can be found here.
Usage tips
XLM-V is compatible with the XLM-RoBERTa model architecture; only the model weights from the fairseq library had to be converted.
The XLMTokenizer implementation is used to load the vocab and perform tokenization.
An XLM-V (base size) model is available under the facebook/xlm-v-base identifier.
The XLM-V architecture is the same as XLM-RoBERTa; refer to the XLM-RoBERTa documentation for API reference and examples (a short usage sketch follows below).
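Since XLM-V plugs into the XLM-RoBERTa architecture, a quick masked-language-modeling sketch with the facebook/xlm-v-base checkpoint can rely on the standard fill-mask pipeline:
```python
from transformers import pipeline

# XLM-V uses the XLM-RoBERTa architecture, so the standard fill-mask pipeline works directly.
unmasker = pipeline("fill-mask", model="facebook/xlm-v-base")
print(unmasker("Paris is the <mask> of France."))
```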
RemBERT
Overview
The RemBERT model was proposed in Rethinking Embedding Coupling in Pre-trained Language Models by Hyung Won Chung, Thibault Févry, Henry Tsai, Melvin Johnson, Sebastian Ruder.
The abstract from the paper is the following:
We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art
pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to
significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By
reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on
standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that
allocating additional capacity to the output embedding provides benefits to the model that persist through the
fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger
output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage
Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these
findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the
number of parameters at the fine-tuning stage.
Usage tips
For fine-tuning, RemBERT can be thought of as a bigger version of mBERT with an ALBERT-like factorization of the
embedding layer. The embeddings are not tied in pre-training, in contrast with BERT, which enables smaller input
embeddings (preserved during fine-tuning) and bigger output embeddings (discarded at fine-tuning). The tokenizer is
also similar to the Albert one rather than the BERT one.
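As a small illustration, here is a minimal masked-language-modeling sketch assuming the public google/rembert checkpoint:
```python
import torch
from transformers import AutoTokenizer, RemBertForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = RemBertForMaskedLM.from_pretrained("google/rembert")

text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring token at the masked position.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(-1)
print(tokenizer.decode(predicted_id))
```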
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
RemBertConfig
[[autodoc]] RemBertConfig
RemBertTokenizer
[[autodoc]] RemBertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
RemBertTokenizerFast
[[autodoc]] RemBertTokenizerFast
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
RemBertModel
[[autodoc]] RemBertModel
- forward
RemBertForCausalLM
[[autodoc]] RemBertForCausalLM
- forward
RemBertForMaskedLM
[[autodoc]] RemBertForMaskedLM
- forward
RemBertForSequenceClassification
[[autodoc]] RemBertForSequenceClassification
- forward
RemBertForMultipleChoice
[[autodoc]] RemBertForMultipleChoice
- forward
RemBertForTokenClassification
[[autodoc]] RemBertForTokenClassification
- forward
RemBertForQuestionAnswering
[[autodoc]] RemBertForQuestionAnswering
- forward
TFRemBertModel
[[autodoc]] TFRemBertModel
- call
TFRemBertForMaskedLM
[[autodoc]] TFRemBertForMaskedLM
- call
TFRemBertForCausalLM
[[autodoc]] TFRemBertForCausalLM
- call
TFRemBertForSequenceClassification
[[autodoc]] TFRemBertForSequenceClassification
- call
TFRemBertForMultipleChoice
[[autodoc]] TFRemBertForMultipleChoice
- call
TFRemBertForTokenClassification
[[autodoc]] TFRemBertForTokenClassification
- call
TFRemBertForQuestionAnswering
[[autodoc]] TFRemBertForQuestionAnswering
- call
SeamlessM4T
Overview
The SeamlessM4T model was proposed in SeamlessM4T — Massively Multilingual & Multimodal Machine Translation by the Seamless Communication team from Meta AI.
This is the version 1 release of the model. For the updated version 2 release, refer to the Seamless M4T v2 docs.
SeamlessM4T is a collection of models designed to provide high quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text.
SeamlessM4T enables multiple tasks without relying on separate models:
Speech-to-speech translation (S2ST)
Speech-to-text translation (S2TT)
Text-to-speech translation (T2ST)
Text-to-text translation (T2TT)
Automatic speech recognition (ASR)
[SeamlessM4TModel] can perform all the above tasks, but each task also has its own dedicated sub-model.
The abstract from the paper is the following:
What does it take to create the Babel Fish, a tool that can help individuals translate speech between any two languages? While recent breakthroughs in text-based models have pushed machine translation coverage beyond 200 languages, unified speech-to-speech translation models have yet to achieve similar strides. More specifically, conventional speech-to-speech translation systems rely on cascaded systems that perform translation progressively, putting high-performing unified systems out of reach. To address these gaps, we introduce SeamlessM4T, a single model that supports speech-to-speech translation, speech-to-text translation, text-to-speech translation, text-to-text translation, and automatic speech recognition for up to 100 languages. To build this, we used 1 million hours of open speech audio data to learn self-supervised speech representations with w2v-BERT 2.0. Subsequently, we created a multimodal corpus of automatically aligned speech translations. Filtered and combined with human-labeled and pseudo-labeled data, we developed the first multilingual system capable of translating from and into English for both speech and text. On FLEURS, SeamlessM4T sets a new standard for translations into multiple target languages, achieving an improvement of 20% BLEU over the previous SOTA in direct speech-to-text translation. Compared to strong cascaded models, SeamlessM4T improves the quality of into-English translation by 1.3 BLEU points in speech-to-text and by 2.6 ASR-BLEU points in speech-to-speech. Tested for robustness, our system performs better against background noises and speaker variations in speech-to-text tasks compared to the current SOTA model. Critically, we evaluated SeamlessM4T on gender bias and added toxicity to assess translation safety. Finally, all contributions in this work are open-sourced and accessible at https://github.com/facebookresearch/seamless_communication
Usage
First, load the processor and a checkpoint of the model:
```python
from transformers import AutoProcessor, SeamlessM4TModel
processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium")
```
You can seamlessly use this model on text or on audio, to generate either translated text or translated audio.
Here is how to use the processor to process text and audio:
```python
# Let's load an audio sample from an Arabic speech corpus.
from datasets import load_dataset

dataset = load_dataset("arabic_speech_corpus", split="test", streaming=True)
audio_sample = next(iter(dataset))["audio"]

# Now, process it.
audio_inputs = processor(audios=audio_sample["array"], return_tensors="pt")

# Now, process some English text as well.
text_inputs = processor(text="Hello, my dog is cute", src_lang="eng", return_tensors="pt")
```
Speech
[SeamlessM4TModel] can seamlessly generate text or speech with few or no changes. Let's target Russian voice translation:
```python
audio_array_from_text = model.generate(**text_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze()
audio_array_from_audio = model.generate(**audio_inputs, tgt_lang="rus")[0].cpu().numpy().squeeze()
```
With basically the same code, I've translated English text and Arabic speech to Russian speech samples.
Text
Similarly, you can generate translated text from audio files or from text with the same model. You only have to pass generate_speech=False to [SeamlessM4TModel.generate].
This time, let's translate to French.
```python
# From audio:
output_tokens = model.generate(**audio_inputs, tgt_lang="fra", generate_speech=False)
translated_text_from_audio = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True)

# From text:
output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False)
translated_text_from_text = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True)
```
Tips
1. Use dedicated models
[SeamlessM4TModel] is the Transformers top-level model for generating speech and text, but you can also use dedicated models that perform the task without additional components, thus reducing the memory footprint.
For example, you can replace the audio-to-audio generation snippet with the model dedicated to the S2ST task; the rest of the code is exactly the same:
```python
from transformers import SeamlessM4TForSpeechToSpeech

model = SeamlessM4TForSpeechToSpeech.from_pretrained("facebook/hf-seamless-m4t-medium")
```
Or you can replace the text-to-text generation snippet with the model dedicated to the T2TT task; you only have to remove generate_speech=False.
```python
from transformers import SeamlessM4TForTextToText

model = SeamlessM4TForTextToText.from_pretrained("facebook/hf-seamless-m4t-medium")
```
Feel free to try out [SeamlessM4TForSpeechToText] and [SeamlessM4TForTextToSpeech] as well.
2. Change the speaker identity
You can change the speaker used for speech synthesis with the spkr_id argument. Some spkr_id values work better than others for some languages!
3. Change the generation strategy
You can use different generation strategies for speech and text generation, e.g. .generate(input_ids=input_ids, text_num_beams=4, speech_do_sample=True), which will successively perform beam-search decoding on the text model and multinomial sampling on the speech model.
4. Generate speech and text at the same time
Use return_intermediate_token_ids=True with [SeamlessM4TModel] to return both speech and text! A combined sketch is shown below.
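Putting tips 2-4 together, here is a minimal sketch using the [SeamlessM4TModel] and processor from the Usage section above; the spkr_id value is arbitrary, and the output field names (waveform, sequences) assume the generation output returned when return_intermediate_token_ids=True.
```python
outputs = model.generate(
    **text_inputs,
    tgt_lang="rus",
    spkr_id=5,                           # speaker identity used for speech synthesis (arbitrary choice)
    text_num_beams=4,                    # beam search for the text decoder
    speech_do_sample=True,               # multinomial sampling for the speech (unit) decoder
    return_intermediate_token_ids=True,  # also return the intermediate text tokens
)

# Translated speech waveform and the corresponding translated text tokens (assumed field names).
audio_array = outputs.waveform[0].cpu().numpy().squeeze()
translated_text = processor.decode(outputs.sequences[0].tolist(), skip_special_tokens=True)
```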
Model architecture
SeamlessM4T features a versatile architecture that smoothly handles the sequential generation of text and speech. This setup comprises two sequence-to-sequence (seq2seq) models. The first model translates the input modality into translated text, while the second model generates speech tokens, known as "unit tokens," from the translated text.
Each modality has its own dedicated encoder with a unique architecture. Additionally, for speech output, a vocoder inspired by the HiFi-GAN architecture is placed on top of the second seq2seq model.
Here's how the generation process works: |
1. Input text or speech is processed through its specific encoder.
2. A decoder creates text tokens in the desired language.
3. If speech generation is required, the second seq2seq model, following a standard encoder-decoder structure, generates unit tokens.
4. These unit tokens are then passed through the final vocoder to produce the actual speech.
This model was contributed by ylacombe. The original code can be found here.
SeamlessM4TModel
[[autodoc]] SeamlessM4TModel
- generate
SeamlessM4TForTextToSpeech
[[autodoc]] SeamlessM4TForTextToSpeech
- generate
SeamlessM4TForSpeechToSpeech
[[autodoc]] SeamlessM4TForSpeechToSpeech
- generate
SeamlessM4TForTextToText
[[autodoc]] transformers.SeamlessM4TForTextToText
- forward
- generate
SeamlessM4TForSpeechToText
[[autodoc]] transformers.SeamlessM4TForSpeechToText
- forward
- generate
SeamlessM4TConfig
[[autodoc]] SeamlessM4TConfig
SeamlessM4TTokenizer
[[autodoc]] SeamlessM4TTokenizer
- call
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
SeamlessM4TTokenizerFast
[[autodoc]] SeamlessM4TTokenizerFast
- call
SeamlessM4TFeatureExtractor
[[autodoc]] SeamlessM4TFeatureExtractor
- call
SeamlessM4TProcessor
[[autodoc]] SeamlessM4TProcessor
- call
SeamlessM4TCodeHifiGan
[[autodoc]] SeamlessM4TCodeHifiGan
SeamlessM4THifiGan
[[autodoc]] SeamlessM4THifiGan
SeamlessM4TTextToUnitModel
[[autodoc]] SeamlessM4TTextToUnitModel
SeamlessM4TTextToUnitForConditionalGeneration
[[autodoc]] SeamlessM4TTextToUnitForConditionalGeneration |
ImageGPT
Overview
The ImageGPT model was proposed in Generative Pretraining from Pixels by Mark
Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, Ilya Sutskever. ImageGPT (iGPT) is a GPT-2-like
model trained to predict the next pixel value, allowing for both unconditional and conditional image generation.
The abstract from the paper is the following:
Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models
can learn useful representations for images. We train a sequence Transformer to auto-regressively predict pixels,
without incorporating knowledge of the 2D input structure. Despite training on low-resolution ImageNet without labels,
we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and
low-data classification. On CIFAR-10, we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide
ResNet, and 99.0% accuracy with full fine-tuning, matching the top supervised pre-trained models. We are also
competitive with self-supervised benchmarks on ImageNet when substituting pixels for a VQVAE encoding, achieving 69.0%
top-1 accuracy on a linear probe of our features. |
Summary of the approach. Taken from the original paper.
This model was contributed by nielsr, based on this issue. The original code can be found
here.
Usage tips |
ImageGPT is almost exactly the same as GPT-2, with the exception that a different activation
function is used (namely "quick gelu"), and the layer normalization layers don't mean-center the inputs. ImageGPT
also doesn't have tied input and output embeddings.
As the time and memory requirements of the attention mechanism of Transformers scale quadratically in the sequence
length, the authors pre-trained ImageGPT on smaller input resolutions, such as 32x32 and 64x64. However, feeding a
sequence of 32x32x3=3072 tokens from 0..255 into a Transformer is still prohibitively large. Therefore, the authors
applied k-means clustering to the (R,G,B) pixel values with k=512. This way, we only have a 32*32 = 1024-long
sequence, but now of integers in the range 0..511. So we are shrinking the sequence length at the cost of a bigger
embedding matrix. In other words, the vocabulary size of ImageGPT is 512, plus 1 for a special "start of sentence" (SOS)
token, used at the beginning of every sequence. One can use [ImageGPTImageProcessor] to prepare
images for the model.
Despite being pre-trained entirely unsupervised (i.e. without the use of any labels), ImageGPT produces fairly
performant image features useful for downstream tasks, such as image classification. The authors showed that the
features in the middle of the network are the most performant, and can be used as-is to train a linear model (such as
a sklearn logistic regression model, for example). This is also referred to as "linear probing". Features can easily be
obtained by forwarding the image through the model with output_hidden_states=True, and then
average-pooling the hidden states at whatever layer you like (see the sketch after the size table below).
Alternatively, one can further fine-tune the entire model on a downstream dataset, similar to BERT. For this, you can
use [ImageGPTForImageClassification].
ImageGPT comes in different sizes: there's ImageGPT-small, ImageGPT-medium and ImageGPT-large. The authors also
trained an XL variant, which they didn't release. The differences in size are summarized in the following table:
| Model variant | Depths | Hidden sizes | Decoder hidden size | Params (M) | ImageNet-1k Top 1 |
|---|---|---|---|---|---|
| MiT-b0 | [2, 2, 2, 2] | [32, 64, 160, 256] | 256 | 3.7 | 70.5 |
| MiT-b1 | [2, 2, 2, 2] | [64, 128, 320, 512] | 256 | 14.0 | 78.7 |
| MiT-b2 | [3, 4, 6, 3] | [64, 128, 320, 512] | 768 | 25.4 | 81.6 |
| MiT-b3 | [3, 4, 18, 3] | [64, 128, 320, 512] | 768 | 45.2 | 83.1 |
| MiT-b4 | [3, 8, 27, 3] | [64, 128, 320, 512] | 768 | 62.6 | 83.6 |
| MiT-b5 | [3, 6, 40, 3] | [64, 128, 320, 512] | 768 | 82.0 | 83.8 |
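Below is a minimal sketch of the linear-probing feature extraction described above. The checkpoint name and the choice of layer are assumptions; any ImageGPT checkpoint and any hidden layer can be substituted:
thon
import torch
import requests
from PIL import Image
from transformers import ImageGPTImageProcessor, ImageGPTModel

# assumption: the small checkpoint; swap in another ImageGPT checkpoint if you prefer
processor = ImageGPTImageProcessor.from_pretrained("openai/imagegpt-small")
model = ImageGPTModel.from_pretrained("openai/imagegpt-small")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# the processor color-quantizes the pixels and returns a sequence of cluster indices (input_ids)
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# average-pool the hidden states of a middle layer to obtain one feature vector per image
features = outputs.hidden_states[len(outputs.hidden_states) // 2].mean(dim=1)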
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ImageGPT. |
Demo notebooks for ImageGPT can be found here.
[ImageGPTForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ImageGPTConfig
[[autodoc]] ImageGPTConfig
ImageGPTFeatureExtractor
[[autodoc]] ImageGPTFeatureExtractor
- call
ImageGPTImageProcessor
[[autodoc]] ImageGPTImageProcessor
- preprocess
ImageGPTModel
[[autodoc]] ImageGPTModel
- forward
ImageGPTForCausalImageModeling
[[autodoc]] ImageGPTForCausalImageModeling
- forward
ImageGPTForImageClassification
[[autodoc]] ImageGPTForImageClassification
- forward |
Nezha
Overview
The Nezha model was proposed in NEZHA: Neural Contextualized Representation for Chinese Language Understanding by Junqiu Wei et al.
The abstract from the paper is the following:
The pre-trained language models have achieved great successes in various natural language understanding (NLU) tasks
due to its capacity to capture the deep contextualized information in text by pre-training on large-scale corpora.
In this technical report, we present our practice of pre-training language models named NEZHA (NEural contextualiZed
representation for CHinese lAnguage understanding) on Chinese corpora and finetuning for the Chinese NLU tasks.
The current version of NEZHA is based on BERT with a collection of proven improvements, which include Functional
Relative Positional Encoding as an effective positional encoding scheme, Whole Word Masking strategy,
Mixed Precision Training and the LAMB Optimizer in training the models. The experimental results show that NEZHA
achieves the state-of-the-art performances when finetuned on several representative Chinese tasks, including
named entity recognition (People's Daily NER), sentence matching (LCQMC), Chinese sentiment classification (ChnSenti)
and natural language inference (XNLI).
This model was contributed by sijunhe. The original code can be found here.
Resources |
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide |
NezhaConfig
[[autodoc]] NezhaConfig
NezhaModel
[[autodoc]] NezhaModel
- forward
NezhaForPreTraining
[[autodoc]] NezhaForPreTraining
- forward
NezhaForMaskedLM
[[autodoc]] NezhaForMaskedLM
- forward
NezhaForNextSentencePrediction
[[autodoc]] NezhaForNextSentencePrediction
- forward
NezhaForSequenceClassification
[[autodoc]] NezhaForSequenceClassification
- forward
NezhaForMultipleChoice
[[autodoc]] NezhaForMultipleChoice
- forward
NezhaForTokenClassification
[[autodoc]] NezhaForTokenClassification
- forward
NezhaForQuestionAnswering
[[autodoc]] NezhaForQuestionAnswering
- forward |
Audio Spectrogram Transformer
Overview
The Audio Spectrogram Transformer model was proposed in AST: Audio Spectrogram Transformer by Yuan Gong, Yu-An Chung, James Glass.
The Audio Spectrogram Transformer applies a Vision Transformer to audio, by turning audio into an image (spectrogram). The model obtains state-of-the-art results
for audio classification.
The abstract from the paper is the following:
In the past decade, convolutional neural networks (CNNs) have been widely adopted as the main building block for end-to-end audio classification models, which aim to learn a direct mapping from audio spectrograms to corresponding labels. To better capture long-range global context, a recent trend is to add a self-attention mechanism on top of the CNN, forming a CNN-attention hybrid model. However, it is unclear whether the reliance on a CNN is necessary, and if neural networks purely based on attention are sufficient to obtain good performance in audio classification. In this paper, we answer the question by introducing the Audio Spectrogram Transformer (AST), the first convolution-free, purely attention-based model for audio classification. We evaluate AST on various audio classification benchmarks, where it achieves new state-of-the-art results of 0.485 mAP on AudioSet, 95.6% accuracy on ESC-50, and 98.1% accuracy on Speech Commands V2. |
Audio Spectrogram Transformer architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Usage tips |
When fine-tuning the Audio Spectrogram Transformer (AST) on your own dataset, it's recommended to take care of the input normalization (to make
sure the input has a mean of 0 and a std of 0.5). [ASTFeatureExtractor] takes care of this. Note that it uses the AudioSet
mean and std by default. You can check ast/src/get_norm_stats.py to see how
the authors compute the stats for a downstream dataset (a short sketch of overriding the defaults follows these tips).
Note that the AST needs a low learning rate (the authors use a 10 times smaller learning rate compared to their CNN model proposed in the
PSLA paper) and converges quickly, so please search for a suitable learning rate and learning rate scheduler for your task. |
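As a minimal sketch of overriding the default AudioSet statistics, you could pass your own mean and std when loading the feature extractor. The checkpoint name is just an example, and the statistics below are placeholders you would compute on your dataset:
thon
import numpy as np
from transformers import ASTFeatureExtractor

# placeholder statistics: replace with the mean/std computed on your own dataset
feature_extractor = ASTFeatureExtractor.from_pretrained(
    "MIT/ast-finetuned-audioset-10-10-0.4593",
    mean=-1.23,
    std=4.56,
)

# dummy 1-second waveform at 16 kHz just to show the call
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")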
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with the Audio Spectrogram Transformer.
A notebook illustrating inference with AST for audio classification can be found here.
[ASTForAudioClassification] is supported by this example script and notebook.
See also: Audio classification. |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ASTConfig
[[autodoc]] ASTConfig
ASTFeatureExtractor
[[autodoc]] ASTFeatureExtractor
- call
ASTModel
[[autodoc]] ASTModel
- forward
ASTForAudioClassification
[[autodoc]] ASTForAudioClassification
- forward |
Mask2Former
Overview
The Mask2Former model was proposed in Masked-attention Mask Transformer for Universal Image Segmentation by Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar. Mask2Former is a unified framework for panoptic, instance and semantic segmentation and features significant performance and efficiency improvements over MaskFormer.
The abstract from the paper is the following:
Image segmentation groups pixels with different semantics, e.g., category or instance membership. Each choice
of semantics defines a task. While only the semantics of each task differ, current research focuses on designing specialized architectures for each task. We present Masked-attention Mask Transformer (Mask2Former), a new architecture capable of addressing any image segmentation task (panoptic, instance or semantic). Its key components include masked attention, which extracts localized features by constraining cross-attention within predicted mask regions. In addition to reducing the research effort by at least three times, it outperforms the best specialized architectures by a significant margin on four popular datasets. Most notably, Mask2Former sets a new state-of-the-art for panoptic segmentation (57.8 PQ on COCO), instance segmentation (50.1 AP on COCO) and semantic segmentation (57.7 mIoU on ADE20K). |
Mask2Former architecture. Taken from the original paper.
This model was contributed by Shivalika Singh and Alara Dirik. The original code can be found here.
Usage tips |
Mask2Former uses the same preprocessing and postprocessing steps as MaskFormer. Use [Mask2FormerImageProcessor] or [AutoImageProcessor] to prepare images and optional targets for the model.
To get the final segmentation, depending on the task, you can call [~Mask2FormerImageProcessor.post_process_semantic_segmentation] or [~Mask2FormerImageProcessor.post_process_instance_segmentation] or [~Mask2FormerImageProcessor.post_process_panoptic_segmentation]. All three tasks can be solved using the [Mask2FormerForUniversalSegmentation] output; panoptic segmentation accepts an optional label_ids_to_fuse argument to fuse instances of the target object(s) (e.g. sky) together. A minimal example is shown below.
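Here is a minimal sketch of semantic-segmentation inference and post-processing. The checkpoint name is an example; you would pick the one matching your task (semantic, instance or panoptic) and call the corresponding post-processing method:
thon
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# example checkpoint fine-tuned for semantic segmentation on ADE20k
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-tiny-ade-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-tiny-ade-semantic")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# map the raw outputs back to a per-pixel class-id map at the original resolution
segmentation_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]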
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Mask2Former.
Demo notebooks regarding inference + fine-tuning Mask2Former on custom data can be found here. |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
Mask2FormerConfig
[[autodoc]] Mask2FormerConfig
Mask2Former specific outputs
[[autodoc]] models.mask2former.modeling_mask2former.Mask2FormerModelOutput
[[autodoc]] models.mask2former.modeling_mask2former.Mask2FormerForUniversalSegmentationOutput
Mask2FormerModel
[[autodoc]] Mask2FormerModel
- forward
Mask2FormerForUniversalSegmentation
[[autodoc]] Mask2FormerForUniversalSegmentation
- forward
Mask2FormerImageProcessor
[[autodoc]] Mask2FormerImageProcessor
- preprocess
- encode_inputs
- post_process_semantic_segmentation
- post_process_instance_segmentation
- post_process_panoptic_segmentation |
PatchTSMixer
Overview
The PatchTSMixer model was proposed in TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting by Vijay Ekambaram, Arindam Jati, Nam Nguyen, Phanwadee Sinthong and Jayant Kalagnanam.
PatchTSMixer is a lightweight time-series modeling approach based on the MLP-Mixer architecture. In this HuggingFace implementation, we provide PatchTSMixer's capabilities to effortlessly facilitate lightweight mixing across patches, channels, and hidden features for effective multivariate time-series modeling. It also supports various attention mechanisms starting from simple gated attention to more complex self-attention blocks that can be customized accordingly. The model can be pretrained and subsequently used for various downstream tasks such as forecasting, classification and regression.
The abstract from the paper is the following:
TSMixer is a lightweight neural architecture exclusively composed of multi-layer perceptron (MLP) modules designed for multivariate forecasting and representation learning on patched time series. Our model draws inspiration from the success of MLP-Mixer models in computer vision. We demonstrate the challenges involved in adapting Vision MLP-Mixer for time series and introduce empirically validated components to enhance accuracy. This includes a novel design paradigm of attaching online reconciliation heads to the MLP-Mixer backbone, for explicitly modeling the time-series properties such as hierarchy and channel-correlations. We also propose a Hybrid channel modeling approach to effectively handle noisy channel interactions and generalization across diverse datasets, a common challenge in existing patch channel-mixing methods. Additionally, a simple gated attention mechanism is introduced in the backbone to prioritize important features. By incorporating these lightweight components, we significantly enhance the learning capability of simple MLP structures, outperforming complex Transformer models with minimal computing usage. Moreover, TSMixer's modular design enables compatibility with both supervised and masked self-supervised learning methods, making it a promising building block for time-series Foundation Models. TSMixer outperforms state-of-the-art MLP and Transformer models in forecasting by a considerable margin of 8-60%. It also outperforms the latest strong benchmarks of Patch-Transformer models (by 1-2%) with a significant reduction in memory and runtime (2-3X).
This model was contributed by ajati, vijaye12,
gsinthong, namctin,
wmgifford, kashif.
Usage example
The code snippet below shows how to randomly initialize a PatchTSMixer model. The model is compatible with the Trainer API.
thon
from transformers import PatchTSMixerConfig, PatchTSMixerForPrediction
from transformers import Trainer, TrainingArguments

config = PatchTSMixerConfig(context_length=512, prediction_length=96)
model = PatchTSMixerForPrediction(config)

# train_dataset, valid_dataset and test_dataset are assumed to be prepared time-series datasets
training_args = TrainingArguments(output_dir="patchtsmixer-output")  # placeholder output directory
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=valid_dataset,
)
trainer.train()
results = trainer.evaluate(test_dataset)
Usage tips
The model can also be used for time series classification and time series regression. See the respective [PatchTSMixerForTimeSeriesClassification] and [PatchTSMixerForRegression] classes, and the sketch below.
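As a minimal sketch of the classification variant (the configuration values are placeholders; check [PatchTSMixerConfig] for the exact arguments controlling the number of input channels and target classes):
thon
from transformers import PatchTSMixerConfig, PatchTSMixerForTimeSeriesClassification

# placeholder configuration: 3 input channels, 5 target classes
config = PatchTSMixerConfig(context_length=512, num_input_channels=3, num_targets=5)
model = PatchTSMixerForTimeSeriesClassification(config)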
Resources
A blog post explaining PatchTSMixer in depth can be found here. The blog can also be opened in Google Colab. |
PatchTSMixerConfig
[[autodoc]] PatchTSMixerConfig
PatchTSMixerModel
[[autodoc]] PatchTSMixerModel
- forward
PatchTSMixerForPrediction
[[autodoc]] PatchTSMixerForPrediction
- forward
PatchTSMixerForTimeSeriesClassification
[[autodoc]] PatchTSMixerForTimeSeriesClassification
- forward
PatchTSMixerForPretraining
[[autodoc]] PatchTSMixerForPretraining
- forward
PatchTSMixerForRegression
[[autodoc]] PatchTSMixerForRegression
- forward |
GPTBigCode
Overview
The GPTBigCode model was proposed in SantaCoder: don't reach for the stars! by BigCode. The listed authors are: Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
The abstract from the paper is the following:
The BigCode project is an open-scientific collaboration working on the responsible development of large language models for code. This tech report describes the progress of the collaboration until December 2022, outlining the current state of the Personally Identifiable Information (PII) redaction pipeline, the experiments conducted to de-risk the model architecture, and the experiments investigating better preprocessing methods for the training data. We train 1.1B parameter models on the Java, JavaScript, and Python subsets of The Stack and evaluate them on the MultiPL-E text-to-code benchmark. We find that more aggressive filtering of near-duplicates can further boost performance and, surprisingly, that selecting files from repositories with 5+ GitHub stars deteriorates performance significantly. Our best model outperforms previous open-source multilingual code generation models (InCoder-6.7B and CodeGen-Multi-2.7B) in both left-to-right generation and infilling on the Java, JavaScript, and Python portions of MultiPL-E, despite being a substantially smaller model. All models are released under an OpenRAIL license at this https URL.
The model is an optimized GPT2 model with support for Multi-Query Attention.
Implementation details
The main differences compared to GPT2 are:
- Added support for Multi-Query Attention.
- Use gelu_pytorch_tanh instead of classic gelu.
- Avoid unnecessary synchronizations (this has since been added to GPT2 in #20061, but wasn't in the reference codebase).
- Use Linear layers instead of Conv1D (good speedup but makes the checkpoints incompatible).
- Merge _attn and _upcast_and_reordered_attn. Always merge the matmul with scaling. Rename reorder_and_upcast_attn->attention_softmax_in_fp32
- Cache the attention mask value to avoid recreating it every time.
- Use jit to fuse the attention fp32 casting, masking, softmax, and scaling.
- Combine the attention and causal masks into a single one, pre-computed for the whole model instead of every layer.
- Merge the key and value caches into one (this changes the format of layer_past/present).
- Use the memory layout (self.num_heads, 3, self.head_dim) instead of (3, self.num_heads, self.head_dim) for the QKV tensor with MHA. (prevents an overhead with the merged key and values, but makes the checkpoints incompatible with the original openai-community/gpt2 model).
You can read more about the optimizations in the original pull request
Combining Starcoder and Flash Attention 2
First, make sure to install the latest version of Flash Attention 2.
pip install -U flash-attn --no-build-isolation
Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. torch.float16).
To load and run a model using Flash Attention 2, refer to the snippet below:
thon |
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("bigcode/gpt_bigcode-santacoder", torch_dtype=torch.float16, attn_implementation="flash_attention_2")
tokenizer = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder")
prompt = "def hello_world():"
model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
model.to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
tokenizer.batch_decode(generated_ids)[0]
'def hello_world():\n    print("hello world")\n\nif __name__ == "__main__":\n    print("hello world")\n<|endoftext|>'
Expected speedups
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the bigcode/starcoder checkpoint and the Flash Attention 2 version of the model, using two different sequence lengths.
GPTBigCodeConfig
[[autodoc]] GPTBigCodeConfig
GPTBigCodeModel
[[autodoc]] GPTBigCodeModel
- forward
GPTBigCodeForCausalLM
[[autodoc]] GPTBigCodeForCausalLM
- forward
GPTBigCodeForSequenceClassification
[[autodoc]] GPTBigCodeForSequenceClassification
- forward
GPTBigCodeForTokenClassification
[[autodoc]] GPTBigCodeForTokenClassification
- forward |
Nyströmformer
Overview
The Nyströmformer model was proposed in Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention by Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn
Fung, Yin Li, and Vikas Singh.
The abstract from the paper is the following:
Transformers have emerged as a powerful tool for a broad range of natural language processing tasks. A key component
that drives the impressive performance of Transformers is the self-attention mechanism that encodes the influence or
dependence of other tokens on each specific token. While beneficial, the quadratic complexity of self-attention on the
input sequence length has limited its application to longer sequences -- a topic being actively studied in the
community. To address this limitation, we propose Nyströmformer -- a model that exhibits favorable scalability as a
function of sequence length. Our idea is based on adapting the Nyström method to approximate standard self-attention
with O(n) complexity. The scalability of Nyströmformer enables application to longer sequences with thousands of
tokens. We perform evaluations on multiple downstream tasks on the GLUE benchmark and IMDB reviews with standard
sequence length, and find that our Nyströmformer performs comparably, or in a few cases, even slightly better, than
standard self-attention. On longer sequence tasks in the Long Range Arena (LRA) benchmark, Nyströmformer performs
favorably relative to other efficient self-attention methods. Our code is available at this https URL.
This model was contributed by novice03. The original code can be found here.
Resources |
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide |