MobileViTV2 is more like a CNN than a Transformer model. It does not work on sequence data but on batches of images. Unlike ViT, there are no embeddings. The backbone model outputs a feature map. One can use [MobileViTImageProcessor] to prepare images for the model. Note that if you do your own preprocessing, the pretrained checkpoints expect images to be in BGR pixel order (not RGB). The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). The segmentation model uses a DeepLabV3 head. The available semantic segmentation checkpoints are pre-trained on PASCAL VOC.
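To make the preprocessing flow concrete, here is a minimal image-classification sketch using [MobileViTImageProcessor]; the checkpoint name apple/mobilevitv2-1.0-imagenet1k-256 is an assumption about the released weights and may need to be adjusted.

```python
# Minimal sketch: classify an image with MobileViTV2 (checkpoint name assumed).
import torch
import requests
from PIL import Image
from transformers import MobileViTImageProcessor, MobileViTV2ForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = MobileViTImageProcessor.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256")
model = MobileViTV2ForImageClassification.from_pretrained("apple/mobilevitv2-1.0-imagenet1k-256")

# the image processor handles resizing and the BGR channel order mentioned above
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```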
MobileViTV2Config [[autodoc]] MobileViTV2Config MobileViTV2Model [[autodoc]] MobileViTV2Model - forward MobileViTV2ForImageClassification [[autodoc]] MobileViTV2ForImageClassification - forward MobileViTV2ForSemanticSegmentation [[autodoc]] MobileViTV2ForSemanticSegmentation - forward
MarianMT Overview A framework for translation models, using the same models as BART. Translations should be similar, but not identical to output in the test set linked to in each model card. This model was contributed by sshleifer. Implementation Notes
Each model is about 298 MB on disk, and there are more than 1,000 models. The list of supported language pairs can be found here. Models were originally trained by Jörg Tiedemann using the Marian C++ library, which supports fast training and translation. All models are transformer encoder-decoders with 6 layers in each component. Each model's performance is documented in a model card. The 80 opus models that require BPE preprocessing are not supported.
The modeling code is the same as [BartForConditionalGeneration] with a few minor modifications: static (sinusoid) positional embeddings (MarianConfig.static_position_embeddings=True), no layernorm_embedding (MarianConfig.normalize_embedding=False), and the model starts generating with pad_token_id (which has 0 as a token_embedding) as the prefix (Bart uses <s/>). Code to bulk convert models can be found in convert_marian_to_pytorch.py. Naming
All model names use the following format: Helsinki-NLP/opus-mt-{src}-{tgt}. The language codes used to name models are inconsistent. Two-digit codes can usually be found here; three-digit codes require googling "language code {code}". Codes formatted like es_AR are usually code_{region}; that one is Spanish from Argentina. The models were converted in two stages. The first 1000 models use ISO-639-2 codes to identify languages; the second group use a combination of ISO-639-5 and ISO-639-2 codes.
Examples Since Marian models are smaller than many other translation models available in the library, they can be useful for fine-tuning experiments and integration tests. Fine-tune on GPU
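Before the multilingual examples below, here is a minimal sketch of translating with a single language-pair checkpoint; the pair Helsinki-NLP/opus-mt-en-de is chosen purely to illustrate the opus-mt-{src}-{tgt} naming scheme described above.

```python
from transformers import MarianMTModel, MarianTokenizer

# "Helsinki-NLP/opus-mt-en-de" follows the opus-mt-{src}-{tgt} naming scheme (English -> German).
model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Machine translation is fun."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```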
Multilingual Models All model names use the following format: Helsinki-NLP/opus-mt-{src}-{tgt}. If a model can output multiple languages, you should specify a language code by prepending the desired output language to the src_text. You can see a model's supported language codes in its model card, under target constituents, like in opus-mt-en-roa. Note that if a model is only multilingual on the source side, like Helsinki-NLP/opus-mt-roa-en, no language codes are required.
New multi-lingual models from the Tatoeba-Challenge repo require 3 character language codes:

```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    ">>fra<< this is a sentence in english that we want to translate to french",
    ">>por<< This should go to portuguese",
    ">>esp<< And this to Spanish",
]

model_name = "Helsinki-NLP/opus-mt-en-roa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
print(tokenizer.supported_language_codes)
# ['>>zlm_Latn<<', '>>mfe<<', '>>hat<<', '>>pap<<', '>>ast<<', '>>cat<<', '>>ind<<', '>>glg<<', '>>wln<<', '>>spa<<', '>>fra<<', '>>ron<<', '>>por<<', '>>ita<<', '>>oci<<', '>>arg<<', '>>min<<']

model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
[tokenizer.decode(t, skip_special_tokens=True) for t in translated]
# ["c'est une phrase en anglais que nous voulons traduire en français", 'Isto deve ir para o português.', 'Y esto al español']
```
Here is the code to see all available pretrained models on the hub:

```python
from huggingface_hub import list_models

model_list = list_models()
org = "Helsinki-NLP"
model_ids = [x.modelId for x in model_list if x.modelId.startswith(org)]
suffix = [x.split("/")[1] for x in model_ids]
old_style_multi_models = [f"{org}/{s}" for s in suffix if s != s.lower()]
```
Old Style Multi-Lingual Models These are the old style multi-lingual models ported from the OPUS-MT-Train repo, along with the members of each language group:

```python
['Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU', 'Helsinki-NLP/opus-mt-ROMANCE-en',
 'Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA', 'Helsinki-NLP/opus-mt-de-ZH',
 'Helsinki-NLP/opus-mt-en-CELTIC', 'Helsinki-NLP/opus-mt-en-ROMANCE',
 'Helsinki-NLP/opus-mt-es-NORWAY', 'Helsinki-NLP/opus-mt-fi-NORWAY',
 'Helsinki-NLP/opus-mt-fi-ZH', 'Helsinki-NLP/opus-mt-fi_nb_no_nn_ru_sv_en-SAMI',
 'Helsinki-NLP/opus-mt-sv-NORWAY', 'Helsinki-NLP/opus-mt-sv-ZH']

GROUP_MEMBERS = {
    'ZH': ['cmn', 'cn', 'yue', 'ze_zh', 'zh_cn', 'zh_CN', 'zh_HK', 'zh_tw', 'zh_TW', 'zh_yue', 'zhs', 'zht', 'zh'],
    'ROMANCE': ['fr', 'fr_BE', 'fr_CA', 'fr_FR', 'wa', 'frp', 'oc', 'ca', 'rm', 'lld', 'fur', 'lij', 'lmo', 'es', 'es_AR', 'es_CL', 'es_CO', 'es_CR', 'es_DO', 'es_EC', 'es_ES', 'es_GT', 'es_HN', 'es_MX', 'es_NI', 'es_PA', 'es_PE', 'es_PR', 'es_SV', 'es_UY', 'es_VE', 'pt', 'pt_br', 'pt_BR', 'pt_PT', 'gl', 'lad', 'an', 'mwl', 'it', 'it_IT', 'co', 'nap', 'scn', 'vec', 'sc', 'ro', 'la'],
    'NORTH_EU': ['de', 'nl', 'fy', 'af', 'da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
    'SCANDINAVIA': ['da', 'fo', 'is', 'no', 'nb', 'nn', 'sv'],
    'SAMI': ['se', 'sma', 'smj', 'smn', 'sms'],
    'NORWAY': ['nb_NO', 'nb', 'nn_NO', 'nn', 'nog', 'no_nb', 'no'],
    'CELTIC': ['ga', 'cy', 'br', 'gd', 'kw', 'gv']
}
```

Example of translating English to many Romance languages, using old-style 2 character language codes:
```python
from transformers import MarianMTModel, MarianTokenizer

src_text = [
    ">>fr<< this is a sentence in english that we want to translate to french",
    ">>pt<< This should go to portuguese",
    ">>es<< And this to Spanish",
]

model_name = "Helsinki-NLP/opus-mt-en-ROMANCE"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
tgt_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated]
# ["c'est une phrase en anglais que nous voulons traduire en français", 'Isto deve ir para o português.', 'Y esto al español']
```
Resources Translation task guide Summarization task guide Causal language modeling task guide MarianConfig [[autodoc]] MarianConfig MarianTokenizer [[autodoc]] MarianTokenizer - build_inputs_with_special_tokens MarianModel [[autodoc]] MarianModel - forward MarianMTModel [[autodoc]] MarianMTModel - forward MarianForCausalLM [[autodoc]] MarianForCausalLM - forward TFMarianModel [[autodoc]] TFMarianModel - call TFMarianMTModel [[autodoc]] TFMarianMTModel - call
FlaxMarianModel [[autodoc]] FlaxMarianModel - call FlaxMarianMTModel [[autodoc]] FlaxMarianMTModel - call
VisionTextDualEncoder Overview The [VisionTextDualEncoderModel] can be used to initialize a vision-text dual encoder model with any pretrained vision autoencoding model as the vision encoder (e.g. ViT, BEiT, DeiT) and any pretrained text autoencoding model as the text encoder (e.g. RoBERTa, BERT). Two projection layers are added on top of both the vision and text encoder to project the output embeddings to a shared latent space. The projection layers are randomly initialized, so the model should be fine-tuned on a downstream task. This model can be used to align the vision-text embeddings using CLIP-like contrastive image-text training and can then be used for zero-shot vision tasks such as image classification or retrieval. In LiT: Zero-Shot Transfer with Locked-image Text Tuning it is shown how leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on new zero-shot vision tasks such as image classification or retrieval. VisionTextDualEncoderConfig [[autodoc]] VisionTextDualEncoderConfig VisionTextDualEncoderProcessor [[autodoc]] VisionTextDualEncoderProcessor
VisionTextDualEncoderModel [[autodoc]] VisionTextDualEncoderModel - forward FlaxVisionTextDualEncoderModel [[autodoc]] FlaxVisionTextDualEncoderModel - call TFVisionTextDualEncoderModel [[autodoc]] TFVisionTextDualEncoderModel - call
NLLB-MOE Overview The NLLB model was presented in No Language Left Behind: Scaling Human-Centered Machine Translation by Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. The abstract of the paper is the following: Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200 language barrier while ensuring safe, high quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system. This model was contributed by Arthur Zucker. The original code can be found here. Usage tips
M2M100ForConditionalGeneration is the base model for both NLLB and NLLB-MoE. The NLLB-MoE model is very similar to the NLLB model, but its feed forward layer is based on the implementation of SwitchTransformers. The tokenizer is the same as for the NLLB models.
Implementation differences with SwitchTransformers The biggest difference is the way the tokens are routed. NLLB-MoE uses a top-2 gate, which means that for each input, only the two experts with the highest predicted probabilities from the gating network are selected, and the remaining experts are ignored. In SwitchTransformers, only the top-1 probabilities are computed, which means that tokens have a lower probability of being forwarded. Moreover, if a token is not routed to any expert, SwitchTransformers still adds its unmodified hidden states (kind of like a residual connection), while they are masked in NLLB's top-2 routing mechanism. Generating with NLLB-MoE The available checkpoints require around 350GB of storage. Make sure to use accelerate if you do not have enough RAM on your machine. While generating the target text, set forced_bos_token_id to the target language id. The following example shows how to translate English to French using the facebook/nllb-moe-54b model. Note that we're using the BCP-47 code for French, fra_Latn. See here for the list of all BCP-47 codes in the Flores 200 dataset.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b")

article = "Previously, Ring's CEO, Jamie Siminoff, remarked the company started when his doorbell wasn't audible from his shop in his garage."
inputs = tokenizer(article, return_tensors="pt")

translated_tokens = model.generate(
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"], max_length=50
)
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
# "Auparavant, le PDG de Ring, Jamie Siminoff, a fait remarquer que la société avait commencé lorsque sa sonnette n'était pas audible depuis son magasin dans son garage."
```
Generating from any other language than English English (eng_Latn) is set as the default language from which to translate. In order to specify that you'd like to translate from a different language, you should specify the BCP-47 code in the src_lang keyword argument of the tokenizer initialization. See the example below for a translation from Romanian to German:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-moe-54b", src_lang="ron_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-moe-54b")

article = "Şeful ONU spune că nu există o soluţie militară în Siria"
inputs = tokenizer(article, return_tensors="pt")

translated_tokens = model.generate(
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["deu_Latn"], max_length=30
)
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
```
Resources Translation task guide Summarization task guide NllbMoeConfig [[autodoc]] NllbMoeConfig NllbMoeTop2Router [[autodoc]] NllbMoeTop2Router - route_tokens - forward NllbMoeSparseMLP [[autodoc]] NllbMoeSparseMLP - forward NllbMoeModel [[autodoc]] NllbMoeModel - forward NllbMoeForConditionalGeneration [[autodoc]] NllbMoeForConditionalGeneration - forward
GPTSAN-japanese Overview The GPTSAN-japanese model was released in the repository by Toshiyuki Sakamoto (tanreinama). GPTSAN is a Japanese language model using Switch Transformer. It has the same structure as the model introduced as Prefix LM in the T5 paper, and supports both Text Generation and Masked Language Modeling tasks. These basic tasks can similarly be fine-tuned for translation or summarization. Usage example The generate() method can be used to generate text using the GPTSAN-japanese model.
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").cuda()

x_tok = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")
torch.manual_seed(0)
gen_tok = model.generate(x_tok.input_ids.cuda(), token_type_ids=x_tok.token_type_ids.cuda(), max_new_tokens=20)
tokenizer.decode(gen_tok[0])
# '織田信長は、2004年に『戦国BASARA』のために、豊臣秀吉'
```
GPTSAN Features GPTSAN has some unique features. It has a model structure of Prefix-LM. It works as a shifted Masked Language Model for Prefix Input tokens. Un-prefixed inputs behave like normal generative models. The Spout vector is a GPTSAN-specific input. Spout is pre-trained with random inputs, but you can specify a class of text or an arbitrary vector during fine-tuning. This allows you to indicate the tendency of the generated text. GPTSAN has a sparse Feed Forward based on Switch-Transformer. You can also add other layers and train them partially. See the original GPTSAN repository for details. Prefix-LM Model GPTSAN has the structure of the model named Prefix-LM in the T5 paper (the original GPTSAN repository calls it hybrid). In GPTSAN, the Prefix part of the Prefix-LM, that is, the input positions that can be referenced by both preceding and following tokens, can be specified with any length. Arbitrary lengths can also be specified differently for each batch. This length applies to the text entered in prefix_text for the tokenizer. The tokenizer returns the mask of the Prefix part of the Prefix-LM as token_type_ids. The model treats the part where token_type_ids is 1 as the Prefix part, that is, the input can refer to both tokens before and after. Usage tips Specifying the Prefix part is done with a mask passed to self-attention. When token_type_ids=None or all zero, it is equivalent to a regular causal mask. For example:
```
x_token = tokenizer("アイウエ")
input_ids:      | SOT | SEG | ア | イ | ウ | エ |
token_type_ids: |  1  |  0  | 0  | 0  | 0  | 0  |
prefix_lm_mask:
SOT | 1 0 0 0 0 0 |
SEG | 1 1 0 0 0 0 |
ア  | 1 1 1 0 0 0 |
イ  | 1 1 1 1 0 0 |
ウ  | 1 1 1 1 1 0 |
エ  | 1 1 1 1 1 1 |

x_token = tokenizer("", prefix_text="アイウエ")
input_ids:      | SOT | ア | イ | ウ | エ | SEG |
token_type_ids: |  1  | 1  | 1  | 1  | 1  |  0  |
prefix_lm_mask:
SOT | 1 1 1 1 1 0 |
ア  | 1 1 1 1 1 0 |
イ  | 1 1 1 1 1 0 |
ウ  | 1 1 1 1 1 0 |
エ  | 1 1 1 1 1 0 |
SEG | 1 1 1 1 1 1 |

x_token = tokenizer("ウエ", prefix_text="アイ")
input_ids:      | SOT | ア | イ | SEG | ウ | エ |
token_type_ids: |  1  | 1  | 1  |  0  | 0  | 0  |
prefix_lm_mask:
SOT | 1 1 1 0 0 0 |
ア  | 1 1 1 0 0 0 |
イ  | 1 1 1 0 0 0 |
SEG | 1 1 1 1 0 0 |
ウ  | 1 1 1 1 1 0 |
エ  | 1 1 1 1 1 1 |
```
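A small sketch of inspecting these masks with the tokenizer, mirroring the last case above; it only relies on the prefix_text argument and the returned token_type_ids shown in the earlier generation example.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")

# tokens coming from prefix_text (plus the start token) get token_type_ids == 1
# and are attended to bidirectionally; the remaining tokens stay causal.
x_token = tokenizer("ウエ", prefix_text="アイ", return_tensors="pt")
print(x_token.input_ids)
print(x_token.token_type_ids)
```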
Spout Vector A Spout Vector is a special vector for controlling text generation. This vector is treated as the first embedding in self-attention to bring extraneous attention to the generated tokens. In the pre-trained model published from Tanrei/GPTSAN-japanese, the Spout Vector is a 128-dimensional vector that passes through 8 fully connected layers in the model and is projected into the space acting as external attention. The Spout Vector projected by the fully connected layer is split to be passed to all self-attentions. GPTSanJapaneseConfig [[autodoc]] GPTSanJapaneseConfig GPTSanJapaneseTokenizer [[autodoc]] GPTSanJapaneseTokenizer GPTSanJapaneseModel [[autodoc]] GPTSanJapaneseModel GPTSanJapaneseForConditionalGeneration [[autodoc]] GPTSanJapaneseForConditionalGeneration - forward
Neighborhood Attention Transformer Overview NAT was proposed in Neighborhood Attention Transformer by Ali Hassani, Steven Walton, Jiachen Li, Shen Li, and Humphrey Shi. It is a hierarchical vision transformer based on Neighborhood Attention, a sliding-window self attention pattern. The abstract from the paper is the following: *We present Neighborhood Attention (NA), the first efficient and scalable sliding-window attention mechanism for vision. NA is a pixel-wise operation, localizing self attention (SA) to the nearest neighboring pixels, and therefore enjoys a linear time and space complexity compared to the quadratic complexity of SA. The sliding-window pattern allows NA's receptive field to grow without needing extra pixel shifts, and preserves translational equivariance, unlike Swin Transformer's Window Self Attention (WSA). We develop NATTEN (Neighborhood Attention Extension), a Python package with efficient C++ and CUDA kernels, which allows NA to run up to 40% faster than Swin's WSA while using up to 25% less memory. We further present Neighborhood Attention Transformer (NAT), a new hierarchical transformer design based on NA that boosts image classification and downstream vision performance. Experimental results on NAT are competitive; NAT-Tiny reaches 83.2% top-1 accuracy on ImageNet, 51.4% mAP on MS-COCO and 48.4% mIoU on ADE20K, which is 1.9% ImageNet accuracy, 1.0% COCO mAP, and 2.6% ADE20K mIoU improvement over a Swin model with similar size. *
Neighborhood Attention compared to other attention patterns. Taken from the original paper. This model was contributed by Ali Hassani. The original code can be found here. Usage tips
One can use the [AutoImageProcessor] API to prepare images for the model. NAT can be used as a backbone. When output_hidden_states = True, it will output both hidden_states and reshaped_hidden_states. The reshaped_hidden_states have a shape of (batch, num_channels, height, width) rather than (batch_size, height, width, num_channels).
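As a quick illustration of the tips above, here is a minimal classification sketch; it assumes the natten package is installed and that shi-labs/nat-mini-in1k-224 is one of the released checkpoints.

```python
# Minimal NAT classification sketch (requires natten; checkpoint name assumed).
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, NatForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("shi-labs/nat-mini-in1k-224")
model = NatForImageClassification.from_pretrained("shi-labs/nat-mini-in1k-224")

inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```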
Notes: - NAT depends on NATTEN's implementation of Neighborhood Attention. You can install it with pre-built wheels for Linux by referring to shi-labs.com/natten, or build on your system by running pip install natten. Note that the latter will likely take time to compile. NATTEN does not support Windows devices yet. - Patch size of 4 is only supported at the moment. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with NAT.
[NatForImageClassification] is supported by this example script and notebook. See also: Image classification task guide If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. NatConfig [[autodoc]] NatConfig NatModel [[autodoc]] NatModel - forward NatForImageClassification [[autodoc]] NatForImageClassification - forward
ALBERT Overview The ALBERT model was proposed in ALBERT: A Lite BERT for Self-supervised Learning of Language Representations by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT: Splitting the embedding matrix into two smaller matrices. Using repeating layers split among groups.
The abstract from the paper is the following: Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations, longer training times, and unexpected model degradation. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large. This model was contributed by lysandre. This model jax version was contributed by kamalkraj. The original code can be found here. Usage tips
ALBERT is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than the left. ALBERT uses repeating layers which results in a small memory footprint; however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers. Using an embedding size E different from the hidden size H is justified because the embeddings are context independent (one embedding vector represents one token), whereas hidden states are context dependent (one hidden state represents a sequence of tokens), so it's more logical to have H >> E. Also, the embedding matrix is large since it's V x E (V being the vocab size). If E < H, it has fewer parameters. Layers are split in groups that share parameters (to save memory). Next sentence prediction is replaced by a sentence ordering prediction: in the inputs, we have two sentences A and B (that are consecutive) and we either feed A followed by B or B followed by A. The model must predict if they have been swapped or not.
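A tiny sketch of the first two tips, assuming the albert/albert-base-v2 checkpoint: the embedding size E in the config is much smaller than the hidden size H, and cross-layer parameter sharing keeps the total parameter count small.

```python
from transformers import AlbertModel

model = AlbertModel.from_pretrained("albert/albert-base-v2")
print(model.config.embedding_size, model.config.hidden_size)  # E (128) is much smaller than H (768)
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")  # roughly 12M, versus ~110M for BERT-base
```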
Resources The resources provided in the following sections consist of a list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ALBERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
[AlbertForSequenceClassification] is supported by this example script. [TFAlbertForSequenceClassification] is supported by this example script. [FlaxAlbertForSequenceClassification] is supported by this example script and notebook. Check the Text classification task guide on how to use the model. [AlbertForTokenClassification] is supported by this example script. [TFAlbertForTokenClassification] is supported by this example script and notebook.
[FlaxAlbertForTokenClassification] is supported by this example script. Token classification chapter of the 🤗 Hugging Face Course. Check the Token classification task guide on how to use the model.
[AlbertForMaskedLM] is supported by this example script and notebook. [TFAlbertForMaskedLM] is supported by this example script and notebook. [FlaxAlbertForMaskedLM] is supported by this example script and notebook. Masked language modeling chapter of the 🤗 Hugging Face Course. Check the Masked language modeling task guide on how to use the model.
[AlbertForQuestionAnswering] is supported by this example script and notebook. [TFAlbertForQuestionAnswering] is supported by this example script and notebook. [FlaxAlbertForQuestionAnswering] is supported by this example script. Question answering chapter of the 🤗 Hugging Face Course. Check the Question answering task guide on how to use the model. Multiple choice [AlbertForMultipleChoice] is supported by this example script and notebook.
[TFAlbertForMultipleChoice] is supported by this example script and notebook. Check the Multiple choice task guide on how to use the model.
AlbertConfig [[autodoc]] AlbertConfig AlbertTokenizer [[autodoc]] AlbertTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary AlbertTokenizerFast [[autodoc]] AlbertTokenizerFast Albert specific outputs [[autodoc]] models.albert.modeling_albert.AlbertForPreTrainingOutput [[autodoc]] models.albert.modeling_tf_albert.TFAlbertForPreTrainingOutput
AlbertModel [[autodoc]] AlbertModel - forward AlbertForPreTraining [[autodoc]] AlbertForPreTraining - forward AlbertForMaskedLM [[autodoc]] AlbertForMaskedLM - forward AlbertForSequenceClassification [[autodoc]] AlbertForSequenceClassification - forward AlbertForMultipleChoice [[autodoc]] AlbertForMultipleChoice AlbertForTokenClassification [[autodoc]] AlbertForTokenClassification - forward AlbertForQuestionAnswering [[autodoc]] AlbertForQuestionAnswering - forward
TFAlbertModel [[autodoc]] TFAlbertModel - call TFAlbertForPreTraining [[autodoc]] TFAlbertForPreTraining - call TFAlbertForMaskedLM [[autodoc]] TFAlbertForMaskedLM - call TFAlbertForSequenceClassification [[autodoc]] TFAlbertForSequenceClassification - call TFAlbertForMultipleChoice [[autodoc]] TFAlbertForMultipleChoice - call TFAlbertForTokenClassification [[autodoc]] TFAlbertForTokenClassification - call TFAlbertForQuestionAnswering [[autodoc]] TFAlbertForQuestionAnswering - call
FlaxAlbertModel [[autodoc]] FlaxAlbertModel - call FlaxAlbertForPreTraining [[autodoc]] FlaxAlbertForPreTraining - call FlaxAlbertForMaskedLM [[autodoc]] FlaxAlbertForMaskedLM - call FlaxAlbertForSequenceClassification [[autodoc]] FlaxAlbertForSequenceClassification - call FlaxAlbertForMultipleChoice [[autodoc]] FlaxAlbertForMultipleChoice - call FlaxAlbertForTokenClassification [[autodoc]] FlaxAlbertForTokenClassification - call FlaxAlbertForQuestionAnswering [[autodoc]] FlaxAlbertForQuestionAnswering - call
ViTDet Overview The ViTDet model was proposed in Exploring Plain Vision Transformer Backbones for Object Detection by Yanghao Li, Hanzi Mao, Ross Girshick, Kaiming He. VitDet leverages the plain Vision Transformer for the task of object detection. The abstract from the paper is the following: We explore the plain, non-hierarchical Vision Transformer (ViT) as a backbone network for object detection. This design enables the original ViT architecture to be fine-tuned for object detection without needing to redesign a hierarchical backbone for pre-training. With minimal adaptations for fine-tuning, our plain-backbone detector can achieve competitive results. Surprisingly, we observe: (i) it is sufficient to build a simple feature pyramid from a single-scale feature map (without the common FPN design) and (ii) it is sufficient to use window attention (without shifting) aided with very few cross-window propagation blocks. With plain ViT backbones pre-trained as Masked Autoencoders (MAE), our detector, named ViTDet, can compete with the previous leading methods that were all based on hierarchical backbones, reaching up to 61.3 AP_box on the COCO dataset using only ImageNet-1K pre-training. We hope our study will draw attention to research on plain-backbone detectors. This model was contributed by nielsr. The original code can be found here. Tips:
At the moment, only the backbone is available. VitDetConfig [[autodoc]] VitDetConfig VitDetModel [[autodoc]] VitDetModel - forward
Speech2Text Overview The Speech2Text model was proposed in fairseq S2T: Fast Speech-to-Text Modeling with fairseq by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Dmytro Okhonko, Juan Pino. It's a transformer-based seq2seq (encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech Translation (ST). It uses a convolutional downsampler to reduce the length of speech inputs by 3/4th before they are fed into the encoder. The model is trained with standard autoregressive cross-entropy loss and generates the transcripts/translations autoregressively. Speech2Text has been fine-tuned on several datasets for ASR and ST: LibriSpeech, CoVoST 2, MuST-C. This model was contributed by valhalla. The original code can be found here. Inference Speech2Text is a speech model that accepts a float tensor of log-mel filter-bank features extracted from the speech signal. It's a transformer-based seq2seq model, so the transcripts/translations are generated autoregressively. The generate() method can be used for inference. The [Speech2TextFeatureExtractor] class is responsible for extracting the log-mel filter-bank features. The [Speech2TextProcessor] wraps [Speech2TextFeatureExtractor] and [Speech2TextTokenizer] into a single instance to both extract the input features and decode the predicted token ids. The feature extractor depends on torchaudio and the tokenizer depends on sentencepiece so be sure to install those packages before running the examples. You could either install those as extra speech dependencies with pip install transformers"[speech, sentencepiece]" or install the packages separately with pip install torchaudio sentencepiece. Also torchaudio requires the development version of the libsndfile package which can be installed via a system package manager. On Ubuntu it can be installed as follows: apt install libsndfile1-dev
ASR and Speech Translation
```python
import torch
from datasets import load_dataset
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")

ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")

generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
transcription
# ['mister quilter is the apostle of the middle classes and we are glad to welcome his gospel']
```
Multilingual speech translation For multilingual speech translation models, eos_token_id is used as the decoder_start_token_id and the target language id is forced as the first generated token. To force the target language id as the first generated token, pass the forced_bos_token_id parameter to the generate() method. The following example shows how to translate English speech to French text using the facebook/s2t-medium-mustc-multilingual-st checkpoint.
```python
import torch
from datasets import load_dataset
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-mustc-multilingual-st")

ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")

generated_ids = model.generate(
    inputs["input_features"],
    attention_mask=inputs["attention_mask"],
    forced_bos_token_id=processor.tokenizer.lang_code_to_id["fr"],
)
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
translation
# ["(Vidéo) Si M. Kilder est l'apossible des classes moyennes, et nous sommes heureux d'être accueillis dans son évangile."]
```
See the model hub to look for Speech2Text checkpoints. Speech2TextConfig [[autodoc]] Speech2TextConfig Speech2TextTokenizer [[autodoc]] Speech2TextTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary Speech2TextFeatureExtractor [[autodoc]] Speech2TextFeatureExtractor - call Speech2TextProcessor [[autodoc]] Speech2TextProcessor - call - from_pretrained - save_pretrained - batch_decode - decode
Speech2TextModel [[autodoc]] Speech2TextModel - forward Speech2TextForConditionalGeneration [[autodoc]] Speech2TextForConditionalGeneration - forward TFSpeech2TextModel [[autodoc]] TFSpeech2TextModel - call TFSpeech2TextForConditionalGeneration [[autodoc]] TFSpeech2TextForConditionalGeneration - call
Autoformer Overview The Autoformer model was proposed in Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting by Haixu Wu, Jiehui Xu, Jianmin Wang, Mingsheng Long. This model augments the Transformer as a deep decomposition architecture, which can progressively decompose the trend and seasonal components during the forecasting process. The abstract from the paper is the following: Extending the forecasting time is a critical demand for real applications, such as extreme weather early warning and long-term energy consumption planning. This paper studies the long-term forecasting problem of time series. Prior Transformer-based models adopt various self-attention mechanisms to discover the long-range dependencies. However, intricate temporal patterns of the long-term future prohibit the model from finding reliable dependencies. Also, Transformers have to adopt the sparse versions of point-wise self-attentions for long series efficiency, resulting in the information utilization bottleneck. Going beyond Transformers, we design Autoformer as a novel decomposition architecture with an Auto-Correlation mechanism. We break with the pre-processing convention of series decomposition and renovate it as a basic inner block of deep models. This design empowers Autoformer with progressive decomposition capacities for complex time series. Further, inspired by the stochastic process theory, we design the Auto-Correlation mechanism based on the series periodicity, which conducts the dependencies discovery and representation aggregation at the sub-series level. Auto-Correlation outperforms self-attention in both efficiency and accuracy. In long-term forecasting, Autoformer yields state-of-the-art accuracy, with a 38% relative improvement on six benchmarks, covering five practical applications: energy, traffic, economics, weather and disease. This model was contributed by elisim and kashif. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Check out the Autoformer blog-post in HuggingFace blog: Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer) AutoformerConfig [[autodoc]] AutoformerConfig AutoformerModel [[autodoc]] AutoformerModel - forward AutoformerForPrediction [[autodoc]] AutoformerForPrediction - forward
CLIPSeg Overview The CLIPSeg model was proposed in Image Segmentation Using Text and Image Prompts by Timo Lüddecke and Alexander Ecker. CLIPSeg adds a minimal decoder on top of a frozen CLIP model for zero- and one-shot image segmentation. The abstract from the paper is the following: Image segmentation is usually addressed by training a model for a fixed set of object classes. Incorporating additional classes or more complex queries later is expensive as it requires re-training the model on a dataset that encompasses these expressions. Here we propose a system that can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text or an image. This approach enables us to create a unified model (trained once) for three common segmentation tasks, which come with distinct challenges: referring expression segmentation, zero-shot segmentation and one-shot segmentation. We build upon the CLIP model as a backbone which we extend with a transformer-based decoder that enables dense prediction. After training on an extended version of the PhraseCut dataset, our system generates a binary segmentation map for an image based on a free-text prompt or on an additional image expressing the query. We analyze different variants of the latter image-based prompts in detail. This novel hybrid input allows for dynamic adaptation not only to the three segmentation tasks mentioned above, but to any binary segmentation task where a text or image query can be formulated. Finally, we find our system to adapt well to generalized queries involving affordances or properties CLIPSeg overview. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Usage tips
[CLIPSegForImageSegmentation] adds a decoder on top of [CLIPSegModel]. The latter is identical to [CLIPModel]. [CLIPSegForImageSegmentation] can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text (provided to the model as input_ids) or an image (provided to the model as conditional_pixel_values). One can also provide custom conditional embeddings (provided to the model as conditional_embeddings).
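To make the prompt-based usage concrete, here is a minimal zero-shot segmentation sketch with text prompts, assuming the CIDAS/clipseg-rd64-refined checkpoint.

```python
# Minimal zero-shot segmentation sketch: one mask logit map per text prompt.
import torch
import requests
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompts = ["a cat", "a remote control"]

inputs = processor(text=prompts, images=[image] * len(prompts), padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.shape)  # low-resolution segmentation logits, one map per prompt
```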
Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIPSeg. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. A notebook that illustrates zero-shot image segmentation with CLIPSeg.
CLIPSegConfig [[autodoc]] CLIPSegConfig - from_text_vision_configs CLIPSegTextConfig [[autodoc]] CLIPSegTextConfig CLIPSegVisionConfig [[autodoc]] CLIPSegVisionConfig CLIPSegProcessor [[autodoc]] CLIPSegProcessor CLIPSegModel [[autodoc]] CLIPSegModel - forward - get_text_features - get_image_features CLIPSegTextModel [[autodoc]] CLIPSegTextModel - forward CLIPSegVisionModel [[autodoc]] CLIPSegVisionModel - forward CLIPSegForImageSegmentation [[autodoc]] CLIPSegForImageSegmentation - forward
Conditional DETR Overview The Conditional DETR model was proposed in Conditional DETR for Fast Training Convergence by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. Conditional DETR presents a conditional cross-attention mechanism for fast DETR training. Conditional DETR converges 6.7× to 10× faster than DETR. The abstract from the paper is the following: The recently-developed DETR approach applies the transformer encoder and decoder architecture to object detection and achieves promising performance. In this paper, we handle the critical issue, slow training convergence, and present a conditional cross-attention mechanism for fast DETR training. Our approach is motivated by that the cross-attention in DETR relies highly on the content embeddings for localizing the four extremities and predicting the box, which increases the need for high-quality content embeddings and thus the training difficulty. Our approach, named conditional DETR, learns a conditional spatial query from the decoder embedding for decoder multi-head cross-attention. The benefit is that through the conditional spatial query, each cross-attention head is able to attend to a band containing a distinct region, e.g., one object extremity or a region inside the object box. This narrows down the spatial range for localizing the distinct regions for object classification and box regression, thus relaxing the dependence on the content embeddings and easing the training. Empirical results show that conditional DETR converges 6.7× faster for the backbones R50 and R101 and 10× faster for stronger backbones DC5-R50 and DC5-R101. Code is available at https://github.com/Atten4Vis/ConditionalDETR.
Conditional DETR shows much faster convergence compared to the original DETR. Taken from the original paper. This model was contributed by DepuMeng. The original code can be found here. Resources Object detection task guide
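For quick reference, a short object-detection sketch with Conditional DETR; the checkpoint name microsoft/conditional-detr-resnet-50 is an assumption based on the released models.

```python
# Sketch: detect objects and post-process the raw outputs into boxes and labels.
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, ConditionalDetrForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
model = ConditionalDetrForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# keep detections above a confidence threshold, rescaled to the original image size
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```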
ConditionalDetrConfig [[autodoc]] ConditionalDetrConfig ConditionalDetrImageProcessor [[autodoc]] ConditionalDetrImageProcessor - preprocess - post_process_object_detection - post_process_instance_segmentation - post_process_semantic_segmentation - post_process_panoptic_segmentation ConditionalDetrFeatureExtractor [[autodoc]] ConditionalDetrFeatureExtractor - call - post_process_object_detection - post_process_instance_segmentation - post_process_semantic_segmentation - post_process_panoptic_segmentation ConditionalDetrModel [[autodoc]] ConditionalDetrModel - forward ConditionalDetrForObjectDetection [[autodoc]] ConditionalDetrForObjectDetection - forward ConditionalDetrForSegmentation [[autodoc]] ConditionalDetrForSegmentation - forward
VisualBERT Overview The VisualBERT model was proposed in VisualBERT: A Simple and Performant Baseline for Vision and Language by Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang. VisualBERT is a neural network trained on a variety of (image, text) pairs. The abstract from the paper is the following: We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision-and-language tasks. VisualBERT consists of a stack of Transformer layers that implicitly align elements of an input text and regions in an associated input image with self-attention. We further propose two visually-grounded language model objectives for pre-training VisualBERT on image caption data. Experiments on four vision-and-language tasks including VQA, VCR, NLVR2, and Flickr30K show that VisualBERT outperforms or rivals with state-of-the-art models while being significantly simpler. Further analysis demonstrates that VisualBERT can ground elements of language to image regions without any explicit supervision and is even sensitive to syntactic relationships, tracking, for example, associations between verbs and image regions corresponding to their arguments. This model was contributed by gchhablani. The original code can be found here. Usage tips
Most of the checkpoints provided work with the [VisualBertForPreTraining] configuration. Other checkpoints provided are the fine-tuned checkpoints for down-stream tasks - VQA ('visualbert-vqa'), VCR ('visualbert-vcr'), NLVR2 ('visualbert-nlvr2'). Hence, if you are not working on these downstream tasks, it is recommended that you use the pretrained checkpoints.
For the VCR task, the authors use a fine-tuned detector for generating visual embeddings, for all the checkpoints. We do not provide the detector and its weights as a part of the package, but it will be available in the research projects, and the states can be loaded directly into the detector provided.
VisualBERT is a multi-modal vision and language model. It can be used for visual question answering, multiple choice, visual reasoning and region-to-phrase correspondence tasks. VisualBERT uses a BERT-like transformer to prepare embeddings for image-text pairs. Both the text and visual features are then projected to a latent space with identical dimension. To feed images to the model, each image is passed through a pre-trained object detector and the regions and the bounding boxes are extracted. The authors use the features generated after passing these regions through a pre-trained CNN like ResNet as visual embeddings. They also add absolute position embeddings, and feed the resulting sequence of vectors to a standard BERT model. The text input is concatenated in the front of the visual embeddings in the embedding layer, and is expected to be bound by [CLS] and a [SEP] tokens, as in BERT. The segment IDs must also be set appropriately for the textual and visual parts. The [BertTokenizer] is used to encode the text. A custom detector/image processor must be used to get the visual embeddings. The following example notebooks show how to use VisualBERT with Detectron-like models:
VisualBERT VQA demo notebook : This notebook contains an example on VisualBERT VQA. Generate Embeddings for VisualBERT (Colab Notebook) : This notebook contains an example on how to generate visual embeddings. The following example shows how to get the last hidden state using [VisualBertModel]: thon
```python
import torch
from transformers import BertTokenizer, VisualBertModel

model = VisualBertModel.from_pretrained("uclanlp/visualbert-vqa-coco-pre")
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")

inputs = tokenizer("What is the man eating?", return_tensors="pt")
# this is a custom function that returns the visual embeddings given the image path
visual_embeds = get_visual_embeddings(image_path)

visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)
inputs.update(
    {
        "visual_embeds": visual_embeds,
        "visual_token_type_ids": visual_token_type_ids,
        "visual_attention_mask": visual_attention_mask,
    }
)
outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
```
VisualBertConfig [[autodoc]] VisualBertConfig VisualBertModel [[autodoc]] VisualBertModel - forward VisualBertForPreTraining [[autodoc]] VisualBertForPreTraining - forward VisualBertForQuestionAnswering [[autodoc]] VisualBertForQuestionAnswering - forward VisualBertForMultipleChoice [[autodoc]] VisualBertForMultipleChoice - forward VisualBertForVisualReasoning [[autodoc]] VisualBertForVisualReasoning - forward VisualBertForRegionToPhraseAlignment [[autodoc]] VisualBertForRegionToPhraseAlignment - forward
BigBirdPegasus Overview The BigBird model was proposed in Big Bird: Transformers for Longer Sequences by Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon, Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird, is a sparse-attention based transformer which extends Transformer based models, such as BERT to much longer sequences. In addition to sparse attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it has been shown that applying sparse, global, and random attention approximates full attention, while being computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context, BigBird has shown improved performance on various long document NLP tasks, such as question answering and summarization, compared to BERT or RoBERTa. The abstract from the paper is the following: Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data. The original code can be found here. Usage tips
For an in-detail explanation on how BigBird's attention works, see this blog post. BigBird comes with 2 implementations: original_full & block_sparse. For sequence lengths < 1024, using original_full is advised as there is no benefit in using block_sparse attention. The code currently uses a window size of 3 blocks and 2 global blocks. The sequence length must be divisible by the block size. The current implementation supports only ITC. The current implementation doesn't support num_random_blocks = 0. BigBirdPegasus uses the PegasusTokenizer. BigBird is a model with absolute position embeddings, so it's usually advised to pad the inputs on the right rather than the left.
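A short sketch of switching between the two attention implementations mentioned above; attention_type and block_size are configuration attributes that can be overridden at load time, and the checkpoint name is assumed from the released BigBirdPegasus models.

```python
from transformers import BigBirdPegasusForConditionalGeneration

# default: block_sparse attention with a block size of 64, suited to long inputs
model = BigBirdPegasusForConditionalGeneration.from_pretrained(
    "google/bigbird-pegasus-large-arxiv", attention_type="block_sparse", block_size=64
)

# for short inputs (< 1024 tokens), full attention is exact and just as fast
model = BigBirdPegasusForConditionalGeneration.from_pretrained(
    "google/bigbird-pegasus-large-arxiv", attention_type="original_full"
)
```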
Resources Text classification task guide Question answering task guide Causal language modeling task guide Translation task guide Summarization task guide
BigBirdPegasusConfig [[autodoc]] BigBirdPegasusConfig - all BigBirdPegasusModel [[autodoc]] BigBirdPegasusModel - forward BigBirdPegasusForConditionalGeneration [[autodoc]] BigBirdPegasusForConditionalGeneration - forward BigBirdPegasusForSequenceClassification [[autodoc]] BigBirdPegasusForSequenceClassification - forward BigBirdPegasusForQuestionAnswering [[autodoc]] BigBirdPegasusForQuestionAnswering - forward BigBirdPegasusForCausalLM [[autodoc]] BigBirdPegasusForCausalLM - forward
EfficientNet Overview The EfficientNet model was proposed in EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks by Mingxing Tan and Quoc V. Le. EfficientNets are a family of image classification models, which achieve state-of-the-art accuracy, yet being an order-of-magnitude smaller and faster than previous models. The abstract from the paper is the following: Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet. To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. This model was contributed by adirik. The original code can be found here. EfficientNetConfig [[autodoc]] EfficientNetConfig EfficientNetImageProcessor [[autodoc]] EfficientNetImageProcessor - preprocess EfficientNetModel [[autodoc]] EfficientNetModel - forward EfficientNetForImageClassification [[autodoc]] EfficientNetForImageClassification - forward
FLAN-UL2 Overview Flan-UL2 is an encoder decoder model based on the T5 architecture. It uses the same configuration as the UL2 model released earlier last year. It was fine tuned using the "Flan" prompt tuning and dataset collection. Similar to Flan-T5, one can directly use FLAN-UL2 weights without finetuning the model: According to the original blog here are the notable improvements:
The original UL2 model was only trained with a receptive field of 512, which made it non-ideal for N-shot prompting where N is large. The Flan-UL2 checkpoint uses a receptive field of 2048, which makes it more usable for few-shot in-context learning. The original UL2 model also had mode switch tokens that were rather mandatory to get good performance. However, they were a little cumbersome as this often requires some changes during inference or finetuning. In this update/change, we continue training UL2 20B for an additional 100k steps (with small batch) to forget “mode tokens” before applying Flan instruction tuning. This Flan-UL2 checkpoint does not require mode tokens anymore. Google has released the following variants:
The original checkpoints can be found here. Running on low resource devices The model is pretty heavy (~40GB in half precision), so if you just want to run the model, make sure you load it in 8bit and use device_map="auto" to avoid any OOM issues:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-ul2", load_in_8bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")

inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# ['In a large skillet, brown the ground beef and onion over medium heat. Add the garlic']
```
Refer to T5's documentation page for API reference, tips, code examples and notebooks.
Nougat Overview The Nougat model was proposed in Nougat: Neural Optical Understanding for Academic Documents by Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic. Nougat uses the same architecture as Donut, meaning an image Transformer encoder and an autoregressive text Transformer decoder to translate scientific PDFs to markdown, enabling easier access to them. The abstract from the paper is the following: Scientific knowledge is predominantly stored in books and scientific journals, often in the form of PDFs. However, the PDF format leads to a loss of semantic information, particularly for mathematical expressions. We propose Nougat (Neural Optical Understanding for Academic Documents), a Visual Transformer model that performs an Optical Character Recognition (OCR) task for processing scientific documents into a markup language, and demonstrate the effectiveness of our model on a new dataset of scientific documents. The proposed approach offers a promising solution to enhance the accessibility of scientific knowledge in the digital age, by bridging the gap between human-readable documents and machine-readable text. We release the models and code to accelerate future work on scientific text recognition.
Nougat high-level overview. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Usage tips The quickest way to get started with Nougat is by checking the tutorial notebooks, which show how to use the model at inference time as well as fine-tuning on custom data. Nougat is always used within the VisionEncoderDecoder framework. The model is identical to Donut in terms of architecture.
Inference Nougat's [VisionEncoderDecoder] model accepts images as input and makes use of [~generation.GenerationMixin.generate] to autoregressively generate text given the input image. The [NougatImageProcessor] class is responsible for preprocessing the input image and [NougatTokenizerFast] decodes the generated target tokens to the target string. The [NougatProcessor] wraps [NougatImageProcessor] and [NougatTokenizerFast] classes into a single instance to both extract the input features and decode the predicted token ids.
Step-by-step PDF transcription
```python
from huggingface_hub import hf_hub_download
from PIL import Image
from transformers import NougatProcessor, VisionEncoderDecoderModel
import torch

processor = NougatProcessor.from_pretrained("facebook/nougat-base")
model = VisionEncoderDecoderModel.from_pretrained("facebook/nougat-base")

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)  # doctest: +IGNORE_RESULT

# prepare PDF image for the model
filepath = hf_hub_download(repo_id="hf-internal-testing/fixtures_docvqa", filename="nougat_paper.png", repo_type="dataset")
image = Image.open(filepath)
pixel_values = processor(image, return_tensors="pt").pixel_values

# generate transcription (here we only generate 30 tokens)
outputs = model.generate(
    pixel_values.to(device),
    min_length=1,
    max_new_tokens=30,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
)

sequence = processor.batch_decode(outputs, skip_special_tokens=True)[0]
sequence = processor.post_process_generation(sequence, fix_markdown=False)
# note: we're using repr here just for the sake of printing the \n characters; feel free to simply print the sequence
print(repr(sequence))
'\n\n# Nougat: Neural Optical Understanding for Academic Documents\n\n Lukas Blecher\n\nCorrespondence to: lblecher@'
```
See the model hub to look for Nougat checkpoints. NougatImageProcessor [[autodoc]] NougatImageProcessor - preprocess NougatTokenizerFast [[autodoc]] NougatTokenizerFast NougatProcessor [[autodoc]] NougatProcessor - __call__ - from_pretrained - save_pretrained - batch_decode - decode - post_process_generation
LLaVa Overview LLaVa is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture. In other words, it is a multi-modal version of LLMs fine-tuned for chat / instructions. The LLaVa model was proposed in Visual Instruction Tuning and improved in Improved Baselines with Visual Instruction Tuning by Haotian Liu, Chunyuan Li, Yuheng Li and Yong Jae Lee. The abstract from the paper is the following: Large multimodal models (LMM) have recently shown encouraging progress with visual instruction tuning. In this note, we show that the fully-connected vision-language cross-modal connector in LLaVA is surprisingly powerful and data-efficient. With simple modifications to LLaVA, namely, using CLIP-ViT-L-336px with an MLP projection and adding academic-task-oriented VQA data with simple response formatting prompts, we establish stronger baselines that achieve state-of-the-art across 11 benchmarks. Our final 13B checkpoint uses merely 1.2M publicly available data, and finishes full training in ∼1 day on a single 8-A100 node. We hope this can make state-of-the-art LMM research more accessible. Code and model will be publicly available.
LLaVa architecture. Taken from the original paper. This model was contributed by ArthurZ and ybelkada. The original code can be found here. Usage tips We advise users to use padding_side="left" when computing batched generation as it leads to more accurate results. Simply make sure to set processor.tokenizer.padding_side = "left" before generating.
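As a rough illustration of this tip, here is a minimal batched-generation sketch with left padding. The checkpoint name (llava-hf/llava-1.5-7b-hf), the image URL, and the prompts are assumptions chosen for illustration, not taken from this page; the prompt strings follow the single-image format described in the next tip.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

# hypothetical checkpoint name, used only for illustration
model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# left padding so that generation continues right after each prompt in the batch
processor.tokenizer.padding_side = "left"

# two prompts over the same example image, following "USER: <image>\n<prompt>ASSISTANT:"
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompts = [
    "USER: <image>\nWhat is shown in this image? ASSISTANT:",
    "USER: <image>\nHow many animals are there? ASSISTANT:",
]

inputs = processor(text=prompts, images=[image, image], return_tensors="pt", padding=True).to(model.device, torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```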
Note that the model has not been explicitly trained to process multiple images in the same prompt; although this is technically possible, you may experience inaccurate results. For better results, we recommend prompting the model with the correct prompt format: "USER: <image>\n<prompt>ASSISTANT:"
"USER: <image>\n<prompt>ASSISTANT:" For multiple turns conversation: "USER: <image>\n<prompt1>ASSISTANT: <answer1>USER: <prompt2>ASSISTANT: <answer2>USER: <prompt3>ASSISTANT:" Using Flash Attention 2 Flash Attention 2 is an even faster, optimized version of the previous optimization, please refer to the Flash Attention 2 section of performance docs. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BEiT.
A Google Colab demo on how to run LLaVa on a free-tier Google Colab instance leveraging 4-bit inference. A similar notebook showcasing batched inference. 🌎 LlavaConfig [[autodoc]] LlavaConfig LlavaProcessor [[autodoc]] LlavaProcessor LlavaForConditionalGeneration [[autodoc]] LlavaForConditionalGeneration - forward
MegatronGPT2 Overview The MegatronGPT2 model was proposed in Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism by Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper and Bryan Catanzaro. The abstract from the paper is the following: Recent work in language modeling demonstrates that training large transformer models advances the state of the art in Natural Language Processing applications. However, very large models can be quite difficult to train due to memory constraints. In this work, we present our techniques for training very large transformer models and implement a simple, efficient intra-layer model parallel approach that enables training transformer models with billions of parameters. Our approach does not require a new compiler or library changes, is orthogonal and complementary to pipeline model parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We illustrate this approach by converging transformer based models up to 8.3 billion parameters using 512 GPUs. We sustain 15.1 PetaFLOPs across the entire application with 76% scaling efficiency when compared to a strong single GPU baseline that sustains 39 TeraFLOPs, which is 30% of peak FLOPs. To demonstrate that large language models can further advance the state of the art (SOTA), we train an 8.3 billion parameter transformer language model similar to GPT-2 and a 3.9 billion parameter model similar to BERT. We show that careful attention to the placement of layer normalization in BERT-like models is critical to achieving increased performance as the model size grows. Using the GPT-2 model we achieve SOTA results on the WikiText103 (10.8 compared to SOTA perplexity of 15.8) and LAMBADA (66.5% compared to SOTA accuracy of 63.2%) datasets. Our BERT model achieves SOTA results on the RACE dataset (90.9% compared to SOTA accuracy of 89.4%). This model was contributed by jdemouth. The original code can be found here. That repository contains a multi-GPU and multi-node implementation of the Megatron Language models. In particular, it contains a hybrid model parallel approach using "tensor parallel" and "pipeline parallel" techniques. Usage tips We have provided pretrained GPT2-345M checkpoints to use for evaluation or for finetuning on downstream tasks. To access these checkpoints, first sign up for and set up the NVIDIA GPU Cloud (NGC) Registry CLI. Further documentation for downloading models can be found in the NGC documentation. Alternatively, you can directly download the checkpoints using:
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O megatron_gpt2_345m_v0_0.zip Once you have obtained the checkpoint from NVIDIA GPU Cloud (NGC), you have to convert it to a format that can easily be loaded by the Hugging Face Transformers GPT2 implementation. The following command allows you to do the conversion. We assume that the folder models/megatron_gpt2 contains megatron_gpt2_345m_v0_0.zip and that the command is run from that folder:
python3 $PATH_TO_TRANSFORMERS/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py megatron_gpt2_345m_v0_0.zip The MegatronGPT2 architecture is the same as OpenAI GPT-2. Refer to the GPT-2 documentation for information on configuration classes and their parameters.
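As a rough sketch of what loading the converted checkpoint might look like, assuming the conversion script wrote config.json and pytorch_model.bin into the models/megatron_gpt2 folder used above (an assumption, not stated on this page), the weights can then be used with the standard GPT2 classes:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# assumed output folder of the conversion step above
checkpoint_dir = "models/megatron_gpt2"

# load the converted weights with the regular GPT-2 model class
model = GPT2LMHeadModel.from_pretrained(checkpoint_dir)

# Megatron GPT-2 reuses the original GPT-2 BPE vocabulary, so the standard tokenizer can be used
# if the conversion did not produce tokenizer files
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

inputs = tokenizer("Megatron-LM is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```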
OPT Overview The OPT model was proposed in Open Pre-trained Transformer Language Models by Meta AI. OPT is a series of open-sourced large causal language models which perform similarly to GPT-3. The abstract from the paper is the following: Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models. This model was contributed by Arthur Zucker, Younes Belkada, and Patrick von Platen. The original code can be found here. Tips: - OPT has the same architecture as [BartDecoder]. - Contrary to GPT2, OPT adds the EOS token </s> to the beginning of every prompt. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OPT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource.
A notebook on fine-tuning OPT with PEFT, bitsandbytes, and Transformers. 🌎 A blog post on decoding strategies with OPT. Causal language modeling chapter of the 🤗 Hugging Face Course. [OPTForCausalLM] is supported by this causal language modeling example script and notebook. [TFOPTForCausalLM] is supported by this causal language modeling example script and notebook. [FlaxOPTForCausalLM] is supported by this causal language modeling example script.
Text classification task guide [OPTForSequenceClassification] is supported by this example script and notebook. [OPTForQuestionAnswering] is supported by this question answering example script and notebook. Question answering chapter of the 🤗 Hugging Face Course. ⚡️ Inference A blog post on How 🤗 Accelerate runs very large models thanks to PyTorch with OPT.
Combining OPT and Flash Attention 2 First, make sure to install the latest version of Flash Attention 2.
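As a minimal sketch (not an excerpt from this page), OPT can then be loaded with the Flash Attention 2 implementation via the attn_implementation argument available in recent Transformers versions; the facebook/opt-350m checkpoint, prompt, and half-precision settings below are illustrative assumptions and require a supported CUDA GPU with the flash-attn package installed.

```python
import torch
from transformers import AutoTokenizer, OPTForCausalLM

# load in half precision with the Flash Attention 2 kernels (needs a supported GPU and flash-attn installed)
model = OPTForCausalLM.from_pretrained(
    "facebook/opt-350m",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

inputs = tokenizer("A chat between a curious human and an assistant.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```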