Check out the Informer blog post on the Hugging Face blog: Multivariate Probabilistic Time Series Forecasting with Informer
InformerConfig
[[autodoc]] InformerConfig
InformerModel
[[autodoc]] InformerModel
- forward
InformerForPrediction
[[autodoc]] InformerForPrediction
- forward |
HerBERT
Overview
The HerBERT model was proposed in KLEJ: Comprehensive Benchmark for Polish Language Understanding by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, and
Ireneusz Gawlik. It is a BERT-based language model trained on Polish corpora using only the MLM objective with dynamic
whole-word masking.
The abstract from the paper is the following:
In recent years, a series of Transformer-based models unlocked major improvements in general natural language
understanding (NLU) tasks. Such a fast pace of research would not be possible without general NLU benchmarks, which
allow for a fair comparison of the proposed methods. However, such benchmarks are available only for a handful of
languages. To alleviate this issue, we introduce a comprehensive multi-task benchmark for the Polish language
understanding, accompanied by an online leaderboard. It consists of a diverse set of tasks, adopted from existing
datasets for named entity recognition, question-answering, textual entailment, and others. We also introduce a new
sentiment analysis task for the e-commerce domain, named Allegro Reviews (AR). To ensure a common evaluation scheme and
promote models that generalize to different NLU tasks, the benchmark includes datasets from varying domains and
applications. Additionally, we release HerBERT, a Transformer-based model trained specifically for the Polish language,
which has the best average performance and obtains the best results for three out of nine tasks. Finally, we provide an
extensive evaluation, including several standard baselines and recently proposed, multilingual Transformer-based
models.
This model was contributed by rmroczkowski. The original code can be found
here.
Usage example
from transformers import HerbertTokenizer, RobertaModel
tokenizer = HerbertTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
model = RobertaModel.from_pretrained("allegro/herbert-klej-cased-v1")
encoded_input = tokenizer.encode("Kto ma lepszą sztukę, ma lepszy rząd – to jasne.", return_tensors="pt")
outputs = model(encoded_input)
HerBERT can also be loaded using AutoTokenizer and AutoModel:
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
model = AutoModel.from_pretrained("allegro/herbert-klej-cased-v1") |
HerBERT's implementation is the same as BERT except for the tokenization method. Refer to the BERT documentation
for API reference and examples.
HerbertTokenizer
[[autodoc]] HerbertTokenizer
HerbertTokenizerFast
[[autodoc]] HerbertTokenizerFast |
EnCodec
Overview
The EnCodec neural codec model was proposed in High Fidelity Neural Audio Compression by Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi.
The abstract from the paper is the following:
We introduce a state-of-the-art real-time, high-fidelity, audio codec leveraging neural networks. It consists in a streaming encoder-decoder architecture with quantized latent space trained in an end-to-end fashion. We simplify and speed-up the training by using a single multiscale spectrogram adversary that efficiently reduces artifacts and produce high-quality samples. We introduce a novel loss balancer mechanism to stabilize training: the weight of a loss now defines the fraction of the overall gradient it should represent, thus decoupling the choice of this hyper-parameter from the typical scale of the loss. Finally, we study how lightweight Transformer models can be used to further compress the obtained representation by up to 40%, while staying faster than real time. We provide a detailed description of the key design choices of the proposed model including: training objective, architectural changes and a study of various perceptual loss functions. We present an extensive subjective evaluation (MUSHRA tests) together with an ablation study for a range of bandwidths and audio domains, including speech, noisy-reverberant speech, and music. Our approach is superior to the baselines methods across all evaluated settings, considering both 24 kHz monophonic and 48 kHz stereophonic audio.
This model was contributed by Matthijs, Patrick Von Platen and Arthur Zucker.
The original code can be found here.
Usage example
Here is a quick example of how to encode and decode an audio using this model:
from datasets import load_dataset, Audio
from transformers import EncodecModel, AutoProcessor
librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
model = EncodecModel.from_pretrained("facebook/encodec_24khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")
librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
audio_sample = librispeech_dummy[-1]["audio"]["array"]
inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")
encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"])
audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]
# or the equivalent with a forward pass
audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values
EncodecConfig
[[autodoc]] EncodecConfig
EncodecFeatureExtractor
[[autodoc]] EncodecFeatureExtractor
- call
EncodecModel
[[autodoc]] EncodecModel
- decode
- encode
- forward |
BLIP-2
Overview
The BLIP-2 model was proposed in BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models by
Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi. BLIP-2 leverages frozen pre-trained image encoders and large language models (LLMs) by training a lightweight, 12-layer Transformer
encoder in between them, achieving state-of-the-art performance on various vision-language tasks. Most notably, BLIP-2 improves upon Flamingo, an 80 billion parameter model, by 8.7%
on zero-shot VQAv2 with 54x fewer trainable parameters.
The abstract from the paper is the following:
The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model's emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.
BLIP-2 architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Usage tips
BLIP-2 can be used for conditional text generation given an image and an optional text prompt. At inference time, it's recommended to use the [generate] method.
One can use [Blip2Processor] to prepare images for the model and to decode the predicted token IDs back to text, as shown in the sketch below.
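A minimal captioning/VQA sketch, assuming the Salesforce/blip2-opt-2.7b checkpoint (the prompt is illustrative and can be omitted for plain captioning):
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# prepare the image (and optional prompt) for the model
inputs = processor(images=image, text="Question: how many cats are there? Answer:", return_tensors="pt")
# autoregressively generate and decode the predicted token IDs back to text
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())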
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLIP-2.
Demo notebooks for BLIP-2 for image captioning, visual question answering (VQA) and chat-like conversations can be found here. |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Blip2Config
[[autodoc]] Blip2Config
- from_vision_qformer_text_configs
Blip2VisionConfig
[[autodoc]] Blip2VisionConfig
Blip2QFormerConfig
[[autodoc]] Blip2QFormerConfig
Blip2Processor
[[autodoc]] Blip2Processor
Blip2VisionModel
[[autodoc]] Blip2VisionModel
- forward
Blip2QFormerModel
[[autodoc]] Blip2QFormerModel
- forward
Blip2Model
[[autodoc]] Blip2Model
- forward
- get_text_features
- get_image_features
- get_qformer_features
Blip2ForConditionalGeneration
[[autodoc]] Blip2ForConditionalGeneration
- forward
- generate |
BERT |
Overview
The BERT model was proposed in BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. It's a
bidirectional transformer pretrained using a combination of masked language modeling objective and next sentence
prediction on a large corpus comprising the Toronto Book Corpus and Wikipedia.
The abstract from the paper is the following:
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations
from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional
representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result,
the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models
for a wide range of tasks, such as question answering and language inference, without substantial task-specific
architecture modifications.
BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural
language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI
accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute
improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
This model was contributed by thomwolf. The original code can be found here.
Usage tips |
BERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
BERT was trained with the masked language modeling (MLM) and next sentence prediction (NSP) objectives. It is
efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation.
Corrupts the inputs by using random masking. More precisely, during pretraining, a given percentage of tokens (usually 15%) is masked by:
a special mask token with probability 0.8
a random token different from the one masked with probability 0.1
the same token with probability 0.1
The model must predict the original sentence, but has a second objective: inputs are two sentences A and B (with a separation token in between). With probability 50%, the sentences are consecutive in the corpus; in the remaining 50% they are not related. The model has to predict whether the sentences are consecutive or not. A short masked language modeling example is sketched below.
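A minimal fill-mask sketch (the google-bert/bert-base-uncased checkpoint is an assumption; any BERT checkpoint with an MLM head works):
from transformers import pipeline
# the fill-mask pipeline reuses the MLM head trained with the masking scheme described above
unmasker = pipeline("fill-mask", model="google-bert/bert-base-uncased")
print(unmasker("The capital of France is [MASK]."))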
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. |
A blog post on BERT Text Classification in a different language.
A notebook for Finetuning BERT (and friends) for multi-label text classification.
A notebook on how to Finetune BERT for multi-label classification using PyTorch. 🌎
A notebook on how to warm-start an EncoderDecoder model with BERT for summarization.
[BertForSequenceClassification] is supported by this example script and notebook.
[TFBertForSequenceClassification] is supported by this example script and notebook.
[FlaxBertForSequenceClassification] is supported by this example script and notebook.
Text classification task guide |
A blog post on how to use Hugging Face Transformers with Keras: Fine-tune a non-English BERT for Named Entity Recognition.
A notebook for Finetuning BERT for named-entity recognition using only the first wordpiece of each word in the word label during tokenization. To propagate the label of the word to all wordpieces, see this version of the notebook instead.
[BertForTokenClassification] is supported by this example script and notebook.
[TFBertForTokenClassification] is supported by this example script and notebook.
[FlaxBertForTokenClassification] is supported by this example script.
Token classification chapter of the 🤗 Hugging Face Course.
Token classification task guide |
[BertForMaskedLM] is supported by this example script and notebook.
[TFBertForMaskedLM] is supported by this example script and notebook.
[FlaxBertForMaskedLM] is supported by this example script and notebook.
Masked language modeling chapter of the 🤗 Hugging Face Course.
Masked language modeling task guide |
[BertForQuestionAnswering] is supported by this example script and notebook.
[TFBertForQuestionAnswering] is supported by this example script and notebook.
[FlaxBertForQuestionAnswering] is supported by this example script.
Question answering chapter of the 🤗 Hugging Face Course.
Question answering task guide |
Multiple choice
- [BertForMultipleChoice] is supported by this example script and notebook.
- [TFBertForMultipleChoice] is supported by this example script and notebook.
- Multiple choice task guide
⚡️ Inference
- A blog post on how to Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia.
- A blog post on how to Accelerate BERT inference with DeepSpeed-Inference on GPUs.
⚙️ Pretraining
- A blog post on Pre-Training BERT with Hugging Face Transformers and Habana Gaudi.
🚀 Deploy
- A blog post on how to Convert Transformers to ONNX with Hugging Face Optimum.
- A blog post on how to Setup Deep Learning environment for Hugging Face Transformers with Habana Gaudi on AWS.
- A blog post on Autoscaling BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module.
- A blog post on Serverless BERT with HuggingFace, AWS Lambda, and Docker.
- A blog post on Hugging Face Transformers BERT fine-tuning using Amazon SageMaker and Training Compiler.
- A blog post on Task-specific knowledge distillation for BERT using Transformers & Amazon SageMaker.
BertConfig
[[autodoc]] BertConfig
- all
BertTokenizer
[[autodoc]] BertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary |
BertTokenizerFast
[[autodoc]] BertTokenizerFast
TFBertTokenizer
[[autodoc]] TFBertTokenizer
Bert specific outputs
[[autodoc]] models.bert.modeling_bert.BertForPreTrainingOutput
[[autodoc]] models.bert.modeling_tf_bert.TFBertForPreTrainingOutput
[[autodoc]] models.bert.modeling_flax_bert.FlaxBertForPreTrainingOutput |
BertModel
[[autodoc]] BertModel
- forward
BertForPreTraining
[[autodoc]] BertForPreTraining
- forward
BertLMHeadModel
[[autodoc]] BertLMHeadModel
- forward
BertForMaskedLM
[[autodoc]] BertForMaskedLM
- forward
BertForNextSentencePrediction
[[autodoc]] BertForNextSentencePrediction
- forward
BertForSequenceClassification
[[autodoc]] BertForSequenceClassification
- forward
BertForMultipleChoice
[[autodoc]] BertForMultipleChoice
- forward
BertForTokenClassification
[[autodoc]] BertForTokenClassification
- forward
BertForQuestionAnswering
[[autodoc]] BertForQuestionAnswering
- forward |
TFBertModel
[[autodoc]] TFBertModel
- call
TFBertForPreTraining
[[autodoc]] TFBertForPreTraining
- call
TFBertLMHeadModel
[[autodoc]] TFBertLMHeadModel
- call
TFBertForMaskedLM
[[autodoc]] TFBertForMaskedLM
- call
TFBertForNextSentencePrediction
[[autodoc]] TFBertForNextSentencePrediction
- call
TFBertForSequenceClassification
[[autodoc]] TFBertForSequenceClassification
- call
TFBertForMultipleChoice
[[autodoc]] TFBertForMultipleChoice
- call
TFBertForTokenClassification
[[autodoc]] TFBertForTokenClassification
- call
TFBertForQuestionAnswering
[[autodoc]] TFBertForQuestionAnswering
- call |
FlaxBertModel
[[autodoc]] FlaxBertModel
- call
FlaxBertForPreTraining
[[autodoc]] FlaxBertForPreTraining
- call
FlaxBertForCausalLM
[[autodoc]] FlaxBertForCausalLM
- call
FlaxBertForMaskedLM
[[autodoc]] FlaxBertForMaskedLM
- call
FlaxBertForNextSentencePrediction
[[autodoc]] FlaxBertForNextSentencePrediction
- call
FlaxBertForSequenceClassification
[[autodoc]] FlaxBertForSequenceClassification
- call
FlaxBertForMultipleChoice
[[autodoc]] FlaxBertForMultipleChoice
- call
FlaxBertForTokenClassification
[[autodoc]] FlaxBertForTokenClassification
- call
FlaxBertForQuestionAnswering
[[autodoc]] FlaxBertForQuestionAnswering
- call |
UMT5 |
Overview
The UMT5 model was proposed in UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining by Hyung Won Chung, Xavier Garcia, Adam Roberts, Yi Tay, Orhan Firat, Sharan Narang, Noah Constant.
The abstract from the paper is the following:
Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling.
Google has released the following variants: |
google/umt5-small
google/umt5-base
google/umt5-xl
google/umt5-xxl.
This model was contributed by agemagician and stefan-it. The original code can be
found here.
Usage tips
UMT5 was only pre-trained on mC4 excluding any supervised training.
Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model.
Since umT5 was pre-trained in an unsupervised manner, there's no real advantage to using a task prefix during single-task
fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix. |
Differences with mT5?
UMT5 is based on mT5, with a non-shared relative positional bias that is computed for each layer. This means that the model sets has_relative_bias for each layer.
The conversion script is also different because the model was saved in t5x's latest checkpointing format.
Sample usage
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("google/umt5-small")
tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
inputs = tokenizer(
"A walks into a bar and orders a with pinch of .",
return_tensors="pt",
)
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs))
['nyone who drink a alcohol A A. This I']
Refer to T5's documentation page for more tips, code examples and notebooks. |
UMT5Config
[[autodoc]] UMT5Config
UMT5Model
[[autodoc]] UMT5Model
- forward
UMT5ForConditionalGeneration
[[autodoc]] UMT5ForConditionalGeneration
- forward
UMT5EncoderModel
[[autodoc]] UMT5EncoderModel
- forward
UMT5ForSequenceClassification
[[autodoc]] UMT5ForSequenceClassification
- forward
UMT5ForTokenClassification
[[autodoc]] UMT5ForTokenClassification
- forward
UMT5ForQuestionAnswering
[[autodoc]] UMT5ForQuestionAnswering
- forward |
UDOP
Overview
The UDOP model was proposed in Unifying Vision, Text, and Layout for Universal Document Processing by Zineng Tang, Ziyi Yang, Guoxin Wang, Yuwei Fang, Yang Liu, Chenguang Zhu, Michael Zeng, Cha Zhang, Mohit Bansal.
UDOP adopts an encoder-decoder Transformer architecture based on T5 for document AI tasks like document image classification, document parsing and document visual question answering.
The abstract from the paper is the following:
We propose Universal Document Processing (UDOP), a foundation Document AI model which unifies text, image, and layout modalities together with varied task formats, including document understanding and generation. UDOP leverages the spatial correlation between textual content and document image to model image, text, and layout modalities with one uniform representation. With a novel Vision-Text-Layout Transformer, UDOP unifies pretraining and multi-domain downstream tasks into a prompt-based sequence generation scheme. UDOP is pretrained on both large-scale unlabeled document corpora using innovative self-supervised objectives and diverse labeled data. UDOP also learns to generate document images from text and layout modalities via masked image reconstruction. To the best of our knowledge, this is the first time in the field of document AI that one model simultaneously achieves high-quality neural document editing and content customization. Our method sets the state-of-the-art on 9 Document AI tasks, e.g., document understanding and QA, across diverse data domains like finance reports, academic papers, and websites. UDOP ranks first on the leaderboard of the Document Understanding Benchmark (DUE).* |
UDOP architecture. Taken from the original paper.
Usage tips |
In addition to input_ids, [UdopForConditionalGeneration] also expects the input bbox, which are
the bounding boxes (i.e. 2D-positions) of the input tokens. These can be obtained using an external OCR engine such
as Google's Tesseract (there's a Python wrapper available). Each bounding box should be in (x0, y0, x1, y1) format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1, y1) represents the
position of the lower right corner. Note that one first needs to normalize the bounding boxes to be on a 0-1000
scale. To normalize, you can use the following function: |
def normalize_bbox(bbox, width, height):
return [
int(1000 * (bbox[0] / width)),
int(1000 * (bbox[1] / height)),
int(1000 * (bbox[2] / width)),
int(1000 * (bbox[3] / height)),
]
Here, width and height correspond to the width and height of the original document in which the token
occurs. Those can be obtained using the Python Image Library (PIL) library for example, as follows:
from PIL import Image
# Document can be a png, jpg, etc. PDFs must be converted to images.
image = Image.open(name_of_your_document).convert("RGB")
width, height = image.size |
At inference time, it's recommended to use the generate method to autoregressively generate text given a document image.
One can use [UdopProcessor] to prepare images and text for the model. By default, this class uses the Tesseract engine to extract a list of words
and boxes (coordinates) from a given document. Its functionality is equivalent to that of [LayoutLMv3Processor], so it supports passing either
apply_ocr=False, in case you prefer to use your own OCR engine, or apply_ocr=True, in case you want the default OCR engine to be used. A minimal sketch is shown below.
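A minimal document question answering sketch, assuming the microsoft/udop-large checkpoint and a local document image (the file name and prompt are illustrative; Tesseract must be installed since apply_ocr defaults to True):
from PIL import Image
from transformers import UdopProcessor, UdopForConditionalGeneration
processor = UdopProcessor.from_pretrained("microsoft/udop-large")
model = UdopForConditionalGeneration.from_pretrained("microsoft/udop-large")
# words and bounding boxes are extracted with Tesseract because apply_ocr=True by default
image = Image.open("document.png").convert("RGB")
encoding = processor(images=image, text="Question answering. What is the date on the form?", return_tensors="pt")
# autoregressively generate the answer and decode it back to text
outputs = model.generate(**encoding, max_new_tokens=20)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])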
This model was contributed by nielsr.
The original code can be found here.
UdopConfig
[[autodoc]] UdopConfig
UdopTokenizer
[[autodoc]] UdopTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
UdopTokenizerFast
[[autodoc]] UdopTokenizerFast
UdopProcessor
[[autodoc]] UdopProcessor
- call
UdopModel
[[autodoc]] UdopModel
- forward
UdopForConditionalGeneration
[[autodoc]] UdopForConditionalGeneration
- forward
UdopEncoderModel
[[autodoc]] UdopEncoderModel
- forward |
BertJapanese
Overview
BERT models trained on Japanese text.
There are models with two different tokenization methods:
Tokenize with MeCab and WordPiece. This requires some extra dependencies: fugashi, which is a wrapper around MeCab.
Tokenize into characters.
To use MecabTokenizer, you should pip install transformers["ja"] (or pip install -e .["ja"] if you install
from source) to install the dependencies.
See details on the cl-tohoku repository.
Example of using a model with MeCab and WordPiece tokenization:
import torch
from transformers import AutoModel, AutoTokenizer
bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese")
tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
# Input Japanese Text
line = "吾輩は猫である。"
inputs = tokenizer(line, return_tensors="pt")
print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] 吾輩 は 猫 で ある 。 [SEP]
outputs = bertjapanese(**inputs)
Example of using a model with Character tokenization:
bertjapanese = AutoModel.from_pretrained("cl-tohoku/bert-base-japanese-char")
tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese-char")
# Input Japanese Text
line = "吾輩は猫である。"
inputs = tokenizer(line, return_tensors="pt")
print(tokenizer.decode(inputs["input_ids"][0]))
[CLS] 吾 輩 は 猫 で あ る 。 [SEP]
outputs = bertjapanese(**inputs) |
This model was contributed by cl-tohoku.
This implementation is the same as BERT, except for the tokenization method. Refer to the BERT documentation for
API reference information.
BertJapaneseTokenizer
[[autodoc]] BertJapaneseTokenizer |
ALIGN
Overview
The ALIGN model was proposed in Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision by Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yunhsuan Sung, Zhen Li, Tom Duerig. ALIGN is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. ALIGN features a dual-encoder architecture with EfficientNet as its vision encoder and BERT as its text encoder, and learns to align visual and text representations with contrastive learning. Unlike previous work, ALIGN leverages a massive noisy dataset and shows that the scale of the corpus can be used to achieve SOTA representations with a simple recipe.
The abstract from the paper is the following:
Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enables zero-shot image classification and also set new state-of-the-art results on Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.
This model was contributed by Alara Dirik.
The original code is not released; this implementation is based on the Kakao Brain implementation of the original paper.
Usage example
ALIGN uses EfficientNet to get visual features and BERT to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similarity score.
[AlignProcessor] wraps [EfficientNetImageProcessor] and [BertTokenizer] into a single instance to both encode the text and preprocess the images. The following example shows how to get the image-text similarity scores using [AlignProcessor] and [AlignModel].
import requests
import torch
from PIL import Image
from transformers import AlignProcessor, AlignModel
processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["an image of a cat", "an image of a dog"]
inputs = processor(text=candidate_labels, images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
# this is the image-text similarity score
logits_per_image = outputs.logits_per_image
# we can take the softmax to get the label probabilities
probs = logits_per_image.softmax(dim=1)
print(probs) |
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ALIGN.
A blog post on ALIGN and the COYO-700M dataset.
A zero-shot image classification demo.
Model card of kakaobrain/align-base model. |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource.
AlignConfig
[[autodoc]] AlignConfig
- from_text_vision_configs
AlignTextConfig
[[autodoc]] AlignTextConfig
AlignVisionConfig
[[autodoc]] AlignVisionConfig
AlignProcessor
[[autodoc]] AlignProcessor
AlignModel
[[autodoc]] AlignModel
- forward
- get_text_features
- get_image_features
AlignTextModel
[[autodoc]] AlignTextModel
- forward
AlignVisionModel
[[autodoc]] AlignVisionModel
- forward |
DialoGPT
Overview
DialoGPT was proposed in DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao,
Jianfeng Gao, Jingjing Liu, Bill Dolan. It's a GPT2 Model trained on 147M conversation-like exchanges extracted from
Reddit.
The abstract from the paper is the following:
We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained
transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains over a period spanning
from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain a performance close to human
both in terms of automatic and human evaluation in single-turn dialogue settings. We show that conversational systems
that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline
systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response
generation and the development of more intelligent open-domain dialogue systems.
The original code can be found here.
Usage tips |
DialoGPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather
than the left.
DialoGPT was trained with a causal language modeling (CLM) objective on conversational data and is therefore powerful
at response generation in open-domain dialogue systems.
DialoGPT enables the user to create a chat bot in just 10 lines of code as shown on DialoGPT's model card. |
Training:
In order to train or fine-tune DialoGPT, one can use causal language modeling training. To cite the official paper: We
follow the OpenAI GPT-2 to model a multiturn dialogue session as a long text and frame the generation task as language
modeling. We first concatenate all dialog turns within a dialogue session into a long text x_1, ..., x_N (N is the
sequence length), ended by the end-of-text token. For more information, please refer to the original paper. A minimal sketch of this formatting is shown below.
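A minimal sketch of that formatting at inference time, adapted from the pattern on the DialoGPT model card (the example turn is illustrative):
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
# concatenate all dialogue turns into one long text, each turn ended by the end-of-text token
turns = ["Does money buy happiness?"]
history = "".join(turn + tokenizer.eos_token for turn in turns)
input_ids = tokenizer(history, return_tensors="pt").input_ids
# the response is generated as an ordinary language modeling continuation
reply_ids = model.generate(input_ids, max_new_tokens=50, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))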
DialoGPT's architecture is based on the GPT2 model; refer to GPT2's documentation page for API reference and examples.
RegNet
Overview
The RegNet model was proposed in Designing Network Design Spaces by Ilija Radosavovic, Raj Prateek Kosaraju, Ross Girshick, Kaiming He, Piotr Dollár.
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.
The abstract from the paper is the following:
In this work, we present a new network design paradigm. Our goal is to help advance the understanding of network design and discover design principles that generalize across settings. Instead of focusing on designing individual network instances, we design network design spaces that parametrize populations of networks. The overall process is analogous to classic manual design of networks, but elevated to the design space level. Using our methodology we explore the structure aspect of network design and arrive at a low-dimensional design space consisting of simple, regular networks that we call RegNet. The core insight of the RegNet parametrization is surprisingly simple: widths and depths of good networks can be explained by a quantized linear function. We analyze the RegNet design space and arrive at interesting findings that do not match the current practice of network design. The RegNet design space provides simple and fast networks that work well across a wide range of flop regimes. Under comparable training settings and flops, the RegNet models outperform the popular EfficientNet models while being up to 5x faster on GPUs.
This model was contributed by Francesco. The TensorFlow version of the model
was contributed by sayakpaul and ariG23498.
The original code can be found here.
The huge 10B model from Self-supervised Pretraining of Visual Features in the Wild,
trained on one billion Instagram images, is available on the hub. A minimal image-classification sketch is shown below.
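A minimal inference sketch for image classification, assuming the facebook/regnet-y-040 checkpoint:
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, RegNetForImageClassification
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/regnet-y-040")
model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-040")
inputs = image_processor(image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# the model predicts one of the 1000 ImageNet classes
print(model.config.id2label[logits.argmax(-1).item()])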
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with RegNet. |
[RegNetForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
RegNetConfig
[[autodoc]] RegNetConfig |
RegNetModel
[[autodoc]] RegNetModel
- forward
RegNetForImageClassification
[[autodoc]] RegNetForImageClassification
- forward
TFRegNetModel
[[autodoc]] TFRegNetModel
- call
TFRegNetForImageClassification
[[autodoc]] TFRegNetForImageClassification
- call
FlaxRegNetModel
[[autodoc]] FlaxRegNetModel
- call
FlaxRegNetForImageClassification
[[autodoc]] FlaxRegNetForImageClassification
- call |
Depth Anything
Overview
The Depth Anything model was proposed in Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data by Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao. Depth Anything is based on the DPT architecture, trained on ~62 million images, obtaining state-of-the-art results for both relative and absolute depth estimation.
The abstract from the paper is the following:
This work presents Depth Anything, a highly practical solution for robust monocular depth estimation. Without pursuing novel technical modules, we aim to build a simple yet powerful foundation model dealing with any images under any circumstances. To this end, we scale up the dataset by designing a data engine to collect and automatically annotate large-scale unlabeled data (~62M), which significantly enlarges the data coverage and thus is able to reduce the generalization error. We investigate two simple yet effective strategies that make data scaling-up promising. First, a more challenging optimization target is created by leveraging data augmentation tools. It compels the model to actively seek extra visual knowledge and acquire robust representations. Second, an auxiliary supervision is developed to enforce the model to inherit rich semantic priors from pre-trained encoders. We evaluate its zero-shot capabilities extensively, including six public datasets and randomly captured photos. It demonstrates impressive generalization ability. Further, through fine-tuning it with metric depth information from NYUv2 and KITTI, new SOTAs are set. Our better depth model also results in a better depth-conditioned ControlNet. |
Depth Anything overview. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Usage example
There are 2 main ways to use Depth Anything: either using the pipeline API, which abstracts away all the complexity for you, or by using the DepthAnythingForDepthEstimation class yourself.
Pipeline API
The pipeline allows you to use the model in a few lines of code:
from transformers import pipeline
from PIL import Image
import requests
# load pipe
pipe = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-small-hf")
# load image
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
# inference
depth = pipe(image)["depth"]
Using the model yourself
If you want to do the pre- and postprocessing yourself, here's how to do that:
from transformers import AutoImageProcessor, AutoModelForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("LiheYoung/depth-anything-small-hf")
model = AutoModelForDepthEstimation.from_pretrained("LiheYoung/depth-anything-small-hf")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predicted_depth = outputs.predicted_depth
# interpolate to original size
prediction = torch.nn.functional.interpolate(
predicted_depth.unsqueeze(1),
size=image.size[::-1],
mode="bicubic",
align_corners=False,
)
# visualize the prediction
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted) |
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Depth Anything.
Monocular depth estimation task guide
A notebook showcasing inference with [DepthAnythingForDepthEstimation] can be found here. 🌎
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
DepthAnythingConfig
[[autodoc]] DepthAnythingConfig
DepthAnythingForDepthEstimation
[[autodoc]] DepthAnythingForDepthEstimation
- forward |
YOLOS
Overview
The YOLOS model was proposed in You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
YOLOS proposes to just leverage the plain Vision Transformer (ViT) for object detection, inspired by DETR. It turns out that a base-sized encoder-only Transformer can also achieve 42 AP on COCO, similar to DETR and much more complex frameworks such as Faster R-CNN.
The abstract from the paper is the following:
Can Transformer perform 2D object- and region-level recognition from a pure sequence-to-sequence perspective with minimal knowledge about the 2D spatial structure? To answer this question, we present You Only Look at One Sequence (YOLOS), a series of object detection models based on the vanilla Vision Transformer with the fewest possible modifications, region priors, as well as inductive biases of the target task. We find that YOLOS pre-trained on the mid-sized ImageNet-1k dataset only can already achieve quite competitive performance on the challenging COCO object detection benchmark, e.g., YOLOS-Base directly adopted from BERT-Base architecture can obtain 42.0 box AP on COCO val. We also discuss the impacts as well as limitations of current pre-train schemes and model scaling strategies for Transformer in vision through YOLOS. |
YOLOS architecture. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with YOLOS.
All example notebooks illustrating inference + fine-tuning [YolosForObjectDetection] on a custom dataset can be found here.
See also: Object detection task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. |
Use [YolosImageProcessor] for preparing images (and optional targets) for the model. Contrary to DETR, YOLOS doesn't require a pixel_mask to be created. A minimal inference sketch is shown below.
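A minimal object detection sketch, assuming the hustvl/yolos-tiny checkpoint:
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, YolosForObjectDetection
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny")
model = YolosForObjectDetection.from_pretrained("hustvl/yolos-tiny")
# no pixel_mask is needed, contrary to DETR
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# rescale the predicted boxes to the original image size and keep confident detections
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 1) for c in box.tolist()])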
YolosConfig
[[autodoc]] YolosConfig
YolosImageProcessor
[[autodoc]] YolosImageProcessor
- preprocess
- pad
- post_process_object_detection
YolosFeatureExtractor
[[autodoc]] YolosFeatureExtractor
- call
- pad
- post_process_object_detection
YolosModel
[[autodoc]] YolosModel
- forward
YolosForObjectDetection
[[autodoc]] YolosForObjectDetection
- forward |
Mistral
Overview
Mistral was introduced in this blog post by Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
The introduction of the blog post says:
Mistral AI team is proud to release Mistral 7B, the most powerful language model for its size to date.
Mistral-7B is the first large language model (LLM) released by mistral.ai.
Architectural details
Mistral-7B is a decoder-only Transformer with the following architectural choices: |
Sliding Window Attention - Trained with 8k context length and fixed cache size, with a theoretical attention span of 128K tokens
GQA (Grouped Query Attention) - allowing faster inference and lower cache size.
Byte-fallback BPE tokenizer - ensures that characters are never mapped to out of vocabulary tokens.
For more details refer to the release blog post.
License
Mistral-7B is released under the Apache 2.0 license.
Usage tips
The Mistral team has released 3 checkpoints: |
a base model, Mistral-7B-v0.1, which has been pre-trained to predict the next token on internet-scale data.
an instruction tuned model, Mistral-7B-Instruct-v0.1, which is the base model optimized for chat purposes using supervised fine-tuning (SFT) and direct preference optimization (DPO).
an improved instruction tuned model, Mistral-7B-Instruct-v0.2, which improves upon v1.
The base model can be used as follows:
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
prompt = "My favourite condiment is"
model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
tokenizer.batch_decode(generated_ids)[0]
"My favourite condiment is to " |
The instruction tuned model can be used as follows:
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True)
tokenizer.batch_decode(generated_ids)[0]
"Mayonnaise can be made as follows: ()" |
As can be seen, the instruction-tuned model requires a chat template to be applied to make sure the inputs are prepared in the right format.
Speeding up Mistral by using Flash Attention
The code snippets above showcase inference without any optimization tricks. However, one can drastically speed up the model by leveraging Flash Attention, which is a faster implementation of the attention mechanism used inside the model.
First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature. |
pip install -U flash-attn --no-build-isolation
Also make sure that your hardware is compatible with Flash Attention 2. Read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. torch.float16).
To load and run a model using Flash Attention-2, refer to the snippet below:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, attn_implementation="flash_attention_2", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
prompt = "My favourite condiment is"
model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
tokenizer.batch_decode(generated_ids)[0]
"My favourite condiment is to ()" |
Expected speedups
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the mistralai/Mistral-7B-v0.1 checkpoint and the Flash Attention 2 version of the model.
Sliding window attention
The current implementation supports the sliding window attention mechanism and memory-efficient cache management.
To enable sliding window attention, just make sure to have a flash-attn version that is compatible with sliding window attention (>=2.3.0).
The Flash Attention-2 model also uses a more memory-efficient cache slicing mechanism: as recommended by the official Mistral implementation, which uses a rolling cache, we keep the cache size fixed (self.config.sliding_window), support batched generation only for padding_side="left", and use the absolute position of the current token to compute the positional embedding.
Shrinking down Mistral using quantization
As the Mistral model has 7 billion parameters, it requires about 14GB of GPU RAM in half precision (float16), since each parameter is stored in 2 bytes. However, one can shrink down the size of the model using quantization. If the model is quantized to 4 bits (i.e. half a byte per parameter), only about 3.5GB of RAM is required.
Quantizing a model is as simple as passing a quantization_config to the model. Below, we'll leverage bitsandbytes quantization (but refer to this page for other quantization methods):
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
# specify how to quantize the model
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2", quantization_config=quantization_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
prompt = "My favourite condiment is"
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True)
tokenizer.batch_decode(generated_ids)[0]
"The expected output" |
This model was contributed by Younes Belkada and Arthur Zucker.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Mistral. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. |
A demo notebook to perform supervised fine-tuning (SFT) of Mistral-7B can be found here. 🌎
A blog post on how to fine-tune LLMs in 2024 using Hugging Face tooling. 🌎
The Alignment Handbook by Hugging Face includes scripts and recipes to perform supervised fine-tuning (SFT) and direct preference optimization with Mistral-7B. This includes scripts for full fine-tuning, QLoRa on a single GPU as well as multi-GPU fine-tuning.
Causal language modeling task guide |
MistralConfig
[[autodoc]] MistralConfig
MistralModel
[[autodoc]] MistralModel
- forward
MistralForCausalLM
[[autodoc]] MistralForCausalLM
- forward
MistralForSequenceClassification
[[autodoc]] MistralForSequenceClassification
- forward
FlaxMistralModel
[[autodoc]] FlaxMistralModel
- call
FlaxMistralForCausalLM
[[autodoc]] FlaxMistralForCausalLM
- call |
Swin2SR
Overview
The Swin2SR model was proposed in Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte.
Swin2SR improves upon the SwinIR model by incorporating Swin Transformer v2 layers, which mitigates issues such as training instability, resolution gaps between pre-training
and fine-tuning, and hunger on data.
The abstract from the paper is the following:
Compression plays an important role on the efficient transmission and storage of images and videos through band-limited systems such as streaming services, virtual reality or videogames. However, compression unavoidably leads to artifacts and the loss of the original information, which may severely degrade the visual quality. For these reasons, quality enhancement of compressed images has become a popular research topic. While most state-of-the-art image restoration methods are based on convolutional neural networks, other transformers-based methods such as SwinIR, show impressive performance on these tasks.
In this paper, we explore the novel Swin Transformer V2, to improve SwinIR for image super-resolution, and in particular, the compressed input scenario. Using this method we can tackle the major issues in training transformer vision models, such as training instability, resolution gaps between pre-training and fine-tuning, and hunger on data. We conduct experiments on three representative tasks: JPEG compression artifacts removal, image super-resolution (classical and lightweight), and compressed image super-resolution. Experimental results demonstrate that our method, Swin2SR, can improve the training convergence and performance of SwinIR, and is a top-5 solution at the "AIM 2022 Challenge on Super-Resolution of Compressed Image and Video". |
Swin2SR architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
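Usage example
A minimal super-resolution sketch, assuming the caidas/swin2SR-classical-sr-x2-64 checkpoint:
import requests
import numpy as np
import torch
from PIL import Image
from transformers import AutoImageProcessor, Swin2SRForImageSuperResolution
processor = AutoImageProcessor.from_pretrained("caidas/swin2SR-classical-sr-x2-64")
model = Swin2SRForImageSuperResolution.from_pretrained("caidas/swin2SR-classical-sr-x2-64")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# the reconstruction is a float tensor in [0, 1] with shape (batch, channels, height, width)
output = outputs.reconstruction.squeeze().clamp(0, 1).numpy()
output = np.moveaxis(output, source=0, destination=-1)
upscaled_image = Image.fromarray((output * 255.0).round().astype(np.uint8))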
Resources
Demo notebooks for Swin2SR can be found here.
A demo Space for image super-resolution with Swin2SR can be found here.
Swin2SRImageProcessor
[[autodoc]] Swin2SRImageProcessor
- preprocess
Swin2SRConfig
[[autodoc]] Swin2SRConfig
Swin2SRModel
[[autodoc]] Swin2SRModel
- forward
Swin2SRForImageSuperResolution
[[autodoc]] Swin2SRForImageSuperResolution
- forward |
FLAVA
Overview
The FLAVA model was proposed in FLAVA: A Foundational Language And Vision Alignment Model by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela and is accepted at CVPR 2022.
The paper aims at creating a single unified foundation model which can work across vision, language
as well as vision-and-language multimodal tasks.
The abstract from the paper is the following:
State-of-the-art vision and vision-and-language models rely on large-scale visio-linguistic pretraining for obtaining good performance on a variety
of downstream tasks. Generally, such models are often either cross-modal (contrastive) or multi-modal
(with earlier fusion) but not both; and they often only target specific modalities or tasks. A promising
direction would be to use a single holistic universal model, as a "foundation", that targets all modalities
at once -- a true vision and language foundation model should be good at vision tasks, language tasks, and
cross- and multi-modal vision and language tasks. We introduce FLAVA as such a model and demonstrate
impressive performance on a wide range of 35 tasks spanning these target modalities.
This model was contributed by aps. The original code can be found here.
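Usage example
A minimal sketch for extracting unimodal and multimodal embeddings, assuming the facebook/flava-full checkpoint (the output attribute names below follow the FlavaModel output):
import requests
from PIL import Image
from transformers import FlavaProcessor, FlavaModel
model = FlavaModel.from_pretrained("facebook/flava-full")
processor = FlavaProcessor.from_pretrained("facebook/flava-full")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of two cats"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
# unimodal and multimodal sequence embeddings
image_embeddings = outputs.image_embeddings
text_embeddings = outputs.text_embeddings
multimodal_embeddings = outputs.multimodal_embeddings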
FlavaConfig
[[autodoc]] FlavaConfig
FlavaTextConfig
[[autodoc]] FlavaTextConfig
FlavaImageConfig
[[autodoc]] FlavaImageConfig
FlavaMultimodalConfig
[[autodoc]] FlavaMultimodalConfig
FlavaImageCodebookConfig
[[autodoc]] FlavaImageCodebookConfig
FlavaProcessor
[[autodoc]] FlavaProcessor
FlavaFeatureExtractor
[[autodoc]] FlavaFeatureExtractor
FlavaImageProcessor
[[autodoc]] FlavaImageProcessor
- preprocess
FlavaForPreTraining
[[autodoc]] FlavaForPreTraining
- forward
FlavaModel
[[autodoc]] FlavaModel
- forward
- get_text_features
- get_image_features
FlavaImageCodebook
[[autodoc]] FlavaImageCodebook
- forward
- get_codebook_indices
- get_codebook_probs
FlavaTextModel
[[autodoc]] FlavaTextModel
- forward
FlavaImageModel
[[autodoc]] FlavaImageModel
- forward
FlavaMultimodalModel
[[autodoc]] FlavaMultimodalModel
- forward |
Dilated Neighborhood Attention Transformer
Overview
DiNAT was proposed in Dilated Neighborhood Attention Transformer
by Ali Hassani and Humphrey Shi.
It extends NAT by adding a Dilated Neighborhood Attention pattern to capture global context,
and shows significant performance improvements over it.
The abstract from the paper is the following:
*Transformers are quickly becoming one of the most heavily applied deep learning architectures across modalities,
domains, and tasks. In vision, on top of ongoing efforts into plain transformers, hierarchical transformers have
also gained significant attention, thanks to their performance and easy integration into existing frameworks.
These models typically employ localized attention mechanisms, such as the sliding-window Neighborhood Attention (NA)
or Swin Transformer's Shifted Window Self Attention. While effective at reducing self attention's quadratic complexity,
local attention weakens two of the most desirable properties of self attention: long range inter-dependency modeling,
and global receptive field. In this paper, we introduce Dilated Neighborhood Attention (DiNA), a natural, flexible and
efficient extension to NA that can capture more global context and expand receptive fields exponentially at no
additional cost. NA's local attention and DiNA's sparse global attention complement each other, and therefore we
introduce Dilated Neighborhood Attention Transformer (DiNAT), a new hierarchical vision transformer built upon both.
DiNAT variants enjoy significant improvements over strong baselines such as NAT, Swin, and ConvNeXt.
Our large model is faster and ahead of its Swin counterpart by 1.5% box AP in COCO object detection,
1.3% mask AP in COCO instance segmentation, and 1.1% mIoU in ADE20K semantic segmentation.
Paired with new frameworks, our large variant is the new state of the art panoptic segmentation model on COCO (58.2 PQ)
and ADE20K (48.5 PQ), and instance segmentation model on Cityscapes (44.5 AP) and ADE20K (35.4 AP) (no extra data).
It also matches the state of the art specialized semantic segmentation models on ADE20K (58.2 mIoU),
and ranks second on Cityscapes (84.5 mIoU) (no extra data). * |
Neighborhood Attention with different dilation values.
Taken from the original paper.
This model was contributed by Ali Hassani.
The original code can be found here.
Usage tips
DiNAT can be used as a backbone. When output_hidden_states = True,
it will output both hidden_states and reshaped_hidden_states. The reshaped_hidden_states have a shape of (batch_size, num_channels, height, width) rather than (batch_size, height, width, num_channels); see the sketch after the notes below.
Notes:
- DiNAT depends on NATTEN's implementation of Neighborhood Attention and Dilated Neighborhood Attention.
You can install it with pre-built wheels for Linux by referring to shi-labs.com/natten, or build on your system by running pip install natten.
Note that the latter will likely take time to compile. NATTEN does not support Windows devices yet.
- Only a patch size of 4 is supported at the moment.
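A minimal sketch of inspecting both hidden-state layouts, assuming the shi-labs/dinat-mini-in1k-224 checkpoint and an example image chosen purely for illustration (natten must be installed):

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, DinatModel

# Assumed checkpoint for illustration; requires the `natten` package to be installed.
checkpoint = "shi-labs/dinat-mini-in1k-224"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = DinatModel.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Channels-last layout: (batch_size, height, width, num_channels)
print(outputs.hidden_states[-1].shape)
# Channels-first layout, convenient for dense-prediction heads: (batch_size, num_channels, height, width)
print(outputs.reshaped_hidden_states[-1].shape)
```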
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DiNAT. |
[DinatForImageClassification] is supported by this example script and notebook.
See also: Image classification task guide |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
DinatConfig
[[autodoc]] DinatConfig
DinatModel
[[autodoc]] DinatModel
- forward
DinatForImageClassification
[[autodoc]] DinatForImageClassification
- forward |
Wav2Vec2-Conformer
Overview
The Wav2Vec2-Conformer was added to an updated version of fairseq S2T: Fast Speech-to-Text Modeling with fairseq by Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino.
The official results of the model can be found in Table 3 and Table 4 of the paper.
The Wav2Vec2-Conformer weights were released by the Meta AI team within the Fairseq library.
This model was contributed by patrickvonplaten.
The original code can be found here.
Usage tips |
Wav2Vec2-Conformer follows the same architecture as Wav2Vec2, but replaces the attention block with a Conformer block
as introduced in Conformer: Convolution-augmented Transformer for Speech Recognition.
For the same number of layers, Wav2Vec2-Conformer requires more parameters than Wav2Vec2, but also yields
an improved word error rate.
Wav2Vec2-Conformer uses the same tokenizer and feature extractor as Wav2Vec2.
Wav2Vec2-Conformer can use either no relative position embeddings, Transformer-XL-like position embeddings, or
rotary position embeddings by setting config.position_embeddings_type accordingly; a brief inference sketch follows below.
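A minimal speech-recognition sketch with the CTC head, assuming the facebook/wav2vec2-conformer-rope-large-960h-ft checkpoint (which uses rotary position embeddings) and a small dummy LibriSpeech split chosen purely for illustration:

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC

# Assumed checkpoint for illustration; it uses rotary position embeddings
# (i.e. config.position_embeddings_type == "rotary").
model_id = "facebook/wav2vec2-conformer-rope-large-960h-ft"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ConformerForCTC.from_pretrained(model_id)

# Small dummy LibriSpeech split, used here only as an example input.
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = dataset[0]["audio"]

inputs = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```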
Resources
Audio classification task guide
Automatic speech recognition task guide |
Wav2Vec2ConformerConfig
[[autodoc]] Wav2Vec2ConformerConfig
Wav2Vec2Conformer specific outputs
[[autodoc]] models.wav2vec2_conformer.modeling_wav2vec2_conformer.Wav2Vec2ConformerForPreTrainingOutput
Wav2Vec2ConformerModel
[[autodoc]] Wav2Vec2ConformerModel
- forward
Wav2Vec2ConformerForCTC
[[autodoc]] Wav2Vec2ConformerForCTC
- forward
Wav2Vec2ConformerForSequenceClassification
[[autodoc]] Wav2Vec2ConformerForSequenceClassification
- forward
Wav2Vec2ConformerForAudioFrameClassification
[[autodoc]] Wav2Vec2ConformerForAudioFrameClassification
- forward
Wav2Vec2ConformerForXVector
[[autodoc]] Wav2Vec2ConformerForXVector
- forward
Wav2Vec2ConformerForPreTraining
[[autodoc]] Wav2Vec2ConformerForPreTraining
- forward |
MarkupLM
Overview
The MarkupLM model was proposed in MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document
Understanding by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. MarkupLM is BERT, but
applied to HTML pages instead of raw text documents. The model incorporates additional embedding layers to improve
performance, similar to LayoutLM.
The model can be used for tasks like question answering on web pages or information extraction from web pages. It obtains
state-of-the-art results on 2 important benchmarks:
- WebSRC, a dataset for Web-Based Structural Reading Comprehension (a bit like SQuAD but for web pages)
- SWDE, a dataset
for information extraction from web pages (basically named-entity recognition on web pages)
The abstract from the paper is the following:
Multimodal pre-training with text, layout, and image has made significant progress for Visually-rich Document
Understanding (VrDU), especially the fixed-layout documents such as scanned document images. While, there are still a
large number of digital documents where the layout information is not fixed and needs to be interactively and
dynamically rendered for visualization, making existing layout-based pre-training approaches not easy to apply. In this
paper, we propose MarkupLM for document understanding tasks with markup languages as the backbone such as
HTML/XML-based documents, where text and markup information is jointly pre-trained. Experiment results show that the
pre-trained MarkupLM significantly outperforms the existing strong baseline models on several document understanding
tasks. The pre-trained model and code will be publicly available.
This model was contributed by nielsr. The original code can be found here.
Usage tips |
In addition to input_ids, [~MarkupLMModel.forward] expects 2 additional inputs, namely xpath_tags_seq and xpath_subs_seq.
These are the XPath tags and subscripts, respectively, for each token in the input sequence.
One can use [MarkupLMProcessor] to prepare all data for the model. Refer to the usage guide for more info. |
MarkupLM architecture. Taken from the original paper.
Usage: MarkupLMProcessor
The easiest way to prepare data for the model is to use [MarkupLMProcessor], which internally combines a feature extractor
([MarkupLMFeatureExtractor]) and a tokenizer ([MarkupLMTokenizer] or [MarkupLMTokenizerFast]). The feature extractor is
used to extract all nodes and xpaths from the HTML strings, which are then provided to the tokenizer, which turns them into the
token-level inputs of the model (input_ids etc.). Note that you can still use the feature extractor and tokenizer separately,
if you only want to handle one of the two tasks.
```python
from transformers import MarkupLMFeatureExtractor, MarkupLMTokenizerFast, MarkupLMProcessor

feature_extractor = MarkupLMFeatureExtractor()
tokenizer = MarkupLMTokenizerFast.from_pretrained("microsoft/markuplm-base")
processor = MarkupLMProcessor(feature_extractor, tokenizer)
```
In short, one can provide HTML strings (and possibly additional data) to [MarkupLMProcessor],
and it will create the inputs expected by the model. Internally, the processor first uses
[MarkupLMFeatureExtractor] to get a list of nodes and corresponding xpaths. The nodes and
xpaths are then provided to [MarkupLMTokenizer] or [MarkupLMTokenizerFast], which converts them
to token-level input_ids, attention_mask, token_type_ids, xpath_subs_seq, xpath_tags_seq.
Optionally, one can provide node labels to the processor, which are turned into token-level labels.
[MarkupLMFeatureExtractor] uses Beautiful Soup, a Python library for
pulling data out of HTML and XML files, under the hood. Note that you can still use your own parsing solution of
choice, and provide the nodes and xpaths yourself to [MarkupLMTokenizer] or [MarkupLMTokenizerFast].
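To make this intermediate step concrete, here is a small sketch of running [MarkupLMFeatureExtractor] on its own to obtain nodes and xpaths (the HTML snippet is just an illustration); the result can then be passed to the tokenizer, or to the processor with parse_html set to False.

```python
from transformers import MarkupLMFeatureExtractor

feature_extractor = MarkupLMFeatureExtractor()

# Minimal HTML snippet used purely for illustration.
html_string = "<html><body><h1>Welcome</h1><p>Here is my website.</p></body></html>"

features = feature_extractor(html_string)
print(features["nodes"])   # e.g. [['Welcome', 'Here is my website.']]
print(features["xpaths"])  # the corresponding xpath for every extracted node
```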
In total, there are 5 use cases supported by the processor. Below, we list them all. Note that each of these
use cases works for both batched and non-batched inputs (we illustrate them for non-batched inputs).
Use case 1: web page classification (training, inference) + token classification (inference), parse_html = True
This is the simplest case, in which the processor will use the feature extractor to get all nodes and xpaths from the HTML.
```python
from transformers import MarkupLMProcessor

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")

html_string = """
<!DOCTYPE html>
<html>
<head>
<title>Hello world</title>
</head>
<body>
<h1>Welcome</h1>
<p>Here is my website.</p>
</body>
</html>"""

# Note that you can also provide all tokenizer parameters here, such as padding and truncation.
encoding = processor(html_string, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
```
Use case 2: web page classification (training, inference) + token classification (inference), parse_html=False
In case one already has obtained all nodes and xpaths, one doesn't need the feature extractor. In that case, one should
provide the nodes and corresponding xpaths themselves to the processor, and make sure to set parse_html to False.
```python
from transformers import MarkupLMProcessor

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
processor.parse_html = False

nodes = ["hello", "world", "how", "are"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"]
encoding = processor(nodes=nodes, xpaths=xpaths, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
```
Use case 3: token classification (training), parse_html=False
For token classification tasks (such as SWDE), one can also provide the
corresponding node labels in order to train a model. The processor will then convert these into token-level labels.
By default, it will only label the first wordpiece of a word and label the remaining wordpieces with -100, which is the
ignore_index of PyTorch's CrossEntropyLoss. In case you want all wordpieces of a word to be labeled, you can
initialize the tokenizer with only_label_first_subword set to False (a short sketch follows the example below).
```python
from transformers import MarkupLMProcessor

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
processor.parse_html = False

nodes = ["hello", "world", "how", "are"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"]
node_labels = [1, 2, 2, 1]
encoding = processor(nodes=nodes, xpaths=xpaths, node_labels=node_labels, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq', 'labels'])
```
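As a minimal sketch of the only_label_first_subword option, one way (among others) is to build the processor from a tokenizer initialized with that flag set to False:

```python
from transformers import MarkupLMFeatureExtractor, MarkupLMTokenizerFast, MarkupLMProcessor

# Initialize the tokenizer so that all wordpieces of a word receive the node label (not just the first one).
feature_extractor = MarkupLMFeatureExtractor()
tokenizer = MarkupLMTokenizerFast.from_pretrained("microsoft/markuplm-base", only_label_first_subword=False)
processor = MarkupLMProcessor(feature_extractor, tokenizer)
processor.parse_html = False
```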