NystromformerConfig
[[autodoc]] NystromformerConfig
NystromformerModel
[[autodoc]] NystromformerModel
- forward
NystromformerForMaskedLM
[[autodoc]] NystromformerForMaskedLM
- forward
NystromformerForSequenceClassification
[[autodoc]] NystromformerForSequenceClassification
- forward
NystromformerForMultipleChoice
[[autodoc]] NystromformerForMultipleChoice
- forward
NystromformerForTokenClassification
[[autodoc]] NystromformerForTokenClassification
- forward
NystromformerForQuestionAnswering
[[autodoc]] NystromformerForQuestionAnswering
- forward |
GLPN
This is a recently introduced model, so the API hasn't been tested extensively. There may be some bugs or slight
breaking changes to be fixed in the future. If you see something strange, file a GitHub issue.
Overview
The GLPN model was proposed in Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
GLPN combines SegFormer's hierarchical mix-Transformer with a lightweight decoder for monocular depth estimation. The proposed decoder shows better performance than the previously proposed decoders, with considerably
less computational complexity.
The abstract from the paper is the following:
Depth estimation from a single image is an important task that can be applied to various fields in computer vision, and has grown rapidly with the development of convolutional neural networks. In this paper, we propose a novel structure and training strategy for monocular depth estimation to further improve the prediction accuracy of the network. We deploy a hierarchical transformer encoder to capture and convey the global context, and design a lightweight yet powerful decoder to generate an estimated depth map while considering local connectivity. By constructing connected paths between multi-scale local features and the global decoding stream with our proposed selective feature fusion module, the network can integrate both representations and recover fine details. In addition, the proposed decoder shows better performance than the previously proposed decoders, with considerably less computational complexity. Furthermore, we improve the depth-specific augmentation method by utilizing an important observation in depth estimation to enhance the model. Our network achieves state-of-the-art performance over the challenging depth dataset NYU Depth V2. Extensive experiments have been conducted to validate and show the effectiveness of the proposed approach. Finally, our model shows better generalisation ability and robustness than other comparative models. |
Summary of the approach. Taken from the original paper.
This model was contributed by nielsr. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GLPN.
Demo notebooks for [GLPNForDepthEstimation] can be found here.
Monocular depth estimation task guide
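Beyond the notebooks, the basic inference flow is short enough to show inline. The sketch below is illustrative only: the vinvino02/glpn-kitti checkpoint and the sample image URL are assumptions, so substitute whatever you actually use.

```python
import torch
import requests
from PIL import Image
from transformers import GLPNImageProcessor, GLPNForDepthEstimation

# checkpoint name is an assumption; swap in the GLPN checkpoint you actually use
processor = GLPNImageProcessor.from_pretrained("vinvino02/glpn-kitti")
model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-kitti")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# predicted_depth has shape (batch_size, height, width)
predicted_depth = outputs.predicted_depth
```

The predicted depth map can then be interpolated back to the original image size for visualization.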
GLPNConfig
[[autodoc]] GLPNConfig
GLPNFeatureExtractor
[[autodoc]] GLPNFeatureExtractor
- call
GLPNImageProcessor
[[autodoc]] GLPNImageProcessor
- preprocess
GLPNModel
[[autodoc]] GLPNModel
- forward
GLPNForDepthEstimation
[[autodoc]] GLPNForDepthEstimation
- forward |
Gemma
Overview
The Gemma model was proposed in Gemma: Open Models Based on Gemini Technology and Research by Gemma Team, Google.
Gemma models are trained on 6T tokens and released in two versions, 2B and 7B.
The abstract from the paper is the following:
This work introduces Gemma, a new family of open language models demonstrating strong performance across academic benchmarks for language understanding, reasoning, and safety. We release two sizes of models (2 billion and 7 billion parameters), and provide both pretrained and fine-tuned checkpoints. Gemma outperforms similarly sized open models on 11 out of 18 text-based tasks, and we present comprehensive evaluations of safety and responsibility aspects of the models, alongside a detailed description of our model development. We believe the responsible release of LLMs is critical for improving the safety of frontier models, and for enabling the next wave of LLM innovations
Tips: |
The original checkpoints can be converted using the conversion script src/transformers/models/gemma/convert_gemma_weights_to_hf.py |
This model was contributed by Arthur Zucker, Younes Belkada, Sanchit Gandhi, Pedro Cuenca.
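For a quick smoke test, a minimal generation sketch is shown below. It assumes the google/gemma-2b checkpoint; access to the Gemma checkpoints may require accepting the license on the Hub first.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# google/gemma-2b is the smaller of the two released sizes; google/gemma-7b works the same way
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```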
GemmaConfig
[[autodoc]] GemmaConfig
GemmaTokenizer
[[autodoc]] GemmaTokenizer
GemmaTokenizerFast
[[autodoc]] GemmaTokenizerFast
GemmaModel
[[autodoc]] GemmaModel
- forward
GemmaForCausalLM
[[autodoc]] GemmaForCausalLM
- forward
GemmaForSequenceClassification
[[autodoc]] GemmaForSequenceClassification
- forward
FlaxGemmaModel
[[autodoc]] FlaxGemmaModel
- call
FlaxGemmaForCausalLM
[[autodoc]] FlaxGemmaForCausalLM
- call |
WavLM
Overview
The WavLM model was proposed in WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen,
Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu,
Michael Zeng, Furu Wei.
The abstract from the paper is the following:
Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been
attempted for other speech processing tasks. As speech signal contains multi-faceted information including speaker
identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is
challenging. In this paper, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks.
WavLM is built based on the HuBERT framework, with an emphasis on both spoken content modeling and speaker identity
preservation. We first equip the Transformer structure with gated relative position bias to improve its capability on
recognition tasks. For better speaker discrimination, we propose an utterance mixing training strategy, where
additional overlapped utterances are created unsupervisedly and incorporated during model training. Lastly, we scale up
the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB
benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks.
Relevant checkpoints can be found under https://huggingface.co/models?other=wavlm.
This model was contributed by patrickvonplaten. The Authors' code can be
found here.
Usage tips |
WavLM is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Please use
[Wav2Vec2Processor] for the feature extraction.
WavLM model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded
using [Wav2Vec2CTCTokenizer].
WavLM performs especially well on speaker verification, speaker identification, and speaker diarization tasks.
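Putting these tips together, here is a minimal feature-extraction sketch. The microsoft/wavlm-base-plus checkpoint and the dummy LibriSpeech dataset are assumptions for illustration; for CTC decoding you would instead load a fine-tuned [WavLMForCTC] checkpoint together with [Wav2Vec2Processor].

```python
import torch
from datasets import load_dataset
from transformers import AutoFeatureExtractor, WavLMModel

# checkpoint name is an assumption; it is a pretrained (not fine-tuned) checkpoint
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus")
model = WavLMModel.from_pretrained("microsoft/wavlm-base-plus")

# a small dummy speech dataset; any 16 kHz mono waveform works
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    # last_hidden_state has shape (batch_size, num_frames, hidden_size)
    hidden_states = model(**inputs).last_hidden_state
```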
Resources
Audio classification task guide
Automatic speech recognition task guide
WavLMConfig
[[autodoc]] WavLMConfig
WavLMModel
[[autodoc]] WavLMModel
- forward
WavLMForCTC
[[autodoc]] WavLMForCTC
- forward
WavLMForSequenceClassification
[[autodoc]] WavLMForSequenceClassification
- forward
WavLMForAudioFrameClassification
[[autodoc]] WavLMForAudioFrameClassification
- forward
WavLMForXVector
[[autodoc]] WavLMForXVector
- forward |
UniSpeech-SAT
Overview
The UniSpeech-SAT model was proposed in UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware
Pre-Training by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen,
Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
The abstract from the paper is the following:
Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled
data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in
speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In
this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are
introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to
the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function.
Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where
additional overlapped utterances are created unsupervisedly and incorporate during training. We integrate the proposed
methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves
state-of-the-art performance in universal representation learning, especially for speaker identification oriented
tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training
dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks.
This model was contributed by patrickvonplaten. The Authors' code can be
found here.
Usage tips |
UniSpeechSat is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
Please use [Wav2Vec2Processor] for the feature extraction.
UniSpeechSat model can be fine-tuned using connectionist temporal classification (CTC) so the model output has to be
decoded using [Wav2Vec2CTCTokenizer].
UniSpeechSat performs especially well on speaker verification, speaker identification, and speaker diarization tasks.
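As with WavLM, a minimal feature-extraction sketch looks like the following. The microsoft/unispeech-sat-base-plus checkpoint and the dummy dataset are assumptions for illustration; CTC decoding requires a fine-tuned [UniSpeechSatForCTC] checkpoint together with [Wav2Vec2Processor].

```python
import torch
from datasets import load_dataset
from transformers import AutoFeatureExtractor, UniSpeechSatModel

# checkpoint name is an assumption; it is a pretrained (not fine-tuned) checkpoint
feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/unispeech-sat-base-plus")
model = UniSpeechSatModel.from_pretrained("microsoft/unispeech-sat-base-plus")

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    # last_hidden_state has shape (batch_size, num_frames, hidden_size)
    hidden_states = model(**inputs).last_hidden_state
```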
Resources
Audio classification task guide
Automatic speech recognition task guide |
UniSpeechSatConfig
[[autodoc]] UniSpeechSatConfig
UniSpeechSat specific outputs
[[autodoc]] models.unispeech_sat.modeling_unispeech_sat.UniSpeechSatForPreTrainingOutput
UniSpeechSatModel
[[autodoc]] UniSpeechSatModel
- forward
UniSpeechSatForCTC
[[autodoc]] UniSpeechSatForCTC
- forward
UniSpeechSatForSequenceClassification
[[autodoc]] UniSpeechSatForSequenceClassification
- forward
UniSpeechSatForAudioFrameClassification
[[autodoc]] UniSpeechSatForAudioFrameClassification
- forward
UniSpeechSatForXVector
[[autodoc]] UniSpeechSatForXVector
- forward
UniSpeechSatForPreTraining
[[autodoc]] UniSpeechSatForPreTraining
- forward |
DPR |
Overview
Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. It was
introduced in Dense Passage Retrieval for Open-Domain Question Answering by
Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih.
The abstract from the paper is the following:
Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional
sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can
be practically implemented using dense representations alone, where embeddings are learned from a small number of
questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets,
our dense retriever outperforms a strong Lucene-BM25 system largely by 9%-19% absolute in terms of top-20 passage
retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA
benchmarks.
This model was contributed by lhoestq. The original code can be found here.
Usage tips |
DPR consists of three models:
Question encoder: encodes questions as vectors
Context encoder: encodes contexts (passages) as vectors
Reader: extracts the answer to the question from the retrieved contexts, along with a relevance score (high if the inferred span actually answers the question).
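As a concrete sketch of the question-encoder half, the snippet below embeds a question into the dense vector used for retrieval. The facebook/dpr-question_encoder-single-nq-base checkpoint is an assumption; the context encoder is used analogously with [DPRContextEncoder].

```python
from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer

# checkpoint name is an assumption; any DPR question-encoder checkpoint follows the same pattern
tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
model = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

inputs = tokenizer("What is the capital of France?", return_tensors="pt")
# pooler_output is the dense vector used for similarity search against context embeddings
embeddings = model(**inputs).pooler_output
```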
DPRConfig
[[autodoc]] DPRConfig
DPRContextEncoderTokenizer
[[autodoc]] DPRContextEncoderTokenizer
DPRContextEncoderTokenizerFast
[[autodoc]] DPRContextEncoderTokenizerFast
DPRQuestionEncoderTokenizer
[[autodoc]] DPRQuestionEncoderTokenizer
DPRQuestionEncoderTokenizerFast
[[autodoc]] DPRQuestionEncoderTokenizerFast
DPRReaderTokenizer
[[autodoc]] DPRReaderTokenizer
DPRReaderTokenizerFast
[[autodoc]] DPRReaderTokenizerFast
DPR specific outputs
[[autodoc]] models.dpr.modeling_dpr.DPRContextEncoderOutput
[[autodoc]] models.dpr.modeling_dpr.DPRQuestionEncoderOutput
[[autodoc]] models.dpr.modeling_dpr.DPRReaderOutput |
DPRContextEncoder
[[autodoc]] DPRContextEncoder
- forward
DPRQuestionEncoder
[[autodoc]] DPRQuestionEncoder
- forward
DPRReader
[[autodoc]] DPRReader
- forward
TFDPRContextEncoder
[[autodoc]] TFDPRContextEncoder
- call
TFDPRQuestionEncoder
[[autodoc]] TFDPRQuestionEncoder
- call
TFDPRReader
[[autodoc]] TFDPRReader
- call |
REALM
Overview
The REALM model was proposed in REALM: Retrieval-Augmented Language Model Pre-Training by Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat and Ming-Wei Chang. It's a
retrieval-augmented language model that firstly retrieves documents from a textual knowledge corpus and then
utilizes retrieved documents to process question answering tasks.
The abstract from the paper is the following:
Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks
such as question answering. However, this knowledge is stored implicitly in the parameters of a neural network,
requiring ever-larger networks to cover more facts. To capture knowledge in a more modular and interpretable way, we
augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend
over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference. For the
first time, we show how to pre-train such a knowledge retriever in an unsupervised manner, using masked language
modeling as the learning signal and backpropagating through a retrieval step that considers millions of documents. We
demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the
challenging task of Open-domain Question Answering (Open-QA). We compare against state-of-the-art models for both
explicit and implicit knowledge storage on three popular Open-QA benchmarks, and find that we outperform all previous
methods by a significant margin (4-16% absolute accuracy), while also providing qualitative benefits such as
interpretability and modularity.
This model was contributed by qqaatw. The original code can be found
here.
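As a minimal sketch of the retrieval side, the snippet below embeds a sentence with [RealmEmbedder]. The google/realm-cc-news-pretrained-embedder checkpoint name is an assumption; end-to-end open-domain QA instead goes through [RealmForOpenQA] with a [RealmRetriever].

```python
import torch
from transformers import RealmTokenizer, RealmEmbedder

# checkpoint name is an assumption; substitute the REALM embedder checkpoint you actually use
tokenizer = RealmTokenizer.from_pretrained("google/realm-cc-news-pretrained-embedder")
model = RealmEmbedder.from_pretrained("google/realm-cc-news-pretrained-embedder")

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# projected_score is the low-dimensional embedding REALM uses for retrieval scoring
projected_score = outputs.projected_score
```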
RealmConfig
[[autodoc]] RealmConfig
RealmTokenizer
[[autodoc]] RealmTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
- batch_encode_candidates
RealmTokenizerFast
[[autodoc]] RealmTokenizerFast
- batch_encode_candidates
RealmRetriever
[[autodoc]] RealmRetriever
RealmEmbedder
[[autodoc]] RealmEmbedder
- forward
RealmScorer
[[autodoc]] RealmScorer
- forward
RealmKnowledgeAugEncoder
[[autodoc]] RealmKnowledgeAugEncoder
- forward
RealmReader
[[autodoc]] RealmReader
- forward
RealmForOpenQA
[[autodoc]] RealmForOpenQA
- block_embedding_to
- forward |
MGP-STR
Overview
The MGP-STR model was proposed in Multi-Granularity Prediction for Scene Text Recognition by Peng Wang, Cheng Da, and Cong Yao. MGP-STR is a conceptually simple yet powerful vision Scene Text Recognition (STR) model built upon the Vision Transformer (ViT). To integrate linguistic knowledge, a Multi-Granularity Prediction (MGP) strategy is used to inject information from the language modality into the model in an implicit way.
The abstract from the paper is the following:
Scene text recognition (STR) has been an active research topic in computer vision for years. To tackle this challenging problem, numerous innovative methods have been successively proposed and incorporating linguistic knowledge into STR models has recently become a prominent trend. In this work, we first draw inspiration from the recent progress in Vision Transformer (ViT) to construct a conceptually simple yet powerful vision STR model, which is built upon ViT and outperforms previous state-of-the-art models for scene text recognition, including both pure vision models and language-augmented methods. To integrate linguistic knowledge, we further propose a Multi-Granularity Prediction strategy to inject information from the language modality into the model in an implicit way, i.e. , subword representations (BPE and WordPiece) widely-used in NLP are introduced into the output space, in addition to the conventional character level representation, while no independent language model (LM) is adopted. The resultant algorithm (termed MGP-STR) is able to push the performance envelop of STR to an even higher level. Specifically, it achieves an average recognition accuracy of 93.35% on standard benchmarks. |
MGP-STR architecture. Taken from the original paper.
MGP-STR is trained on two synthetic datasets MJSynth (MJ) and SynthText (ST) without fine-tuning on other datasets. It achieves state-of-the-art results on six standard Latin scene text benchmarks, including 3 regular text datasets (IC13, SVT, IIIT) and 3 irregular ones (IC15, SVTP, CUTE).
This model was contributed by yuekun. The original code can be found here.
Inference example
[MgpstrModel] accepts images as input and generates three types of predictions, which represent textual information at different granularities.
The three types of predictions are fused to give the final prediction result.
The [ViTImageProcessor] class is responsible for preprocessing the input image and
[MgpstrTokenizer] decodes the generated character tokens to the target string. The
[MgpstrProcessor] wraps [ViTImageProcessor] and [MgpstrTokenizer]
into a single instance to both extract the input features and decode the predicted token ids. |
Step-by-step Optical Character Recognition (OCR)

```python
from transformers import MgpstrProcessor, MgpstrForSceneTextRecognition
import requests
from PIL import Image

processor = MgpstrProcessor.from_pretrained("alibaba-damo/mgp-str-base")
model = MgpstrForSceneTextRecognition.from_pretrained("alibaba-damo/mgp-str-base")

# load image from the IIIT-5k dataset
url = "https://i.postimg.cc/ZKwLg2Gw/367-14.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

pixel_values = processor(images=image, return_tensors="pt").pixel_values
outputs = model(pixel_values)

generated_text = processor.batch_decode(outputs.logits)["generated_text"]
```
MgpstrConfig
[[autodoc]] MgpstrConfig
MgpstrTokenizer
[[autodoc]] MgpstrTokenizer
- save_vocabulary
MgpstrProcessor
[[autodoc]] MgpstrProcessor
- call
- batch_decode
MgpstrModel
[[autodoc]] MgpstrModel
- forward
MgpstrForSceneTextRecognition
[[autodoc]] MgpstrForSceneTextRecognition
- forward |
MEGA
Overview
The MEGA model was proposed in Mega: Moving Average Equipped Gated Attention by Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, and Luke Zettlemoyer.
MEGA proposes a new approach to self-attention with each encoder layer having a multi-headed exponential moving average in addition to a single head of standard dot-product attention, giving the attention mechanism
stronger positional biases. This allows MEGA to perform competitively to Transformers on standard benchmarks including LRA
while also having significantly fewer parameters. MEGA's compute efficiency allows it to scale to very long sequences, making it an
attractive option for long-document NLP tasks.
The abstract from the paper is the following:
*The design choices in the Transformer attention mechanism, including weak inductive bias and quadratic computational complexity, have limited its application for modeling long sequences. In this paper, we introduce Mega, a simple, theoretically grounded, single-head gated attention mechanism equipped with (exponential) moving average to incorporate inductive bias of position-aware local dependencies into the position-agnostic attention mechanism. We further propose a variant of Mega that offers linear time and space complexity yet yields only minimal quality loss, by efficiently splitting the whole sequence into multiple chunks with fixed length. Extensive experiments on a wide range of sequence modeling benchmarks, including the Long Range Arena, neural machine translation, auto-regressive language modeling, and image and speech classification, show that Mega achieves significant improvements over other sequence models, including variants of Transformers and recent state space models. *
This model was contributed by mnaylor.
The original code can be found here.
Usage tips |
MEGA can perform quite well with relatively few parameters. See Appendix D in the MEGA paper for examples of architectural specs which perform well in various settings. If using MEGA as a decoder, be sure to set bidirectional=False to avoid errors with the default bidirectional behavior.
Mega-chunk is a variant of MEGA that reduces time and space complexity from quadratic to linear. Enable chunking with MegaConfig.use_chunking and control the chunk size with MegaConfig.chunk_size, as sketched below.
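A minimal configuration sketch of these options; the values below are illustrative assumptions, not tuned recommendations.

```python
from transformers import MegaConfig, MegaModel

# illustrative settings only; adjust sizes to your task
config = MegaConfig(
    use_chunking=True,    # switch to the linear-complexity Mega-chunk variant
    chunk_size=128,       # length of each fixed-size chunk
    bidirectional=False,  # required when MEGA is used as a decoder
    is_decoder=True,
)
model = MegaModel(config)  # randomly initialized model with this configuration
```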
Implementation Notes
The original implementation of MEGA had an inconsistent expectation of attention masks for padding and causal self-attention between the softmax attention and Laplace/squared ReLU method. This implementation addresses that inconsistency.
The original implementation did not include token type embeddings; this implementation adds support for these, with the option controlled by MegaConfig.add_token_type_embeddings |
MegaConfig
[[autodoc]] MegaConfig
MegaModel
[[autodoc]] MegaModel
- forward
MegaForCausalLM
[[autodoc]] MegaForCausalLM
- forward
MegaForMaskedLM
[[autodoc]] MegaForMaskedLM
- forward
MegaForSequenceClassification
[[autodoc]] MegaForSequenceClassification
- forward
MegaForMultipleChoice
[[autodoc]] MegaForMultipleChoice
- forward
MegaForTokenClassification
[[autodoc]] MegaForTokenClassification
- forward
MegaForQuestionAnswering
[[autodoc]] MegaForQuestionAnswering
- forward |
CodeGen
Overview
The CodeGen model was proposed in A Conversational Paradigm for Program Synthesis by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong.
CodeGen is an autoregressive language model for program synthesis trained sequentially on The Pile, BigQuery, and BigPython.
The abstract from the paper is the following:
Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses the challenges of searching over a vast program space and user intent specification faced in prior approaches. Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called CodeGen, on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To study the model behavior on conversational program synthesis, we develop a multi-turn programming benchmark (MTPB), where solving each problem requires multi-step synthesis via multi-turn conversation between the user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model CodeGen (with up to 16B parameters trained on TPU-v4) outperforms OpenAI's Codex on the HumanEval benchmark. We make the training library JaxFormer including checkpoints available as open source contribution: this https URL.
This model was contributed by Hiroaki Hayashi.
The original code can be found here.
Checkpoint Naming |
CodeGen model checkpoints are available on different pre-training data with variable sizes.
The format is: Salesforce/codegen-{size}-{data}, where
size: 350M, 2B, 6B, 16B
data:
nl: Pre-trained on the Pile
multi: Initialized with nl, then further pre-trained on multiple programming languages data
mono: Initialized with multi, then further pre-trained on Python data |
For example, Salesforce/codegen-350M-mono offers a 350 million-parameter checkpoint pre-trained sequentially on the Pile, multiple programming languages, and Python.
Usage example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "Salesforce/codegen-350M-mono"
model = AutoModelForCausalLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
text = "def hello_world():"
completion = model.generate(**tokenizer(text, return_tensors="pt"))
print(tokenizer.decode(completion[0]))
def hello_world():
print("Hello World")
hello_world()
```

Resources
Causal language modeling task guide
CodeGenConfig
[[autodoc]] CodeGenConfig
- all
CodeGenTokenizer
[[autodoc]] CodeGenTokenizer
- save_vocabulary
CodeGenTokenizerFast
[[autodoc]] CodeGenTokenizerFast
CodeGenModel
[[autodoc]] CodeGenModel
- forward
CodeGenForCausalLM
[[autodoc]] CodeGenForCausalLM
- forward |
ProphetNet |
Overview
The ProphetNet model was proposed in ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training, by Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei
Zhang, Ming Zhou on 13 Jan, 2020.
ProphetNet is an encoder-decoder model and can predict n-future tokens for "ngram" language modeling instead of just
the next token.
The abstract from the paper is the following:
In this paper, we present a new sequence-to-sequence pretraining model called ProphetNet, which introduces a novel
self-supervised objective named future n-gram prediction and the proposed n-stream self-attention mechanism. Instead of
the optimization of one-step ahead prediction in traditional sequence-to-sequence model, the ProphetNet is optimized by
n-step ahead prediction which predicts the next n tokens simultaneously based on previous context tokens at each time
step. The future n-gram prediction explicitly encourages the model to plan for the future tokens and prevent
overfitting on strong local correlations. We pre-train ProphetNet using a base scale dataset (16GB) and a large scale
dataset (160GB) respectively. Then we conduct experiments on CNN/DailyMail, Gigaword, and SQuAD 1.1 benchmarks for
abstractive summarization and question generation tasks. Experimental results show that ProphetNet achieves new
state-of-the-art results on all these datasets compared to the models using the same scale pretraining corpus.
The Authors' code can be found here.
Usage tips |
ProphetNet is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than
the left.
The model architecture is based on the original Transformer, but replaces the “standard” self-attention mechanism in the decoder with a main self-attention mechanism and a self- and n-stream (predict) self-attention mechanism.
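A minimal generation sketch with the encoder-decoder model is shown below. The microsoft/prophetnet-large-uncased checkpoint name is an assumption, and since it is only pre-trained, fine-tune it (e.g. for summarization) before expecting useful output.

```python
from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration

# checkpoint name is an assumption; use the ProphetNet checkpoint that matches your task
tokenizer = ProphetNetTokenizer.from_pretrained("microsoft/prophetnet-large-uncased")
model = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased")

input_ids = tokenizer(
    "ProphetNet predicts the next n tokens simultaneously instead of only the next one.",
    return_tensors="pt",
).input_ids

# generate a continuation with beam search
output_ids = model.generate(input_ids, max_length=30, num_beams=4)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```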
Resources
Causal language modeling task guide
Translation task guide
Summarization task guide |
ProphetNetConfig
[[autodoc]] ProphetNetConfig
ProphetNetTokenizer
[[autodoc]] ProphetNetTokenizer
ProphetNet specific outputs
[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqLMOutput
[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetSeq2SeqModelOutput
[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetDecoderModelOutput
[[autodoc]] models.prophetnet.modeling_prophetnet.ProphetNetDecoderLMOutput
ProphetNetModel
[[autodoc]] ProphetNetModel
- forward
ProphetNetEncoder
[[autodoc]] ProphetNetEncoder
- forward
ProphetNetDecoder
[[autodoc]] ProphetNetDecoder
- forward
ProphetNetForConditionalGeneration
[[autodoc]] ProphetNetForConditionalGeneration
- forward
ProphetNetForCausalLM
[[autodoc]] ProphetNetForCausalLM
- forward |
GPT-NeoX
Overview
We introduce GPT-NeoX-20B, a 20 billion parameter autoregressive language model trained on the Pile, whose weights will
be made freely and openly available to the public through a permissive license. It is, to the best of our knowledge,
the largest dense autoregressive model that has publicly available weights at the time of submission. In this work,
we describe GPT-NeoX-20B's architecture and training and evaluate its performance on a range of language-understanding,
mathematics, and knowledge-based tasks. We find that GPT-NeoX-20B is a particularly powerful few-shot reasoner and
gains far more in performance when evaluated five-shot than similarly sized GPT-3 and FairSeq models. We open-source
the training and evaluation code, as well as the model weights, at https://github.com/EleutherAI/gpt-neox.
Development of the model was led by Sid Black, Stella Biderman and Eric Hallahan, and the model was trained with
the generous support of CoreWeave.
GPT-NeoX-20B was trained with fp16, thus it is recommended to initialize the model as follows:
```python
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b").half().cuda()
```
GPT-NeoX-20B also has a different tokenizer from the one used in GPT-J-6B and GPT-Neo. The new tokenizer allocates
additional tokens to whitespace characters, making the model more suitable for certain tasks like code generation.
Usage example
The generate() method can be used to generate text using the GPT-NeoX model.
```python
from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")
prompt = "GPTNeoX20B is a 20B-parameter autoregressive Transformer model developed by EleutherAI."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
input_ids,
do_sample=True,
temperature=0.9,
max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
```
Using Flash Attention 2
Flash Attention 2 is a faster, optimized version of the model's attention computation.
Installation
First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the official documentation. If your hardware is not compatible with Flash Attention 2, you can still benefit from attention kernel optimisations through Better Transformer support covered above.
Next, install the latest version of Flash Attention 2: |
pip install -U flash-attn --no-build-isolation
Usage
To load a model using Flash Attention 2, we can pass the argument attn_implementation="flash_attention_2" to .from_pretrained. We'll also load the model in half-precision (e.g. torch.float16), since it results in almost no degradation to generation quality but significantly lower memory usage and faster inference:
```python
import torch
from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast

device = "cuda"  # Flash Attention 2 requires a CUDA device
tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b")
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/gpt-neox-20b", torch_dtype=torch.float16, attn_implementation="flash_attention_2"
).to(device)
```
Expected speedups
Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using stockmark/gpt-neox-japanese-1.4b checkpoint and the Flash Attention 2 version of the model using a sequence length of 2048.
Resources
Causal language modeling task guide
GPTNeoXConfig
[[autodoc]] GPTNeoXConfig
GPTNeoXTokenizerFast
[[autodoc]] GPTNeoXTokenizerFast
GPTNeoXModel
[[autodoc]] GPTNeoXModel
- forward
GPTNeoXForCausalLM
[[autodoc]] GPTNeoXForCausalLM
- forward
GPTNeoXForQuestionAnswering
[[autodoc]] GPTNeoXForQuestionAnswering
- forward
GPTNeoXForSequenceClassification
[[autodoc]] GPTNeoXForSequenceClassification
- forward
GPTNeoXForTokenClassification
[[autodoc]] GPTNeoXForTokenClassification
- forward |
FSMT
Overview
FSMT (FairSeq MachineTranslation) models were introduced in Facebook FAIR's WMT19 News Translation Task Submission by Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov.
The abstract of the paper is the following:
This paper describes Facebook FAIR's submission to the WMT19 shared news translation task. We participate in two
language pairs and four language directions, English <-> German and English <-> Russian. Following our submission from
last year, our baseline systems are large BPE-based transformer models trained with the Fairseq sequence modeling
toolkit which rely on sampled back-translations. This year we experiment with different bitext data filtering schemes,
as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific
data, then decode using noisy channel model reranking. Our submissions are ranked first in all four directions of the
human evaluation campaign. On En->De, our system significantly outperforms other systems as well as human translations.
This system improves upon our WMT'18 submission by 4.5 BLEU points.
This model was contributed by stas. The original code can be found
here.
Implementation Notes |
FSMT uses source and target vocabulary pairs that aren't combined into one. It doesn't share embedding tokens
either. Its tokenizer is very similar to [XLMTokenizer] and the main model is derived from
[BartModel].
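A minimal translation sketch is shown below; facebook/wmt19-en-de is one of the released WMT19 checkpoints, and the other language directions follow the same naming pattern.

```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "facebook/wmt19-en-de"  # other directions: wmt19-de-en, wmt19-en-ru, wmt19-ru-en
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

input_text = "Machine learning is great, isn't it?"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```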
FSMTConfig
[[autodoc]] FSMTConfig
FSMTTokenizer
[[autodoc]] FSMTTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
FSMTModel
[[autodoc]] FSMTModel
- forward
FSMTForConditionalGeneration
[[autodoc]] FSMTForConditionalGeneration
- forward |
mT5 |
Overview
The mT5 model was presented in mT5: A massively multilingual pre-trained text-to-text transformer by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya
Siddhant, Aditya Barua, Colin Raffel.
The abstract from the paper is the following:
The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain
state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a
multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We detail
the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual
benchmarks. We also describe a simple technique to prevent "accidental translation" in the zero-shot setting, where a
generative model chooses to (partially) translate its prediction into the wrong language. All of the code and model
checkpoints used in this work are publicly available.
Note: mT5 was only pre-trained on mC4 excluding any supervised training.
Therefore, this model has to be fine-tuned before it is usable on a downstream task, unlike the original T5 model.
Since mT5 was pre-trained unsupervisedly, there's no real advantage to using a task prefix during single-task
fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.
Google has released the following variants: |
google/mt5-small
google/mt5-base
google/mt5-large
google/mt5-xl
google/mt5-xxl.
This model was contributed by patrickvonplaten. The original code can be
found here.
Resources
Translation task guide
Summarization task guide
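Since the released checkpoints are not fine-tuned, the most common starting point is computing a sequence-to-sequence loss for fine-tuning. A minimal sketch, assuming the google/mt5-small checkpoint and a toy article/summary pair:

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

# google/mt5-small is the smallest of the released variants listed above
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
summary = "Weiter Verhandlung in Syrien."
inputs = tokenizer(article, text_target=summary, return_tensors="pt")

# the loss below is what you would minimize when fine-tuning on a downstream task
loss = model(**inputs).loss
```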
MT5Config
[[autodoc]] MT5Config
MT5Tokenizer
[[autodoc]] MT5Tokenizer
See [T5Tokenizer] for all details.
MT5TokenizerFast
[[autodoc]] MT5TokenizerFast
See [T5TokenizerFast] for all details. |
MT5Model
[[autodoc]] MT5Model
MT5ForConditionalGeneration
[[autodoc]] MT5ForConditionalGeneration
MT5EncoderModel
[[autodoc]] MT5EncoderModel
MT5ForSequenceClassification
[[autodoc]] MT5ForSequenceClassification
MT5ForTokenClassification
[[autodoc]] MT5ForTokenClassification
MT5ForQuestionAnswering
[[autodoc]] MT5ForQuestionAnswering
TFMT5Model
[[autodoc]] TFMT5Model
TFMT5ForConditionalGeneration
[[autodoc]] TFMT5ForConditionalGeneration
TFMT5EncoderModel
[[autodoc]] TFMT5EncoderModel
FlaxMT5Model
[[autodoc]] FlaxMT5Model
FlaxMT5ForConditionalGeneration
[[autodoc]] FlaxMT5ForConditionalGeneration
FlaxMT5EncoderModel
[[autodoc]] FlaxMT5EncoderModel |
ConvBERT |
Overview
The ConvBERT model was proposed in ConvBERT: Improving BERT with Span-based Dynamic Convolution by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng
Yan.
The abstract from the paper is the following:
Pre-trained language models like BERT and its variants have recently achieved impressive performance in various
natural language understanding tasks. However, BERT heavily relies on the global self-attention block and thus suffers
large memory footprint and computation cost. Although all its attention heads query on the whole input sequence for
generating the attention map from a global perspective, we observe some heads only need to learn local dependencies,
which means the existence of computation redundancy. We therefore propose a novel span-based dynamic convolution to
replace these self-attention heads to directly model local dependencies. The novel convolution heads, together with the
rest self-attention heads, form a new mixed attention block that is more efficient at both global and local context
learning. We equip BERT with this mixed attention design and build a ConvBERT model. Experiments have shown that
ConvBERT significantly outperforms BERT and its variants in various downstream tasks, with lower training cost and
fewer model parameters. Remarkably, ConvBERTbase model achieves 86.4 GLUE score, 0.7 higher than ELECTRAbase, while
using less than 1/4 training cost. Code and pre-trained models will be released.
This model was contributed by abhishek. The original implementation can be found
here: https://github.com/yitu-opensource/ConvBert
Usage tips
ConvBERT training tips are similar to those of BERT. For usage tips refer to BERT documentation.
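For a quick sanity check, a minimal feature-extraction sketch is shown below; the YituTech/conv-bert-base checkpoint name is an assumption, so substitute the ConvBERT checkpoint you actually use.

```python
from transformers import AutoTokenizer, ConvBertModel

# checkpoint name is an assumption
tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
model = ConvBertModel.from_pretrained("YituTech/conv-bert-base")

inputs = tokenizer("ConvBERT replaces some attention heads with span-based dynamic convolutions.", return_tensors="pt")
# last_hidden_state has shape (batch_size, sequence_length, hidden_size)
last_hidden_state = model(**inputs).last_hidden_state
```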
Resources |
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide
ConvBertConfig
[[autodoc]] ConvBertConfig
ConvBertTokenizer
[[autodoc]] ConvBertTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
ConvBertTokenizerFast
[[autodoc]] ConvBertTokenizerFast |
ConvBertModel
[[autodoc]] ConvBertModel
- forward
ConvBertForMaskedLM
[[autodoc]] ConvBertForMaskedLM
- forward
ConvBertForSequenceClassification
[[autodoc]] ConvBertForSequenceClassification
- forward
ConvBertForMultipleChoice
[[autodoc]] ConvBertForMultipleChoice
- forward
ConvBertForTokenClassification
[[autodoc]] ConvBertForTokenClassification
- forward
ConvBertForQuestionAnswering
[[autodoc]] ConvBertForQuestionAnswering
- forward |
TFConvBertModel
[[autodoc]] TFConvBertModel
- call
TFConvBertForMaskedLM
[[autodoc]] TFConvBertForMaskedLM
- call
TFConvBertForSequenceClassification
[[autodoc]] TFConvBertForSequenceClassification
- call
TFConvBertForMultipleChoice
[[autodoc]] TFConvBertForMultipleChoice
- call
TFConvBertForTokenClassification
[[autodoc]] TFConvBertForTokenClassification
- call
TFConvBertForQuestionAnswering
[[autodoc]] TFConvBertForQuestionAnswering
- call |
SAM
Overview
SAM (Segment Anything Model) was proposed in Segment Anything by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.
The model can be used to predict segmentation masks of any object of interest given an input image. |
The abstract from the paper is the following:
We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision.
Tips: |
The model predicts binary masks that state the presence or absence of the object of interest given an image.
The model predicts much better results if input 2D points and/or input bounding boxes are provided.
You can prompt multiple points for the same image and predict a single mask.
Fine-tuning the model is not supported yet.
According to the paper, textual input should also be supported. However, at the time of writing this does not appear to be supported, according to the official repository.
This model was contributed by ybelkada and ArthurZ.
The original code can be found here.
Below is an example of how to run mask generation given an image and a 2D point:

```python
import torch
from PIL import Image
import requests
from transformers import SamModel, SamProcessor
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamModel.from_pretrained("facebook/sam-vit-huge").to(device)
processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]] # 2D location of a window in the image
inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device)
with torch.no_grad():
outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(
outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
```
You can also process your own masks alongside the input images in the processor to be passed to the model.
```python
import torch
from PIL import Image
import requests
from transformers import SamModel, SamProcessor
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SamModel.from_pretrained("facebook/sam-vit-huge").to(device)
processor = SamProcessor.from_pretrained("facebook/sam-vit-huge")
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
mask_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
segmentation_map = Image.open(requests.get(mask_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]] # 2D location of a window in the image
inputs = processor(raw_image, input_points=input_points, segmentation_maps=segmentation_map, return_tensors="pt").to(device)
with torch.no_grad():
outputs = model(**inputs)
masks = processor.image_processor.post_process_masks(
outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu()
)
scores = outputs.iou_scores
```
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SAM.
Demo notebook for using the model.
Demo notebook for using the automatic mask generation pipeline.
Demo notebook for inference with MedSAM, a fine-tuned version of SAM on the medical domain. 🌎
Demo notebook for fine-tuning the model on custom data. 🌎 |
SlimSAM
SlimSAM, a pruned version of SAM, was proposed in 0.1% Data Makes Segment Anything Slim by Zigeng Chen et al. SlimSAM reduces the size of the SAM models considerably while maintaining the same performance.
Checkpoints can be found on the hub, and they can be used as a drop-in replacement of SAM.
SamConfig
[[autodoc]] SamConfig
SamVisionConfig
[[autodoc]] SamVisionConfig
SamMaskDecoderConfig
[[autodoc]] SamMaskDecoderConfig
SamPromptEncoderConfig
[[autodoc]] SamPromptEncoderConfig
SamProcessor
[[autodoc]] SamProcessor
SamImageProcessor
[[autodoc]] SamImageProcessor
SamModel
[[autodoc]] SamModel
- forward
TFSamModel
[[autodoc]] TFSamModel
- call |
Falcon
Overview
Falcon is a class of causal decoder-only models built by TII. The largest Falcon checkpoints
have been trained on >=1T tokens of text, with a particular emphasis on the RefinedWeb
corpus. They are made available under the Apache 2.0 license.
Falcon's architecture is modern and optimized for inference, with multi-query attention and support for efficient
attention variants like FlashAttention. Both 'base' models trained only as causal language models as well as
'instruct' models that have received further fine-tuning are available.
Falcon models are (as of 2023) some of the largest and most powerful open-source language models,
and consistently rank highly in the OpenLLM leaderboard.
Converting custom checkpoints |
Falcon models were initially added to the Hugging Face Hub as custom code checkpoints. However, Falcon is now fully
supported in the Transformers library. If you fine-tuned a model from a custom code checkpoint, we recommend converting
your checkpoint to the new in-library format, as this should give significant improvements to stability and
performance, especially for generation, as well as removing the need to use trust_remote_code=True! |
You can convert custom code checkpoints to full Transformers checkpoints using the convert_custom_code_checkpoint.py
script located in the
Falcon model directory
of the Transformers library. To use this script, simply call it with
python convert_custom_code_checkpoint.py --checkpoint_dir my_model. This will convert your checkpoint in-place, and
you can immediately load it from the directory afterwards with e.g. from_pretrained(). If your model hasn't been
uploaded to the Hub, we recommend making a backup before attempting the conversion, just in case!
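Once you have an in-library checkpoint (or one of the official ones), generation works like any other causal language model. The sketch below assumes the tiiuae/falcon-7b checkpoint and an accelerate install for device_map="auto"; smaller checkpoints work the same way.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# tiiuae/falcon-7b is one of the released base checkpoints
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", torch_dtype=torch.bfloat16, device_map="auto"  # device_map requires accelerate
)

inputs = tokenizer("Write a short poem about open-source language models:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```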
FalconConfig
[[autodoc]] FalconConfig
- all
FalconModel
[[autodoc]] FalconModel
- forward
FalconForCausalLM
[[autodoc]] FalconForCausalLM
- forward
FalconForSequenceClassification
[[autodoc]] FalconForSequenceClassification
- forward
FalconForTokenClassification
[[autodoc]] FalconForTokenClassification
- forward
FalconForQuestionAnswering
[[autodoc]] FalconForQuestionAnswering
- forward |
BARThez
Overview
The BARThez model was proposed in BARThez: a Skilled Pretrained French Sequence-to-Sequence Model by Moussa Kamal Eddine, Antoine J.-P. Tixier, Michalis Vazirgiannis on 23 Oct,
2020.
The abstract of the paper:
Inductive transfer learning, enabled by self-supervised learning, have taken the entire Natural Language Processing
(NLP) field by storm, with models such as BERT and BART setting new state of the art on countless natural language
understanding tasks. While there are some notable exceptions, most of the available models and research have been
conducted for the English language. In this work, we introduce BARThez, the first BART model for the French language
(to the best of our knowledge). BARThez was pretrained on a very large monolingual French corpus from past research
that we adapted to suit BART's perturbation schemes. Unlike already existing BERT-based French language models such as
CamemBERT and FlauBERT, BARThez is particularly well-suited for generative tasks, since not only its encoder but also
its decoder is pretrained. In addition to discriminative tasks from the FLUE benchmark, we evaluate BARThez on a novel
summarization dataset, OrangeSum, that we release with this paper. We also continue the pretraining of an already
pretrained multilingual BART on BARThez's corpus, and we show that the resulting model, which we call mBARTHez,
provides a significant boost over vanilla BARThez, and is on par with or outperforms CamemBERT and FlauBERT.
This model was contributed by moussakam. The Authors' code can be found here.
BARThez implementation is the same as BART, except for tokenization. Refer to BART documentation for information on
configuration classes and their parameters. BARThez-specific tokenizers are documented below. |
Resources
BARThez can be fine-tuned on sequence-to-sequence tasks in a similar way as BART, check:
examples/pytorch/summarization/.
BarthezTokenizer
[[autodoc]] BarthezTokenizer
BarthezTokenizerFast
[[autodoc]] BarthezTokenizerFast |
LUKE
Overview
The LUKE model was proposed in LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention by Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda and Yuji Matsumoto.
It is based on RoBERTa and adds entity embeddings as well as an entity-aware self-attention mechanism, which helps
improve performance on various downstream tasks involving reasoning about entities such as named entity recognition,
extractive and cloze-style question answering, entity typing, and relation classification.
The abstract from the paper is the following:
Entity representations are useful in natural language tasks involving entities. In this paper, we propose new
pretrained contextualized representations of words and entities based on the bidirectional transformer. The proposed
model treats words and entities in a given text as independent tokens, and outputs contextualized representations of
them. Our model is trained using a new pretraining task based on the masked language model of BERT. The task involves
predicting randomly masked words and entities in a large entity-annotated corpus retrieved from Wikipedia. We also
propose an entity-aware self-attention mechanism that is an extension of the self-attention mechanism of the
transformer, and considers the types of tokens (words or entities) when computing attention scores. The proposed model
achieves impressive empirical performance on a wide range of entity-related tasks. In particular, it obtains
state-of-the-art results on five well-known datasets: Open Entity (entity typing), TACRED (relation classification),
CoNLL-2003 (named entity recognition), ReCoRD (cloze-style question answering), and SQuAD 1.1 (extractive question
answering).
This model was contributed by ikuyamada and nielsr. The original code can be found here.
Usage tips |
This implementation is the same as [RobertaModel] with the addition of entity embeddings as well
as an entity-aware self-attention mechanism, which improves performance on tasks involving reasoning about entities.
LUKE treats entities as input tokens; therefore, it takes entity_ids, entity_attention_mask,
entity_token_type_ids and entity_position_ids as extra input. You can obtain those using
[LukeTokenizer]. |
[LukeTokenizer] takes entities and entity_spans (character-based start and end
positions of the entities in the input text) as extra input. entities typically consist of [MASK] entities or
Wikipedia entities. A brief description of each follows:
Inputting [MASK] entities to compute entity representations: The [MASK] entity is used to mask entities to be
predicted during pretraining. When LUKE receives the [MASK] entity, it tries to predict the original entity by
gathering the information about the entity from the input text. Therefore, the [MASK] entity can be used to address
downstream tasks requiring the information of entities in text such as entity typing, relation classification, and
named entity recognition. |
Inputting Wikipedia entities to compute knowledge-enhanced token representations: LUKE learns rich information
(or knowledge) about Wikipedia entities during pretraining and stores the information in its entity embedding. By
using Wikipedia entities as input tokens, LUKE outputs token representations enriched by the information stored in
the embeddings of these entities. This is particularly effective for tasks requiring real-world knowledge, such as
question answering. |
There are three head models for the former use case:
[LukeForEntityClassification], for tasks to classify a single entity in an input text such as
entity typing, e.g. the Open Entity dataset.
This model places a linear head on top of the output entity representation. |
[LukeForEntityPairClassification], for tasks to classify the relationship between two entities
such as relation classification, e.g. the TACRED dataset. This
model places a linear head on top of the concatenated output representation of the pair of given entities.
[LukeForEntitySpanClassification], for tasks to classify the sequence of entity spans, such as
named entity recognition (NER). This model places a linear head on top of the output entity representations. You
can address NER using this model by inputting all possible entity spans in the text to the model. |
[LukeTokenizer] has a task argument, which enables you to easily create an input to these
head models by specifying task="entity_classification", task="entity_pair_classification", or
task="entity_span_classification". Please refer to the example code of each head models.
Usage example:
```python
from transformers import LukeTokenizer, LukeModel, LukeForEntityPairClassification
model = LukeModel.from_pretrained("studio-ousia/luke-base")
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-base")
# Example 1: Computing the contextualized entity representation corresponding to the entity mention "Beyoncé"
text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)] # character-based entity span corresponding to "Beyoncé"
inputs = tokenizer(text, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
outputs = model(**inputs)
word_last_hidden_state = outputs.last_hidden_state
entity_last_hidden_state = outputs.entity_last_hidden_state

# Example 2: Inputting Wikipedia entities to obtain enriched contextualized representations
entities = [
"Beyoncé",
"Los Angeles",
] # Wikipedia entity titles corresponding to the entity mentions "Beyoncé" and "Los Angeles"
entity_spans = [(0, 7), (17, 28)] # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
inputs = tokenizer(text, entities=entities, entity_spans=entity_spans, add_prefix_space=True, return_tensors="pt")
outputs = model(**inputs)
word_last_hidden_state = outputs.last_hidden_state
entity_last_hidden_state = outputs.entity_last_hidden_state

# Example 3: Classifying the relationship between two entities using LukeForEntityPairClassification head model
model = LukeForEntityPairClassification.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
entity_spans = [(0, 7), (17, 28)] # character-based entity spans corresponding to "Beyoncé" and "Los Angeles"
inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_class_idx = int(logits[0].argmax())
print("Predicted class:", model.config.id2label[predicted_class_idx]) |
Resources
A demo notebook on how to fine-tune [LukeForEntityPairClassification] for relation classification
Notebooks showcasing how to reproduce the results reported in the paper with the HuggingFace implementation of LUKE
Text classification task guide
Token classification task guide
Question answering task guide
Masked language modeling task guide
Multiple choice task guide |
LukeConfig
[[autodoc]] LukeConfig
LukeTokenizer
[[autodoc]] LukeTokenizer
- call
- save_vocabulary
LukeModel
[[autodoc]] LukeModel
- forward
LukeForMaskedLM
[[autodoc]] LukeForMaskedLM
- forward
LukeForEntityClassification
[[autodoc]] LukeForEntityClassification
- forward
LukeForEntityPairClassification
[[autodoc]] LukeForEntityPairClassification
- forward
LukeForEntitySpanClassification
[[autodoc]] LukeForEntitySpanClassification
- forward
LukeForSequenceClassification
[[autodoc]] LukeForSequenceClassification
- forward
LukeForMultipleChoice
[[autodoc]] LukeForMultipleChoice
- forward
LukeForTokenClassification
[[autodoc]] LukeForTokenClassification
- forward
LukeForQuestionAnswering
[[autodoc]] LukeForQuestionAnswering
- forward |
FocalNet
Overview
The FocalNet model was proposed in Focal Modulation Networks by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao.
FocalNets completely replace self-attention (used in models like ViT and Swin) by a focal modulation mechanism for modeling token interactions in vision.
The authors claim that FocalNets outperform self-attention based models with similar computational costs on the tasks of image classification, object detection, and segmentation.
The abstract from the paper is the following:
We propose focal modulation networks (FocalNets in short), where self-attention (SA) is completely replaced by a focal modulation mechanism for modeling token interactions in vision. Focal modulation comprises three components: (i) hierarchical contextualization, implemented using a stack of depth-wise convolutional layers, to encode visual contexts from short to long ranges, (ii) gated aggregation to selectively gather contexts for each query token based on its
content, and (iii) element-wise modulation or affine transformation to inject the aggregated context into the query. Extensive experiments show FocalNets outperform the state-of-the-art SA counterparts (e.g., Swin and Focal Transformers) with similar computational costs on the tasks of image classification, object detection, and segmentation. Specifically, FocalNets with tiny and base size achieve 82.3% and 83.9% top-1 accuracy on ImageNet-1K. After pretrained on ImageNet-22K in 224 resolution, it attains 86.5% and 87.3% top-1 accuracy when finetuned with resolution 224 and 384, respectively. When transferred to downstream tasks, FocalNets exhibit clear superiority. For object detection with Mask R-CNN, FocalNet base trained with 1\times outperforms the Swin counterpart by 2.1 points and already surpasses Swin trained with 3\times schedule (49.0 v.s. 48.5). For semantic segmentation with UPerNet, FocalNet base at single-scale outperforms Swin by 2.4, and beats Swin at multi-scale (50.5 v.s. 49.7). Using large FocalNet and Mask2former, we achieve 58.5 mIoU for ADE20K semantic segmentation, and 57.9 PQ for COCO Panoptic Segmentation. Using huge FocalNet and DINO, we achieved 64.3 and 64.4 mAP on COCO minival and test-dev, respectively, establishing new SoTA on top of much larger attention-based models like Swinv2-G and BEIT-3.
This model was contributed by nielsr.
The original code can be found here.
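The snippet below is a minimal sketch of image classification with FocalNet; the microsoft/focalnet-tiny checkpoint name is an assumption, and any FocalNet image-classification checkpoint on the Hub can be substituted.
thon
from PIL import Image
import requests
from transformers import AutoImageProcessor, FocalNetForImageClassification

# Checkpoint name is an assumption; substitute any FocalNet classification checkpoint
processor = AutoImageProcessor.from_pretrained("microsoft/focalnet-tiny")
model = FocalNetForImageClassification.from_pretrained("microsoft/focalnet-tiny")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])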
FocalNetConfig
[[autodoc]] FocalNetConfig
FocalNetModel
[[autodoc]] FocalNetModel
- forward
FocalNetForMaskedImageModeling
[[autodoc]] FocalNetForMaskedImageModeling
- forward
FocalNetForImageClassification
[[autodoc]] FocalNetForImageClassification
- forward |
ERNIE
Overview
ERNIE is a series of powerful models proposed by Baidu, targeting Chinese tasks in particular,
including ERNIE1.0, ERNIE2.0,
ERNIE3.0, ERNIE-Gram, ERNIE-health, etc.
These models were contributed by nghuyong and the official code can be found in PaddleNLP (in PaddlePaddle).
Usage example
Take ernie-1.0-base-zh as an example:
Python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
model = AutoModel.from_pretrained("nghuyong/ernie-1.0-base-zh")
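# Illustrative follow-up (not part of the original snippet): run a forward pass
# on an arbitrary sentence and inspect the hidden states.
import torch
inputs = tokenizer("欢迎使用 ERNIE!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size=768)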
Model checkpoints
| Model Name | Language | Description |
|:-------------------:|:--------:|:-------------------------------:|
| ernie-1.0-base-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
| ernie-2.0-base-en | English | Layer:12, Heads:12, Hidden:768 |
| ernie-2.0-large-en | English | Layer:24, Heads:16, Hidden:1024 |
| ernie-3.0-base-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
| ernie-3.0-medium-zh | Chinese | Layer:6, Heads:12, Hidden:768 |
| ernie-3.0-mini-zh | Chinese | Layer:6, Heads:12, Hidden:384 |
| ernie-3.0-micro-zh | Chinese | Layer:4, Heads:12, Hidden:384 |
| ernie-3.0-nano-zh | Chinese | Layer:4, Heads:12, Hidden:312 |
| ernie-health-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
| ernie-gram-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
You can find all the supported models on Hugging Face's model hub: huggingface.co/nghuyong, and the model details in Paddle's official
repo: PaddleNLP
and ERNIE.
Resources |
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide |
ErnieConfig
[[autodoc]] ErnieConfig
- all
Ernie specific outputs
[[autodoc]] models.ernie.modeling_ernie.ErnieForPreTrainingOutput
ErnieModel
[[autodoc]] ErnieModel
- forward
ErnieForPreTraining
[[autodoc]] ErnieForPreTraining
- forward
ErnieForCausalLM
[[autodoc]] ErnieForCausalLM
- forward
ErnieForMaskedLM
[[autodoc]] ErnieForMaskedLM
- forward
ErnieForNextSentencePrediction
[[autodoc]] ErnieForNextSentencePrediction
- forward
ErnieForSequenceClassification
[[autodoc]] ErnieForSequenceClassification
- forward
ErnieForMultipleChoice
[[autodoc]] ErnieForMultipleChoice
- forward
ErnieForTokenClassification
[[autodoc]] ErnieForTokenClassification
- forward
ErnieForQuestionAnswering
[[autodoc]] ErnieForQuestionAnswering
- forward |
CLIP
Overview
The CLIP model was proposed in Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh,
Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. CLIP
(Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be
instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing
for the task, similarly to the zero-shot capabilities of GPT-2 and 3.
The abstract from the paper is the following:
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This
restricted form of supervision limits their generality and usability since additional labeled data is needed to specify
any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a
much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes
with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400
million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference
learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study
the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks
such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The
model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need
for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot
without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained
model weights at this https URL.
This model was contributed by valhalla. The original code can be found here.
Usage tips and example
CLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image
classification. CLIP uses a ViT-like Transformer to get visual features and a causal language model to get the text
features. Both the text and visual features are then projected to a latent space with identical dimensions. The dot
product between the projected image and text features is then used as a similarity score.
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches,
which are then linearly embedded. A [CLS] token is added to serve as a representation of the entire image. The authors
also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder.
The [CLIPImageProcessor] can be used to resize (or rescale) and normalize images for the model.
The [CLIPTokenizer] is used to encode the text. The [CLIPProcessor] wraps
[CLIPImageProcessor] and [CLIPTokenizer] into a single instance to both
encode the text and prepare the images. The following example shows how to get the image-text similarity scores using
[CLIPProcessor] and [CLIPModel].
thon |
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities |
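CLIP can also be used for zero-shot image classification through the pipeline API. The snippet below is a minimal sketch; the candidate labels and image URL are illustrative, and any CLIP checkpoint can be substituted.
thon
from transformers import pipeline

classifier = pipeline(task="zero-shot-image-classification", model="openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
# candidate labels are arbitrary examples
predictions = classifier(url, candidate_labels=["a photo of a cat", "a photo of a dog"])
print(predictions)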
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIP.
Fine tuning CLIP with Remote Sensing (Satellite) images and captions, a blog post about how to fine-tune CLIP on the RSICD dataset and a comparison of performance changes due to data augmentation.
This example script shows how to train a CLIP-like vision-text dual encoder model, using pre-trained vision and text encoders, on the COCO dataset.
A notebook on how to use a pretrained CLIP for inference with beam search for image captioning. 🌎
Image retrieval
A notebook on image retrieval using pretrained CLIP and computing the MRR (Mean Reciprocal Rank) score. 🌎
A notebook on image retrieval and showing the similarity score. 🌎
A notebook on how to map images and texts to the same vector space using Multilingual CLIP. 🌎
A notebook on how to run CLIP on semantic image search using Unsplash and TMDB datasets. 🌎
Explainability |
A notebook on how to visualize similarity between input token and image segment. 🌎 |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.
CLIPConfig
[[autodoc]] CLIPConfig
- from_text_vision_configs
CLIPTextConfig
[[autodoc]] CLIPTextConfig
CLIPVisionConfig
[[autodoc]] CLIPVisionConfig
CLIPTokenizer
[[autodoc]] CLIPTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
CLIPTokenizerFast
[[autodoc]] CLIPTokenizerFast
CLIPImageProcessor
[[autodoc]] CLIPImageProcessor
- preprocess
CLIPFeatureExtractor
[[autodoc]] CLIPFeatureExtractor
CLIPProcessor
[[autodoc]] CLIPProcessor |
CLIPModel
[[autodoc]] CLIPModel
- forward
- get_text_features
- get_image_features
CLIPTextModel
[[autodoc]] CLIPTextModel
- forward
CLIPTextModelWithProjection
[[autodoc]] CLIPTextModelWithProjection
- forward
CLIPVisionModelWithProjection
[[autodoc]] CLIPVisionModelWithProjection
- forward
CLIPVisionModel
[[autodoc]] CLIPVisionModel
- forward
CLIPForImageClassification
[[autodoc]] CLIPForImageClassification
- forward |
TFCLIPModel
[[autodoc]] TFCLIPModel
- call
- get_text_features
- get_image_features
TFCLIPTextModel
[[autodoc]] TFCLIPTextModel
- call
TFCLIPVisionModel
[[autodoc]] TFCLIPVisionModel
- call |
FlaxCLIPModel
[[autodoc]] FlaxCLIPModel
- call
- get_text_features
- get_image_features
FlaxCLIPTextModel
[[autodoc]] FlaxCLIPTextModel
- call
FlaxCLIPTextModelWithProjection
[[autodoc]] FlaxCLIPTextModelWithProjection
- call
FlaxCLIPVisionModel
[[autodoc]] FlaxCLIPVisionModel
- call |
Informer
Overview
The Informer model was proposed in Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting by Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang.
This method introduces a Probabilistic Attention mechanism (ProbSparse self-attention) to select the "active" queries rather than the "lazy" queries, providing a sparse Transformer and thus mitigating the quadratic compute and memory requirements of vanilla attention.
The abstract from the paper is the following:
Many real-world applications require the prediction of long sequence time-series, such as electricity consumption planning. Long sequence time-series forecasting (LSTF) demands a high prediction capacity of the model, which is the ability to capture precise long-range dependency coupling between output and input efficiently. Recent studies have shown the potential of Transformer to increase the prediction capacity. However, there are several severe issues with Transformer that prevent it from being directly applicable to LSTF, including quadratic time complexity, high memory usage, and inherent limitation of the encoder-decoder architecture. To address these issues, we design an efficient transformer-based model for LSTF, named Informer, with three distinctive characteristics: (i) a ProbSparse self-attention mechanism, which achieves O(L logL) in time complexity and memory usage, and has comparable performance on sequences' dependency alignment. (ii) the self-attention distilling highlights dominating attention by halving cascading layer input, and efficiently handles extreme long input sequences. (iii) the generative style decoder, while conceptually simple, predicts the long time-series sequences at one forward operation rather than a step-by-step way, which drastically improves the inference speed of long-sequence predictions. Extensive experiments on four large-scale datasets demonstrate that Informer significantly outperforms existing methods and provides a new solution to the LSTF problem.
This model was contributed by elisim and kashif.
The original code can be found here.
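As a minimal sketch of the API (the configuration value below is illustrative), an Informer model can be instantiated from a configuration as follows:
thon
from transformers import InformerConfig, InformerModel

# Illustrative configuration: forecast 24 future time steps
configuration = InformerConfig(prediction_length=24)
# Randomly initialized model built from this configuration
model = InformerModel(configuration)
# The configuration can be accessed back from the model
print(model.config.prediction_length)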
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. |