ChineseCLIPConfig [[autodoc]] ChineseCLIPConfig - from_text_vision_configs ChineseCLIPTextConfig [[autodoc]] ChineseCLIPTextConfig ChineseCLIPVisionConfig [[autodoc]] ChineseCLIPVisionConfig ChineseCLIPImageProcessor [[autodoc]] ChineseCLIPImageProcessor - preprocess ChineseCLIPFeatureExtractor [[autodoc]] ChineseCLIPFeatureExtractor ChineseCLIPProcessor [[autodoc]] ChineseCLIPProcessor ChineseCLIPModel [[autodoc]] ChineseCLIPModel - forward - get_text_features - get_image_features ChineseCLIPTextModel [[autodoc]] ChineseCLIPTextModel - forward ChineseCLIPVisionModel [[autodoc]] ChineseCLIPVisionModel - forward
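Chinese-CLIP can be used much like CLIP. A minimal zero-shot image-text matching sketch, assuming the OFA-Sys/chinese-clip-vit-base-patch16 checkpoint (the candidate captions below are illustrative): thon
import requests
from PIL import Image
from transformers import ChineseCLIPModel, ChineseCLIPProcessor

model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["一只猫", "一只狗"]  # candidate captions: "a cat", "a dog"

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
# image-text similarity scores, normalized to probabilities over the candidate captions
probs = outputs.logits_per_image.softmax(dim=1)
print(probs)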
PhoBERT Overview The PhoBERT model was proposed in PhoBERT: Pre-trained language models for Vietnamese by Dat Quoc Nguyen, Anh Tuan Nguyen. The abstract from the paper is the following: We present PhoBERT with two versions, PhoBERT-base and PhoBERT-large, the first public large-scale monolingual language models pre-trained for Vietnamese. Experimental results show that PhoBERT consistently outperforms the recent best pre-trained multilingual model XLM-R (Conneau et al., 2020) and improves the state-of-the-art in multiple Vietnamese-specific NLP tasks including Part-of-speech tagging, Dependency parsing, Named-entity recognition and Natural language inference. This model was contributed by dqnguyen. The original code can be found here. Usage example thon
import torch from transformers import AutoModel, AutoTokenizer phobert = AutoModel.from_pretrained("vinai/phobert-base") tokenizer = AutoTokenizer.from_pretrained("vinai/phobert-base") INPUT TEXT MUST BE ALREADY WORD-SEGMENTED! line = "Tôi là sinh_viên trường đại_học Công_nghệ ." input_ids = torch.tensor([tokenizer.encode(line)]) with torch.no_grad(): features = phobert(input_ids) # Models outputs are now tuples With TensorFlow 2.0+: from transformers import TFAutoModel phobert = TFAutoModel.from_pretrained("vinai/phobert-base")
PhoBERT implementation is the same as BERT, except for tokenization. Refer to the BERT documentation for information on configuration classes and their parameters. The PhoBERT-specific tokenizer is documented below. PhobertTokenizer [[autodoc]] PhobertTokenizer
Fuyu Overview The Fuyu model was created by ADEPT, and authored by Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, Sağnak Taşırlar. The authors introduced Fuyu-8B, a decoder-only multimodal model based on the classic transformers architecture, with query and key normalization. A linear encoder is added to create multimodal embeddings from image inputs. By treating image tokens like text tokens and using a special image-newline character, the model knows when an image line ends. Image positional embeddings are removed. This avoids the need for different training phases for various image resolutions. With 8 billion parameters and licensed under CC-BY-NC, Fuyu-8B is notable for its ability to handle both text and images, its impressive context size of 16K, and its overall performance.
The Fuyu models were trained using bfloat16, but the original inference uses float16. The checkpoints uploaded on the hub use torch_dtype = 'float16', which will be used by the AutoModel API to cast the checkpoints from torch.float32 to torch.float16. The dtype of the online weights is mostly irrelevant, unless you are using torch_dtype="auto" when initializing a model using model = AutoModelForCausalLM.from_pretrained("path", torch_dtype = "auto"). The reason is that the model will first be downloaded (using the dtype of the checkpoints online) and then cast to the default dtype of torch (torch.float32). Users should specify the torch_dtype they want; if they don't, it will be torch.float32. Fine-tuning the model in float16 is not recommended and is known to produce nan; the model should therefore be fine-tuned in bfloat16.
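For example, to load the model in bfloat16 explicitly rather than inheriting the float16 dtype stored on the hub (a sketch, using the adept-hf-collab/fuyu-8b repository referenced below): thon
import torch
from transformers import AutoModelForCausalLM

# request bfloat16 explicitly; without torch_dtype the weights are loaded in torch.float32
model = AutoModelForCausalLM.from_pretrained("adept-hf-collab/fuyu-8b", torch_dtype=torch.bfloat16)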
Tips: To convert the model, you need to clone the original repository using git clone https://github.com/persimmon-ai-labs/adept-inference, then get the checkpoints:
git clone https://github.com/persimmon-ai-labs/adept-inference wget path/to/fuyu-8b-model-weights.tar tar -xvf fuyu-8b-model-weights.tar python src/transformers/models/fuyu/convert_fuyu_weights_to_hf.py --input_dir /path/to/downloaded/fuyu/weights/ --output_dir /output/path \ --pt_model_path /path/to/fuyu_8b_release/iter_0001251/mp_rank_00/model_optim_rng.pt --ada_lib_path /path/to/adept-inference For the chat model:
wget https://axtkn4xl5cip.objectstorage.us-phoenix-1.oci.customer-oci.com/n/axtkn4xl5cip/b/adept-public-data/o/8b_chat_model_release.tar tar -xvf 8b_chat_model_release.tar Then, the model can be loaded via: py from transformers import FuyuConfig, FuyuForCausalLM model_config = FuyuConfig() model = FuyuForCausalLM.from_pretrained('/output/path', config=model_config) Inputs need to be passed through a specific Processor to have the correct formats. A processor requires an image_processor and a tokenizer. Hence, inputs can be loaded via:
import io import requests from PIL import Image from transformers import AutoTokenizer from transformers.models.fuyu.processing_fuyu import FuyuProcessor from transformers.models.fuyu.image_processing_fuyu import FuyuImageProcessor tokenizer = AutoTokenizer.from_pretrained('adept-hf-collab/fuyu-8b') image_processor = FuyuImageProcessor() processor = FuyuProcessor(image_processor=image_processor, tokenizer=tokenizer) text_prompt = "Generate a coco-style caption.\n" bus_image_url = "https://huggingface.co/datasets/hf-internal-testing/fixtures-captioning/resolve/main/bus.png" bus_image_pil = Image.open(io.BytesIO(requests.get(bus_image_url).content)) inputs_to_model = processor(text=text_prompt, images=bus_image_pil)
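The processed inputs can then be passed to the model for generation. A sketch continuing the snippet above (the max_new_tokens value is illustrative): thon
from transformers import FuyuForCausalLM

model = FuyuForCausalLM.from_pretrained("adept-hf-collab/fuyu-8b")

generated_ids = model.generate(**inputs_to_model, max_new_tokens=10)
# decode only the newly generated tokens, skipping the prompt
prompt_length = inputs_to_model["input_ids"].shape[1]
print(processor.batch_decode(generated_ids[:, prompt_length:], skip_special_tokens=True))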
This model was contributed by Molbap. The original code can be found here. Fuyu uses a sentencepiece based tokenizer, with a Unigram model. It supports bytefallback, which is only available in tokenizers==0.14.0 for the fast tokenizer. The LlamaTokenizer is used as it is a standard wrapper around sentencepiece. The authors suggest using the following prompt for image captioning: f"Generate a coco-style caption.\\n"
FuyuConfig [[autodoc]] FuyuConfig FuyuForCausalLM [[autodoc]] FuyuForCausalLM - forward FuyuImageProcessor [[autodoc]] FuyuImageProcessor - call FuyuProcessor [[autodoc]] FuyuProcessor - call
Vision Encoder Decoder Models Overview The [VisionEncoderDecoderModel] can be used to initialize an image-to-text model with any pretrained Transformer-based vision model as the encoder (e.g. ViT, BEiT, DeiT, Swin) and any pretrained language model as the decoder (e.g. RoBERTa, GPT2, BERT, DistilBERT). The effectiveness of initializing image-to-text-sequence models with pretrained checkpoints has been shown in (for example) TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. After such a [VisionEncoderDecoderModel] has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples below for more information). An example application is image captioning, in which the encoder is used to encode the image, after which an autoregressive language model generates the caption. Another example is optical character recognition. Refer to TrOCR, which is an instance of [VisionEncoderDecoderModel]. Randomly initializing VisionEncoderDecoderModel from model configurations. [VisionEncoderDecoderModel] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [ViTModel] configuration for the encoder and the default [BertForCausalLM] configuration for the decoder. thon
from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel config_encoder = ViTConfig() config_decoder = BertConfig() config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) model = VisionEncoderDecoderModel(config=config)
Initializing VisionEncoderDecoderModel from a pretrained encoder and a pretrained decoder. [VisionEncoderDecoderModel] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based vision model, e.g. Swin, can serve as the encoder, and pretrained auto-encoding models (e.g. BERT), pretrained causal language models (e.g. GPT2), as well as the pretrained decoder part of sequence-to-sequence models (e.g. the decoder of BART) can all be used as the decoder. Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing [VisionEncoderDecoderModel] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in the Warm-starting-encoder-decoder blog post. To do so, the VisionEncoderDecoderModel class provides a [VisionEncoderDecoderModel.from_encoder_decoder_pretrained] method. thon
from transformers import VisionEncoderDecoderModel model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained( "microsoft/swin-base-patch4-window7-224-in22k", "google-bert/bert-base-uncased" )
Loading an existing VisionEncoderDecoderModel checkpoint and performing inference. To load fine-tuned checkpoints of the VisionEncoderDecoderModel class, [VisionEncoderDecoderModel] provides the from_pretrained() method just like any other model architecture in Transformers. To perform inference, one uses the [generate] method, which allows you to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling. thon
import requests from PIL import Image from transformers import GPT2TokenizerFast, ViTImageProcessor, VisionEncoderDecoderModel load a fine-tuned image captioning model and corresponding tokenizer and image processor model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning") tokenizer = GPT2TokenizerFast.from_pretrained("nlpconnect/vit-gpt2-image-captioning") image_processor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning") let's perform inference on an image url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) pixel_values = image_processor(image, return_tensors="pt").pixel_values autoregressively generate caption (uses greedy decoding by default) generated_ids = model.generate(pixel_values) generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(generated_text) a cat laying on a blanket next to a cat laying on a bed
Loading a PyTorch checkpoint into TFVisionEncoderDecoderModel. [TFVisionEncoderDecoderModel.from_pretrained] currently doesn't support initializing the model from a PyTorch checkpoint. Passing from_pt=True to this method will throw an exception. If there are only PyTorch checkpoints for a particular vision encoder-decoder model, a workaround is: thon
from transformers import VisionEncoderDecoderModel, TFVisionEncoderDecoderModel _model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning") _model.encoder.save_pretrained("./encoder") _model.decoder.save_pretrained("./decoder") model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained( "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True ) This is only for copying some specific attributes of this particular model. model.config = _model.config
Training Once the model is created, it can be fine-tuned similar to BART, T5 or any other encoder-decoder model on a dataset of (image, text) pairs. As you can see, only 2 inputs are required for the model in order to compute a loss: pixel_values (which are the images) and labels (which are the input_ids of the encoded target sequence). thon
from transformers import ViTImageProcessor, BertTokenizer, VisionEncoderDecoderModel from datasets import load_dataset image_processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k") tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased") model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained( "google/vit-base-patch16-224-in21k", "google-bert/bert-base-uncased" ) model.config.decoder_start_token_id = tokenizer.cls_token_id model.config.pad_token_id = tokenizer.pad_token_id dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] pixel_values = image_processor(image, return_tensors="pt").pixel_values labels = tokenizer( "an image of two cats chilling on a couch", return_tensors="pt", ).input_ids the forward function automatically creates the correct decoder_input_ids loss = model(pixel_values=pixel_values, labels=labels).loss
This model was contributed by nielsr. This model's TensorFlow and Flax versions were contributed by ydshieh. VisionEncoderDecoderConfig [[autodoc]] VisionEncoderDecoderConfig VisionEncoderDecoderModel [[autodoc]] VisionEncoderDecoderModel - forward - from_encoder_decoder_pretrained TFVisionEncoderDecoderModel [[autodoc]] TFVisionEncoderDecoderModel - call - from_encoder_decoder_pretrained
FlaxVisionEncoderDecoderModel [[autodoc]] FlaxVisionEncoderDecoderModel - call - from_encoder_decoder_pretrained
MatCha Overview MatCha has been proposed in the paper MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering, from Fangyu Liu, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Yasemin Altun, Nigel Collier, Julian Martin Eisenschlos. The abstract of the paper states the following: Visual language data such as plots, charts, and infographics are ubiquitous in the human world. However, state-of-the-art vision-language models do not perform well on these data. We propose MatCha (Math reasoning and Chart derendering pretraining) to enhance visual language models' capabilities in jointly modeling charts/plots and language data. Specifically, we propose several pretraining tasks that cover plot deconstruction and numerical reasoning which are the key capabilities in visual language modeling. We perform the MatCha pretraining starting from Pix2Struct, a recently proposed image-to-text visual language model. On standard benchmarks such as PlotQA and ChartQA, the MatCha model outperforms state-of-the-art methods by as much as nearly 20%. We also examine how well MatCha pretraining transfers to domains such as screenshots, textbook diagrams, and document figures and observe overall improvement, verifying the usefulness of MatCha pretraining on broader visual language tasks. Model description MatCha is a model that is trained using Pix2Struct architecture. You can find more information about Pix2Struct in the Pix2Struct documentation. MatCha is a Visual Question Answering subset of Pix2Struct architecture. It renders the input question on the image and predicts the answer. Usage Currently 6 checkpoints are available for MatCha:
google/matcha: the base MatCha model, used to fine-tune MatCha on downstream tasks google/matcha-chartqa: MatCha model fine-tuned on ChartQA dataset. It can be used to answer questions about charts. google/matcha-plotqa-v1: MatCha model fine-tuned on PlotQA dataset. It can be used to answer questions about plots. google/matcha-plotqa-v2: MatCha model fine-tuned on PlotQA dataset. It can be used to answer questions about plots. google/matcha-chart2text-statista: MatCha model fine-tuned on Statista dataset. google/matcha-chart2text-pew: MatCha model fine-tuned on Pew dataset.
The models finetuned on chart2text-pew and chart2text-statista are more suited for summarization, whereas the models finetuned on plotqa and chartqa are more suited for question answering. You can use these models as follows (example on a ChartQA dataset): thon from transformers import AutoProcessor, Pix2StructForConditionalGeneration import requests from PIL import Image model = Pix2StructForConditionalGeneration.from_pretrained("google/matcha-chartqa").to(0) processor = AutoProcessor.from_pretrained("google/matcha-chartqa") url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/20294671002019.png" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, text="Is the sum of all 4 places greater than Laos?", return_tensors="pt").to(0) predictions = model.generate(**inputs, max_new_tokens=512) print(processor.decode(predictions[0], skip_special_tokens=True))
Fine-tuning To fine-tune MatCha, refer to the pix2struct fine-tuning notebook. For Pix2Struct models, we have found out that fine-tuning the model with Adafactor and cosine learning rate scheduler leads to faster convergence: thon from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup optimizer = Adafactor(self.parameters(), scale_parameter=False, relative_step=False, lr=0.01, weight_decay=1e-05) scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=40000)
SwitchTransformers Overview The SwitchTransformers model was proposed in Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity by William Fedus, Barret Zoph, Noam Shazeer. The Switch Transformer model uses a sparse T5 encoder-decoder architecture, where the MLPs are replaced by a Mixture of Experts (MoE). A routing mechanism (top 1 in this case) associates each token with one of the experts, where each expert is a dense MLP. While switch transformers have a lot more weights than their equivalent dense models, the sparsity allows better scaling and better finetuning performance at scale. During a forward pass, only a fraction of the weights are used. The routing mechanism allows the model to select relevant weights on the fly which increases the model capacity without increasing the number of operations. The abstract from the paper is the following: In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely-activated model -- with outrageous numbers of parameters -- but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs and training instability -- we address these with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques help wrangle the instabilities and we show large sparse models may be trained, for the first time, with lower precision (bfloat16) formats. We design models based off T5-Base and T5-Large to obtain up to 7x increases in pre-training speed with the same computational resources. These improvements extend into multilingual settings where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training up to trillion parameter models on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over the T5-XXL model. This model was contributed by Younes Belkada and Arthur Zucker. The original code can be found here. Usage tips
SwitchTransformers uses the [T5Tokenizer], which can be loaded directly from each model's repository. The released weights were pretrained on an English masked language modeling task and should be fine-tuned. Resources Translation task guide Summarization task guide
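A minimal loading and generation sketch, assuming the google/switch-base-8 checkpoint (the sentinel-token input mirrors the masked language modeling pretraining format): thon
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-8")

# T5-style span-corruption input: the model fills in the <extra_id_0> sentinel
input_ids = tokenizer("A <extra_id_0> walks into a bar.", return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))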
SwitchTransformersConfig [[autodoc]] SwitchTransformersConfig SwitchTransformersTop1Router [[autodoc]] SwitchTransformersTop1Router - _compute_router_probabilities - forward SwitchTransformersSparseMLP [[autodoc]] SwitchTransformersSparseMLP - forward SwitchTransformersModel [[autodoc]] SwitchTransformersModel - forward SwitchTransformersForConditionalGeneration [[autodoc]] SwitchTransformersForConditionalGeneration - forward SwitchTransformersEncoderModel [[autodoc]] SwitchTransformersEncoderModel - forward
VITS Overview The VITS model was proposed in Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech by Jaehyeon Kim, Jungil Kong, Juhee Son. VITS (Variational Inference with adversarial learning for end-to-end Text-to-Speech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. The abstract from the paper is the following: Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this work, we present a parallel end-to-end TTS method that generates more natural sounding audio than current two-stage models. Our method adopts variational inference augmented with normalizing flows and an adversarial training process, which improves the expressive power of generative modeling. We also propose a stochastic duration predictor to synthesize speech with diverse rhythms from input text. With the uncertainty modeling over latent variables and the stochastic duration predictor, our method expresses the natural one-to-many relationship in which a text input can be spoken in multiple ways with different pitches and rhythms. A subjective human evaluation (mean opinion score, or MOS) on the LJ Speech, a single speaker dataset, shows that our method outperforms the best publicly available TTS systems and achieves a MOS comparable to ground truth. This model can also be used with TTS checkpoints from Massively Multilingual Speech (MMS) as these checkpoints use the same architecture and a slightly modified tokenizer. This model was contributed by Matthijs and sanchit-gandhi. The original code can be found here. Usage examples Both the VITS and MMS-TTS checkpoints can be used with the same API. Since the flow-based model is non-deterministic, it is good practice to set a seed to ensure reproducibility of the outputs. For languages with a Roman alphabet, such as English or French, the tokenizer can be used directly to pre-process the text inputs. 
The following code example runs a forward pass using the MMS-TTS English checkpoint: thon import torch from transformers import VitsTokenizer, VitsModel, set_seed tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng") model = VitsModel.from_pretrained("facebook/mms-tts-eng") inputs = tokenizer(text="Hello - my dog is cute", return_tensors="pt") set_seed(555) # make deterministic with torch.no_grad(): outputs = model(**inputs) waveform = outputs.waveform[0]
The resulting waveform can be saved as a .wav file: thon import scipy scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=waveform) Or displayed in a Jupyter Notebook / Google Colab: thon from IPython.display import Audio Audio(waveform, rate=model.config.sampling_rate)
For certain languages with a non-Roman alphabet, such as Arabic, Mandarin or Hindi, the uroman perl package is required to pre-process the text inputs to the Roman alphabet. You can check whether you require the uroman package for your language by inspecting the is_uroman attribute of the pre-trained tokenizer: thon from transformers import VitsTokenizer tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng") print(tokenizer.is_uroman)
If required, you should apply the uroman package to your text inputs prior to passing them to the VitsTokenizer, since currently the tokenizer does not support performing the pre-processing itself. To do this, first clone the uroman repository to your local machine and set the bash variable UROMAN to the local path:
git clone https://github.com/isi-nlp/uroman.git cd uroman export UROMAN=$(pwd) You can then pre-process the text input using the following code snippet. You can either rely on using the bash variable UROMAN to point to the uroman repository, or you can pass the uroman directory as an argument to the uromanize function: thon import torch from transformers import VitsTokenizer, VitsModel, set_seed import os import subprocess tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-kor") model = VitsModel.from_pretrained("facebook/mms-tts-kor") def uromanize(input_string, uroman_path): """Convert non-Roman strings to Roman using the uroman perl package.""" script_path = os.path.join(uroman_path, "bin", "uroman.pl") command = ["perl", script_path]
process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) # Execute the perl command stdout, stderr = process.communicate(input=input_string.encode()) if process.returncode != 0: raise ValueError(f"Error {process.returncode}: {stderr.decode()}") # Return the output as a string and skip the new-line character at the end return stdout.decode()[:-1]
text = "이봐 무슨 일이야" uromanized_text = uromanize(text, uroman_path=os.environ["UROMAN"]) inputs = tokenizer(text=uromanized_text, return_tensors="pt") set_seed(555) # make deterministic with torch.no_grad(): outputs = model(inputs["input_ids"]) waveform = outputs.waveform[0]
VitsConfig [[autodoc]] VitsConfig VitsTokenizer [[autodoc]] VitsTokenizer - call - save_vocabulary VitsModel [[autodoc]] VitsModel - forward
RAG
Overview Retrieval-augmented generation ("RAG") models combine the powers of pretrained dense retrieval (DPR) and sequence-to-sequence models. RAG models retrieve documents, pass them to a seq2seq model, then marginalize to generate outputs. The retriever and seq2seq modules are initialized from pretrained models, and fine-tuned jointly, allowing both retrieval and generation to adapt to downstream tasks. It is based on the paper Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks by Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela. The abstract from the paper is the following: Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit nonparametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) — models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline. This model was contributed by ola13. Usage tips Retrieval-augmented generation ("RAG") models combine the powers of pretrained dense retrieval (DPR) and Seq2Seq models. RAG models retrieve docs, pass them to a seq2seq model, then marginalize to generate outputs. The retriever and seq2seq modules are initialized from pretrained models, and fine-tuned jointly, allowing both retrieval and generation to adapt to downstream tasks. An end-to-end usage sketch is shown after the API reference below. RagConfig [[autodoc]] RagConfig RagTokenizer [[autodoc]] RagTokenizer Rag specific outputs [[autodoc]] models.rag.modeling_rag.RetrievAugLMMarginOutput [[autodoc]] models.rag.modeling_rag.RetrievAugLMOutput RagRetriever [[autodoc]] RagRetriever
RagModel [[autodoc]] RagModel - forward RagSequenceForGeneration [[autodoc]] RagSequenceForGeneration - forward - generate RagTokenForGeneration [[autodoc]] RagTokenForGeneration - forward - generate TFRagModel [[autodoc]] TFRagModel - call TFRagSequenceForGeneration [[autodoc]] TFRagSequenceForGeneration - call - generate TFRagTokenForGeneration [[autodoc]] TFRagTokenForGeneration - call - generate
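For reference, a minimal end-to-end sketch, assuming the facebook/rag-sequence-nq checkpoint (use_dummy_dataset=True loads a small dummy index for illustration instead of the full Wikipedia index): thon
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
# the retriever fetches supporting documents; drop use_dummy_dataset for real retrieval
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

inputs = tokenizer("How many countries are in Europe?", return_tensors="pt")
generated_ids = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])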
MobileBERT Overview The MobileBERT model was proposed in MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. It's a bidirectional transformer based on the BERT model, which is compressed and accelerated using several approaches. The abstract from the paper is the following: Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds of millions of parameters. However, these models suffer from heavy model sizes and high latency such that they cannot be deployed to resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model. Like the original BERT, MobileBERT is task-agnostic, that is, it can be generically applied to various downstream NLP tasks via simple fine-tuning. Basically, MobileBERT is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated BERT_LARGE model. Then, we conduct knowledge transfer from this teacher to MobileBERT. Empirical studies show that MobileBERT is 4.3x smaller and 5.5x faster than BERT_BASE while achieving competitive results on well-known benchmarks. On the natural language inference tasks of GLUE, MobileBERT achieves a GLUE score of 77.7 (0.6 lower than BERT_BASE), and 62 ms latency on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering task, MobileBERT achieves a dev F1 score of 90.0/79.2 (1.5/2.1 higher than BERT_BASE). This model was contributed by vshampor. The original code can be found here. Usage tips
MobileBERT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. MobileBERT is similar to BERT and therefore relies on the masked language modeling (MLM) objective. It is therefore efficient at predicting masked tokens and at NLU in general, but is not optimal for text generation. Models trained with a causal language modeling (CLM) objective are better in that regard.
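For example, a minimal masked-token prediction sketch (assuming the google/mobilebert-uncased checkpoint): thon
from transformers import pipeline

# MobileBERT uses the standard BERT [MASK] token
fill_mask = pipeline("fill-mask", model="google/mobilebert-uncased")
print(fill_mask("The capital of France is [MASK]."))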
Resources Text classification task guide Token classification task guide Question answering task guide Masked language modeling task guide Multiple choice task guide
MobileBertConfig [[autodoc]] MobileBertConfig MobileBertTokenizer [[autodoc]] MobileBertTokenizer MobileBertTokenizerFast [[autodoc]] MobileBertTokenizerFast MobileBert specific outputs [[autodoc]] models.mobilebert.modeling_mobilebert.MobileBertForPreTrainingOutput [[autodoc]] models.mobilebert.modeling_tf_mobilebert.TFMobileBertForPreTrainingOutput
MobileBertModel [[autodoc]] MobileBertModel - forward MobileBertForPreTraining [[autodoc]] MobileBertForPreTraining - forward MobileBertForMaskedLM [[autodoc]] MobileBertForMaskedLM - forward MobileBertForNextSentencePrediction [[autodoc]] MobileBertForNextSentencePrediction - forward MobileBertForSequenceClassification [[autodoc]] MobileBertForSequenceClassification - forward MobileBertForMultipleChoice [[autodoc]] MobileBertForMultipleChoice - forward MobileBertForTokenClassification [[autodoc]] MobileBertForTokenClassification - forward MobileBertForQuestionAnswering [[autodoc]] MobileBertForQuestionAnswering - forward
TFMobileBertModel [[autodoc]] TFMobileBertModel - call TFMobileBertForPreTraining [[autodoc]] TFMobileBertForPreTraining - call TFMobileBertForMaskedLM [[autodoc]] TFMobileBertForMaskedLM - call TFMobileBertForNextSentencePrediction [[autodoc]] TFMobileBertForNextSentencePrediction - call TFMobileBertForSequenceClassification [[autodoc]] TFMobileBertForSequenceClassification - call TFMobileBertForMultipleChoice [[autodoc]] TFMobileBertForMultipleChoice - call TFMobileBertForTokenClassification [[autodoc]] TFMobileBertForTokenClassification - call TFMobileBertForQuestionAnswering [[autodoc]] TFMobileBertForQuestionAnswering - call
TrOCR Overview The TrOCR model was proposed in TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei. TrOCR consists of an image Transformer encoder and an autoregressive text Transformer decoder to perform optical character recognition (OCR). The abstract from the paper is the following: Text recognition is a long-standing research problem for document digitalization. Existing approaches for text recognition are usually built based on CNN for image understanding and RNN for char-level text generation. In addition, another language model is usually needed to improve the overall accuracy as a post-processing step. In this paper, we propose an end-to-end text recognition approach with pre-trained image Transformer and text Transformer models, namely TrOCR, which leverages the Transformer architecture for both image understanding and wordpiece-level text generation. The TrOCR model is simple but effective, and can be pre-trained with large-scale synthetic data and fine-tuned with human-labeled datasets. Experiments show that the TrOCR model outperforms the current state-of-the-art models on both printed and handwritten text recognition tasks.
TrOCR architecture. Taken from the original paper. Please refer to the [VisionEncoderDecoder] class on how to use this model. This model was contributed by nielsr. The original code can be found here. Usage tips
The quickest way to get started with TrOCR is by checking the tutorial notebooks, which show how to use the model at inference time as well as fine-tuning on custom data. TrOCR is pre-trained in 2 stages before being fine-tuned on downstream datasets. It achieves state-of-the-art results on both printed (e.g. the SROIE dataset) and handwritten (e.g. the IAM Handwriting dataset) text recognition tasks. For more information, see the official models. TrOCR is always used within the VisionEncoderDecoder framework.
Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with TrOCR. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A blog post on Accelerating Document AI with TrOCR. A blog post on how to use TrOCR for Document AI. A notebook on how to fine-tune TrOCR on the IAM Handwriting Database using Seq2SeqTrainer. A notebook on inference with TrOCR and a Gradio demo. A notebook on fine-tuning TrOCR on the IAM Handwriting Database using native PyTorch. A notebook on evaluating TrOCR on the IAM test set. Causal language modeling task guide. ⚡️ Inference An interactive demo on TrOCR handwritten character recognition.
Inference TrOCR's [VisionEncoderDecoder] model accepts images as input and makes use of [~generation.GenerationMixin.generate] to autoregressively generate text given the input image. The [ViTImageProcessor/DeiTImageProcessor] class is responsible for preprocessing the input image and [RobertaTokenizer/XLMRobertaTokenizer] decodes the generated target tokens to the target string. The [TrOCRProcessor] wraps [ViTImageProcessor/DeiTImageProcessor] and [RobertaTokenizer/XLMRobertaTokenizer] into a single instance to both extract the input features and decode the predicted token ids.
Step-by-step Optical Character Recognition (OCR) ``` py
from transformers import TrOCRProcessor, VisionEncoderDecoderModel import requests from PIL import Image processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten") model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten") load image from the IAM dataset url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") pixel_values = processor(image, return_tensors="pt").pixel_values generated_ids = model.generate(pixel_values) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
See the model hub to look for TrOCR checkpoints. TrOCRConfig [[autodoc]] TrOCRConfig TrOCRProcessor [[autodoc]] TrOCRProcessor - call - from_pretrained - save_pretrained - batch_decode - decode TrOCRForCausalLM [[autodoc]] TrOCRForCausalLM - forward
BARTpho Overview The BARTpho model was proposed in BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese by Nguyen Luong Tran, Duong Minh Le and Dat Quoc Nguyen. The abstract from the paper is the following: We present BARTpho with two versions -- BARTpho_word and BARTpho_syllable -- the first public large-scale monolingual sequence-to-sequence models pre-trained for Vietnamese. Our BARTpho uses the "large" architecture and pre-training scheme of the sequence-to-sequence denoising model BART, thus especially suitable for generative NLP tasks. Experiments on a downstream task of Vietnamese text summarization show that in both automatic and human evaluations, our BARTpho outperforms the strong baseline mBART and improves the state-of-the-art. We release BARTpho to facilitate future research and applications of generative Vietnamese NLP tasks. This model was contributed by dqnguyen. The original code can be found here. Usage example thon
import torch from transformers import AutoModel, AutoTokenizer bartpho = AutoModel.from_pretrained("vinai/bartpho-syllable") tokenizer = AutoTokenizer.from_pretrained("vinai/bartpho-syllable") line = "Chúng tôi là những nghiên cứu viên." input_ids = tokenizer(line, return_tensors="pt") with torch.no_grad(): features = bartpho(**input_ids) # Models outputs are now tuples With TensorFlow 2.0+: from transformers import TFAutoModel bartpho = TFAutoModel.from_pretrained("vinai/bartpho-syllable") input_ids = tokenizer(line, return_tensors="tf") features = bartpho(**input_ids)
Usage tips Following mBART, BARTpho uses the "large" architecture of BART with an additional layer-normalization layer on top of both the encoder and decoder. Thus, usage examples in the documentation of BART, when adapting to use with BARTpho, should be adjusted by replacing the BART-specialized classes with the mBART-specialized counterparts. For example: thon
from transformers import MBartForConditionalGeneration bartpho = MBartForConditionalGeneration.from_pretrained("vinai/bartpho-syllable") TXT = "Chúng tôi là nghiên cứu viên." input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"] logits = bartpho(input_ids).logits masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item() probs = logits[0, masked_index].softmax(dim=0) values, predictions = probs.topk(5) print(tokenizer.decode(predictions).split())
This implementation is only for tokenization: "monolingual_vocab_file" consists of Vietnamese-specialized types extracted from the pre-trained SentencePiece model "vocab_file" that is available from the multilingual XLM-RoBERTa. Other languages, if employing this pre-trained multilingual SentencePiece model "vocab_file" for subword segmentation, can reuse BartphoTokenizer with their own language-specialized "monolingual_vocab_file". BartphoTokenizer [[autodoc]] BartphoTokenizer
BioGPT Overview The BioGPT model was proposed in BioGPT: generative pre-trained transformer for biomedical text generation and mining by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch. The abstract from the paper is the following: Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms. This model was contributed by kamalkraj. The original code can be found here. Usage tips
BioGPT is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text as it can be observed in the run_generation.py example script. The model can take the past_key_values (for PyTorch) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see past_key_values argument of the BioGptForCausalLM.forward() method for more information on its usage.
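For example, a minimal text generation sketch (assuming the microsoft/biogpt checkpoint): thon
from transformers import pipeline

generator = pipeline("text-generation", model="microsoft/biogpt")
# greedy decoding for a reproducible continuation
print(generator("COVID-19 is", max_new_tokens=20, do_sample=False))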
Resources Causal language modeling task guide BioGptConfig [[autodoc]] BioGptConfig BioGptTokenizer [[autodoc]] BioGptTokenizer - save_vocabulary BioGptModel [[autodoc]] BioGptModel - forward BioGptForCausalLM [[autodoc]] BioGptForCausalLM - forward BioGptForTokenClassification [[autodoc]] BioGptForTokenClassification - forward BioGptForSequenceClassification [[autodoc]] BioGptForSequenceClassification - forward
SpeechT5 Overview The SpeechT5 model was proposed in SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei. The abstract from the paper is the following: Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification. This model was contributed by Matthijs. The original code can be found here. SpeechT5Config [[autodoc]] SpeechT5Config SpeechT5HifiGanConfig [[autodoc]] SpeechT5HifiGanConfig SpeechT5Tokenizer [[autodoc]] SpeechT5Tokenizer - call - save_vocabulary - decode - batch_decode SpeechT5FeatureExtractor [[autodoc]] SpeechT5FeatureExtractor - call SpeechT5Processor [[autodoc]] SpeechT5Processor - call - pad - from_pretrained - save_pretrained - batch_decode - decode SpeechT5Model [[autodoc]] SpeechT5Model - forward SpeechT5ForSpeechToText [[autodoc]] SpeechT5ForSpeechToText - forward SpeechT5ForTextToSpeech [[autodoc]] SpeechT5ForTextToSpeech - forward - generate SpeechT5ForSpeechToSpeech [[autodoc]] SpeechT5ForSpeechToSpeech - forward - generate_speech SpeechT5HifiGan [[autodoc]] SpeechT5HifiGan - forward
MobileNet V1 Overview The MobileNet model was proposed in MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications by Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. The abstract from the paper is the following: We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization. This model was contributed by matthijs. The original code and weights can be found here. Usage tips
The checkpoints are named mobilenet_v1_depth_size, for example mobilenet_v1_1.0_224, where 1.0 is the depth multiplier (sometimes also referred to as "alpha" or the width multiplier) and 224 is the resolution of the input images the model was trained on. Even though the checkpoint is trained on images of a specific size, the model will work on images of any size. The smallest supported image size is 32x32. One can use [MobileNetV1ImageProcessor] to prepare images for the model.
The available image classification checkpoints are pre-trained on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). However, the model predicts 1001 classes: the 1000 classes from ImageNet plus an extra “background” class (index 0). A classification sketch is shown after these tips.
The original TensorFlow checkpoints use different padding rules than PyTorch, requiring the model to determine the padding amount at inference time, since this depends on the input image size. To use native PyTorch padding behavior, create a [MobileNetV1Config] with tf_padding = False. Unsupported features:
The [MobileNetV1Model] outputs a globally pooled version of the last hidden state. In the original model it is possible to use a 7x7 average pooling layer with stride 2 instead of global pooling. For larger inputs, this gives a pooled output that is larger than 1x1 pixel. The HuggingFace implementation does not support this.
It is currently not possible to specify an output_stride. For smaller output strides, the original model invokes dilated convolution to prevent the spatial resolution from being reduced further. The output stride of the HuggingFace model is always 32. The original TensorFlow checkpoints include quantized models. We do not support these models as they include additional "FakeQuantization" operations to unquantize the weights.
It's common to extract the output from the pointwise layers at indices 5, 11, 12, 13 for downstream purposes. Using output_hidden_states=True returns the output from all intermediate layers. There is currently no way to limit this to specific layers.
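A minimal image classification sketch tying these tips together (assuming the google/mobilenet_v1_1.0_224 checkpoint): thon
import requests
import torch
from PIL import Image
from transformers import MobileNetV1ImageProcessor, MobileNetV1ForImageClassification

image_processor = MobileNetV1ImageProcessor.from_pretrained("google/mobilenet_v1_1.0_224")
model = MobileNetV1ForImageClassification.from_pretrained("google/mobilenet_v1_1.0_224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# note: index 0 is the extra "background" class, so ImageNet class i maps to index i + 1
print(model.config.id2label[logits.argmax(-1).item()])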
Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with MobileNetV1. [MobileNetV1ForImageClassification] is supported by this example script and notebook. See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. MobileNetV1Config [[autodoc]] MobileNetV1Config MobileNetV1FeatureExtractor [[autodoc]] MobileNetV1FeatureExtractor - preprocess MobileNetV1ImageProcessor [[autodoc]] MobileNetV1ImageProcessor - preprocess MobileNetV1Model [[autodoc]] MobileNetV1Model - forward MobileNetV1ForImageClassification [[autodoc]] MobileNetV1ForImageClassification - forward
XLM-RoBERTa-XL Overview The XLM-RoBERTa-XL model was proposed in Larger-Scale Transformers for Multilingual Masked Language Modeling by Naman Goyal, Jingfei Du, Myle Ott, Giri Anantharaman, Alexis Conneau. The abstract from the paper is the following: Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models dubbed XLM-R XL and XLM-R XXL outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests pretrained models with larger capacity may obtain both strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available. This model was contributed by Soonhwan-Kwon and stefan-it. The original code can be found here. Usage tips XLM-RoBERTa-XL is a multilingual model trained on 100 different languages. Unlike some XLM multilingual models, it does not require lang tensors to understand which language is used, and should be able to determine the correct language from the input ids.
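For example, a minimal masked language modeling sketch (assuming the facebook/xlm-roberta-xl checkpoint; note that this is a 3.5B-parameter model): thon
from transformers import pipeline

# XLM-RoBERTa models use <mask> as the mask token; no language id is required
unmasker = pipeline("fill-mask", model="facebook/xlm-roberta-xl")
print(unmasker("Bonjour, je suis un modèle <mask>."))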
Resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide
XLMRobertaXLConfig [[autodoc]] XLMRobertaXLConfig XLMRobertaXLModel [[autodoc]] XLMRobertaXLModel - forward XLMRobertaXLForCausalLM [[autodoc]] XLMRobertaXLForCausalLM - forward XLMRobertaXLForMaskedLM [[autodoc]] XLMRobertaXLForMaskedLM - forward XLMRobertaXLForSequenceClassification [[autodoc]] XLMRobertaXLForSequenceClassification - forward XLMRobertaXLForMultipleChoice [[autodoc]] XLMRobertaXLForMultipleChoice - forward XLMRobertaXLForTokenClassification [[autodoc]] XLMRobertaXLForTokenClassification - forward XLMRobertaXLForQuestionAnswering [[autodoc]] XLMRobertaXLForQuestionAnswering - forward
Big Transfer (BiT) Overview The BiT model was proposed in Big Transfer (BiT): General Visual Representation Learning by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. BiT is a simple recipe for scaling up pre-training of ResNet-like architectures (specifically, ResNetv2). The method results in significant improvements for transfer learning. The abstract from the paper is the following: Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance. This model was contributed by nielsr. The original code can be found here. Usage tips
BiT models are equivalent to ResNetv2 in terms of architecture, except that: 1) all batch normalization layers are replaced by group normalization, 2) weight standardization is used for convolutional layers. The authors show that the combination of both is useful for training with large batch sizes, and has a significant impact on transfer learning.
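For example, a minimal image classification sketch (assuming the google/bit-50 checkpoint): thon
from transformers import pipeline

# the pipeline accepts a local path, a PIL image, or a URL
classifier = pipeline("image-classification", model="google/bit-50")
print(classifier("http://images.cocodataset.org/val2017/000000039769.jpg"))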
Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BiT. [BitForImageClassification] is supported by this example script and notebook. See also: Image classification task guide
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. BitConfig [[autodoc]] BitConfig BitImageProcessor [[autodoc]] BitImageProcessor - preprocess BitModel [[autodoc]] BitModel - forward BitForImageClassification [[autodoc]] BitForImageClassification - forward
IDEFICS Overview The IDEFICS model was proposed in OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents by Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, Thomas Wang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, Victor Sanh The abstract from the paper is the following: Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks that require reasoning over one or multiple images to generate a text. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELISC, we train an 80 billion parameters vision and language model on the dataset and obtain competitive performance on various multimodal benchmarks. We release the code to reproduce the dataset along with the dataset itself. This model was contributed by HuggingFaceM4. The original code can be found here. (TODO: don't have a public link yet).
The IDEFICS modeling code in Transformers is for fine-tuning and running inference with the pre-trained IDEFICS models. To train a new IDEFICS model from scratch, use the m4 codebase (a link will be provided once it's made public).
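For inference with an existing checkpoint, a minimal sketch could look like the following. The checkpoint name, the example image URL, and the exact processor call signature are assumptions and may differ between library versions.

```python
import torch
from transformers import AutoProcessor, IdeficsForVisionText2Text

checkpoint = "HuggingFaceM4/idefics-9b"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(checkpoint)
model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)

# prompts interleave text with images (image URLs or PIL images)
prompts = [
    [
        "https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG",
        "In this picture we can see",
    ],
]
inputs = processor(prompts, return_tensors="pt")  # signature may vary across versions
generated_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```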
IdeficsConfig [[autodoc]] IdeficsConfig IdeficsModel [[autodoc]] IdeficsModel - forward IdeficsForVisionText2Text [[autodoc]] IdeficsForVisionText2Text - forward IdeficsImageProcessor [[autodoc]] IdeficsImageProcessor - preprocess IdeficsProcessor [[autodoc]] IdeficsProcessor - __call__
ViLT Overview The ViLT model was proposed in ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision by Wonjae Kim, Bokyung Son, Ildoo Kim. ViLT incorporates text embeddings into a Vision Transformer (ViT), allowing it to have a minimal design for Vision-and-Language Pre-training (VLP). The abstract from the paper is the following: Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision (e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet with competitive or better downstream task performance.
ViLT architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Usage tips
The quickest way to get started with ViLT is by checking the example notebooks (which showcase both inference and fine-tuning on custom data). ViLT is a model that takes both pixel_values and input_ids as input. One can use [ViltProcessor] to prepare data for the model. This processor wraps an image processor (for the image modality) and a tokenizer (for the language modality) into one. ViLT is trained with images of various sizes: the authors resize the shorter edge of input images to 384 and limit the longer edge to under 640 while preserving the aspect ratio. To make batching of images possible, the authors use a pixel_mask that indicates which pixel values are real and which are padding. [ViltProcessor] automatically creates this for you. The design of ViLT is very similar to that of a standard Vision Transformer (ViT). The only difference is that the model includes additional embedding layers for the language modality. The PyTorch version of this model is only available in torch 1.10 and higher.
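As an illustration of how the processor prepares both modalities, here is a minimal visual question answering sketch. The dandelin/vilt-b32-finetuned-vqa checkpoint name is an assumption taken from the Hub; any ViLT VQA checkpoint can be substituted.

```python
import requests
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

checkpoint = "dandelin/vilt-b32-finetuned-vqa"  # assumed VQA checkpoint
processor = ViltProcessor.from_pretrained(checkpoint)
model = ViltForQuestionAnswering.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
question = "How many cats are there?"

# the processor produces pixel_values, pixel_mask and input_ids in one call
inputs = processor(image, question, return_tensors="pt")
outputs = model(**inputs)

# pick the highest-scoring answer from the VQA label space
predicted_idx = outputs.logits.argmax(-1).item()
print(model.config.id2label[predicted_idx])
```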
ViltConfig [[autodoc]] ViltConfig ViltFeatureExtractor [[autodoc]] ViltFeatureExtractor - __call__ ViltImageProcessor [[autodoc]] ViltImageProcessor - preprocess ViltProcessor [[autodoc]] ViltProcessor - __call__ ViltModel [[autodoc]] ViltModel - forward ViltForMaskedLM [[autodoc]] ViltForMaskedLM - forward ViltForQuestionAnswering [[autodoc]] ViltForQuestionAnswering - forward ViltForImagesAndTextClassification [[autodoc]] ViltForImagesAndTextClassification - forward ViltForImageAndTextRetrieval [[autodoc]] ViltForImageAndTextRetrieval - forward ViltForTokenClassification [[autodoc]] ViltForTokenClassification - forward
MPT Overview The MPT model was proposed by the MosaicML team and released with multiple sizes and finetuned variants. The MPT models are a series of open-source and commercially usable LLMs pre-trained on 1T tokens. MPT models are GPT-style decoder-only transformers with several improvements: performance-optimized layer implementations, architecture changes that provide greater training stability, and the elimination of context length limits by replacing positional embeddings with ALiBi.
MPT base: MPT base pre-trained models on next token prediction MPT instruct: MPT base models fine-tuned on instruction-based tasks MPT storywriter: MPT base models fine-tuned for 2500 steps on 65k-token excerpts of fiction books contained in the books3 corpus, which enables the model to handle very long sequences
The original code is available at the llm-foundry repository. Read more about it in the release blogpost. Usage tips Learn more about some techniques behind the training of the model in this section of the llm-foundry repository. If you want to use the advanced version of the model (triton kernels, direct flash attention integration), you can still use the original model implementation by adding trust_remote_code=True when calling from_pretrained.
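A minimal text-generation sketch with the native MPT classes might look like this. The mosaicml/mpt-7b checkpoint name is an assumption, and the sketch presumes the Hub config is compatible with the native implementation (otherwise fall back to trust_remote_code=True as described above).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "mosaicml/mpt-7b"  # assumed checkpoint name; other MPT variants work the same way
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)

inputs = tokenizer("MosaicML is", return_tensors="pt")
# greedy decoding for a short continuation
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```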
Resources Fine-tuning Notebook on how to fine-tune MPT-7B on a free Google Colab instance to turn the model into a Chatbot. MptConfig [[autodoc]] MptConfig - all MptModel [[autodoc]] MptModel - forward MptForCausalLM [[autodoc]] MptForCausalLM - forward MptForSequenceClassification [[autodoc]] MptForSequenceClassification - forward MptForTokenClassification [[autodoc]] MptForTokenClassification - forward MptForQuestionAnswering [[autodoc]] MptForQuestionAnswering - forward
CodeLlama Overview The Code Llama model was proposed in Code Llama: Open Foundation Models for Code by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve. The abstract from the paper is the following: We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 53% and 55% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use. Check out all Code Llama model checkpoints here and the officially released ones in the codellama org. This model was contributed by ArthurZucker. The original code of the authors can be found here. Usage tips and examples
The Llama2 family of models, on which Code Llama is based, was trained using bfloat16, but the original inference uses float16. Let's look at the different precisions:
float32: PyTorch convention on model initialization is to load models in float32, no matter with which dtype the model weights were stored. transformers also follows this convention for consistency with PyTorch. This will be picked by default. If you want the AutoModel API to load the checkpoints with the stored weights' dtype, you must specify torch_dtype="auto", e.g. model = AutoModelForCausalLM.from_pretrained("path", torch_dtype = "auto"). bfloat16: Code Llama was trained with this precision, so we recommend using it for further training or fine-tuning. float16: We recommend running inference using this precision, as it's usually faster than bfloat16, and evaluation metrics show no discernible degradation with respect to bfloat16. You can also run inference using bfloat16, and we recommend you check inference results with both float16 and bfloat16 after fine-tuning.
As mentioned above, the dtype of the storage weights is mostly irrelevant unless you are using torch_dtype="auto" when initializing a model. The reason is that the model will first be downloaded (using the dtype of the checkpoints online) and then cast to the default dtype of torch (torch.float32). If a torch_dtype is specified, it will be used instead.
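For reference, a minimal sketch of the three loading options is shown below. The codellama/CodeLlama-7b-hf checkpoint name is illustrative; in practice you would pick only one of these dtypes rather than loading the model three times.

```python
import torch
from transformers import AutoModelForCausalLM

checkpoint = "codellama/CodeLlama-7b-hf"  # illustrative checkpoint name

# default: weights are cast to torch.float32, regardless of how they were stored
model_fp32 = AutoModelForCausalLM.from_pretrained(checkpoint)

# keep the dtype the weights were saved in on the Hub
model_auto = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto")

# explicit dtypes: bfloat16 for further training, float16 for inference
model_bf16 = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)
model_fp16 = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.float16)
```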
Tips: - The infilling task is supported out of the box. Use tokenizer.fill_token where you want your input to be filled. - The model conversion script is the same as for the Llama2 family. Here is a sample usage:
python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even though the biggest versions come in several checkpoints, each checkpoint contains a part of each of the model's weights, so they all need to be loaded in RAM). After conversion, the model and tokenizer can be loaded via: thon