from transformers import LlamaForCausalLM, CodeLlamaTokenizer
tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = LlamaForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")
PROMPT = '''def remove_non_ascii(s: str) -> str:
    """ <FILL_ME>
    return result
'''
input_ids = tokenizer(PROMPT, return_tensors="pt")["input_ids"]
generated_ids = model.generate(input_ids, max_new_tokens=128)
filling = tokenizer.batch_decode(generated_ids[:, input_ids.shape[1]:], skip_special_tokens=True)[0]
print(PROMPT.replace("<FILL_ME>", filling))
def remove_non_ascii(s: str) -> str:
    """ Remove non-ASCII characters from a string.

    Args:
        s: The string to remove non-ASCII characters from.

    Returns:
        The string with non-ASCII characters removed.
    """
    result = ""
    for c in s:
        if ord(c) < 128:
            result += c
    return result
If you only want the infilled part:
from transformers import pipeline
import torch
generator = pipeline("text-generation",model="codellama/CodeLlama-7b-hf",torch_dtype=torch.float16, device_map="auto")
generator('def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result', max_new_tokens=128, return_type=1)
Under the hood, the tokenizer automatically splits by <FILL_ME> to create a formatted input string that follows the original training pattern. This is more robust than preparing the pattern yourself: it avoids pitfalls, such as token glueing, that are very hard to debug. To see how much CPU and GPU memory you need for this model or others, try this calculator which can help determine that value.
The LLaMA tokenizer is a BPE model based on sentencepiece. One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string.
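As a minimal sketch of that behaviour (not part of the original guide; it reuses the tokenizer loaded above), decoding the ids of a word-initial token yields the bare word:
ids = tokenizer("Banana", add_special_tokens=False)["input_ids"]
print(tokenizer.decode(ids))  # "Banana", with no prefix space prepended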
Code Llama has the same architecture as the Llama2 models, refer to Llama2's documentation page for the API reference.
Find the Code Llama tokenizer reference below.
CodeLlamaTokenizer
[[autodoc]] CodeLlamaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
CodeLlamaTokenizerFast
[[autodoc]] CodeLlamaTokenizerFast
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- update_post_processor
- save_vocabulary |
XGLM
Overview
The XGLM model was proposed in Few-shot Learning with Multilingual Language Models
by Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal,
Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo,
Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, Xian Li.
The abstract from the paper is the following:
Large-scale autoregressive language models such as GPT-3 are few-shot learners that can perform a wide range of language
tasks without fine-tuning. While these models are known to be able to jointly represent many different languages,
their training data is dominated by English, potentially limiting their cross-lingual generalization.
In this work, we train multilingual autoregressive language models on a balanced corpus covering a diverse set of languages,
and study their few- and zero-shot learning capabilities in a wide range of tasks. Our largest model with 7.5 billion parameters
sets new state of the art in few-shot learning in more than 20 representative languages, outperforming GPT-3 of comparable size
in multilingual commonsense reasoning (with +7.4% absolute accuracy improvement in 0-shot settings and +9.4% in 4-shot settings)
and natural language inference (+5.4% in each of 0-shot and 4-shot settings). On the FLORES-101 machine translation benchmark,
our model outperforms GPT-3 on 171 out of 182 translation directions with 32 training examples, while surpassing the
official supervised baseline in 45 directions. We present a detailed analysis of where the model succeeds and fails,
showing in particular that it enables cross-lingual in-context learning on some tasks, while there is still room for improvement
on surface form robustness and adaptation to tasks that do not have a natural cloze form. Finally, we evaluate our models
in social value tasks such as hate speech detection in five languages and find it has limitations similar to comparable sized GPT-3 models.
This model was contributed by Suraj. The original code can be found here.
Resources |
Causal language modeling task guide
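As a quick, hedged illustration (not from the original docs; the facebook/xglm-564M checkpoint and the generation settings are assumptions), causal generation with XGLM follows the standard API:
from transformers import XGLMTokenizer, XGLMForCausalLM
# Load the smallest publicly released XGLM checkpoint and generate a short continuation.
tokenizer = XGLMTokenizer.from_pretrained("facebook/xglm-564M")
model = XGLMForCausalLM.from_pretrained("facebook/xglm-564M")
inputs = tokenizer("La vie est belle parce que", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))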
XGLMConfig
[[autodoc]] XGLMConfig
XGLMTokenizer
[[autodoc]] XGLMTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
XGLMTokenizerFast
[[autodoc]] XGLMTokenizerFast
XGLMModel
[[autodoc]] XGLMModel
- forward
XGLMForCausalLM
[[autodoc]] XGLMForCausalLM
- forward
TFXGLMModel
[[autodoc]] TFXGLMModel
- call
TFXGLMForCausalLM
[[autodoc]] TFXGLMForCausalLM
- call |
FlaxXGLMModel
[[autodoc]] FlaxXGLMModel
- call
FlaxXGLMForCausalLM
[[autodoc]] FlaxXGLMForCausalLM
- call |
LayoutLMV2
Overview
The LayoutLMV2 model was proposed in LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding by Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu,
Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou. LayoutLMV2 improves LayoutLM to obtain
state-of-the-art results across several document image understanding benchmarks: |
information extraction from scanned documents: the FUNSD dataset (a
collection of 199 annotated forms comprising more than 30,000 words), the CORD
dataset (a collection of 800 receipts for training, 100 for validation and 100 for testing), the SROIE dataset (a collection of 626 receipts for training and 347 receipts for testing)
and the Kleister-NDA dataset (a collection of non-disclosure
agreements from the EDGAR database, including 254 documents for training, 83 documents for validation, and 203
documents for testing).
document image classification: the RVL-CDIP dataset (a collection of
400,000 images belonging to one of 16 classes).
document visual question answering: the DocVQA dataset (a collection of 50,000
questions defined on 12,000+ document images). |
The abstract from the paper is the following:
Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to
its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents. In this
paper, we present LayoutLMv2 by pre-training text, layout and image in a multi-modal framework, where new model
architectures and pre-training tasks are leveraged. Specifically, LayoutLMv2 not only uses the existing masked
visual-language modeling task but also the new text-image alignment and text-image matching tasks in the pre-training
stage, where cross-modality interaction is better learned. Meanwhile, it also integrates a spatial-aware self-attention
mechanism into the Transformer architecture, so that the model can fully understand the relative positional
relationship among different text blocks. Experiment results show that LayoutLMv2 outperforms strong baselines and
achieves new state-of-the-art results on a wide variety of downstream visually-rich document understanding tasks,
including FUNSD (0.7895 -> 0.8420), CORD (0.9493 -> 0.9601), SROIE (0.9524 -> 0.9781), Kleister-NDA (0.834 -> 0.852),
RVL-CDIP (0.9443 -> 0.9564), and DocVQA (0.7295 -> 0.8672). The pre-trained LayoutLMv2 model is publicly available at
this https URL.
LayoutLMv2 depends on detectron2, torchvision and tesseract. Run the
following to install them: |
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
python -m pip install torchvision tesseract
(If you are developing for LayoutLMv2, note that passing the doctests also requires the installation of these packages.)
Usage tips |
The main difference between LayoutLMv1 and LayoutLMv2 is that the latter incorporates visual embeddings during
pre-training (while LayoutLMv1 only adds visual embeddings during fine-tuning).
LayoutLMv2 adds both a relative 1D attention bias as well as a spatial 2D attention bias to the attention scores in
the self-attention layers. Details can be found on page 5 of the paper.
Demo notebooks on how to use the LayoutLMv2 model on RVL-CDIP, FUNSD, DocVQA, CORD can be found here.
LayoutLMv2 uses Facebook AI's Detectron2 package for its visual
backbone. See this link for installation
instructions.
In addition to input_ids, [~LayoutLMv2Model.forward] expects 2 additional inputs, namely
image and bbox. The image input corresponds to the original document image in which the text
tokens occur. The model expects each document image to be of size 224x224. This means that if you have a batch of
document images, image should be a tensor of shape (batch_size, 3, 224, 224). This can be either a
torch.Tensor or a Detectron2.structures.ImageList. You don't need to normalize the channels, as this is
done by the model. Important to note is that the visual backbone expects BGR channels instead of RGB, as all models
in Detectron2 are pre-trained using the BGR format. The bbox input are the bounding boxes (i.e. 2D-positions)
of the input text tokens. This is identical to [LayoutLMModel]. These can be obtained using an
external OCR engine such as Google's Tesseract (there's a Python
wrapper available). Each bounding box should be in (x0, y0, x1, y1)
format, where (x0, y0) corresponds to the position of the upper left corner in the bounding box, and (x1, y1)
represents the position of the lower right corner. Note that one first needs to normalize the bounding boxes to be on
a 0-1000 scale. To normalize, you can use the following function: |
def normalize_bbox(bbox, width, height):
return [
int(1000 * (bbox[0] / width)),
int(1000 * (bbox[1] / height)),
int(1000 * (bbox[2] / width)),
int(1000 * (bbox[3] / height)),
]
Here, width and height correspond to the width and height of the original document in which the token
occurs (before resizing the image). Those can be obtained using the Python Image Library (PIL) library for example, as
follows:
from PIL import Image
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
)
width, height = image.size
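For illustration only (the pixel coordinates below are made up), a raw OCR box can then be normalized with the helper above and the width and height just obtained:
word_box = [322, 183, 484, 207]  # hypothetical (x0, y0, x1, y1) box in pixels from your OCR engine
normalized_box = normalize_bbox(word_box, width, height)  # each coordinate now lies on the 0-1000 scale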
However, this model includes a brand new [~transformers.LayoutLMv2Processor] which can be used to directly
prepare data for the model (including applying OCR under the hood). More information can be found in the "Usage"
section below. |
Internally, [~transformers.LayoutLMv2Model] will send the image input through its visual backbone to
obtain a lower-resolution feature map, whose shape is equal to the image_feature_pool_shape attribute of
[~transformers.LayoutLMv2Config]. This feature map is then flattened to obtain a sequence of image tokens. As
the size of the feature map is 7x7 by default, one obtains 49 image tokens. These are then concatenated with the text
tokens, and sent through the Transformer encoder. This means that the last hidden states of the model will have a
length of 512 + 49 = 561, if you pad the text tokens up to the max length; see the short check below. More generally, the last
hidden states will have a shape of seq_length + config.image_feature_pool_shape[0] * config.image_feature_pool_shape[1].
When calling [~transformers.LayoutLMv2Model.from_pretrained], a warning will be printed with a long list of
parameter names that are not initialized. This is not a problem, as these parameters are batch normalization
statistics, which are going to have values when fine-tuning on a custom dataset.
If you want to train the model in a distributed environment, make sure to call [synchronize_batch_norm] on the
model in order to properly synchronize the batch normalization layers of the visual backbone. |
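As a small sanity check (a sketch, not from the original docs, assuming the default configuration), the 512 + 49 = 561 figure can be recovered from the config alone:
from transformers import LayoutLMv2Config
config = LayoutLMv2Config()  # default visual feature map is 7x7
visual_tokens = config.image_feature_pool_shape[0] * config.image_feature_pool_shape[1]  # 49
print(512 + visual_tokens)  # 561: text tokens padded to the max length plus the image tokens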
In addition, there's LayoutXLM, which is a multilingual version of LayoutLMv2. More information can be found on
LayoutXLM's documentation page.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LayoutLMv2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. |
A notebook on how to finetune LayoutLMv2 for text-classification on RVL-CDIP dataset.
See also: Text classification task guide
A notebook on how to finetune LayoutLMv2 for question-answering on DocVQA dataset.
See also: Question answering task guide
See also: Document question answering task guide
A notebook on how to finetune LayoutLMv2 for token-classification on CORD dataset.
A notebook on how to finetune LayoutLMv2 for token-classification on FUNSD dataset.
See also: Token classification task guide |
Usage: LayoutLMv2Processor
The easiest way to prepare data for the model is to use [LayoutLMv2Processor], which internally
combines an image processor ([LayoutLMv2ImageProcessor]) and a tokenizer
([LayoutLMv2Tokenizer] or [LayoutLMv2TokenizerFast]). The image processor
handles the image modality, while the tokenizer handles the text modality. A processor combines both, which is ideal
for a multi-modal model like LayoutLMv2. Note that you can still use both separately, if you only want to handle one
modality.
from transformers import LayoutLMv2ImageProcessor, LayoutLMv2TokenizerFast, LayoutLMv2Processor
image_processor = LayoutLMv2ImageProcessor() # apply_ocr is set to True by default
tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased")
processor = LayoutLMv2Processor(image_processor, tokenizer) |
In short, one can provide a document image (and possibly additional data) to [LayoutLMv2Processor],
and it will create the inputs expected by the model. Internally, the processor first uses
[LayoutLMv2ImageProcessor] to apply OCR on the image to get a list of words and normalized
bounding boxes, as well as to resize the image to a given size in order to get the image input. The words and
normalized bounding boxes are then provided to [LayoutLMv2Tokenizer] or
[LayoutLMv2TokenizerFast], which converts them to token-level input_ids,
attention_mask, token_type_ids, bbox. Optionally, one can provide word labels to the processor,
which are turned into token-level labels.
[LayoutLMv2Processor] uses PyTesseract, a Python
wrapper around Google's Tesseract OCR engine, under the hood. Note that you can still use your own OCR engine of
choice, and provide the words and normalized boxes yourself. This requires initializing
[LayoutLMv2ImageProcessor] with apply_ocr set to False.
In total, there are 5 use cases that are supported by the processor. Below, we list them all. Note that each of these
use cases works for both batched and non-batched inputs (we illustrate them for non-batched inputs).
Use case 1: document image classification (training, inference) + token classification (inference), apply_ocr =
True
This is the simplest case, in which the processor (actually the image processor) will perform OCR on the image to get
the words and normalized bounding boxes.
from transformers import LayoutLMv2Processor
from PIL import Image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
encoding = processor(
image, return_tensors="pt"
) # you can also add all tokenizer parameters here such as padding, truncation
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image']) |
Use case 2: document image classification (training, inference) + token classification (inference), apply_ocr=False
In case one wants to do OCR themselves, one can initialize the image processor with apply_ocr set to
False. In that case, one should provide the words and corresponding (normalized) bounding boxes themselves to
the processor.
from transformers import LayoutLMv2Processor
from PIL import Image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr")
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes
encoding = processor(image, words, boxes=boxes, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image']) |
Use case 3: token classification (training), apply_ocr=False
For token classification tasks (such as FUNSD, CORD, SROIE, Kleister-NDA), one can also provide the corresponding word
labels in order to train a model. The processor will then convert these into token-level labels. By default, it
will only label the first wordpiece of a word, and label the remaining wordpieces with -100, which is the
ignore_index of PyTorch's CrossEntropyLoss. In case you want all wordpieces of a word to be labeled, you can
initialize the tokenizer with only_label_first_subword set to False.
from transformers import LayoutLMv2Processor
from PIL import Image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr")
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes
word_labels = [1, 2]
encoding = processor(image, words, boxes=boxes, word_labels=word_labels, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'labels', 'image']) |
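The resulting encoding can be passed straight to a model head. The sketch below is an illustration, not part of the original guide (num_labels=3 is an arbitrary choice); it continues use case 3 and feeds the encoding to [LayoutLMv2ForTokenClassification]:
import torch
from transformers import LayoutLMv2ForTokenClassification
# Assumes `encoding` was produced as in use case 3, i.e. it contains word-level labels.
model = LayoutLMv2ForTokenClassification.from_pretrained("microsoft/layoutlmv2-base-uncased", num_labels=3)
outputs = model(**encoding)
loss = outputs.loss  # token classification loss (labels were provided)
predictions = outputs.logits.argmax(-1)  # token-level class predictions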
Use case 4: visual question answering (inference), apply_ocr=True
For visual question answering tasks (such as DocVQA), you can provide a question to the processor. By default, the
processor will apply OCR on the image, and create [CLS] question tokens [SEP] word tokens [SEP].
from transformers import LayoutLMv2Processor
from PIL import Image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
question = "What's his name?"
encoding = processor(image, question, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image']) |
Use case 5: visual question answering (inference), apply_ocr=False
For visual question answering tasks (such as DocVQA), you can provide a question to the processor. If you want to
perform OCR yourself, you can provide your own words and (normalized) bounding boxes to the processor.
from transformers import LayoutLMv2Processor
from PIL import Image
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr")
image = Image.open(
"name_of_your_document - can be a png, jpg, etc. of your documents (PDFs must be converted to images)."
).convert("RGB")
question = "What's his name?"
words = ["hello", "world"]
boxes = [[1, 2, 3, 4], [5, 6, 7, 8]] # make sure to normalize your bounding boxes
encoding = processor(image, question, words, boxes=boxes, return_tensors="pt")
print(encoding.keys())
dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'bbox', 'image']) |
LayoutLMv2Config
[[autodoc]] LayoutLMv2Config
LayoutLMv2FeatureExtractor
[[autodoc]] LayoutLMv2FeatureExtractor
- call
LayoutLMv2ImageProcessor
[[autodoc]] LayoutLMv2ImageProcessor
- preprocess
LayoutLMv2Tokenizer
[[autodoc]] LayoutLMv2Tokenizer
- call
- save_vocabulary
LayoutLMv2TokenizerFast
[[autodoc]] LayoutLMv2TokenizerFast
- call
LayoutLMv2Processor
[[autodoc]] LayoutLMv2Processor
- call
LayoutLMv2Model
[[autodoc]] LayoutLMv2Model
- forward
LayoutLMv2ForSequenceClassification
[[autodoc]] LayoutLMv2ForSequenceClassification
LayoutLMv2ForTokenClassification
[[autodoc]] LayoutLMv2ForTokenClassification
LayoutLMv2ForQuestionAnswering
[[autodoc]] LayoutLMv2ForQuestionAnswering |
RetriBERT
This model is in maintenance mode only, so we won't accept any new PRs changing its code.
If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
You can do so by running the following command: pip install -U transformers==4.30.0. |
Overview
The RetriBERT model was proposed in the blog post Explain Anything Like I'm Five: A Model for Open Domain Long Form
Question Answering. RetriBERT is a small model that uses either a single or
pair of BERT encoders with lower-dimension projection for dense semantic indexing of text.
This model was contributed by yjernite. Code to train and use the model can be
found here.
RetriBertConfig
[[autodoc]] RetriBertConfig
RetriBertTokenizer
[[autodoc]] RetriBertTokenizer
RetriBertTokenizerFast
[[autodoc]] RetriBertTokenizerFast
RetriBertModel
[[autodoc]] RetriBertModel
- forward |
Bark
Overview
Bark is a transformer-based text-to-speech model proposed by Suno AI in suno-ai/bark.
Bark is made of 4 main models: |
[BarkSemanticModel] (also referred to as the 'text' model): a causal auto-regressive transformer model that takes as input tokenized text, and predicts semantic text tokens that capture the meaning of the text.
[BarkCoarseModel] (also referred to as the 'coarse acoustics' model): a causal autoregressive transformer, that takes as input the results of the [BarkSemanticModel] model. It aims at predicting the first two audio codebooks necessary for EnCodec.
[BarkFineModel] (the 'fine acoustics' model), this time a non-causal autoencoder transformer, which iteratively predicts the last codebooks based on the sum of the previous codebooks embeddings.
having predicted all the codebook channels, Bark uses the [EncodecModel] to decode the output audio array.
It should be noted that each of the first three modules can support conditional speaker embeddings to condition the output sound according to a specific predefined voice.
This model was contributed by Yoach Lacombe (ylacombe) and Sanchit Gandhi (sanchit-gandhi).
The original code can be found here.
Optimizing Bark
Bark can be optimized with just a few extra lines of code, which significantly reduces its memory footprint and accelerates inference.
Using half-precision
You can speed up inference and reduce memory footprint by 50% simply by loading the model in half-precision.
from transformers import BarkModel
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16).to(device) |
Using CPU offload
As mentioned above, Bark is made up of 4 sub-models, which are called up sequentially during audio generation. In other words, while one sub-model is in use, the other sub-models are idle.
If you're using a CUDA device, a simple solution to benefit from an 80% reduction in memory footprint is to offload the submodels from GPU to CPU when they're idle. This operation is called CPU offloading. You can use it with one line of code as follows:
model.enable_cpu_offload()
Note that 🤗 Accelerate must be installed before using this feature. Here's how to install it.
Using Better Transformer
Better Transformer is an 🤗 Optimum feature that performs kernel fusion under the hood. You can gain 20% to 30% in speed with zero performance degradation. It only requires one line of code to export the model to 🤗 Better Transformer:
model = model.to_bettertransformer()
Note that 🤗 Optimum must be installed before using this feature. Here's how to install it.
Using Flash Attention 2
Flash Attention 2 is an even faster, optimized version of the previous optimization.
Installation
First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the official documentation. If your hardware is not compatible with Flash Attention 2, you can still benefit from attention kernel optimisations through Better Transformer support covered above.
Next, install the latest version of Flash Attention 2: |
pip install -U flash-attn --no-build-isolation
Usage
To load a model using Flash Attention 2, we can pass the attn_implementation="flash_attention_2" flag to .from_pretrained. We'll also load the model in half-precision (e.g. torch.float16), since it results in almost no degradation to audio quality but significantly lower memory usage and faster inference:
model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16, attn_implementation="flash_attention_2").to(device)
Performance comparison
The following diagram shows the latency for the native attention implementation (no optimisation) against Better Transformer and Flash Attention 2. In all cases, we generate 400 semantic tokens on a 40GB A100 GPU with PyTorch 2.1. Flash Attention 2 is also consistently faster than Better Transformer, and its performance improves even more as batch sizes increase: |
To put this into perspective, on an NVIDIA A100 and when generating 400 semantic tokens with a batch size of 16, you can get 17 times the throughput and still be 2 seconds faster than generating sentences one by one with the native model implementation. In other words, all the samples will be generated 17 times faster.
At batch size 8, on an NVIDIA A100, Flash Attention 2 is also 10% faster than Better Transformer, and at batch size 16, 25%.
Combining optimization techniques
You can combine optimization techniques, and use CPU offload, half-precision and Flash Attention 2 (or 🤗 Better Transformer) all at once.
from transformers import BarkModel
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
# load in fp16 and use Flash Attention 2
model = BarkModel.from_pretrained("suno/bark-small", torch_dtype=torch.float16, attn_implementation="flash_attention_2").to(device)
# enable CPU offload
model.enable_cpu_offload()
Find out more on inference optimization techniques here.
Usage tips
Suno offers a library of voice presets in a number of languages here.
These presets are also uploaded in the hub here or here.
from transformers import AutoProcessor, BarkModel
processor = AutoProcessor.from_pretrained("suno/bark")
model = BarkModel.from_pretrained("suno/bark")
voice_preset = "v2/en_speaker_6"
inputs = processor("Hello, my dog is cute", voice_preset=voice_preset)
audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects.
# Multilingual speech - simplified Chinese
inputs = processor("惊人的!我会说中文")
# Multilingual speech - French - let's use a voice_preset as well
inputs = processor("Incroyable! Je peux générer du son.", voice_preset="fr_speaker_5")
# Bark can also generate music. You can help it out by adding music notes around your lyrics.
inputs = processor("♪ Hello, my dog is cute ♪")
audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
The model can also produce nonverbal communications like laughing, sighing and crying.
# Adding non-speech cues to the input text
inputs = processor("Hello uh [clears throat], my dog is cute [laughter]")
audio_array = model.generate(**inputs)
audio_array = audio_array.cpu().numpy().squeeze()
To save the audio, simply take the sample rate from the model config and some scipy utility:
from scipy.io.wavfile import write as write_wav
# save audio to disk, but first take the sample rate from the model config
sample_rate = model.generation_config.sample_rate
write_wav("bark_generation.wav", sample_rate, audio_array) |
BarkConfig
[[autodoc]] BarkConfig
- all
BarkProcessor
[[autodoc]] BarkProcessor
- all
- call
BarkModel
[[autodoc]] BarkModel
- generate
- enable_cpu_offload
BarkSemanticModel
[[autodoc]] BarkSemanticModel
- forward
BarkCoarseModel
[[autodoc]] BarkCoarseModel
- forward
BarkFineModel
[[autodoc]] BarkFineModel
- forward
BarkCausalModel
[[autodoc]] BarkCausalModel
- forward
BarkCoarseConfig
[[autodoc]] BarkCoarseConfig
- all
BarkFineConfig
[[autodoc]] BarkFineConfig
- all
BarkSemanticConfig
[[autodoc]] BarkSemanticConfig
- all |
ErnieM
Overview
The ErnieM model was proposed in ERNIE-M: Enhanced Multilingual Representation by Aligning
Cross-lingual Semantics with Monolingual Corpora by Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun,
Hao Tian, Hua Wu, Haifeng Wang.
The abstract from the paper is the following:
Recent studies have demonstrated that pre-trained cross-lingual models achieve impressive performance in downstream cross-lingual tasks. This improvement benefits from learning a large amount of monolingual and parallel corpora. Although it is generally acknowledged that parallel corpora are critical for improving the model performance, existing methods are often constrained by the size of parallel corpora, especially for lowresource languages. In this paper, we propose ERNIE-M, a new training method that encourages the model to align the representation of multiple languages with monolingual corpora, to overcome the constraint that the parallel corpus size places on the model performance. Our key insight is to integrate back-translation into the pre-training process. We generate pseudo-parallel sentence pairs on a monolingual corpus to enable the learning of semantic alignments between different languages, thereby enhancing the semantic modeling of cross-lingual models. Experimental results show that ERNIE-M outperforms existing cross-lingual models and delivers new state-of-the-art results in various cross-lingual downstream tasks.
This model was contributed by Susnato Dhar. The original code can be found here.
Usage tips |
Ernie-M is a BERT-like model so it is a stacked Transformer Encoder.
Instead of using MaskedLM for pretraining (like BERT) the authors used two novel techniques: Cross-attention Masked Language Modeling and Back-translation Masked Language Modeling. For now these two LMHead objectives are not implemented here.
It is a multilingual language model.
Next Sentence Prediction was not used in the pretraining process.
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Multiple choice task guide |
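As a short, hedged example (not from the original docs; the susnato/ernie-m-base_pytorch checkpoint name and num_labels=2 are assumptions), Ernie-M plugs into the usual sequence classification API:
from transformers import AutoTokenizer, ErnieMForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch")
model = ErnieMForSequenceClassification.from_pretrained("susnato/ernie-m-base_pytorch", num_labels=2)
inputs = tokenizer("This movie was great!", return_tensors="pt")
logits = model(**inputs).logits  # unnormalized scores for the 2 (randomly initialized) labels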
ErnieMConfig
[[autodoc]] ErnieMConfig
ErnieMTokenizer
[[autodoc]] ErnieMTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
ErnieMModel
[[autodoc]] ErnieMModel
- forward
ErnieMForSequenceClassification
[[autodoc]] ErnieMForSequenceClassification
- forward
ErnieMForMultipleChoice
[[autodoc]] ErnieMForMultipleChoice
- forward
ErnieMForTokenClassification
[[autodoc]] ErnieMForTokenClassification
- forward
ErnieMForQuestionAnswering
[[autodoc]] ErnieMForQuestionAnswering
- forward
ErnieMForInformationExtraction
[[autodoc]] ErnieMForInformationExtraction
- forward |
SegGPT
Overview
The SegGPT model was proposed in SegGPT: Segmenting Everything In Context by Xinlong Wang, Xiaosong Zhang, Yue Cao, Wen Wang, Chunhua Shen, Tiejun Huang. SegGPT employs a decoder-only Transformer that can generate a segmentation mask given an input image, a prompt image and its corresponding prompt mask. The model achieves remarkable one-shot results with 56.1 mIoU on COCO-20 and 85.6 mIoU on FSS-1000.
The abstract from the paper is the following:
We present SegGPT, a generalist model for segmenting everything in context. We unify various segmentation tasks into a generalist in-context learning framework that accommodates different kinds of segmentation data by transforming them into the same format of images. The training of SegGPT is formulated as an in-context coloring problem with random color mapping for each data sample. The objective is to accomplish diverse tasks according to the context, rather than relying on specific colors. After training, SegGPT can perform arbitrary segmentation tasks in images or videos via in-context inference, such as object instance, stuff, part, contour, and text. SegGPT is evaluated on a broad range of tasks, including few-shot semantic segmentation, video object segmentation, semantic segmentation, and panoptic segmentation. Our results show strong capabilities in segmenting in-domain and out-of-domain targets.
Tips:
- One can use [SegGptImageProcessor] to prepare image input, prompt and mask to the model.
- It's highly advisable to pass num_labels (not considering background) during preprocessing and postprocessing with [SegGptImageProcessor] for your use case.
- When doing inference with [SegGptForImageSegmentation], if your batch_size is greater than 1 you can use feature ensemble across your images by passing feature_ensemble=True in the forward method.
Here's how to use the model for one-shot semantic segmentation:
import torch
from datasets import load_dataset
from transformers import SegGptImageProcessor, SegGptForImageSegmentation
model_id = "BAAI/seggpt-vit-large"
image_processor = SegGptImageProcessor.from_pretrained(model_id)
model = SegGptForImageSegmentation.from_pretrained(model_id)
dataset_id = "EduardoPacheco/FoodSeg103"
ds = load_dataset(dataset_id, split="train")
# Number of labels in FoodSeg103 (not including background)
num_labels = 103
image_input = ds[4]["image"]
ground_truth = ds[4]["label"]
image_prompt = ds[29]["image"]
mask_prompt = ds[29]["label"]
inputs = image_processor(
images=image_input,
prompt_images=image_prompt,
prompt_masks=mask_prompt,
num_labels=num_labels,
return_tensors="pt"
)
with torch.no_grad():
outputs = model(**inputs)
target_sizes = [image_input.size[::-1]]
mask = image_processor.post_process_semantic_segmentation(outputs, target_sizes, num_labels=num_labels)[0] |
This model was contributed by EduardoPacheco.
The original code can be found here.
SegGptConfig
[[autodoc]] SegGptConfig
SegGptImageProcessor
[[autodoc]] SegGptImageProcessor
- preprocess
- post_process_semantic_segmentation
SegGptModel
[[autodoc]] SegGptModel
- forward
SegGptForImageSegmentation
[[autodoc]] SegGptForImageSegmentation
- forward |
FastSpeech2Conformer
Overview
The FastSpeech2Conformer model was proposed with the paper Recent Developments On Espnet Toolkit Boosted By Conformer by Pengcheng Guo, Florian Boyer, Xuankai Chang, Tomoki Hayashi, Yosuke Higuchi, Hirofumi Inaguma, Naoyuki Kamo, Chenda Li, Daniel Garcia-Romero, Jiatong Shi, Jing Shi, Shinji Watanabe, Kun Wei, Wangyou Zhang, and Yuekai Zhang.
The abstract from the original FastSpeech2 paper is the following:
Non-autoregressive text to speech (TTS) models such as FastSpeech (Ren et al., 2019) can synthesize speech significantly faster than previous autoregressive models with comparable quality. The training of FastSpeech model relies on an autoregressive teacher model for duration prediction (to provide more information as input) and knowledge distillation (to simplify the data distribution in output), which can ease the one-to-many mapping problem (i.e., multiple speech variations correspond to the same text) in TTS. However, FastSpeech has several disadvantages: 1) the teacher-student distillation pipeline is complicated and time-consuming, 2) the duration extracted from the teacher model is not accurate enough, and the target mel-spectrograms distilled from teacher model suffer from information loss due to data simplification, both of which limit the voice quality. In this paper, we propose FastSpeech 2, which addresses the issues in FastSpeech and better solves the one-to-many mapping problem in TTS by 1) directly training the model with ground-truth target instead of the simplified output from teacher, and 2) introducing more variation information of speech (e.g., pitch, energy and more accurate duration) as conditional inputs. Specifically, we extract duration, pitch and energy from speech waveform and directly take them as conditional inputs in training and use predicted values in inference. We further design FastSpeech 2s, which is the first attempt to directly generate speech waveform from text in parallel, enjoying the benefit of fully end-to-end inference. Experimental results show that 1) FastSpeech 2 achieves a 3x training speed-up over FastSpeech, and FastSpeech 2s enjoys even faster inference speed; 2) FastSpeech 2 and 2s outperform FastSpeech in voice quality, and FastSpeech 2 can even surpass autoregressive models. Audio samples are available at https://speechresearch.github.io/fastspeech2/.
This model was contributed by Connor Henderson. The original code can be found here.
🤗 Model Architecture
FastSpeech2's general structure with a Mel-spectrogram decoder was implemented, and the traditional transformer blocks were replaced with conformer blocks as done in the ESPnet library.
FastSpeech2 Model Architecture |
Conformer Blocks
Convolution Module
🤗 Transformers Usage
You can run FastSpeech2Conformer locally with the 🤗 Transformers library.
First install the 🤗 Transformers library and g2p-en:
pip install --upgrade pip
pip install --upgrade transformers g2p-en
Run inference via the Transformers modelling code with the model and hifigan separately |
from transformers import FastSpeech2ConformerTokenizer, FastSpeech2ConformerModel, FastSpeech2ConformerHifiGan
import soundfile as sf
tokenizer = FastSpeech2ConformerTokenizer.from_pretrained("espnet/fastspeech2_conformer")
inputs = tokenizer("Hello, my dog is cute.", return_tensors="pt")
input_ids = inputs["input_ids"]
model = FastSpeech2ConformerModel.from_pretrained("espnet/fastspeech2_conformer")
output_dict = model(input_ids, return_dict=True)
spectrogram = output_dict["spectrogram"]
hifigan = FastSpeech2ConformerHifiGan.from_pretrained("espnet/fastspeech2_conformer_hifigan")
waveform = hifigan(spectrogram)
sf.write("speech.wav", waveform.squeeze().detach().numpy(), samplerate=22050) |
Run inference via the Transformers modelling code with the model and hifigan combined |
from transformers import FastSpeech2ConformerTokenizer, FastSpeech2ConformerWithHifiGan
import soundfile as sf
tokenizer = FastSpeech2ConformerTokenizer.from_pretrained("espnet/fastspeech2_conformer")
inputs = tokenizer("Hello, my dog is cute.", return_tensors="pt")
input_ids = inputs["input_ids"]
model = FastSpeech2ConformerWithHifiGan.from_pretrained("espnet/fastspeech2_conformer_with_hifigan")
output_dict = model(input_ids, return_dict=True)
waveform = output_dict["waveform"]
sf.write("speech.wav", waveform.squeeze().detach().numpy(), samplerate=22050) |
Run inference with a pipeline and specify which vocoder to use
from transformers import pipeline, FastSpeech2ConformerHifiGan
import soundfile as sf
vocoder = FastSpeech2ConformerHifiGan.from_pretrained("espnet/fastspeech2_conformer_hifigan")
synthesiser = pipeline(model="espnet/fastspeech2_conformer", vocoder=vocoder)
speech = synthesiser("Hello, my dog is cooler than you!")
sf.write("speech.wav", speech["audio"].squeeze(), samplerate=speech["sampling_rate"]) |
FastSpeech2ConformerConfig
[[autodoc]] FastSpeech2ConformerConfig
FastSpeech2ConformerHifiGanConfig
[[autodoc]] FastSpeech2ConformerHifiGanConfig
FastSpeech2ConformerWithHifiGanConfig
[[autodoc]] FastSpeech2ConformerWithHifiGanConfig
FastSpeech2ConformerTokenizer
[[autodoc]] FastSpeech2ConformerTokenizer
- call
- save_vocabulary
- decode
- batch_decode
FastSpeech2ConformerModel
[[autodoc]] FastSpeech2ConformerModel
- forward
FastSpeech2ConformerHifiGan
[[autodoc]] FastSpeech2ConformerHifiGan
- forward
FastSpeech2ConformerWithHifiGan
[[autodoc]] FastSpeech2ConformerWithHifiGan
- forward |
X-CLIP
Overview
The X-CLIP model was proposed in Expanding Language-Image Pretrained Models for General Video Recognition by Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, Haibin Ling.
X-CLIP is a minimal extension of CLIP for video. The model consists of a text encoder, a cross-frame vision encoder, a multi-frame integration Transformer, and a video-specific prompt generator.
The abstract from the paper is the following:
Contrastive language-image pretraining has shown great success in learning visual-textual joint representation from web-scale data, demonstrating remarkable "zero-shot" generalization ability for various image tasks. However, how to effectively expand such new language-image pretraining methods to video domains is still an open problem. In this work, we present a simple yet effective approach that adapts the pretrained language-image models to video recognition directly, instead of pretraining a new model from scratch. More concretely, to capture the long-range dependencies of frames along the temporal dimension, we propose a cross-frame attention mechanism that explicitly exchanges information across frames. Such module is lightweight and can be plugged into pretrained language-image models seamlessly. Moreover, we propose a video-specific prompting scheme, which leverages video content information for generating discriminative textual prompts. Extensive experiments demonstrate that our approach is effective and can be generalized to different video recognition scenarios. In particular, under fully-supervised settings, our approach achieves a top-1 accuracy of 87.1% on Kinectics-400, while using 12 times fewer FLOPs compared with Swin-L and ViViT-H. In zero-shot experiments, our approach surpasses the current state-of-the-art methods by +7.6% and +14.9% in terms of top-1 accuracy under two popular protocols. In few-shot scenarios, our approach outperforms previous best methods by +32.1% and +23.1% when the labeled data is extremely limited.
Tips: |
Usage of X-CLIP is identical to CLIP.
X-CLIP architecture. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with X-CLIP.
Demo notebooks for X-CLIP can be found here. |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
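Since usage mirrors CLIP (see the tip above), a minimal zero-shot video classification sketch could look like the following; it is an illustration, not from the original docs, and the microsoft/xclip-base-patch32 checkpoint plus the 8 random frames are assumptions:
import torch
import numpy as np
from transformers import XCLIPProcessor, XCLIPModel
processor = XCLIPProcessor.from_pretrained("microsoft/xclip-base-patch32")
model = XCLIPModel.from_pretrained("microsoft/xclip-base-patch32")
# 8 dummy frames of shape (height, width, 3); replace with real frames sampled from a video
video = list(np.random.randint(0, 256, (8, 224, 224, 3), dtype=np.uint8))
inputs = processor(text=["playing sports", "cooking"], videos=video, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_video.softmax(dim=1)  # probability of each text label for the video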
XCLIPProcessor
[[autodoc]] XCLIPProcessor
XCLIPConfig
[[autodoc]] XCLIPConfig
- from_text_vision_configs
XCLIPTextConfig
[[autodoc]] XCLIPTextConfig
XCLIPVisionConfig
[[autodoc]] XCLIPVisionConfig
XCLIPModel
[[autodoc]] XCLIPModel
- forward
- get_text_features
- get_video_features
XCLIPTextModel
[[autodoc]] XCLIPTextModel
- forward
XCLIPVisionModel
[[autodoc]] XCLIPVisionModel
- forward |
VideoMAE
Overview
The VideoMAE model was proposed in VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training by Zhan Tong, Yibing Song, Jue Wang, Limin Wang.
VideoMAE extends masked auto encoders (MAE) to video, claiming state-of-the-art performance on several video classification benchmarks.
The abstract from the paper is the following:
Pre-training video transformers on extra large-scale datasets is generally required to achieve premier performance on relatively small datasets. In this paper, we show that video masked autoencoders (VideoMAE) are data-efficient learners for self-supervised video pre-training (SSVP). We are inspired by the recent ImageMAE and propose customized video tube masking and reconstruction. These simple designs turn out to be effective for overcoming information leakage caused by the temporal correlation during video reconstruction. We obtain three important findings on SSVP: (1) An extremely high proportion of masking ratio (i.e., 90% to 95%) still yields favorable performance of VideoMAE. The temporally redundant video content enables higher masking ratio than that of images. (2) VideoMAE achieves impressive results on very small datasets (i.e., around 3k-4k videos) without using any extra data. This is partially ascribed to the challenging task of video reconstruction to enforce high-level structure learning. (3) VideoMAE shows that data quality is more important than data quantity for SSVP. Domain shift between pre-training and target datasets are important issues in SSVP. Notably, our VideoMAE with the vanilla ViT backbone can achieve 83.9% on Kinects-400, 75.3% on Something-Something V2, 90.8% on UCF101, and 61.1% on HMDB51 without using any extra data. |
VideoMAE pre-training. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with VideoMAE. If
you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll
review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
Video classification
- A notebook that shows how
to fine-tune a VideoMAE model on a custom dataset.
- Video classification task guide
- A 🤗 Space showing how to perform inference with a video classification model.
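For a quick start (a sketch, not from the original docs; the MCG-NJU/videomae-base-finetuned-kinetics checkpoint and the 16 random frames are assumptions), video classification with a fine-tuned checkpoint looks like this:
import torch
import numpy as np
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification
processor = VideoMAEImageProcessor.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")
model = VideoMAEForVideoClassification.from_pretrained("MCG-NJU/videomae-base-finetuned-kinetics")
# 16 dummy frames of shape (height, width, 3); replace with real frames sampled from a video
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # predicted Kinetics-400 label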
VideoMAEConfig
[[autodoc]] VideoMAEConfig
VideoMAEFeatureExtractor
[[autodoc]] VideoMAEFeatureExtractor
- call
VideoMAEImageProcessor
[[autodoc]] VideoMAEImageProcessor
- preprocess
VideoMAEModel
[[autodoc]] VideoMAEModel
- forward
VideoMAEForPreTraining
VideoMAEForPreTraining includes the decoder on top for self-supervised pre-training.
[[autodoc]] transformers.VideoMAEForPreTraining
- forward
VideoMAEForVideoClassification
[[autodoc]] transformers.VideoMAEForVideoClassification
- forward |
Vision Transformer (ViT)
Overview
The Vision Transformer (ViT) model was proposed in An Image is Worth 16x16 Words: Transformers for Image Recognition
at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk
Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob
Uszkoreit, Neil Houlsby. It's the first paper that successfully trains a Transformer encoder on ImageNet, attaining
very good results compared to familiar convolutional architectures.
The abstract from the paper is the following:
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its
applications to computer vision remain limited. In vision, attention is either applied in conjunction with
convolutional networks, or used to replace certain components of convolutional networks while keeping their overall
structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to
sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of
data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.),
Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring
substantially fewer computational resources to train. |
ViT architecture. Taken from the original paper.
Following the original Vision Transformer, some follow-up works have been made: |
DeiT (Data-efficient Image Transformers) by Facebook AI. DeiT models are distilled vision transformers.
The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into [ViTModel] or
[ViTForImageClassification]. There are 4 variants available (in 3 different sizes): facebook/deit-tiny-patch16-224,
facebook/deit-small-patch16-224, facebook/deit-base-patch16-224 and facebook/deit-base-patch16-384. Note that one should
use [DeiTImageProcessor] in order to prepare images for the model. |
BEiT (BERT pre-training of Image Transformers) by Microsoft Research. BEiT models outperform supervised pre-trained
vision transformers using a self-supervised method inspired by BERT (masked image modeling) and based on a VQ-VAE. |
DINO (a method for self-supervised training of Vision Transformers) by Facebook AI. Vision Transformers trained using
the DINO method show very interesting properties not seen with convolutional models. They are capable of segmenting
objects, without having ever been trained to do so. DINO checkpoints can be found on the hub. |
MAE (Masked Autoencoders) by Facebook AI. By pre-training Vision Transformers to reconstruct pixel values for a high portion
(75%) of masked patches (using an asymmetric encoder-decoder architecture), the authors show that this simple method outperforms
supervised pre-training after fine-tuning. |
This model was contributed by nielsr. The original code (written in JAX) can be
found here.
Note that we converted the weights from Ross Wightman's timm library,
who already converted the weights from JAX to PyTorch. Credits go to him!
Usage tips |
To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches,
which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image, which can be
used for classification. The authors also add absolute position embeddings, and feed the resulting sequence of
vectors to a standard Transformer encoder.
As the Vision Transformer expects each image to be of the same size (resolution), one can use
[ViTImageProcessor] to resize (or rescale) and normalize images for the model.
Both the patch resolution and image resolution used during pre-training or fine-tuning are reflected in the name of
each checkpoint. For example, google/vit-base-patch16-224 refers to a base-sized architecture with patch
resolution of 16x16 and fine-tuning resolution of 224x224. All checkpoints can be found on the hub.
The available checkpoints are either (1) pre-trained on ImageNet-21k (a collection of
14 million images and 21k classes) only, or (2) also fine-tuned on ImageNet (also referred to as ILSVRC 2012, a collection of 1.3 million
images and 1,000 classes).
The Vision Transformer was pre-trained using a resolution of 224x224. During fine-tuning, it is often beneficial to
use a higher resolution than pre-training (Touvron et al., 2019), (Kolesnikov
et al., 2020). In order to fine-tune at higher resolution, the authors perform
2D interpolation of the pre-trained position embeddings, according to their location in the original image.
The best results are obtained with supervised pre-training, which is not the case in NLP. The authors also performed
an experiment with a self-supervised pre-training objective, namely masked patch prediction (inspired by masked
language modeling). With this approach, the smaller ViT-B/16 model achieves 79.9% accuracy on ImageNet, a significant
improvement of 2% over training from scratch, but still 4% behind supervised pre-training.
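To make the tips above concrete, here is a minimal classification sketch (an illustration, not from the original docs; the image path is a placeholder and google/vit-base-patch16-224 is the checkpoint mentioned above):
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTForImageClassification
image = Image.open("path/to/your_image.jpg").convert("RGB")  # placeholder path, use your own image
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
inputs = processor(images=image, return_tensors="pt")  # resizes to 224x224 and normalizes
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # predicted ImageNet class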
Resources
Demo notebooks regarding inference as well as fine-tuning ViT on custom data can be found here.
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ViT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ViTForImageClassification is supported by: |
A blog post on how to Fine-Tune ViT for Image Classification with Hugging Face Transformers
A blog post on Image Classification with Hugging Face Transformers and Keras
A notebook on Fine-tuning for Image Classification with Hugging Face Transformers
A notebook on how to Fine-tune the Vision Transformer on CIFAR-10 with the Hugging Face Trainer
A notebook on how to Fine-tune the Vision Transformer on CIFAR-10 with PyTorch Lightning
⚗️ Optimization
A blog post on how to Accelerate Vision Transformer (ViT) with Quantization using Optimum
⚡️ Inference
A notebook on Quick demo: Vision Transformer (ViT) by Google Brain
🚀 Deploy
A blog post on Deploying Tensorflow Vision Models in Hugging Face with TF Serving
A blog post on Deploying Hugging Face ViT on Vertex AI
A blog post on Deploying Hugging Face ViT on Kubernetes with TF Serving |
ViTConfig
[[autodoc]] ViTConfig
ViTFeatureExtractor
[[autodoc]] ViTFeatureExtractor
- call
ViTImageProcessor
[[autodoc]] ViTImageProcessor
- preprocess
ViTModel
[[autodoc]] ViTModel
- forward
ViTForMaskedImageModeling
[[autodoc]] ViTForMaskedImageModeling
- forward
ViTForImageClassification
[[autodoc]] ViTForImageClassification
- forward
TFViTModel
[[autodoc]] TFViTModel
- call
TFViTForImageClassification
[[autodoc]] TFViTForImageClassification
- call |
FlaxViTModel
[[autodoc]] FlaxViTModel
- call
FlaxViTForImageClassification
[[autodoc]] FlaxViTForImageClassification
- call |
LLaMA
Overview
The LLaMA model was proposed in LLaMA: Open and Efficient Foundation Language Models by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample. It is a collection of foundation language models ranging from 7B to 65B parameters.
The abstract from the paper is the following:
*We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community. *
This model was contributed by zphang with contributions from BlackSamorez. The code of the implementation in Hugging Face is based on GPT-NeoX here. The original code of the authors can be found here.
Usage tips |
Weights for the LLaMA models can be obtained by filling out this form
After downloading the weights, they will need to be converted to the Hugging Face Transformers format using the conversion script. The script can be called with the following (example) command:
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
After conversion, the model and tokenizer can be loaded via:
from transformers import LlamaForCausalLM, LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("/output/path")
model = LlamaForCausalLM.from_pretrained("/output/path")
Note that executing the script requires enough CPU RAM to host the whole model in float16 precision (even though the biggest versions
come in several checkpoints, each checkpoint contains a part of each weight of the model, so they all need to be loaded in RAM). For the 65B model, that amounts to 130GB of RAM.
The LLaMA tokenizer is a BPE model based on sentencepiece. One quirk of sentencepiece is that when decoding a sequence, if the first token is the start of the word (e.g. "Banana"), the tokenizer does not prepend the prefix space to the string. |
The Flax version of the implementation was contributed by afmck with the code in the implementation based on Hugging Face's Flax GPT-Neo.
Based on the original LLaMA model, Meta AI has released some follow-up works: |
Llama2: Llama2 is an improved version of Llama with some architectural tweaks (Grouped Query Attention), and is pre-trained on 2 trillion tokens. Refer to the documentation of Llama2 which can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with LLaMA. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
A notebook on how to use prompt tuning to adapt the LLaMA model for a text classification task. 🌎
StackLLaMA: A hands-on guide to train LLaMA with RLHF, a blog post about how to train LLaMA to answer questions on Stack Exchange with RLHF.
⚗️ Optimization
- A notebook on how to fine-tune the LLaMA model using the xturing library on a GPU with limited memory. 🌎
⚡️ Inference
- A notebook on how to run the LLaMA model using PeftModel from the 🤗 PEFT library. 🌎
- A notebook on how to load a PEFT adapter LLaMA model with LangChain. 🌎
🚀 Deploy
- A notebook on how to fine-tune the LLaMA model using the LoRA method via the 🤗 PEFT library with an intuitive UI. 🌎
- A notebook on how to deploy the Open-LLaMA model for text generation on Amazon SageMaker. 🌎
LlamaConfig
[[autodoc]] LlamaConfig
LlamaTokenizer
[[autodoc]] LlamaTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabulary
LlamaTokenizerFast
[[autodoc]] LlamaTokenizerFast
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- update_post_processor
- save_vocabulary
LlamaModel
[[autodoc]] LlamaModel
- forward
LlamaForCausalLM
[[autodoc]] LlamaForCausalLM
- forward
LlamaForSequenceClassification
[[autodoc]] LlamaForSequenceClassification
- forward
LlamaForQuestionAnswering
[[autodoc]] LlamaForQuestionAnswering
- forward
FlaxLlamaModel
[[autodoc]] FlaxLlamaModel
- call
FlaxLlamaForCausalLM
[[autodoc]] FlaxLlamaForCausalLM
- call
XLSR-Wav2Vec2
Overview
The XLSR-Wav2Vec2 model was proposed in Unsupervised Cross-Lingual Representation Learning For Speech Recognition by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael
Auli.
The abstract from the paper is the following:
This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw
waveform of speech in multiple languages. We build on wav2vec 2.0 which is trained by solving a contrastive task over
masked latent speech representations and jointly learns a quantization of the latents shared across languages. The
resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly
outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction
of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to
a comparable system. Our approach enables a single multilingual speech recognition model which is competitive to strong
individual models. Analysis shows that the latent discrete speech representations are shared across languages with
increased sharing for related languages. We hope to catalyze research in low-resource speech understanding by releasing
XLSR-53, a large model pretrained in 53 languages.
The original code can be found here.
Usage tips
XLSR-Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
XLSR-Wav2Vec2 was trained using connectionist temporal classification (CTC), so the model output has to be
decoded using [Wav2Vec2CTCTokenizer]; see the sketch below.
XLSR-Wav2Vec2's architecture is based on the Wav2Vec2 model, so one can refer to Wav2Vec2's documentation page.
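As a rough sketch of that workflow: the checkpoint name below is only an example of a CTC fine-tuned XLSR model, and the random array stands in for real 16 kHz audio.
thon
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# any CTC fine-tuned XLSR checkpoint works here; this one is just an example
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-xlsr-53-spanish")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-xlsr-53-spanish")

waveform = np.random.randn(16000).astype(np.float32)  # 1 second of placeholder 16 kHz audio
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
print(processor.batch_decode(predicted_ids))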
MVP
Overview
The MVP model was proposed in MVP: Multi-task Supervised Pre-training for Natural Language Generation by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
According to the abstract,
MVP follows a standard Transformer encoder-decoder architecture.
MVP is supervised pre-trained using labeled datasets.
MVP also has task-specific soft prompts to stimulate the model's capacity in performing a certain task.
MVP is specially designed for natural language generation and can be adapted to a wide range of generation tasks, including but not limited to summarization, data-to-text generation, open-ended dialogue system, story generation, question answering, question generation, task-oriented dialogue system, commonsense generation, paraphrase generation, text style transfer, and text simplification. Our model can also be adapted to natural language understanding tasks such as sequence classification and (extractive) question answering.
This model was contributed by Tianyi Tang. The detailed information and instructions can be found here.
Usage tips
We have released a series of models here, including MVP, MVP with task-specific prompts, and multi-task pre-trained variants.
If you want to use a model without prompts (standard Transformer), you can load it through MvpForConditionalGeneration.from_pretrained('RUCAIBox/mvp').
If you want to use a model with task-specific prompts, such as summarization, you can load it through MvpForConditionalGeneration.from_pretrained('RUCAIBox/mvp-summarization').
Our model supports lightweight prompt tuning following Prefix-tuning with method set_lightweight_tuning().
Usage examples
For summarization, below is an example of using MVP and MVP with summarization-specific prompts.
thon
from transformers import MvpTokenizer, MvpForConditionalGeneration
tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")
model_with_prompt = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-summarization")
inputs = tokenizer(
"Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
return_tensors="pt",
)
generated_ids = model.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Why You Shouldn't Quit Your Job"]
generated_ids = model_with_prompt.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Don't do it if these are your reasons"] |
For data-to-text generation, below is an example of using MVP and its multi-task pre-trained variants.
thon
from transformers import MvpTokenizerFast, MvpForConditionalGeneration
tokenizer = MvpTokenizerFast.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp")
model_with_mtl = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-data-to-text")
inputs = tokenizer(
"Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man",
return_tensors="pt",
)
generated_ids = model.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Stan Lee created the character of Iron Man, a fictional superhero appearing in American comic']
generated_ids = model_with_mtl.generate(**inputs)
tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Iron Man is a fictional superhero appearing in American comic books published by Marvel Comics.']
For lightweight tuning, i.e., fixing the model and only tuning prompts, you can load MVP with randomly initialized prompts or with task-specific prompts. Our code also supports Prefix-tuning with BART following the original paper.
thon
from transformers import MvpForConditionalGeneration
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp", use_prompt=True)
# the number of trainable parameters (full tuning)
sum(p.numel() for p in model.parameters() if p.requires_grad)
468116832
# lightweight tuning with randomly initialized prompts
model.set_lightweight_tuning()
# the number of trainable parameters (lightweight tuning)
sum(p.numel() for p in model.parameters() if p.requires_grad)
61823328
# lightweight tuning with task-specific prompts
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-data-to-text")
model.set_lightweight_tuning()
# original lightweight Prefix-tuning
model = MvpForConditionalGeneration.from_pretrained("facebook/bart-large", use_prompt=True)
model.set_lightweight_tuning()
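As a minimal sketch of what lightweight tuning looks like in practice (the optimizer, learning rate, and toy batch below are illustrative assumptions, not part of the MVP examples): after set_lightweight_tuning(), only the prompt parameters require gradients, so a standard optimizer over the model's trainable parameters updates just the prompts.
thon
import torch
from transformers import MvpTokenizer, MvpForConditionalGeneration

tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp", use_prompt=True)
model.set_lightweight_tuning()  # freezes the backbone, keeps the prompts trainable

# illustrative hyperparameter; tune for your task
optimizer = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=3e-4)

batch = tokenizer(
    "Summarize: A short illustrative document.",
    text_target="A short summary.",
    return_tensors="pt",
)
loss = model(**batch).loss  # labels come from text_target
loss.backward()
optimizer.step()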
Resources
Text classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Translation task guide
Summarization task guide
MvpConfig
[[autodoc]] MvpConfig
MvpTokenizer
[[autodoc]] MvpTokenizer
MvpTokenizerFast
[[autodoc]] MvpTokenizerFast
MvpModel
[[autodoc]] MvpModel
- forward
MvpForConditionalGeneration
[[autodoc]] MvpForConditionalGeneration
- forward
MvpForSequenceClassification
[[autodoc]] MvpForSequenceClassification
- forward
MvpForQuestionAnswering
[[autodoc]] MvpForQuestionAnswering
- forward
MvpForCausalLM
[[autodoc]] MvpForCausalLM
- forward
MBart and MBart-50
Overview of MBart
The MBart model was presented in Multilingual Denoising Pre-training for Neural Machine Translation by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan
Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
According to the abstract, MBART is a sequence-to-sequence denoising auto-encoder pretrained on large-scale monolingual
corpora in many languages using the BART objective. mBART is one of the first methods for pretraining a complete
sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only
on the encoder, decoder, or reconstructing parts of the text.
This model was contributed by valhalla. The authors' code can be found here.
Training of MBart
MBart is a multilingual encoder-decoder (sequence-to-sequence) model primarily intended for translation tasks. As the
model is multilingual, it expects the sequences in a different format. A special language id token is added in both the
source and target text. The source text format is X [eos, src_lang_code] where X is the source text. The
target text format is [tgt_lang_code] X [eos]. bos is never used.
The regular [~MBartTokenizer.__call__] will encode the source text format passed as the first argument or with the text
keyword, and the target text format passed with the text_target keyword argument.
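To make the format concrete, here is a small sketch that inspects the encoded source sequence (the checkpoint and sentence are just examples): the source language code is appended after the end-of-sequence token.
thon
from transformers import MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
ids = tokenizer("UN Chief Says There Is No Military Solution in Syria")["input_ids"]
# the source sequence ends with [eos, src_lang_code]
print(tokenizer.convert_ids_to_tokens(ids)[-2:])  # ['</s>', 'en_XX']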
Supervised training
thon
from transformers import MBartForConditionalGeneration, MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX", tgt_lang="ro_RO")
example_english_phrase = "UN Chief Says There Is No Military Solution in Syria"
expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria"
inputs = tokenizer(example_english_phrase, text_target=expected_translation_romanian, return_tensors="pt")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
# forward pass
model(**inputs)
Generation
While generating the target text, set the decoder_start_token_id to the target language id. The following
example shows how to translate English to Romanian using the facebook/mbart-large-en-ro model.
thon
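# A minimal generation sketch (reconstructed here, not copied verbatim from the original docs):
# reuse the English sentence from the supervised-training example above and force Romanian
# as the target language via decoder_start_token_id looked up in the tokenizer's lang_code_to_id mapping.
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")
article = "UN Chief Says There Is No Military Solution in Syria"
inputs = tokenizer(article, return_tensors="pt")
translated_tokens = model.generate(
    **inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"]
)
print(tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0])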